Nazo
2[H]4U
Joined: Apr 2, 2002
Messages: 3,672
So, first of all, my system specs in case it matters:
CPU: Ryzen 5 5600X currently set to run at a fixed 4.4GHz.
And of course the RAM: Mushkin Redline Lumina DDR4 4000MHz (PC4-32000) 1.35V
To begin with I want to explain the situation (TLD[n]R people, please skip down rather than complain). First, to clarify: I am not overclocking this RAM beyond its official specifications. (Yes, I recognize that anything beyond 3200MHz at 1.2V is "overclocking" according to the official DDR4 standard, but I am running this RAM at the timings and voltage Mushkin specifies.) I've really struggled to get just a handful of games to run as smoothly on this system as I need them to, and I've chased down a number of performance and reliability issues trying to smooth everything out. One thing I noticed a while back that had a HUGE effect was a memory "overclock" option in my motherboard's firmware. It's confusingly mislabeled: what it actually does is disable the standard downclocking that the power-saving features apply when the system is underutilized. Disabling that downclocking had a huge effect on general gaming performance. (So even though it says "overclock," it really just means "don't underclock.")
A bit of explanation on that: I think part of my problems with this system has been power management misinterpreting actual gameplay as a low-load situation. Games generally aren't pushing this CPU all that hard most of the time, but then demand spikes suddenly in certain situations (such as terrain generation in games with flight mechanics, where a large area may have to load all at once). With stock PBO and boost behavior, I noticed the CPU was downclocked most of the time in games and then had to ramp up quickly during those moments, with a small delay before the clocks actually climbed, hence noticeably more hitching during those sudden demand spikes. Setting the CPU to a fixed frequency helped enormously. I think something similar has been going on with the RAM (probably tied specifically to CPU usage, since I don't think the system has good metrics on memory demand, in which case the RAM would be downclocked quite a lot in practice). Changing this setting gave me a significant decrease in hitching in games in general.
---
(Ok, TLD[n]R people can start paying attention here):
This brings me back to wondering if the issue is something more subtle, such as latencies. To that end I'm taking another look at my RAM, especially in light of the aforementioned issue where the RAM was downclocking and the RAM "overclock" setting significantly reduced hitching. One thing I was never able to work out was disabling "gear down". Supposedly the difference is very subtle and typically doesn't matter. Supposedly. Yet the explanations I've seen are somewhat inconsistent. People basically say that all it truly does is round odd-numbered timings up to even ones; for example, CL15 becomes CL16. However, my stock timings are 18-22-22-42 -- all even. Yet if I disable gear down at full stock settings, my system won't even POST. Since the timings supposedly shouldn't even be affected, that explanation is obviously not the whole story. Now, of course, I was also trying to use a command rate of 1T, for obvious reasons. But here's the thing: setting the command rate to 2T still resulted in massive instability. In fact, at full stock speed it's amazing it even POSTed, because I got a BSOD probably less than a second after starting Prime95. Oof. Even a lower frequency like 3600MHz was still massively unstable (though at least not BSOD-unstable, just immediate errors in Prime95). With a bit of tweaking and lowering the RAM all the way down to 3200MHz (with FCLK matched appropriately -- and to clarify, 2000 FCLK matching the 4000MHz stock speed is 100% stable with gear down enabled), I can finally get it to POST and pass Prime95 for at least more than a few minutes (much more testing is needed; it may still be unstable). That's with a 2T command rate. I tested a few speeds such as 3600 (and supposedly gear down isn't even needed at that speed), but in my initial testing I had to go as low as 3200MHz (I might be able to get a bit higher, and even this may turn out to be unstable in longer testing, so much more testing is still needed).
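For anyone weighing that speed-versus-timings trade-off, a quick back-of-the-envelope sketch (my own illustrative numbers, nothing official): first-word latency in nanoseconds is the CAS latency divided by the memory clock, where the memory clock is half the advertised DDR data rate.

```python
# Rough sketch: first-word latency is tCL cycles at the memory clock,
# and the memory clock is half the advertised DDR data rate.
def first_word_latency_ns(data_rate_mts: float, tcl: int) -> float:
    mem_clock_mhz = data_rate_mts / 2   # e.g. 4000 MT/s -> 2000 MHz
    return tcl / mem_clock_mhz * 1000   # cycles / MHz -> nanoseconds

print(first_word_latency_ns(4000, 18))  # stock 4000 MT/s CL18 -> 9.0 ns
print(first_word_latency_ns(3200, 18))  # same CL18 at 3200 MT/s -> 11.25 ns
```

So if dropping to 3200MHz means keeping the same CL18, absolute latency actually gets slightly worse, which is part of why this trade-off is hard to reason about from specs alone.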
So here's what I'm really trying to figure out: the usual explanation of how gear down works is that it runs command "latching" at half the clock rate (I'm not 100% sure what that really means here). That's doubly confusing, because it sounds like a frequency-related thing rather than a timing-related thing, no? Except I guess it only applies at specific points in the memory's operation, which is why it affects timings? Here's the thing: if all gear down did was force even timings, then booting with my stock (already even) timings would mean it's effectively already "disabled" for all intents and purposes. And even if the 1T command rate counts as an odd value here, then setting the command rate to 2T with gear down disabled should be 100% stable at settings that are 100% stable with it enabled. Yet it's not, so obviously something else is at play. It makes me wonder if something is causing it to "gear down" more than it's really meant to, perhaps effectively lowering the real-world RAM speed to something like 2000MHz. If so, getting a stable system with gear down disabled would benefit me enormously in everything that could bottleneck there. The RAM is supposed to be fairly decent (decent Hynix chips, for instance), but perhaps it just isn't 100% up to its rated specs (though even if so, it wouldn't be RMA-able, all things considered).
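To put that "effectively 2000MHz" hypothesis in perspective, here's a rough sketch of theoretical peak bandwidth, assuming a dual-channel setup with a 64-bit bus per channel (adjust if your configuration differs):

```python
# Theoretical peak bandwidth: data rate x bus width x channel count.
# Assumes dual channel with a 64-bit (8-byte) bus per channel.
def peak_bandwidth_gbs(data_rate_mts: float, channels: int = 2) -> float:
    bus_bytes = 8  # 64-bit channel width
    return data_rate_mts * 1e6 * bus_bytes * channels / 1e9

print(peak_bandwidth_gbs(4000))  # 64.0 GB/s at the rated speed
print(peak_bandwidth_gbs(3200))  # 51.2 GB/s at the lowered clock
print(peak_bandwidth_gbs(2000))  # 32.0 GB/s if the "extra gear down" hypothesis held
```

In other words, if the kit really were behaving like 2000MHz, then even a stable 3200MHz with gear down disabled would be a large net gain, which is the crux of the question.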
I'm not really sure how to benchmark just the RAM in this scenario, especially since the bigger question is probably the latency of operations rather than bandwidth, which is even harder to benchmark. That makes testing much more difficult, since monitoring performance during gaming is much more subjective than objective (even if there are metrics that could theoretically be benchmarked objectively). After stability testing I'll try both configurations to see which plays better in actual real-world gaming. After all, bandwidth-wise the drop to 3200MHz is fairly significant -- but if the alternative is an effective 2000MHz, it's very significant in the other direction! It may still not pass even at 3200MHz, though. And, as I said before, I don't want to raise voltage; I've lost too many system components that way over my many years of overclocking. They all ultimately started producing errors or failing far too quickly for my uses, since I keep most computer components longer than many people seem to.
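For what it's worth, one way to get at latency rather than bandwidth is a dependent-load ("pointer chasing") test, where each read depends on the previous one so the memory system can't overlap or prefetch them; AIDA64's memory latency test works on this principle. Below is only a rough illustrative sketch of the idea in Python (interpreter overhead dominates the absolute numbers, so only relative comparisons between BIOS settings would mean anything; a proper tool is compiled code):

```python
import random
import time

def chase_ns_per_load(n: int = 1 << 22, steps: int = 1 << 20) -> float:
    # Build a random cyclic permutation: following nxt[i] visits every
    # element in random order, so each load depends on the previous one.
    order = list(range(n))
    random.shuffle(order)
    nxt = [0] * n
    for a, b in zip(order, order[1:] + order[:1]):
        nxt[a] = b
    # Walk the chain and time it. For the result to reflect DRAM at all,
    # the array must be much larger than the CPU's caches.
    p = order[0]
    t0 = time.perf_counter()
    for _ in range(steps):
        p = nxt[p]
    elapsed = time.perf_counter() - t0
    return elapsed / steps * 1e9  # ns per dependent load (interpreter overhead included)
```

Running the same script before and after flipping gear down (or changing the clock) would at least give a consistent relative number to compare, unlike eyeballing game hitching.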
I'm wondering if someone who understands this better can tell me which is better for real-world usage (particularly in gaming, where latency and bandwidth both play roles), given that even if I can get it stable with gear down disabled, it may have to be at a significantly lower speed. So far in my initial gaming tests I haven't seen a huge difference, but I'm still trying to pin down bottlenecks in a few games (like 7 Days to Die) that have been very stubborn, so I want to eliminate as many variables as possible.