The RTS Test: Supreme Commander 1066 vs 1333 vs 1600 vs 1866 vs 2133 MHz RAM on Z77

spacing guild

In my never-ending search for faster late-game speed in my favorite real-time strategy games, I've found very little information on how RAM speed affects gaming, outside of synthetic, non-game-related benchmarks.

The conventional wisdom I've seen so far is that anything above 1600 MHz is generally not necessary for gaming alone. But for those of us who play real-time strategy games, where games last hours and involve GBs of RAM and thousands of units in play at any one time, I wanted to see if there was some benefit to RAM faster than 1600 MHz.

So, keeping this as brief as possible, I'll put out the numbers...you guys can debate the merits, or lack thereof, if you wish. I am really putting this out there for people like me who want a better idea of what the benefits of fast RAM on the Z77 platform are.

The test is:

Supreme Commander: Forged Alliance, with MadBoris' Core Maximizer to distribute load evenly across all four cores of a quad-core rig.

Two CPUs: i5 2500K and i5 3570K, each overclocked to 4.3 GHz
Asus Sabertooth Z77 motherboard
GTX 670 video card
8 GB G.Skill Sniper Series 2133 MHz RAM
Crucial M4 256 GB SSD
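
Quick aside on Core Maximizer, since it's doing a lot of work here: it basically juggles the game's thread affinity so all four cores get used. Here's a rough, hypothetical sketch of the general idea in Python with the psutil library - an illustration of the concept only, not what the tool actually does internally:

Code:
import psutil  # third-party: pip install psutil

# Hypothetical stand-in for an affinity tool: find the game process
# and allow it on every core, since old titles can end up confined
# to fewer cores than the machine has.
def spread_across_cores(process_name="SupremeCommander.exe"):
    cores = list(range(psutil.cpu_count(logical=True)))
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            proc.cpu_affinity(cores)  # let the scheduler use all cores
            print(f"Pinned PID {proc.pid} to cores {cores}")

if __name__ == "__main__":
    spread_across_cores()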

I recorded a 30-minute timedemo @ 1280 x 1024 windowed, at the lowest possible graphical settings. The demo involved setting up 8 players on the largest Sup Com map, Betrayal Ocean. After the A.I. ran for 30 minutes, I replayed the demo @ 10X speed for each test and used a stopwatch to record how long it took to reach various in-game times in the recorded game. To be clear, the recording isn't a video...the timedemo just replays the game automatically. Obviously, the game slows dramatically as the units pile up. The demo peaked @ 1.2 GB of RAM used. A 4 GB flag was applied to SupremeCommander.exe on 64-bit Windows 7.
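
Side note for anyone replicating this: the "4 GB flag" is the large-address-aware bit in the EXE's PE header, which lets a 32-bit game address up to 4 GB on a 64-bit OS. Here's a small Python helper to check whether an EXE already has the bit set - my own sketch, nothing official:

Code:
import struct
import sys

IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020  # PE Characteristics flag

def is_large_address_aware(path):
    with open(path, "rb") as f:
        data = f.read(4096)  # PE headers live near the start of the file
    # e_lfanew (offset of the "PE\0\0" signature) is stored at 0x3C
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("not a PE executable")
    # Characteristics sits 22 bytes past the signature, in the COFF header
    characteristics = struct.unpack_from("<H", data, pe_offset + 22)[0]
    return bool(characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)

if __name__ == "__main__":
    exe = sys.argv[1]  # e.g. SupremeCommander.exe
    print("LAA set" if is_large_address_aware(exe) else "LAA not set")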

All CPU speeds are @ 4.3 GHz. RAM speeds are as listed.

Timings are 9-9-9-24 2T @ 1648 MHz and below, 9-11-10-28 1T @ 1854 MHz, and 9-11-10-28 2T @ 2133 MHz.

In-game time is listed in the first column of the table below.


In-game time | 2500K/1098 MHz | 2500K/1374 MHz | 2500K/1648 MHz | 3570K/1374 MHz | 3570K/1648 MHz | 3570K/1854 MHz | 3570K/2133 MHz
7:00         | 1:24           | 1:23           | 1:15           | 1:15           | 1:13           | 1:11           | 1:06
10:00        | 2:34           | 2:29           | 2:18           | 2:18           | 2:14           | 2:12           | 2:01
12:30        | 4:03           | 3:54           | 3:35           | 3:35           | 3:29           | 3:27           | 3:12
17:00        | 7:53           | 7:52           | 7:07           | 6:57           | 6:46           | 6:48           | 6:05
20:00        | 11:15          | 11:05          | 10:11          | 9:56           | 9:39           | 9:42           | 8:42
30:00        | 27:21          | 26:50          | 24:14          | 23:53          | 23:25          | 23:23          | 20:10


As you can see...going from the baseline of an i5 2500K @ 4.3 GHz with RAM running @ 1098 MHz to an i5 3570K @ 4.3 GHz with RAM running @ 2133 MHz, I was able to run the 30-minute timedemo about 26 percent faster (27:21 down to 20:10).
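
If you want to check my math, or crunch your own runs the same way, here's a quick Python snippet that turns the 30:00-row stopwatch times from the table above into percentage gains over the slowest config:

Code:
def to_seconds(t):
    m, s = t.split(":")
    return int(m) * 60 + int(s)

# Wall-clock time to replay to the 30:00 in-game mark, per config
runs = {
    "2500K/1098": "27:21", "2500K/1374": "26:50", "2500K/1648": "24:14",
    "3570K/1374": "23:53", "3570K/1648": "23:25", "3570K/1854": "23:23",
    "3570K/2133": "20:10",
}

baseline = to_seconds(runs["2500K/1098"])
for config, wall in runs.items():
    gain = 100 * (baseline - to_seconds(wall)) / baseline
    print(f"{config}: {gain:.1f}% faster than the 1098 MHz baseline")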

So all you heavy RTS gamers...get that fast RAM!
 
Cool test. I didn't think the difference would be huge. I was wrong!
 
Supreme Commander is a really underrated benchmark. Yeah, it's old now, but it's probably the most CPU- and memory-intensive game there is once you're in the late game with all those AIs churning away and spamming units.

Another thing to look at is the sim rate of the game. I don't remember how you pull that up, but since the simulation is decoupled from the framerate, there's a separate number for that.
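
For anyone who hasn't poked at it: engines like SupCom's step the simulation at a fixed tick rate (nominally 10 ticks per second, if I remember right) and let rendering run at whatever FPS the GPU manages, which is why sim speed and framerate are two separate numbers. A generic fixed-timestep sketch in Python - an illustration of the pattern, not GPG's actual code:

Code:
import time

SIM_DT = 0.1  # fixed sim tick: 10 ticks/s (assumed nominal SupCom rate)

def game_loop(update_sim, render, run_seconds=2.0):
    accumulator = 0.0
    previous = time.perf_counter()
    deadline = previous + run_seconds
    ticks = frames = 0
    while time.perf_counter() < deadline:
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        # Step the sim in fixed increments; if the CPU can't keep up,
        # the backlog grows and the game drops below normal speed.
        while accumulator >= SIM_DT:
            update_sim(SIM_DT)
            accumulator -= SIM_DT
            ticks += 1
        render()  # rendering runs as often as it can, decoupled from the sim
        frames += 1
    print(f"sim rate: {ticks / run_seconds:.1f} ticks/s, fps: {frames / run_seconds:.0f}")

game_loop(lambda dt: None, lambda: None)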
 
Surprising results. Still pretty sure I haven't played a smooth game of Supcom yet, lol

Thanks for the effort!
 
Supreme Commander is a really underrated benchmark. Yeah, it's old now, but it's probably the most CPU- and memory-intensive game there is once you're in the late game with all those AIs churning away and spamming units.

Another thing to look at is the sim rate of the game. I don't remember how you pull that up, but since the simulation is decoupled from the framerate, there's a separate number for that.

The *original* Supreme Commander was, in fact, one of the few games that was quad-core-aware *and* x64-aware out of the gate.

I knew firsthand that it leveraged multiple cores from the changes I saw when I went from a P4 to a Celeron DC (E1200); however, I got knocked off my pins again when going from an E3400 to a Q6600.

I actually had trouble swallowing how large the performance gain was, because it defied all logic.

Yes; I added more cores. Yes, there was more cache available per core. However, the core speed dropped. Performance should have stayed flat or dropped.

It did neither.

Updating it with the full patch set and the Forged Alliance updates merely made the affinity all the more obvious - even with as old a quad as Kentsfield.

Remember how I've been saying that Ivy Bridge (and specifically i5-3570K) costs no more (in absolute terms) than Q6600 new did? Here are two differences between Q6600 and i5-3570K:

1. Yes - i5-3570K has a smaller cache overall than Q6600 (6144 KB for IB vs. 8192 KB for Kentsfield); however

2. i5-3570K is clocked faster stock (3.4 GHz) vs. Q6600 (2.4 GHz). Even assuming NO change in core efficiency, Ivy Bridge should beat Kentsfield (stock vs. stock - no other changes). Naturally, the idea that Ivy Bridge is not as efficient as Kentsfield on a per-core basis is laughable.

If anything, the inefficiencies of the LGA775 MCH held the Q-series back - which the Core i-series (from first generation forward) have been proving with a jackhammer.
 
What happens when you test with the graphics settings you actually use when playing?
 
Nice test. The 3570K results look strange to me. Going from 1374 MHz to 1854 MHz has almost no impact on speed, and then going to 2133 MHz gives a huge jump.

One possible explanation is that the CPU is the bottleneck until you get to 2133 MHz - any other thoughts?
 
What happens when you test with the graphics settings you actually use when playing?

I agree with this question -- if I was going to critique the OP, I'd say that you complained about synthetic benchmarks and then made yourself a synthetic benchmark.

You've shown that there is a difference in RAM speeds -- but I think the real question is whether or not that difference matters at real-world settings.
 
What happens when you test with the graphics settings you actually use when playing?

I used the lowest graphical settings as a control on the experiment. My experience with Sup Com is that the effect of graphics settings on game speed is negligible. As long as you have a decent graphics card from the last three years, everything else being equal, it doesn't matter.

IPC, memory speed and available memory size are what get Sup Com moving. If you run out of RAM, which is easy to do in Sup Com with only 2 GB allocated, the game simply shuts down. That's why I use a 4 GB flag on most of my games. I've played online with people using laptops, and you can tell who has the fast computers and who doesn't using the ren_shownetworkstats command.

If seven players' available game speed is +1 and the eighth player's game speed is -4 because they have a weak system, guess how fast the game runs online in real time?
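
That's lockstep networking in a nutshell: every client has to finish the same sim tick before anyone advances, so the effective speed is the minimum across all peers. A toy illustration in Python:

Code:
# Seven fast rigs at +1 and one weak laptop at -4
peer_speeds = [+1, +1, +1, +1, +1, +1, +1, -4]
# A lockstep sim can only advance as fast as its slowest peer
print(f"effective online game speed: {min(peer_speeds):+d}")  # prints -4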

I agree with this question -- if I was going to critique the OP, I'd say that you complained about synthetic benchmarks and then made yourself a synthetic benchmark.

You've shown that there is a difference in RAM speeds -- but I think the real question is whether or not that difference matters at real-world settings.


I've only seen one review site ever run a CPU/RAM-dependent gaming benchmark like this; [H]ard came close to this kind of evaluation years ago. That's simply because there aren't very many hardcore RTS gamers. There might be a few hundred who still play Sup Com online regularly. I'm not talking about StarCraft; I'm talking about five thousand and more units in play. As far as I know, only Sins of a Solar Empire and Sup Com employ this many units. And this isn't a synthetic benchmark - it's a real-time, in-game benchmark that I measured independently of the game using a stopwatch. I got the idea from Hardware Canucks, as they are the only ones to do it, and they haven't updated their chart in years.

Unfortunately Sins only really uses one core of a quad-core CPU, so Sup Com, with its support for four cores, is just about the only game I know of where RAM speed can be measured with tangible effects in an RTS. Perhaps it can be measured in StarCraft too; I don't play it, so I don't know.

It would be interesting to see if there are measurable effects in other games, like Skyrim, where perhaps walking into an area of the map with many NPCs causes lag.

Something like that might be hard to quantify. And in all reality, the real-time difference between slower RAM and fast RAM in my test is hardly noticeable - I had to speed up the demo to see the difference.

But I upgraded to X79 for quad channel and bought Corsair Dominators just the same.:D
 
Here is the X79 update. Again, i7 3820 @ 4.3 GHz, RAM @ 2133 MHz.

As you can see, Ivy Z77 beats quad-channel Sandy X79, but X79 is still very good. I can only hope the Ivy-E CPUs will be out soon, but I hear they won't be out until Q3 2013 - later than Haswell. :confused:

In-game time | X79 i7 3820 @ 4.3 GHz, 2133 MHz | Z77 i5 3570K @ 4.3 GHz, 2133 MHz
7:00         | 1:08                            | 1:06
10:00        | 2:07                            | 2:01
12:30        | 3:18                            | 3:12
17:00        | 6:32                            | 6:05
20:00        | 9:28                            | 8:42
30:00        | 22:20                           | 20:10
 
I want to share your replay on the Forged Alliance Forever forum to get as many benchmarks as possible for different processors and configurations. And I also want to run it myself.

I'm building a new gaming rig, and I'd like to see what works best for the money. Since there are hardly any simspeed benchmarks for modern processors, I'd like to use your replay, so we can compare the new data with your data. If you want, I will share results from other configurations here on the forum.
 
Might have to give this a try on a couple random RTS games. Sounds like a decent way to benchmark. Would creating a custom match and seeing how many units I can dump on the map be a good test? I might be limited by my HD 6750 though; I'm sure to hit a bottleneck somewhere.


Posted from Hardforum.com App for Android
 
Might have to give this a try on a couple random RTS games. Sounds like a decent way to benchmark. Would creating a custom match and seeing how many units I can dump on the map be a good test? I might be limited by my HD 6750 though; I'm sure to hit a bottleneck somewhere.


Posted from Hardforum.com App for Android

It's why even the original Supreme Commander (as long as you have the entire patch set) is still relevant if you have ANY multi-core CPU. Few other games take advantage of a multi-core CPU (regardless of who made it OR its age) - and Supreme Commander is how old?

I don't see the HD 6750 as being any sort of holdback; my HD 5450 largely isn't (not in single-player, anyway) - but that's because Supreme Commander is still a DX9c title, taking exactly NO advantage of DX11 whatsoever. (Where large RAM loadouts may prove an advantage is in MP - especially when you have at least two sides with large numbers of units - that's where large amounts of bandwidth would be advantageous.)
 
So the different CPU has nothing to contribute to it?

Comparing the 2500K at 1648 MHz to the 3570K at 1648 MHz, the difference was between 24:14 and 23:25 - purely from the different CPU.

At 1374 MHz it was 26:50 to 23:53.

It seems the CPU architecture has more to contribute than the RAM speed itself, but the RAM speed does affect the performance within each architecture.

Would be interesting to see what the multiplier/bus speed settings were during these tests, or if the OP just increased the memory frequency ratio.
 
I want to share your replay on the Forged Alliance Forever forum to get as many benchmarks as possible for different processors and configurations. And I also want to run it myself.

I'm building a new gaming rig, and I'd like to see what works best for the money. Since there are hardly any simspeed benchmarks for modern processors, I'd like to use your replay, so we can compare the new data with your data. If you want, I will share results from other configurations here on the forum.

Nice that the SupCom community has taken an interest in my test. My main rig is down for maintenance right now. I'm 99 percent sure I have the saved demo I created ... somewhere on it.

As soon as I locate it, I'll be in touch. Share the data as you will. :)
 
I've got 64 GB of 1866 MHz RAM on an X79 quad-channel setup. I never notice any difference when I run Cinebench, Maya, or 3ds Max benchmarks - if I overclock, all render times are the same. Most games don't change in performance either.
 
Hello guys,
although this thread is quite old and the OP may not answer me, I'm going to ask anyway.
I also want to get a new processor, and it would be very nice if you could give me the replay or publish it so I could compare my own processor's performance with the Intels you posted, or with my brother's Phenom II X4. This kind of benchmark convinced me, because SupCom is still one of the most CPU-dependent games I know ;)

I hope you see this message and can send me the replay/file.
Thanks for your effort :)
 
Does Supreme Commander 2 work the same as the original? I tried to figure out what I could get with faster RAM in BF4, but it's hard to replicate each run.
 
Hardware will never catch up with SupCom; we will just keep increasing display real estate and playing larger games until we have to build a Dyson sphere around the sun to power it all.
 
I dunno what you guys are talking about. Over at FAF we've even upped the unit cap to 1500 each and 16 players max, because of consistently good performance.

There are those who run it on an Atom or some shit and ruin the game for everyone else, but say you have everyone on an i5 or above with a wired Ethernet connection - you can expect the game to never drop below normal game speed for the whole match.

Hardware has more than caught up with SupCom FA. It may still multi-core very poorly, but it's not at all difficult to get good performance out of it nowadays.
 
but it's not at all difficult to get good performance out of it nowadays
Of course it isn't difficult nowadays, because everything in this thread you resurrected for no good reason is positively ancient.

ThreadNecro.gif
 