Which is faster, 9x333 or 8x375 on a Q6600? Results inside

graysky · Gawd · Joined: May 6, 2007 · Messages: 620
Which is the better overclock?

Good question. Most people believe a higher FSB with a lower multiplier is better, since it maximizes bandwidth on the FSB. Or is a lower bus speed with a higher multiplier better? Or is there no difference at all? I looked at three different settings on my Q6600:

9x333 = 3.0 GHz (DRAM was 667 MHz)
8x375 = 3.0 GHz (DRAM was 750 MHz)
7x428 = 3.0 GHz (DRAM was 856 MHz)

The DRAM:CPU ratio was 1:1 for each test, and the DRAM voltage and timings were held constant: 2.25 V with timings of 4-4-4-12-4-20-10-10-10-11.
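For reference, the DRAM figures above follow directly from the 1:1 ratio: DDR2 transfers twice per bus clock. A quick sketch of that arithmetic (mine, not from the original post):
Code:
# DDR2 effective speed at a 1:1 FSB:DRAM ratio is twice the FSB clock.
for fsb_mhz in (333, 375, 428):
    print(f"FSB {fsb_mhz} MHz -> DRAM {fsb_mhz * 2} MHz effective")
# Note: "333" is really 333.33 MHz, hence the 667 figure above.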

After running the same experiments at each of these settings, I concluded that there is no difference for real-world applications. If you use a synthetic benchmark like Sandra, you will see faster memory reads/writes, etc. with the higher FSB values -- so what? These high FSB settings are great if all you do with your machine is run synthetic benchmarks. But the higher FSB values come at the cost of higher board voltages, which equate to higher temps.

I think that FSB bandwidth is simply not the bottleneck in a modern system... at least not when starting at 333 MHz. Perhaps you would see a difference starting lower. In other words, a 333 MHz FSB quad-pumped to 1333 MHz is more than sufficient for today's applications; when I increased it to 375 MHz (1500 MHz quad-pumped) I saw no real-world change, and the same was true at 428 MHz (1712 MHz quad-pumped). Don't believe me? Read this thread, in which x264.exe (a video encoder) is run at different FSB and multiplier values. Have a close look at the 3rd table in that thread and note that the FPS (frames per second) numbers are nearly identical for a chip at the same clock rate with different FSB speeds. This was found to be true of C2Q as well as C2D chips.
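For context, here is a rough back-of-the-envelope sketch (my numbers, not from the linked thread) of the theoretical peak bandwidth of the 64-bit front-side bus at each setting:
Code:
# Peak FSB bandwidth = quad-pumped clock x 8-byte (64-bit) bus width.
BUS_BYTES = 8
for fsb_mhz in (333, 375, 428):
    mb_per_s = fsb_mhz * 4 * BUS_BYTES
    print(f"{fsb_mhz} MHz FSB: ~{mb_per_s / 1000:.1f} GB/s peak")
Even the slowest setting offers roughly 10.7 GB/s of theoretical headroom, which fits the argument that the FSB is not the limiting factor here.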

You can do a similar test for yourself with applications you commonly use on your machine. Time them with a stopwatch if the application doesn't report its own benchmarks like x264 does, or use a script like the sketch below.
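If you'd rather not juggle a stopwatch, here is a minimal Python timing-harness sketch (the command shown is just a placeholder; substitute whatever you actually run):
Code:
import statistics
import subprocess
import time

# Placeholder command: swap in the application you want to benchmark.
CMD = ["lame", "-V", "2", "--vbr-new", "test.wav"]
RUNS = 5

times = []
for _ in range(RUNS):
    start = time.perf_counter()
    subprocess.run(CMD, check=True, capture_output=True)
    times.append(time.perf_counter() - start)

print(f"mean {statistics.mean(times):.2f} s over {RUNS} runs "
      f"(stdev {statistics.stdev(times):.2f} s)")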

Some "Real-World" Application Based Tests

Three different 3.0 GHz settings on a Q6600 system were tested with several apps: LAME, Super Pi, x264, WinRAR, and the trial version of Photoshop CS3. Here are the details:

Test O/C 1: 9x333 = 3.0 GHz
[screenshot: benchmark results at 9x333]


Test O/C 2: 8x375 = 3.0 GHz
[screenshot: benchmark results at 8x375]


Test O/C 3: 7x428 = 3.0 GHz
[screenshot: benchmark results at 7x428]


Result: I could not measure a difference between an FSB of 333 MHz, 375 MHz, or 428 MHz using these application-based, "real-world" benchmarks.

Since 428 MHz is about 28% faster than 333 MHz, you'd think that if the FSB were indeed the bottleneck, the higher values would have given faster results. I believe that the bottleneck for most apps is the hard drive.

Description of Experiments and Raw Data

LAME version 3.97 – Encoded the same test file (an approximately 60 MB WAV) with these command-line options:
Code:
lame -V 2 --vbr-new test.wav
(which is equivalent to the old --alt-preset fast standard) a total of 10 times, and averaged the play/CPU data as the benchmark.

Super Pi version 1.1 – Ran both the 1M and 2M tests and used the reported total calculation time in seconds as the benchmark.

x264 version 0.54.620 – Ran a 2-pass encode on the same MPEG-2 (480x480 DVD source) file twice and averaged the FPS1 and FPS2 numbers as the benchmark. In case you're wondering, here are the command-line options for this encode, pass 1:
Code:
x264 --pass 1 --bitrate 1000 --stats "C:\work\test-NEW.stats" --bframes 3 --b-pyramid --direct auto --subme 1 --analyse none --vbv-maxrate 25000 --me dia --merange 12 --threads auto --thread-input --progress --no-psnr --no-ssim --output NUL "C:\work\test-NEW.avs"

And for pass 2:
Code:
x264 --pass 2 --bitrate 1000 --stats "C:\work\test-NEW.stats" --ref 3 --bframes 3 --b-pyramid --weightb --direct auto --subme 6 --trellis 1 --analyse all  --8x8dct --vbv-maxrate 25000 --me umh --merange 12 --threads auto --thread-input --progress --no-psnr --no-ssim --output "C:\work\test-NEW.264" "C:\work\test-NEW.avs"

The input avisynth script was:
Code:
global MeGUI_darx = 4
global MeGUI_dary = 3
DGDecode_mpeg2source("C:\work\test-new.d2v")
AssumeTFF()
Telecide(guide=1,post=2,vthresh=35) # IVTC
Decimate(quality=3) # remove dup. frames
crop( 2, 0, -10, -4)
Spline36Resize(640,480) # Spline36 (Neutral)

RAR version 2.63 – Had rar run my standard backup batch file, which generated about 0.98 GB of rars (1,896 files in total). Here is the command line I used:
Code:
rar a -u -m0 -md2048 -v51200 -rv5 -msjpg;mp3;tif;avi;zip;rar;gpg;jpg  "e:\Backups\Backup.rar" @list.txt
where list.txt is a list of all the dirs I want backed up. I timed how long it took to complete with a stopwatch. I ran the backup twice and averaged the times as the benchmark.

Trial of Photoshop CS3 – I used the batch function in PS CS3 to bicubic-resize images from 10.1 MP down to 0.7 MP (3872x2592 --> 1024x685), then applied an Unsharp Mask (60%, 0.8 px radius, threshold 12), and finally saved as a quality-8 JPG. In total, 57 JPG files were used in the batch. I timed two complete runs and averaged them as the benchmark.

Here are the raw data if you care to see them:
[table: raw benchmark data]
 
In the interest of overkill, I just completed the same benchmarks @ 7x428 (edited into the first post of the thread). Results are the same: no benefit from an even higher FSB.
 
Bench with something that actually uses a lot of RAM at once and not a lot of HDD, and you will see a difference.

Why only the 1M and 2M on the Pi program? Test it with the 32M setting and see what happens.

I have an arbitrary-precision calculator that I made that relies very heavily on memory bandwidth... it can use whatever length of numbers you want.

Its speed scales very clearly with memory bandwidth.

It is not memory-efficient like Mathematica, and nowhere near as fast.

I have done test calculations that use over 800 MB of RAM... things like that actually rely on memory bandwidth and not just the processor's L1 and L2 caches.
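Something like this crude NumPy sketch (not that poster's calculator; just an illustration, assuming an ~800 MB array that streams from RAM well past the Q6600's 2x4 MB L2) shows the same effect:
Code:
import time
import numpy as np

# ~800 MB of doubles -- far larger than the Q6600's 2x4 MB L2 cache,
# so summing it has to stream from main memory.
data = np.ones(100_000_000)

start = time.perf_counter()
total = data.sum()  # bandwidth-bound pass over the whole array
elapsed = time.perf_counter() - start

print(f"read ~{data.nbytes / 1e9:.1f} GB in {elapsed:.2f} s "
      f"(~{data.nbytes / elapsed / 1e9:.1f} GB/s)")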
 
In general I agree with your observations; in most applications you would never notice the difference.

But consider this: there is about a 0.3 second difference between the averages of the fastest and slowest LAME results. (It's super complicated, so I have just generalized the math and facts below; one could do a PhD dissertation on this if one wanted.)

At 3 GHz, 0.3 seconds is about 900 million clock cycles, and the C2Ds can execute about 5 instructions per clock, for a possible 4.5 billion instructions processed in 0.3 s. In some applications (none you will find us running, but they exist) that's huge.
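The arithmetic, as a quick sketch:
Code:
# 0.3 s at 3 GHz
cycles = 3e9 * 0.3             # 900 million clock cycles
instructions = cycles * 5      # ~5 per clock -> up to 4.5 billion instructions
print(f"{cycles:,.0f} cycles, up to {instructions:,.0f} instructions")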

People spend tens of thousands of bucks making a dragster go 0.1 second faster; by that standard we are wimps. :D
 
In general I agree with your observations; in most applications you would never notice the difference.
With an FX57, Microsoft Word at 200 x 15 isn't noticeably slower than at 300 x 10, but SuperPi (1k) is 6+ seconds slower... that's huge!
 
Good thread! I've been wondering about this, as I'm at 9x333 at the moment. I'm assuming this would have no noticeable real-world effect on gaming either?
 
If this proves anything, it's that it's not worth running high FSBs and stressing your motherboard. If you want high RAM speeds, set a divider like 5:6 or 4:5 and keep your FSB low and your multiplier at max.
 
I'd be curious about SuperPi run at 32M. This is very, very interesting though... thank you for spending the time to do it.
 