Nvidia Forceware - test settings button - a dependable GPU max-overclock detector?

Josh_Hayes

Limp Gawd
Joined
Apr 10, 2002
Messages
277
I was wondering what the consensus was on the Test Settings button in the Clock Frequency Settings section of the Forceware drivers. Can it be used to accurately gauge a safe max overclock? For example, on my BFG 6800 GT I can get 415/1180, but even one notch further results in a failed test. I see no artifacts or glitches at these settings in any games. Should I push my luck or be happy with my current OC?

-Josh
 
Be very happy with your current OC. You're getting a $500 card for $400; nothing to be disappointed about.

As for the gauge itself, I find it a little off sometimes. I think it's mainly based on heat and not the actual limit, seeing as it tells me I can only do 1150 max on the RAM, but with RivaTuner I can hit the 1200 mark no problem.

Just waiting for my NvSilencer. :)
 
Josh_Hayes said:
I was wondering what the consensus was on the Test Settings button in the Clock Frequency Settings section of the Forceware drivers. Can it be used to accurately gauge a safe max overclock? For example, on my BFG 6800 GT I can get 415/1180, but even one notch further results in a failed test. I see no artifacts or glitches at these settings in any games. Should I push my luck or be happy with my current OC?

-Josh

Whatever you get from the autodetect isn't necessarily accurate anyway. Mine autodetects to 420, but if I carefully run test 4 of 3DMark03 three times in a row, I can start to see artifacts at that setting on the turtle shell (and only the turtle shell; look for flickering textures starting on the second or third run). I had to move the core to 410-415 to eliminate artifacts completely in THAT TEST ONLY. 420 still runs everything else fine, finishes 3DMark, etc., and I'm sure it's 'safe', but it's not artifact free.

You have to do this sort of testing to find a truly artifact-free overclock, and autodetect can't do it for you. Looping test 4 is as good a test as I've found, and I highly recommend it for testing your final overclock.

Btw, of course you should be happy: your overclock is above a standard Ultra's speeds, and you paid 100 bucks less! :)
 
oozish said:
Whatever you get from the autodetect isn't necessarily accurate anyway. Mine autodetects to 420, but if I carefully run test 4 of 3DMark03 three times in a row, I can start to see artifacts at that setting on the turtle shell (and only the turtle shell; look for flickering textures starting on the second or third run). I had to move the core to 410-415 to eliminate artifacts completely in THAT TEST ONLY. 420 still runs everything else fine, finishes 3DMark, etc., and I'm sure it's 'safe', but it's not artifact free.

You have to do this sort of testing to find a truly artifact-free overclock, and autodetect can't do it for you. Looping test 4 is as good a test as I've found, and I highly recommend it for testing your final overclock.

Btw, of course you should be happy: your overclock is above a standard Ultra's speeds, and you paid 100 bucks less! :)

I'm not solely relying on the Detect Optimal Frequencies button. What I did was detect the optimal speed, chose the highest of three runs, then increased the slider gradually until it couldn't pass the Test Settings check, and finally came back down until it passed every time. I then repeated the same steps for the memory. Is this the right way to do it?

Afterthought: I started playing Enemy Territory with my max overclock of 415/1180 and the graphics look fine, but my computer keeps locking up with a sound stuck in the buffer repeating infinitely (almost like the old SB Live bug on VIA chipsets). I have an SB Audigy 2. What would cause this? Am I running short on power because my card lacks the Ultra's secondary power plug? If I run at default speeds, I get no lockups.
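For what it's worth, the step-up/back-off search described above boils down to a simple loop. This is just a sketch under assumptions: `passes_test` is a hypothetical stand-in for the driver's Test Settings check (which in reality applies the clock and runs its own internal test), and the 415 MHz limit in the stub is invented purely so the example runs.

```python
# Sketch of the step-up/back-off search: raise the slider one notch at a
# time until the test fails, then back off until it passes every time.
# `passes_test` is a hypothetical stand-in for Forceware's Test Settings
# check; `step` is the assumed slider granularity in MHz.

def find_max_stable(start_mhz, passes_test, step=5, confirm_runs=3):
    clock = start_mhz
    # step up one notch at a time until the next notch fails
    while passes_test(clock + step):
        clock += step
    # back off until the test passes every single run
    while not all(passes_test(clock) for _ in range(confirm_runs)):
        clock -= step
    return clock

# stub: pretend the core is stable up to 415 MHz
print(find_max_stable(400, lambda mhz: mhz <= 415))  # -> 415
```

The repeated confirmation runs at the end mirror the "comes back down until it passes every time" part; a single pass can be a fluke near the edge.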
 
Yes, the turtle in test 4 of 3DMark does seem to be the tell-tale for artefacts. I'm curious though, how do you know the flickering on the turtle is GPU or RAM related? Are artefacts from RAM different to artefacts from the GPU? How do you tell the difference?
 
tamislan said:
Yes, the turtle in test 4 of 3DMark does seem to be the tell-tale for artefacts. I'm curious though, how do you know the flickering on the turtle is GPU or RAM related? Are artefacts from RAM different to artefacts from the GPU? How do you tell the difference?

Set your memory to a safe setting (read: stock) that you know will run time and time again. Then start raising your core until you see artifacts or it just crashes. At that point, set your core to a safe setting (read: stock) and repeat the process for the memory, except memory usually won't make the test crash; it will just artifact. Once you have those two values, run the test multiple times at those speeds just to make sure all is well. :)
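That two-pass isolation (hold one clock at stock while sweeping the other, so artifacts can be blamed on the swept domain) can be sketched as below. Everything here is hypothetical: `stable` is a stub standing in for looping 3DMark03 test 4 and watching for artifacts, and the stock clocks and limits are invented purely so the example runs.

```python
# Two-pass isolation: sweep one clock domain while the other stays at
# stock, so any artifacts can be attributed to the swept domain.

STOCK = {"core": 350, "mem": 1000}   # hypothetical stock clocks (MHz)
LIMIT = {"core": 415, "mem": 1180}   # hypothetical true limits (stub only)

def stable(clocks):
    # stub for "loop 3DMark03 test 4 and watch for artifacts or a crash"
    return all(clocks[d] <= LIMIT[d] for d in clocks)

def sweep(domain, step=5):
    """Raise one domain a notch at a time while the other holds stock."""
    clocks = dict(STOCK)
    while stable({**clocks, domain: clocks[domain] + step}):
        clocks[domain] += step
    return clocks[domain]

core_max = sweep("core")   # memory held at stock
mem_max = sweep("mem")     # core held at stock
print(core_max, mem_max)   # -> 415 1180
```

The final "run the test multiple times at those speeds" step would simply repeat `stable()` at the two found values before trusting them.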
 
Thanks, but I understand that. I was simply wondering what the difference in artefacts is between clocking the two.
 
tamislan said:
Thanks, but I understand that. I was simply wondering what the difference in artefacts is between clocking the two.

Try it the way I said and you'll find out first hand. ;)

Generally though, core will have specks or just make it crash.

Memory will artifact - random polys being thrown about - or have some random specks as well.
 
That's what I thought. However, when I see those poly problems on the turtle and clock back the core (and only the core), they go away. I'm wondering whether GPU heat spilling over from my NV Silencer 5's heatsink onto the RAM is causing the flickering polys on the turtle. I'm contemplating the voltage mod for my GT...
 
If it's heat related, more voltage probably won't help. I see some people using a decent HSF combo on the core and just air cooling the memory with some ram sinks and a fan.

Seems to keep the heat from bleeding over onto the memory as DDR3 generally runs cool anyways.
 
Yeah, I think my RAM is the problem. Oh well, I think I'll leave it alone. Thanks for the input.
 