Effects of voltmodding on lifetime?

Nazo

All the reports I've seen of voltmodding seem to involve the GT and the Ultra. How much is running at 1.4V going to decrease the lifetime of a chip that wasn't officially designed to go in a GT or an Ultra? It seems to be doing amazing things so far, so I'm inclined to think I got a really good chip, but I'm still a little worried I could be tearing this thing up... Is it really not going to have a significant effect on the card, even in the case of a vanilla 6800? I just want to be certain... (Call me paranoid, but this thing wasn't cheap.)
 
Most of the people I know who voltmodded their GT with a 1.4V BIOS had their 6800 GT die after two or three months at most.
Though some GTs work fine.
 
Crap, are you serious? People made it sound like you could still count on longer than THAT... Well, until I hear some better news, I'm going back to factory. *sighs and waves goodbye to his 5060 3dMark05 score* I mean, this isn't even a GT...
 
Well, I'd heard claims of 1.5V causing problems, but nothing specific about 1.4V. The latter is the stock voltage for an Ultra and shouldn't cause a GT too many problems, I'd have thought.

As with everything, it's the risk you take....

EDIT - sorry, ignore me. I thought you had a GT for some reason.
 
Well, at stock voltage for an NU this GPU goes well over 350MHz with all pipelines enabled and everything, so I have to wonder. But I'd rather find out for certain before I risk giving this thing a 2-3 month lifetime... I definitely need a lot more than 2-3 months... More like years...
 
Ok, what about this: the stock voltage for an NU is 1.2V according to someone; is this correct? If so, what would be the effect on lifetime if I ever found a way to flash it to 1.3V? Unfortunately, I don't know how yet, since the flash program (OmniExtreme/NiBiToR) refuses to list 1.3V, but if I find a way in the future, should I even try it? Or will that still hurt my chances of having a video card a few years from now?
 
Shameless self bumpage... Sorry, but it's pages away now. I'm really considering finding a way to get 1.3V on there, but I'm afraid of tearing up my card. Cooling shouldn't be an issue so long as it doesn't require water cooling or something. I never saw 60C even on the stock cooler (ok, ok, Leadtek has a good stock heatsink, probably the best in this range of cards as far as stock goes), and I'm using the NV Silencer 5 now anyway.

EDIT: NiBiToR showed 1.3V when I tried the BIOS for the GT, so I thought I'd just use that and set the memory to 400x2 for safety's sake, but when I forced it to flash and rebooted, I got SEVERE artifacting. I'm wondering if the memory edit didn't work... The LeadTek BIOS for the TDH400 GT sets the memory to 500x2 by default. It can't possibly be the voltage, because 1.4V was stable (up to a MUCH higher GPU speed than the 350MHz I left it set to...). The only other possibility I can imagine is that they've found a way to make it detect when you have an NU? I mean, there's the memory difference, but supposedly that isn't supposed to matter.
 
Curious if you ever got 1.3v to the card. 6800LE cards are identical to 6800 vanilla, just have 8 pipes instead of 12. Using Nibitor, you can set voltage to 1.3v. A little hesitant to flash a 6800 w/ an LE bios tho.
 
1.4V should be fine with OK cooling... as long as your temps aren't too high. 1.5V on air might be overdoing it, but with an NV Silencer, which cools a bit better than the Ultra's stock cooling, 1.5V is possible.
 
Stock voltage for an NU is 1.2 in 3D, 1.1 in 2D, unless my memory is playing jokes on me (which it sometimes does -- but no one is laughing...)

I'm also hesitant about the LE BIOS idea. Still, flashing downward is PROBABLY less likely to do damage of any sort than flashing upward. More likely it would just hurt performance (maybe slower memory timings or something?). Theoretically the pipelines are going to stay unmasked -- though I guess RivaTuner could help there if they aren't.

Now, on the voltages, I'm getting conflicting responses here. Is 1.4 safe or not? Can someone give me something definitive? The GPU was running cool -- I think less than 50C under load even with 1.4 (it seems everyone is saying their LeadTeks run cool, but regardless, mine never sees even the 60s under the worst conditions). Thing is, like people are saying, the Ultra doesn't just have better cooling, it also has better voltage handling (regulation, better components to handle more voltage, etc.). I'd love to see my GPU pull off 425 (or at least close enough), but it's NOT worth tearing up my card over... *sigh* I just wish I could figure out how to get the GPU to 1.3V. That's exactly what I need and may well still pull off 425, but I just haven't found a proper way of doing it so far (I think I need something to mod the GT BIOS for the memory without corrupting the whole thing, or whatever that last attempt ended up doing... If I could get that idea working, I'd be set, as the GT is the perfect starting point in everything but the memory). Like I said, I'm not entirely comfortable with the LE BIOS idea myself.

PS. Physical voltmodding is out. Too dangerous, since I always screw things up.
 
The consensus (mostly from reading about experiences with GTs) is that 1.4V kills a scary number of cards within a few months. I'm no electrical engineer, but the power-handling components on the back of a vanilla PCB and an Ultra are totally different: just a few BIG capacitors on the Ultra, lots of small caps on the vanilla, etc.
 
Yeah, that's what I thought. Well, A: it's not worth the risk, and B: like you said, it just makes sense that the Ultra is better built to handle that kind of power than even the GT (isn't the GT just an NU with better memory and GPU -- or, more accurately, the NU a GT with worse memory and GPU?)

Well, 1.3 or 1.2 as far as I'm concerned are the only choices.
 
Pretty different a$$ ends on the cards.

[attached image: comparo.jpg -- comparing the power-circuitry ends of the two cards]
 
So I wonder if 1.4V on a PCI-E 6800GT is fine then... because the Ultra and the GT have the same PCB component layout...
 
I ran my GT at 1.4 volts for two weeks before I read the horror stories of cards burning out; then I reflashed it back to 1.3.
 
I'm also using an EVGA 6800NU with a BIOS flashed to 1.5 volts.
All the pipes are unlocked (16/6), running the core at 435-440 100% stable with the memory around 870-890. I'm able to hit 12900 in 3DMark2003, and I'm currently in first place on the 2005 ORB database with a P4 and a 6800NU. I've been running this card for over 6 months cranked like this. Far Cry, Doom 3, and Half-Life 2 all run at max in-game settings with 16x anisotropic set in the driver. I'm using RivaTuner to unlock and overclock.
I have heard that the capacitors on the NUs can't handle the voltage like the GT or Ultra can.
Also, my friend has been running his card at 1.5 for at least 5 months now. Both 6800s have NV5s on them.
 
Honestly. I'm afraid of tearing up my card at 1.4. You couldn't PAY me enough to run 1.5. Well, wait a moment, give me enough to buy a new card and I'll consider it. ^_^


Does anyone have a guess as to why the GT BIOS causes such bad corruption that even the BIOS screen is unreadable? I checked the memory timings and they appear to be the same. I've upgraded to NiBiToR 2.0a, which seems to be the latest, and had no better results. Bear in mind that I set the memory and GPU well below where they run stable on stock voltage (just in case things go horribly wrong in a hot room or something).
 
Went ahead and flashed my 6800 vanilla w/ a 6800LE bios modded w/ 1.3v - seems to work well.
 
Did you use "Exact Mode" for adjusting the voltage? I noticed when I looked at the voltage tables that 1.3V is actually 1.4V on the LE BIOS I tried. I also note an inconcistancy when I try to enable the extended voltage labeling to mark the unknown voltages in the list. Moving around through tabs, menus, etc, or even disabling and reenabling the option causes the values in the table to completly jump around. The order itself actually changes. Obviously this can't be trusted.

You definitely need some sort of way to verify that voltage... I know that if we were expert enough, we could take a voltage meter to the right spot on the card, but, neither of us is and I for one would just end up shorting something fatal to the card most likely. ^_^

What I plan to do if I ever find a BIOS that works is change all settings (such as card ID/etc) to what my current card uses so that the bios is sort of converted and everything thinks I have a NU -- even the BIOS.
 
Ok, I've given up. I found a modded LeadTek 6800LE BIOS on MVKTech's forums and gave it a shot (the author even claims it has better memory timings). No errors so far. I figured anything that works for an LE will just as likely work for an NU, though I must admit my memory is worse than usual, since it can't even make it to 900MHz when overclocking to the max. NiBiToR properly shows a 1.3 option in Exact Mode. I'm still a little uneasy, because it's hard to trust this program (heck, it shows the official unmodified 6800 BIOS straight from LeadTek's site as having 1.4V in 3D, which I KNOW isn't true), but I do consider it a good sign that I actually got a HIGHER core clock with the option NiBiToR says is 1.3V. That makes sense if the 1.4 was pushing my GPU a bit too hard. I'm still a little worried, though. I wish I could find a proper way to verify for certain that I'm getting 1.3 and not 1.4 or something.

I'm also using RivaTuner to manually change the device ID from 6800LE to 6800NU. Don't know why, I just can't stand the idea of Windows thinking I have an LE. ^_^ Principle of the thing, I guess. Lol, I'm tempted to say GT, but then something might try to autodetect the clocks and start from 1GHz on the memory. Anyway, I'm still in the testing phase, so it's not official yet, but I did see a >5K score in 3DMark05 when I tried the core at 425. Going to test again once I find the actual max. I'm currently sitting at 433, which seems like a nice number to stay on, since 440 fails the test.

*crosses fingers*

I just have to verify this voltage is all... Lol, I'm paranoid, btw. If this card fries, though, I'm left with an ancient Rage Pro PCI that has trouble drawing the desktop at a decent speed.
 
Higher voltage = shorter life... simple as that. How much shorter may or may not matter to you.
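
To put a very rough number on "how much" (a back-of-the-envelope sketch only, not a reliability model): CMOS dynamic power scales roughly with voltage squared times clock, so the extra voltage mostly shows up as extra heat going through the same silicon and the same power section. Plugging in the voltages and clocks that have come up in this thread:

Code:
# Rough relative dynamic power: P is roughly proportional to V^2 * f.
# Ignores leakage (which rises even faster with voltage); relative scale only.
def relative_power(volts, mhz, base_volts=1.2, base_mhz=350):
    """Dynamic power relative to a 1.2V / 350MHz baseline."""
    return (volts / base_volts) ** 2 * (mhz / base_mhz)

for label, v, f in [("stock NU (1.2V @ 350)", 1.2, 350),
                    ("modded   (1.3V @ 433)", 1.3, 433),
                    ("modded   (1.4V @ 425)", 1.4, 425)]:
    print(f"{label}: ~{relative_power(v, f):.2f}x stock dynamic power")

That works out to roughly 45% more heat at 1.3V and 65% more at 1.4V, through a power section that was never specced for it, which lines up with the dead-GT stories.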
 
NEODARK said:
if you are afraid to bleed, GET OFF the bleeding edge ;)

That's what I'm trying to do. I'm trying to stand on the part of the edge that's dangerously close but not quite cutting you (the cutting being me without a video card that can draw a simple 2D screen faster than one pixel per year...). 1.4V is too unsafe, so I want 1.3. I think I now have it, too.


As for the effects on lifetime, I'm guessing you only read the title of the thread before saying that? I'm aware of that much. The question is more along the lines of how much, and at what point. Also bear in mind it IS cooled well enough that it has never hit 60C yet.
 
First of all, wrong thread. That belongs more appropriately somewhere like the thread about unlocking with RivaTuner.

That said, do bear in mind that not all chips can be fully unlocked. Some can't be unlocked at all. As a matter of fact, at first people were burning through cards like crazy with the original irreversible BIOS mod and just sending the card back if it caused artifacts (I suspect that trick won't work so well now that manufacturers are starting to catch on).
 
Oh.. Come on...

I have been running my XFX 6800 GT at 1.4V since 09/2004, with NO problems..
My temps are like 45 idle / 50 load.

Then what about the GS BIOS, which is 1.4V stock on their GT cards? I really don't think it is as serious as some of you are making it out to be.

It will shorten its life, but I think the reduction is negligible compared to how often all of us update our hardware.
 
OMG Nazo please do something to shorten your signature!

:D

Anyway, does the AGP voltage in the BIOS really affect the card in any way, and how?
 
Sorry, the signature is long for a reason... I'm lazy and I don't like having to repeat my hardware/software to people when asking for help with a problem or that sort of thing.

Anyway, I tried 1.4 and got 425. I tried 1.3 and got 433. So even if you don't consider the number of fried cards a valid reason not to do it, I consider it a valid reason that my chip likes 1.3 a lot better.

Mind you, I haven't had time to truly test this. It may actually not be 100% stable. I did run Doom 3 for a bit, as well as 3DMark and rthdribl, and couldn't find anything, but I only had a few hours. When I get home, I'll give it a thorough test.

Well, so far, it's looking like the LE bios trick is working wonders.
 
I'm confused, you decrease voltage to speed up and overclock?
 
tasteestuff said:
Oh.. Come on...

I have been running my XFX 6800 GT at 1.4V since 09/2004, with NO problems..
My temps are like 45 idle / 50 load.

Then what about the GS BIOS, which is 1.4V stock on their GT cards? I really don't think it is as serious as some of you are making it out to be.

It will shorten its life, but I think the reduction is negligible compared to how often all of us update our hardware.

Are you on water cooling?
 
bjornb17 said:
I'm confused, you decrease voltage to speed up and overclock?
1.3V is less than 1.4. I decreased from a previous, much higher increase; I'm still higher than stock overall. Stock NU is 1.2V. 1.4V was apparently more than the GPU liked, but 1.2 is obviously not what it WANTS either, so it still did better at 1.3.

Anyway, the thing is, there are reports of it not so much shortening the life as kind of cutting it off... As in, dead card. Within weeks, in fact. A lot of people have said this, btw, not just on here (I've been looking around). THAT is what I worry about. I mean, I have $1 in my pocket right now. If my video card dies, I'm in serious trouble...

BTW, my video card also runs around 40C idle, 50C load with an NV Silencer 5. You don't need water cooling until you hit the REALLY dangerous stuff like 1.5... Ok, maybe it's a little higher now, I haven't checked (away from home), but you see my point.
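
Since temperature keeps coming up alongside voltage: the usual back-of-the-envelope way to relate die temperature to wear is an Arrhenius-style acceleration factor. This is only a sketch with an assumed activation energy (0.7 eV is a commonly quoted ballpark, not anything measured for the NV40), so treat the number as order-of-magnitude:

Code:
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def aging_acceleration(temp_low_c, temp_high_c, ea_ev=0.7):
    """Arrhenius acceleration factor between two die temperatures (assumed Ea)."""
    t_low = temp_low_c + 273.15
    t_high = temp_high_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_low - 1.0 / t_high))

# e.g. a card loading at 60C vs one loading at 50C at the same voltage:
print(f"~{aging_acceleration(50, 60):.1f}x faster wear at 60C than at 50C")

Which is basically the old "every 10C roughly doubles the wear" rule of thumb. So good cooling genuinely buys you something, but it doesn't cancel out the voltage itself.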
 
I have one thing to say.

Before voltmod: 405MHz on the core.
After voltmod: 420MHz. I decided after about an hour that the excess heat wasn't worth it and flashed back..
Now I'm stuck at 380MHz, unless my room is cold, then I can hit 390-400 stable :(

It's a damn good thing I didn't keep it that way for too long.
 
Curious. Which card is that? The LE? NU? GT? How much heat did it get up to before it did this damage to the chip? Which voltmod did you do? 1.4? Finally, stock cooling or an NVS5 or what?

Sorry for the game of 20 questions here, I just want to establish some boundaries, basically.

I'd say try 1.3 if you tried 1.4, since it's a little safer, but it sounds like it's too late now. I imagine things would still be worse with that. Oh well, sorry. I guess if you cooled it better you could get it to run higher. Invest in an NVS5 if you haven't already, or, if you are a water cooling person, get a VGA cooler kit to add on or something (I know, independent cooling is best, but there's only so much you can do, and that would cost insane amounts). At least you may get it back up to 400, and a small bump on the core seems to make a large difference.
 
In my opinion, it isn't worth the risk.
I ran my NU @ 1.4V for a couple of months; now I'm back to 1.2V, losing 20MHz from the core (1.4V = 400, 1.2V = 380).
Frankly, the difference in real games is below negligible.

If your target is a good benchmark, just give it the 1.5V it deserves, run the damn thing, and go back to 1.2V.
That's what I do, to great effect (5603, 12809, 26385, 70232, still room for improvement in some of those :)).

For real games... well, I can live with 2 fewer fps in Doom 3...
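
For what it's worth, the arithmetic backs that up. Assuming (optimistically) that frame rate scales at best linearly with core clock, 380 to 400 is only about a 5% bump, so a couple of fps is exactly what you'd expect:

Code:
core_at_1_2v = 380  # MHz, as quoted above
core_at_1_4v = 400  # MHz, as quoted above
gain = core_at_1_4v / core_at_1_2v - 1
print(f"core clock gain: {gain:.1%}")                        # ~5.3%
print(f"60 fps becomes at best ~{60 * (1 + gain):.0f} fps")  # ~63 fps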
 
I had the voltmod for a total of 3 days before I backed off due to minor corruption in games and 2D applications etc. No idea if I effed up my card, but it seems to be OK @ 400/1100 in 3D. And I agree with the poster above: if you want uber l33t 3DMark scores, up your Vcore and get the 12,500 or 5,200 marks you want. It's not so impressive if you're just throwing away $400 to show off how large your score is, though.
 
Actually, I don't have the GT or the Ultra. And I don't like cheating on benchmarks. And, as I keep saying, I'm NOT using 1.4V now. I believe 1.3 should be safe, because this very same chip on a nearly identical PCB (the GT's) runs at 1.3V stock with a weaker cooler than what I've put on there. I've actually seen a rather noticeable difference from all the overclocking and unlocking I've done, most notably getting the option to max out EVERYTHING. Also, I'm kind of planning ahead here, in that I want to run Oblivion as well as possible on such a card, and as it is now, it's not that far beneath the overall capabilities of an Ultra at stock, at least. I know the memory is low, but the difference there is a LOT less noticeable than what happened the very second I unlocked those pipelines, and with the GPU up so high, it tends to compensate a good deal for the memory as long as I don't overdo it with something stupid like 8xFSAA, 16xAF @ 1280 or something. ^_^
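
Just to put some rough numbers behind the "not so far beneath an Ultra" part, here's a quick sketch using the usual published stock specs for the Ultra (16 pipes, 400 core, 1100 effective memory, 256-bit bus) against this card as it's currently set up (16 pipes unlocked, 433 core, memory around 850 effective). Theoretical fill rate and bandwidth are only crude proxies for game performance, so it's a ballpark, not a benchmark:

Code:
def fillrate_mpix(pipes, core_mhz):
    # Theoretical pixel fill rate in Mpixels/s (pipes x core clock).
    return pipes * core_mhz

def bandwidth_gb_s(effective_mhz, bus_bits=256):
    # Memory bandwidth in GB/s, given the effective (doubled) DDR rate.
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

cards = {
    "6800 Ultra (stock)":        (16, 400, 1100),
    "this NU (unlocked @ 1.3V)": (16, 433, 850),
}
for name, (pipes, core, mem) in cards.items():
    print(f"{name}: {fillrate_mpix(pipes, core)} Mpix/s, {bandwidth_gb_s(mem):.1f} GB/s")

On paper the core side actually comes out ahead of a stock Ultra; it's the roughly 25% memory bandwidth deficit that shows up once you pile on AA and high resolutions, which is exactly what I'm trying to avoid.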
 
Personally, I just don't see the point of risking killing your card for a few, or even ten, fps -- or worse yet, bragging rights in 3DMark :rolleyes: Doesn't make sense to me.
 
I guess my new question is: is there actually any risk to the 1.3V trick with the LE BIOS? After all, like I said earlier, the GT ships with 1.3V stock and a weaker cooler than what I'm using.

EDIT: Ok, now this is odd. I was seeing some strange stuff here and there. Nothing major, but I was worried, so I went ahead and flashed back until I've verified what to think here. Well, that much is fine. My core is at the same thing it was before without a glitch. But the tests kept failing on the memory. Now I've got the memory all the way down to 775. It starts failing around 800 or so. Any idea why the heck this is? I originally had it mostly stable at 866 (as in, getting a rare crash after hours and hours of gaming, so I stepped back to 850), and now I can't even set 800... Thing is, a GPU voltage mod shouldn't affect memory. Period... I mean, how the heck can adjusting GPU voltage even TOUCH the memory? Meh, I don't know what to think, but for the moment at least I'm stepping the memory down to 775 in the hope of keeping it stable and keeping it from tearing up. I don't understand it at all, though. I swear, I just have the WORST luck whenever it comes to memory... Even to start with, I was practically the only one whose NU (or LE too, apparently) couldn't hit 900MHz on the memory. I don't know, do you suppose the NVIDIA test built into the Coolbits thing isn't accurate, or what? I don't see how it would have changed, though...
 
I ran my PNY 6800GT at 1.4V just to see what it could do under water cooling. My temps never broke 36C under benchmark load, and it clocked up to a stable 465/1220. But after a handful of runs, benchmarks started to freeze, and now the best I can run stable is 425/1180. I went back to 1.3V and run 400/1100 on a daily basis with no problems.
 