[H] users SLI guide

like RivaTuner for OCing these days... just make sure you have the primary card selected when you go to set the clocks.. it will be on the primary card by default.. but if you want to tweak fan settings too, you need to select the cards individually, then set the fan levels for each one..
 
I tested this and Everest tells me the secondary card's clocks are at default. A GPU bench w/ Crysis tells me it's clocking down to the lowest card too. It looks like flashing's the only solution right now. :(
 
Just a few notes from an issue I discovered this weekend while I was benching my new CPU... I had noticed that my SLI performance was down; single card ran great, but in SLI things were struggling to even match the performance of single card operation.. so I killed my supposed overclocks with RivaTuner, then checked the GPU clocks in Everest and noticed that the clocks on GPU2 (eVGA plain jane 8800GTX) were at 198 core, 1188 shader domain, and 396 mem.. WTF, over? GPU1 (Asus plain non-OC'd 8800GTX) was reading 576/1350/900...

so I did some googling and saw a few other people had reported the same problems.. as it turns out, 198 and 1188 are the lowest clocks a GTX will default to in the event a clock gen fails or something is wrong with power on the card. Oh great! So I pulled the eVGA and inspected it carefully.. no burn marks near the power regs or anything.. I blew it off really well and mopped up the additional dust with some q-tips and re-installed. Same effing problem! So I did some more reading and found a thread here on [H] which said it's likely a driver issue caused by a failed clock detection attempt...
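If you want to sanity-check your own cards, the failure-fallback clocks are easy to flag automatically. Here's a toy Python helper (the 198/1188/396 values come straight from this thread; the actual readings would come from a tool like Everest):

```python
# The 8800GTX falls back to 198/1188/396 (core/shader/mem, MHz) when
# clock detection fails; flag any card reporting those values.

FALLBACK_CLOCKS = (198, 1188, 396)

def is_fallback(core, shader, mem, tolerance=2):
    """True if the reported clocks match the known failure fallback."""
    return all(abs(have - want) <= tolerance
               for have, want in zip((core, shader, mem), FALLBACK_CLOCKS))

print(is_fallback(198, 1188, 396))  # -> True  (stuck card)
print(is_fallback(576, 1350, 900))  # -> False (healthy stock GTX)
```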

well, it just so happens I was having trouble setting my overclocks (616 core, 924 mem, which sets the shader domain around 1450) with RivaTuner a month back.. with RivaTuner in SLI you have to select the primary and secondary GPU individually when setting the fan speeds, but you HAVE to select the primary GPU only when setting the clocks in SLI. If you select the secondary GPU for setting the clocks, it will say it has to reboot to detect the clocks, and when you reboot, seemingly nothing happens.. but upon that reboot it fubar'd GPU2's clocks so they defaulted to 198/1188/396... so when you go into 3D mode, the drivers set the clocks to those of the lowest GPU's clocks.. so both GPUs run at the super low clocks.. crazy!
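For reference, the sync behavior the drivers seem to use can be sketched like this (a toy Python illustration of what I observed, not actual driver code):

```python
# Toy illustration of how the SLI driver appears to pick 3D clocks:
# each GPU reports (core, shader, mem) in MHz, and the pair runs at the
# element-wise minimum, so one fallback-clocked card drags both down.

def effective_sli_clocks(gpu_clocks):
    """gpu_clocks: list of (core, shader, mem) tuples in MHz."""
    return tuple(min(clocks[i] for clocks in gpu_clocks) for i in range(3))

gpu1 = (576, 1350, 900)   # Asus 8800GTX at stock
gpu2 = (198, 1188, 396)   # eVGA stuck at failure-fallback clocks

print(effective_sli_clocks([gpu1, gpu2]))  # -> (198, 1188, 396)
```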

what I did to fix this.. after cleaning and reinstalling older and newer ForceWare drivers didn't work.. I uninstalled nHancer, I uninstalled RivaTuner, I cleaned my registry with RegSupremePro (great program btw, if you like to prune the registry and startup config like I do, I highly recommend it) in aggressive mode, and then finally I cleaned my ForceWare drivers out one last time.. then rebooted and installed the 163.75 WHQLs. Finally, the clocks were normal on both cards again! whew.. so I reinstalled RivaTuner and got my fans and overclocks back to where I like them.. everything seems fine again..

just wanted to document this in case someone else has the problem.

edit: so the major lesson here is to make SURE you've selected the primary gpu when ocing with rivatuner in SLI mode..
 
Cheers mate! :D

Futzing around w/ Riva, nHancer, etc got my 2nd GT to around 400ish MHz. Removing all that crap, the driver, running Driver Cleaner in safe mode, and reinstalling the driver fixed the issue.

FWIW, Everest seems to detect G92 88GT clocks, temps, fan speeds better than riva or ntune. Hopefully some day soon I can look forward to soft oc on my GT's. :( I'm assuming this can be done w/ the GTX?
 
Soft oc? you mean volt mod or overclock?

BTW- yes, everest rulez!! love that prog.
 
I just recently built a new PC (in sig) and I can't seem to get SLI to work. I've tried the latest stable and beta drivers but nothing works. The first time I tried using SLI the screen went perma-black; fixed that with Driver Cleaner. Now the problem is when I enable SLI everything works fine and dandy till I load up something like Crysis. On the EA/Crytek screens it shows what looks like static, and then when I get to the actual game the drivers crash, thus crashing the game. I know it's not my system overheating- CPU 26c, MCP 26c, GPU1 37c, GPU2 38c. Is this a known issue with Vista 64-bit or is my system not receiving enough power?
 
hmmm... that's interesting.. first off, if the cards weren't getting enough power they would be making alarm sounds.. I think. I haven't heard this myself, but apparently they have an audible alert for when the power dips. So have you tried disabling SLI in the drivers and trying Crysis then? See what happens and make sure the game install is not corrupt or anything.. if the game works in single card mode, then try forcing SLI rendering mode AFR2 (Alternate Frame Rendering 2) in the game profiles section for Crysis... have you seen that section of the drivers? I think it's in the 3D settings menu, which defaults to a "global" tab; then switch to the games tab and find Crysis.. you can force settings there.. I was reading AFR2 was better for Crysis..

let us know what happens.

PS: nice rig!
 
I can play any game fine without SLI, though I would love to be able to play with it on (especially with Crysis, my resolution makes the game chug).
 
ok, what I got from your first message was that most games do work ok in SLI since you cleaned the drivers, except for Crysis... is that not the case? regardless, I wanted to see if when you disable SLI Crysis for sure worked ok.. So maybe try forcing Crysis to run in AFR2 render mode like I suggested above...

also, just a couple of quick questions, (stuff to rule out) what DVI port are you using for the monitor? you should be using DVI port 1 on gpu 1, so in an ATX config facing the back of the pc, the top card and left-hand DVI port. Is the SLI bridge on the card pair? Have you tried re-seating your video cards? Don't need to pull them out, just a gentle reseat is all.. especially check gpu 2 (card #2) I know with WC it's a pain, but a partially inserted card #2 could cause some strange stuff like this.. anyways just make sure they are in their slots all the way.
 
No games work in SLI atm and I have the DVI cable plugged into the right port. Tried reseating the cards but that still didn't do anything.
 
Well, hrm.. does the same thing happen each time you try and play a game in SLI mode? do you know for a fact that the secondary video card works? did you test it before you put all the watercooling gear on?

I guess it wouldn't hurt to try another driver cleaning and reinstall.. try the 163.75 whql drivers just for grins.. but it kind of sounds like your gpu 2 might be flaky.. maybe.. wild ass guess.. I hope for your sake I am wrong though..

ps: I have never tried this, but it might work to validate the second card at least in 2D mode.. turn SLI off in the drivers, then turn the machine off, remove the SLI bridge and turn the machine on.. try and run with both video cards but in multi-monitor mode.. then plug the monitor into GPU2's DVI ports and see if you can get signal.. that is, if you haven't validated it separately..
 
The 2nd GPU shouldn't be at fault because it's the same GPU I've been using since the cards came out. If it's screwed it happened after I applied the waterblock, but I was being ultra careful since it's a $500+ card......
 
ok, well that validates the card should be good.. so as I asked before, does the same thing happen each time you try and play a game with SLI enabled?? like static on the screen then a driver failure? artifacts on the screen and the like can be hard to pin down, but in the past I have seen that from bad video memory, or overheating video memory.. did you put memory sinks on your cards with the waterblocks? or are they full face blocks?
 
The same thing happens almost every time I play. Sometimes I get the video corruption on the opening credits/intro videos and sometimes everything looks perfect. But EVERY TIME, within about 30secs of getting to the actual game, the drivers crash followed by my game crashing.

As far as I can tell the video card memory is making full contact with the waterblock (EK High Performance Full Coverage 8800GTX Acetal water block). I used Arctic Céramique so it's not a case of the thermal grease crossing the circuits. I could take another look at it on all sides, but that would require me to take apart my video card loop. It's an easy enough process since I had to do it a few times while building it, but if I have to cut tubing I'm screwed (just had enough). Not too fond of doing leak tests either (was worth it though, cause my first few tries leaked like a mofo)


edit-

BTW all the games I'm trying use DX10 in some shape or form
 
Soft oc? you mean volt mod or overclock?
I meant ocing both cards w/ Riva. Didn't work on the newer GT's, but flashing them worked just fine.

...I'm trying use DX10 in some shape or form
I gave up on Vista a few weeks ago...never been happier. Crysis can't be run all that well in Very High and most of those effects can be done in XP anyway.
 
Jodiuh - ok, gotcha

DayHawk - wow.. well, this is sort of why I stopped with the WC on my video cards.. makes it a PITA to sort out hardware issues, well, sometimes.. anyways.. how about this, can you DL ATITool and run its stability validation test to see if your cards will work in 3D mode in SLI sans DX10... let's just see if they work in 3D mode paired up at all..
 
Here's the update to the current situation. Whenever I enabled SLI, sometimes I would get a BSOD with the error code 0x00000124. Fixed this problem with a little workaround. Alienware discovered that with C1E Enhanced Halt State enabled in BIOS, sometimes soundcards would cause the system to become unstable and crash. That was an easy enough fix. Then I re-enabled SLI on the 169.12 beta drivers and ran ATITool. Within about 5secs the little screen that scans for artifacts turned yellow, and about 1 sec later the drivers crashed. I reinstalled the 163.75 drivers and attempted to run ATITool, but upon starting it up I receive an error about it not being able to load the kernel drivers. Tried uninstalling a few times, but it didn't make the error go away. I went ahead and tried the program again, ignoring the error, and all the buttons but scan for artifacts/show 3D are greyed out. The show 3D window shows it going at 1k+ fps while the scan for artifacts window has been running a while without any errors. The only problem is I don't know if it's even working like it's supposed to, and if it's working, is it even using my SLI?.....
 
That sounds like you're still on Vista x64, no? I experienced a ridiculous amount of errors ranging from the raid driver, the 4GB issue, nvdkmn...or whatever it's called (the new nv4dis_dll), 1/2 my programs wouldn't work, windows update that disables turning off driver signing requirements, etc. The fix was XP.

MS did release an MGPU patch, but you have to request it. I'm sure someone's got it up on rapidshare tho. Look around the forums.
 
yeah - seems like your issues are system related... drivers and or OS are not happy with each other in SLI mode.. hrm... well, if you have SLI enabled in the drivers you should be in SLI mode when doing anything 3d.. even in windowed mode.. and all those crazy error messages and driver crashes seem really strange but seem like they could indeed all just be vista, as jodiuh is saying... I haven't used vista at all yet else I might be of more help in that dept.. lol, I know, pretty sad.. but I just don't have the time to mess with it when xp works so well for me atm..
 
I might try installing XP 64-bit again. I prefer not to because Vista64 with 8GB makes everything run smooth as butter with SuperFetch. Either way I'm determined to find out what the problem is, cause otherwise I pissed $500 away for nothing......
 
Well, looks like my dreams of getting SLI to work properly died while I was sleeping last night....... Woke up to find my screen wouldn't turn on; looked inside the case and the primary card had coolant on it due to my chipset block springing a minor leak :mad:. Now to see if EVGA will let me RMA it or not. Gonna be replacing all liquid with nonconductive coolant next time so this will never happen again. Could probably get the system up and going with the other card atm, but I'd have to redo the loop and risk it leaking again.
 
duuuuuuuuude... that sux0rz.. major! yeah, that's the risk when water cooling... and even if you use a non-conductive coolant a leak can cause all kinds of problems.. not to mention the work in cleaning it up.. anyways.. I guess you can run on gpu2 for a while while you sort out the issues with gpu1. good luck!
 
yes - the Galaxy DXX is essentially a 1300W psu.. it's very conservatively rated at 1kw.. it should handle 3 GTXs no problem.

edit: unless you're gaming at above 1920x1200 I don't think 3-way SLI will help you much... I have only been seeing like a little over 10% performance improvements at that res... but the bigger you go the more it should help, just like 2 card SLI. also, the system overhead for managing 3 cards will be more steep.. just some things to consider. :)
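Putting rough numbers on that: going from 2 cards to 3 for only ~10% more FPS is poor scaling, which a quick calculation makes obvious (illustrative Python only, with made-up example FPS figures):

```python
# Rough scaling math: ideal 2 -> 3 card SLI would add 50% throughput;
# an observed ~10% gain means the third card contributes very little.

def scaling_efficiency(base_fps, sli_fps, base_cards, sli_cards):
    """Fraction of the ideal linear speedup actually achieved."""
    ideal = sli_cards / base_cards       # e.g. 3/2 = 1.5x in a perfect world
    actual = sli_fps / base_fps          # measured speedup
    return (actual - 1) / (ideal - 1)

# hypothetical example: 60 fps on 2 cards -> 66 fps on 3 cards (+10%)
print(round(scaling_efficiency(60, 66, 2, 3), 2))  # -> 0.2
```

So at that resolution the third card is delivering roughly 20% of its ideal contribution, before even counting the extra management overhead.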
 
yes - the Galaxy DXX is essentially a 1300W psu.. it's very conservatively rated at 1kw.. it should handle 3 GTXs no problem.

edit: unless you're gaming at above 1920x1200 I don't think 3-way SLI will help you much... I have only been seeing like a little over 10% performance improvements at that res... but the bigger you go the more it should help, just like 2 card SLI. also, the system overhead for managing 3 cards will be more steep.. just some things to consider. :)


thx revenant

exactly, I'll wait for the real performance of the 9800GTX, I don't want to sacrifice my X-Fi.
 
Update to the situation- Got my RMA replacement today through EVGA's EAR program and have my system back and going temporarily. When the rest of my watercooling supplies arrive in the next few days I'll be able to get it back up to where I had it last week (redoing my GPU loop). I "may" have found the problem with my 2nd GPU. Part of the memory doesn't look like it made 100% contact with the waterblock, and I'm praying it wasn't on at the time, but it also looked like some coolant got on it. I think the coolant part happened when I was tearing my loop apart. If not, I guess I'm gonna have to replace that card too.......

Also, will EVGA accept a card that doesn't have the serial# on it? Initially when I was taking off the stock cooling I accidentally damaged the serial# (it was over top of one of the screws). Now I might be able to try something: I still have the original box, which has the serial#. If I was extra careful I might be able to pull it off the box and stick it back on.
 
It's possible they'll take it. Worth a try. EVGA's mounting of the serial number sticker is not the best...
 
It's possible they'll take it. Worth a try. EVGA's mounting of the serial number sticker is not the best...

No it isn't. Mine fell off one of my cards six months ago. The other one is barely hanging on.
 
Got my SLI working perfectly now. Crysis still rapes my system at 2560x1600 with high/very high :eek:. Besides the leak that happened, the problem with my 2nd GPU was the memory wasn't making complete contact with the waterblock.
 
Very awesome! great news.. yeah, I suspected memory overheating with how you described what was happening, my gut reaction.. anyways.. glad you got it sorted and can enjoy that beast now. :D
 
I must be cursed or something. The moment I get everything working something else goes wrong. Looks like I'm going to have to replace the motherboard since it won't even do a simple POST.
 
I must be cursed or something. The moment I get everything working something else goes wrong. Looks like I'm going to have to replace the motherboard since it won't even do a simple POST.

Well after burning up 10 out of 11 reference 680i SLI motherboards I swore them off. Now I'm using the Striker and though mine overclocks for shit, it is stable and after a year it still works. I've never had to RMA it.
 
I must be cursed or something. The moment I get everything working something else goes wrong. Looks like I'm going to have to replace the motherboard since it won't even do a simple POST.

holy cattle! that double sux... well, yeah, the striker is a great mobo and if I was to buy another 680i it would be that one.. but if you want to OC your quad I hear the 680i "LT" mobos are great for that.. and they're very reasonably priced.
 
so far I am working with a tri-SLI setup, what should I get for 3DMark scores??
 
so far I am working with a tri-SLI setup, what should I get for 3DMark scores??

I have no idea. I've yet to run 3D Mark on mine. I don't really want to as I know my scores are going to suck due to having to run my CPU at stock speeds. This older version of the Striker Extreme I'm using right now doesn't overclock quad core CPUs at all it seems. Nothing over 1300MHz FSB anyway.
 
ouch man, well I will play around with it tomorrow. I had to get a new UPS since it overloaded my APC 900VA UPS.
 
with that quad at 4ghz and 3x 8800GTX cards I am going to guess upwards of 20k on '06.. but that's a guess.
 
ouch man, well I will play around with it tomorrow. I had to get a new UPS since it overloaded my APC 900VA UPS.

Well, I killed my EVGA Black Pearl, which caused me to have to use my old standby, the Striker Extreme. I went ahead and switched back to air cooling at the same time, as I was sick of draining the loop and screwing around with that stuff every time I've had to replace a motherboard, which has been no less than six times in as many months. Also, sorting out other hardware issues and performing upgrades was always a pain, so I felt that air cooling was the way to go for me. I'm just upset now because the Striker Extreme has become the new hotness with BIOS 1303 and quad core CPUs, but mine gives me virtually nothing.
 
wow, sorry to hear that. I just sold my friend a brand new Striker Extreme for 200.. if I had known your situation I would have done a trade with you since he is not going to overclock with it... anyhow, do you think tri-SLI is worth it or more of just a "wow look what I have" thing??
 