NVIDIA Kepler GeForce GTX 680 SLI Video Card Review @ [H]

Thanks for another great review on the 680, though this one doesn't really apply to me; since I only have a single monitor, it makes no sense to run SLI.

Looking forward to the overclocking review next :D
 
I don't understand your comment. When the 680 crashes out at 4x it is because of VRAM, but when the 7970 crashes out at 4x it is because it runs out of GPU power? I don't see how you can make that assumption without seeing what happens with 4 and 6 GB cards. Neither card is playable at 4x (or really even at 2x), and whether that is because of VRAM limits or because of GPU power, the fact remains that the 2GB on the 680 doesn't limit the card any more than the 3GB of the 7970. So pretty clearly, it is sufficient.
I never said it crashes out because of VRAM; I said it was inconclusive. That said, a sudden huge loss of FPS and playability when going from 2x to 4x is often a symptom of going past a VRAM limit.

You say it is pretty clearly sufficient, but all it actually shows is that it is sufficient to equal or better a 3GB 7970, not that it's sufficient to satisfy the 680's own architecture and performance. As you say, that will only be known with a test of a 4GB 680 vs a 2GB 680, to see whether the 2GB card runs out of legs while the 4GB card remains playable for longer. So, until then it's inconclusive whether 2GB is sufficient, though losing a lot of framerate going from 2x to 4x is more often a symptom of running out of RAM than of running out of legs.
 
You say it is pretty clearly sufficient, but all it actually shows is that it is sufficient to equal or better a 3GB 7970, not that it's sufficient to satisfy the 680's own architecture and performance. As you say, that will only be known with a test of a 4GB 680 vs a 2GB 680, to see whether the 2GB card runs out of legs while the 4GB card remains playable for longer. So, until then it's inconclusive whether 2GB is sufficient, though losing a lot of framerate going from 2x to 4x is more often a symptom of running out of RAM than of running out of legs.

I get what you are saying, but my take-away was that it is clearly sufficient at settings that are playable anyway - it might not be enough VRAM for 4x MSAA, but there isn't enough GPU horsepower to run that setting anyway, so no loss. If it went from getting 100 fps at 2x to 10 fps at 4x, that would clearly be a problem, but going from 40-something to whatever it was at 4x (they never said) doesn't matter as much. And given that the 3GB 7970 runs into the same problems, it just doesn't appear to be exclusively a VRAM problem (I know the VRAM use is different on the two cards, but if even 50% more isn't enough, that is instructive too).
 
For science.

High:
http://i.imgur.com/Xuca3.jpg
Ultra:
http://i.imgur.com/5INr2.jpg
High:
http://i.imgur.com/Xciy2.jpg
Ultra:
http://i.imgur.com/OXH8R.jpg

There is a pretty large difference in the distance detail with High vs Ultra. Now all I need are a 1GB and a 2GB card for more accurate science.

Maybe it's just me, but I don't see a pretty large difference. The trees and grass look better, but the buildings in the distance look pretty much the same to me. I focus much closer in the game since I'm busy looking for enemies, so the better grass and trees are actually much more noticeable to me.
 
For science.

High:
http://i.imgur.com/Xuca3.jpg
Ultra:
http://i.imgur.com/5INr2.jpg
High:
http://i.imgur.com/Xciy2.jpg
Ultra:
http://i.imgur.com/OXH8R.jpg

There is a pretty large difference in the distance detail with High vs Ultra. Now all I need are a 1GB and a 2GB card for more accurate science.

There's hardly any obvious difference; the buildings are blurred by dust on that map.

The point is that the 3GB 7970 loads higher-quality objects farther out, for no visual gain. It's a detriment to performance. Play the game on High with 4x MSAA: 99.99% of the visuals, and much faster.
 
From the review:
With the nature of GPU Boost, both video cards can run at independent clock speeds. This means that GPU Boost can be boosting each GPU separately as it needs to for the best performance; the clock speeds don't have to match up.
This is fantastic. I hadn't thought about the benefits of GPU Boost for SLI before. Could that be the technology that's been needed to entirely eliminate SLI stuttering?
No, not really; there are other factors in play. I myself run Tri-CF.

I use Tri-CF myself, and it's much smoother than dual-CF. (I use 3x 5870 2GB Matrix cards.)
After using CF for a couple of weeks, I had literally removed the second card and put it up for sale, as many games were just unplayable because of the micro stutter, even though I had nice fps. Then I read a comment from a guy saying that for him, Tri-CF nearly eliminated the micro stutter problem.
I took the advertised card off sale and ordered a third Matrix card, even though I had been ready to quit CF and was thinking that I would.

The main reason I think Tri-CF is smoother is that the individual cards have more time to render the next frame.

Whereas I "hear" that the opposite happens with 3-card SLI, and that it is a little less smooth than 2-card SLI.

But I don't know if this is still the same with the new cards.
I think the second card isn't done preparing the data for output in time, whereas if I use a third card I have no stuttering problem, or at least much less of one. I wonder if this is also the case for SLI.

PS:
Anyone on an AMD system who wants to know how much VRAM they're using: Process Explorer is your friend.
http://www.geeks3d.com/20110719/quick-test-process-explorer-15-0-with-gpu-support/
 
The only thing about this review that stumps me is the 4.7 GB of VRAM usage reported on the CFX setup. The Nvidia cards stopped using VRAM once they hit their caps, but the AMD setup went almost 2 GB beyond this limit.

Do you know how or why this happened?

My only guess is that AMD uses system RAM if the VRAM buffer runs out, but that doesn't make sense because the NV cards had more graphics options enabled and still ran with much less RAM. Another possibility could be a VRAM leak in the current CFX drivers.
 
Argh, wish I wasn't so impatient and had waited!! My two 7970s are jealous. Great review as always!!!
 
Argh, wish I wasn't so impatient and had waited!! My two 7970s are jealous. Great review as always!!!

If I had 7970s I wouldn't be jealous of the 680s at all. The 680 is the more refined gaming solution, but the 7970 came out almost three months earlier and is essentially in the same performance range as the 680.

If one is buying today, however, I think the 7970 is a much tougher sell until its price is adjusted, at least if one is looking for the best single-GPU gaming solution.
 
For science.

High:
http://i.imgur.com/Xuca3.jpg
Ultra:
http://i.imgur.com/5INr2.jpg
High:
http://i.imgur.com/Xciy2.jpg
Ultra:
http://i.imgur.com/OXH8R.jpg

There is a pretty large difference in the distance detail with High vs Ultra. Now all I need are a 1GB and a 2GB card for more accurate science.


What? I don't notice it >< Can you highlight where the differences are?
And what would be a really good test is the difference between a 2GB and a 3GB card, to see whether it actually makes a difference or not.
 
Why do I get a 403 Forbidden error whenever someone posts imgur links or thumbs?
 
Well, I can access imgur links in a steam chat box, but not when linked here at [H].
 
Damn, I really want to trade up to 680 SLI from my 6990 now... preferably with 4GB cards... gonna be fuck'n expensive though.
 
The only thing about this review that stumps me is the 4.7 GB of VRAM usage reported on the CFX setup. The Nvidia cards stopped using VRAM once they hit their caps, but the AMD setup went almost 2 GB beyond this limit.

Do you know how or why this happened?

My only guess is that AMD uses system RAM if the VRAM buffer runs out, but that doesn't make sense because the NV cards had more graphics options enabled and still ran with much less RAM. Another possibility could be a VRAM leak in the current CFX drivers.


See post #38
 
The only thing about this review that stumps me is the 4.7 GB of VRAM usage reported on the CFX setup. The Nvidia cards stopped using VRAM once they hit their caps, but the AMD setup went almost 2 GB beyond this limit.

Do you know how or why this happened?

My only guess is that AMD uses system RAM if the VRAM buffer runs out, but that doesn't make sense because the NV cards had more graphics options enabled and still ran with much less RAM. Another possibility could be a VRAM leak in the current CFX drivers.

It's clearly a problem in reporting VRAM usage with AMD cards. Either it's including "virtual" VRAM (e.g., system RAM) or it's doubling the value since it's CFX.
 
Just a suggestion for future reviews, Kyle, but maybe there is a way you can measure the latency between frames and provide a histogram showing the distribution of frame times? That would be a clear way of measuring 'smoothness' between CFX and SLI. Obviously you'd have to figure out how to measure that accurately...
A couple of years ago someone posted a program in the forum that would take data captured from FRAPS and show the micro stutter between frames. It worked really well. I'm sure it would work for SLI stuff too.
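For anyone who wants to roll their own, here's a rough Python sketch of that idea. It assumes FRAPS's two-column frametimes log (frame number, cumulative timestamp in ms); the filename and bin sizes are just placeholders, not anything from the review.

# Rough sketch: turn a FRAPS "frametimes" CSV into a frame-time histogram.
# Assumes the two-column FRAPS log format (frame number, cumulative time in ms);
# the filename and bin sizes below are placeholders.
import csv

def load_frame_times(path):
    """Return per-frame durations (ms) from a FRAPS frametimes CSV."""
    timestamps = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for row in reader:
            timestamps.append(float(row[1]))
    # Duration of frame N = timestamp of frame N+1 minus timestamp of frame N.
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def histogram(durations, bin_ms=5.0, max_ms=100.0):
    """Bucket frame durations into bin_ms-wide bins, clamping at max_ms."""
    bins = [0] * int(max_ms / bin_ms)
    for d in durations:
        idx = min(int(d / bin_ms), len(bins) - 1)
        bins[idx] += 1
    return bins

durations = load_frame_times("frametimes.csv")  # hypothetical filename
for i, count in enumerate(histogram(durations)):
    print(f"{i * 5:3d}-{i * 5 + 5:3d} ms: {count}")

A single tight spike means smooth pacing; a wide or twin-peaked distribution is exactly the micro stutter people are describing, even when the average fps looks fine.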
 
Fraps is not sufficient, is it? As far as I know it takes inaccurate frametime measurements that can be biased by Nvidia's frame metering technology, for example.

Nope, not sufficient, but it's the best solution we currently have.

But this is what it all boils down to: framerate really means shit when it comes to relating your gaming experience. Sure, it gives you some idea of what the experience is going to be, but with SLI and CFX, framerate is almost meaningless if 60fps still gives you an experience that is not smooth.

This is why we PLAY THE GAMES and tell you about our experiences.
 
The only thing about this review that stumps me is the 4.7 GB of VRAM usage reported on the CFX setup. The Nvidia cards stopped using VRAM once they hit their caps, but the AMD setup went almost 2 GB beyond this limit.

Do you know how or why this happened?

My only guess is that AMD uses system RAM if the VRAM buffer runs out, but that doesn't make sense because the NV cards had more graphics options enabled and still ran with much less RAM. Another possibility could be a VRAM leak in the current CFX drivers.

It's clearly a problem in reporting VRAM usage with AMD cards. Either it's including "virtual" VRAM (e.g., system RAM) or it's doubling the value since it's CFX.

I think that may well be exactly what's going on.
 
It shows TOTAL VRAM (both cards) via Afterburner, not individual cards. EVGA Precision shows each card's VRAM load individually; not sure what Afterburner does for the 680, as I used EVGA's tool for that.
 
Please please please get rid of ME3, Skyrim and Deus Ex in the benchmarks; they all run really well on so many configs and have such limited graphics options. Go back to the more taxing ones that really showed a difference: Crysis 2, Metro 2033, Civ 5.

Couldn't agree more. As they even said in the article, some of the games mentioned simply aren't discriminators when it comes to selecting a graphics card. And let's face it, giving enough information to select a graphics card is the sole point of the test, not simply to tell me what kind of gameplay experience I'm going to get if I play X game on Y card I've already got. Anyone with a card from the last 1-2 generations should be able to play ME3 just fine even in Eyefinity, so (IMO) it certainly doesn't belong in a test comparing a pair of modern-day titans.

That said, HardOCP's test methods and rigor are absolutely the best in the biz. Great writeup.
 
Fraps is not sufficient, is it? As far as I know it takes inaccurate frametime measurements that can be biased by Nvidia's frame metering technology, for example.
As said before, no, it's not.

The TechReport has an excellent article on frame time.

And I sincerely hope that HardOCP will also start testing the same way.
For me, as a multi-GPU user, frame time is way more important than fps.

[Images: bf3-comparo.gif and arkham-comparo.gif - TechReport frame-time-per-frame plots for BF3 and Arkham City]

To convert from ms to fps: fps = 1000 / X (where X is the frame time in milliseconds), so 20ms = 50 fps and 40ms = 25 fps. In the plots you can see the spikes, where there is a small stutter.

[Images: bf3-percentile.gif and arkham-percentile.gif - TechReport frame-time percentile plots for the same games]

These graphs show, for the 50th through 99.5th percentiles of the sample, what frame times the frames took.

As you can see, Arkham City has a much harder time staying consistent, and that is why, imho, frame time is the most important indicator.
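If anyone wants to reproduce that percentile view from their own frame-time data, here's a minimal Python sketch (nearest-rank percentiles, with made-up sample numbers; nothing here comes from the TechReport data itself):

# Sketch of that percentile view: for each percentile, the frame time the
# sample stayed at or under. Pure Python, nearest-rank method, toy data.

def percentile(sorted_ms, pct):
    """Nearest-rank percentile of a pre-sorted list of frame times (ms)."""
    idx = min(int(len(sorted_ms) * pct / 100.0), len(sorted_ms) - 1)
    return sorted_ms[idx]

def percentile_curve(durations_ms, points=(50, 75, 90, 95, 99, 99.5)):
    data = sorted(durations_ms)
    return {p: percentile(data, p) for p in points}

# Made-up sample: mostly 20 ms frames (50 fps) with a few 45 ms spikes (~22 fps).
sample = [20.0] * 95 + [45.0] * 5
for pct, ms in percentile_curve(sample).items():
    print(f"{pct}th percentile: {ms:.1f} ms ({1000.0 / ms:.0f} fps)")

An average-fps counter barely notices those five slow frames, but the 99th percentile jumps from 20 ms to 45 ms, and that's the stutter you actually feel.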
 
It shows TOTAL VRAM (both cards) via Afterburner, not individual cards. EVGA Precision shows each card's VRAM load individually; not sure what Afterburner does for the 680, as I used EVGA's tool for that.

Afterburner seems to display VRAM correctly for my 570 SLI setup. I think AMD + Afterburner is just bugged somehow.
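If you want to sanity-check per-card VRAM on the NVIDIA side yourself, here's a small Python sketch using the third-party pynvml bindings to NVIDIA's NVML. This is an assumption of mine as a cross-check method, not something the review used, and it's NVIDIA-only:

# Hedged sketch: read per-card VRAM usage straight from the driver via NVML
# instead of trusting a monitoring tool's summed total. NVIDIA only, and it
# assumes the third-party pynvml package is installed (pip install pynvml).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        # Per-GPU numbers: if a tool reports roughly the sum of these as one
        # card's usage, it is aggregating across the pair, not leaking VRAM.
        print(f"GPU {i}: {mem.used / 2**20:.0f} MB used of {mem.total / 2**20:.0f} MB")
finally:
    pynvml.nvmlShutdown()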
 
The VRAM debate won't be conclusively ended until the 4GB 680s come out and [H] tests them against the vanilla 680s to see if there is a performance drop at the choke points mentioned in the article.
 
The VRAM debate won't be conclusively ended until the 4GB 680s come out and [H] tests them against the vanilla 680s to see if there is a performance drop at the choke points mentioned in the article.

I think the fact that BF3 Ultra 4xAA was unplayable on the 3 GB cards as well at 5760x1080 points more towards an overall lack of GPU power than a VRAM issue. But I agree, it will be interesting to see what happens with the 4 GB cards.
 
So, it's been like 8 years of fucking DX 9?
Geez, by this time every game should be DX 10/11, omfg......

Stupid consoles....:mad:

By the way, excellent review; I wish there were like 6 more games, no matter how much time it took.

I also wish the HD 7870 cost $240 :rolleyes:
 
"I will make a bold and personal statement; I'd prefer to play games on GTX 680 SLI than I would with Radeon HD 7970 CrossFireX after using both."

I feel the same way about SLi and Crossfire after having used both over the years.

Why? SLi tends to run more smoothly and has better driver support. I care much more about how the game feels vs what FRAPS may be displaying. I'll take 60FPS that feels smooth over 80 that is choppy or regularly becomes choppy during play. Higher FPS doesn't tell the whole story.

I've used Crossfire with: X1900 XTs, HD 3850s, HD 4890s, and HD 5870s. I also tried an HD 4870 X2.

SLi? 7800 GTs, 7950 GX2, GTX 260s, and GTX 470s.

Crossfire always ends up annoying me to the point I abandon it. Obviously, I've given AMD a "second chance" many times. After I sold off my 5870s to bitcoin miners, I decided that was it for me and Crossfire.

Right now my preferences go; single GPU, SLi, Crossfire.
 
Wish [H] would review monitors/displays. It's hard to pick out a good one from Newegg. What defines a good monitor in relation to these cards?
 
Right now my preferences go; single GPU, SLi, Crossfire.

I always say to buy the SINGLE most powerful GPU you can and then go from there. But for multi-monitor, single GPUs are not quite there yet for gaming at 5760x1080 and above.
 
The resounding win for the GTX 680 is due to Turbo, its automatic overclocking. However, once both cards reach their max OC, often around 1.2GHz to 1.3GHz, the 7970s pull ahead. There are plenty of other reviews that show this.
 
The resounding win for the GTX 680 is due to Turbo, its automatic overclocking. However, once both cards reach their max OC, often around 1.2GHz to 1.3GHz, the 7970s pull ahead. There are plenty of other reviews that show this.

Raw FPS doesn't tell the whole story, particularly when discussing MGPU. IMO this is the resounding win:

"We don't know what other descriptive word to use, other than "smoothness" to describe the difference we feel between SLI and CrossFireX when we play games. We've expressed this difference in gameplay feeling between SLI and CrossFireX in the past, in other evaluations, and we have to bring it up again because it was very apparent during our testing of 680 SLI versus 7970 CFX.

We can't communicate to you "smoothness" in raw framerates and graphs. Smoothness, frame transition, and game responsiveness is the experience that is provided to you as you play. Perhaps it has more to do with "frametime" than it does with "framerate." To us it seems like SLI is "more playable" at lower framerates than CrossFireX is. For example, where we might find a game playable at 40 FPS average with SLI, when we test CrossFireX we find that 40 FPS doesn't feel as smooth and we have to target a higher average framerate, maybe 50 FPS, maybe 60 FPS for CrossFireX to feel like NVIDIA's SLI framerate of 40 FPS. Only real-world hands on gameplay can show you this, although we can communicate it in words to you. Even though this is a very subjective realm of reviewing GPUs, it is one we surely need to discuss with you.

The result of SLI feeling smoother than CrossFireX is that in real-world gameplay, we can get away with a bit lower FPS with SLI, whereas with CFX we have to aim a little higher for it to feel smooth. We do know that SLI performs some kind of driver algorithm to help smooth SLI framerates, and this could be why it feels so much better. Whatever the reason, to us, SLI feels smoother than CrossFireX.

Personally speaking here, when I was playing between GeForce GTX 680 SLI and Radeon HD 7970 CrossFireX, I felt GTX 680 SLI delivered the better experience in every single game. I will make a bold and personal statement; I'd prefer to play games on GTX 680 SLI than I would with Radeon HD 7970 CrossFireX after using both. For me, GTX 680 SLI simply provides a smoother gameplay experience. If I were building a new machine with multi-card in mind, SLI would go in my machine instead of CrossFireX. In fact, I'd probably be looking for those special Galaxy 4GB 680 cards coming down the pike. After gaming on both platforms, GTX 680 SLI was giving me smoother performance at 5760x1200 compared to 7970 CFX. This doesn't apply to single-GPU video cards, only between SLI and CrossFireX."
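NVIDIA hasn't published how that "driver algorithm" works, but the general idea of frame metering can be sketched in a few lines of Python: instead of presenting each frame the instant it's rendered, hold some frames briefly so present-to-present intervals even out. This is purely a guess at the general technique, not NVIDIA's actual implementation:

# Toy frame-metering simulation: frames finish rendering at uneven times (the
# classic AFR fast/slow pattern), but presentation is delayed so consecutive
# presents land roughly one paced interval apart. A guess at the general
# technique only, not NVIDIA's actual algorithm.

def meter(finish_times_ms, smoothing=0.9):
    """Given times (ms) at which frames finish rendering, return present times."""
    # Offline simplification: seed the pacing target with the run's average
    # interval (a real driver would have to estimate this on the fly).
    avg = (finish_times_ms[-1] - finish_times_ms[0]) / (len(finish_times_ms) - 1)
    presents = [finish_times_ms[0]]
    for t in finish_times_ms[1:]:
        # Never present before the frame is done, nor sooner than one paced
        # interval after the previous present.
        present = max(t, presents[-1] + avg)
        avg = smoothing * avg + (1 - smoothing) * (present - presents[-1])
        presents.append(present)
    return presents

finishes = [0, 5, 33, 38, 66, 71, 99]      # intervals alternate 5 ms / 28 ms
metered = meter(finishes)
print([round(b - a, 1) for a, b in zip(metered, metered[1:])])
# -> even ~16.5 ms gaps instead of the jarring 5/28 alternation

Note the trade-off: the average framerate is unchanged, but every frame arrives on a steady cadence, which would explain why a metered 40 FPS can feel smoother than an unmetered 40 FPS.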
 
The resounding win for the GTX 680 is due to Turbo, its automatic overclocking. However, once both cards reach their max OC, often around 1.2GHz to 1.3GHz, the 7970s pull ahead. There are plenty of other reviews that show this.

The problem with comparing max OC to max OC is that every card is different. So a review showing the 7970 beating the 680 while clocked at 1300 does nothing to help me if my 7970 only does 1200. OC vs OC is good info to have, but the only really fair comparison is what the manufacturers certify, which is stock vs stock, or factory OC vs factory OC (in the case of aftermarket cards).
 
How about Tri-SLI GTX 680 vs Tri-CFX 7970, to find out at which point two cards suffice, or whether there's more performance to be gained by scaling to a third card?
 
The problem with comparing max OC to max OC is that every card is different. So a review showing the 7970 beating the 680 while clocked at 1300 does nothing to help me if my 7970 only does 1200. OC vs OC is good info to have, but the only really fair comparison is what the manufacturers certify, which is stock vs stock, or factory OC vs factory OC (in the case of aftermarket cards).

Stock vs stock is the best comparison for reviews, but I disagree that OC vs OC isn't useful. It definitely shows how the two scale with additional clock speed, so it's useful data to have.

But stock vs stock is generally the accepted barometer.
 