Another Reason the GTX 970 is slower than the GTX 980

Lord_Exodia

Supreme [H]ardness
Joined
Sep 29, 2005
Messages
7,008
Interesting Article on Techreport.com.

http://techreport.com/blog/27143/here-another-reason-the-geforce-gtx-970-is-slower-than-the-gtx-980

The article goes on to explain the pixel fill rate differences between the two cards, and the author closes by saying the GTX 980's price premium may be justified depending on what's influencing your purchase. He says the following:

Originally Posted by Scott Wasson from TechReport.com
That means, among other things, that I need to build a much more complicated spreadsheet for figuring these things out. It also means paying extra for a GTX 980 could be the smart move if you plan to use that graphics card to drive a 4K display—or to use DSR at a 4X factor like we recently explored. That said, the GTX 970 is still exceptionally capable, especially given the clock speed leeway the GM204 GPU appears to offer.

It was nice to see two different reviewers getting together for the sake of accurate reporting. I think this article is worth the three minutes it'll take to read. Post your thoughts on this.
 
Interesting, thanks for the info.

NP, I was shocked to read that too. It's one of those things many including myself would overlook. I didn't choose my 980s because of this as I was completely unaware, but I definitely plan to do a lot of DSR gaming to run on my triple screen setup. Good to know it'll run smoother with 980s.
 
NP, I was shocked to read that too. It's one of those things many including myself would overlook. I didn't choose my 980s because of this as I was completely unaware, but I definitely plan to do a lot of DSR gaming to run on my triple screen setup. Good to know it'll run smoother with 980s.

It'll run smoother on the GTX 970s too but you'll have more pixel throughput with the GTX 980s.
 
It just means that the GTX 980 is marketed toward those with Surround or 4k, while the 970 fits in nicely for those with 1440p. I think that if you were part of the former you were always considering the 980 first and foremost anyway. I'm still on a 1080p monitor at 144Hz, and 970 SLI is perfect for x1.78 DSR at this refresh rate.
 
I picked up two 970s to replace my 680, but my plan is to move them to my wife's and kid's computers when the 980 Ti or whatever comes out next year and I complete my full system upgrade.
 
I picked up two 970s to replace my 680, but my plan is to move them to my wife's and kid's computers when the 980 Ti or whatever comes out next year and I complete my full system upgrade.

Hmm. That's a solid plan.
 
970 is the bang-for-the-buck placeholder until the 980 Ti / 990 of course...
 
It just means that the GTX 980 is marketed toward those with Surround or 4k, while the 970 fits in nicely for those with 1440p. I think that if you were part of the former you were always considering the 980 first and foremost anyway. I'm still on a 1080p monitor at 144Hz, and 970 SLI is perfect for x1.78 DSR at this refresh rate.

Spot on.
 
970 is the bang-for-the-buck placeholder until the 980 Ti / 990 of course...

Not sure why a 970 would be the bang-for-the-buck placeholder for those cards when they're going to be anything but. The 980 Ti will be $650 or $699, and I don't even want to know what the 990 will cost. $900-$1,000?
 
The GTX 970 currently goes neck and neck with the 290X, so it's a solid performer for the given power envelope.

The 290X is a mature GPU with solid drivers, and one that is not going to see many performance gains after Catalyst 14.9. The GTX 970's performance, on the other hand, is only going to scale upwards.
 
There is no guarantee of a 980 Ti anyway. Nvidia will release the Titan first (GM200/210), and then who knows from there.
 
Very good look at the inner workings and Nvidia's spin on card specs. Makes the price difference a little more clear.
 
Here are my thoughts on this issue (which I'm surprised wasn't looked into before, even with Kepler). I asked about it in their comments section, but I guess their reviewers might not read those. The question is: how does the difference in pixel fill rate translate to actual game performance? If it does, then we need to re-examine some blind assumptions that reviewers made in the past.

It was actually known before the 970/980 came out that, even on Kepler, ROP count and clock are not the sole determinants of pixel fill performance. This article adds that the rasterizer count is one factor limiting pixel fill performance, and that pixel fill performance might have an effect in certain usage scenarios.
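To make the limit concrete, here's a minimal sketch of the idea: theoretical peak fill rate is the lower of the ROP throughput and the SM-side export throughput. The 4 pixels/clock/SMM figure and the clock speeds below are my assumptions for illustration, not confirmed specs from the article.

```python
def peak_pixel_fill(rops, smms, clock_mhz, pixels_per_smm=4):
    """Theoretical fill rate in Gpixels/s: the lower of the ROP limit
    and the assumed SMM export limit, times clock speed."""
    pixels_per_clock = min(rops, smms * pixels_per_smm)
    return pixels_per_clock * clock_mhz / 1000.0

# Assumed boost clocks (MHz) for illustration only
gtx_980 = peak_pixel_fill(rops=64, smms=16, clock_mhz=1216)  # ROP-limited
gtx_970 = peak_pixel_fill(rops=64, smms=13, clock_mhz=1178)  # SMM-limited
```

Under these assumptions the 970 can't feed all of its ROPs (13 × 4 = 52 pixels/clock vs. 64 ROPs), which is the kind of gap the article is describing.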

So what about previous Nvidia cards that shipped in multiple die configurations, and therefore multiple rasterizer counts (listed in the article, such as the GTX 780)? Every review I know of that commented on this issue (if it was mentioned at all) just said there would be no performance difference. But has any actual test been done, say between GTX 780s with different GPC configurations, to prove this? Conversely, wouldn't we also need to run tests between the 970 and 980 to show the scaling issues at higher resolutions theorized in the article?

This issue could also be important for the upcoming GTX 860, especially if it is just a further cut-down GM204 part, since you would certainly run into different GPC configurations again.

The article also mentions:

There is another reason, which seems to be that unevenly configured GPCs are less efficient at splitting huge triangles (as is usually the case with fill rate tests).

So does this mean that even different GTX 970s will have performance variation depending on which SMMs are cut? Say one SMM cut from each of three GPCs vs. three SMMs cut from one GPC?

Also, if there are performance variations based on die configuration, then how is SLI affected between cards using different variations? In theory it would add timing complications.
 
So does this mean that even different GTX 970s will have performance variation depending on which SMMs are cut? Say one SMM cut from each of three GPCs vs. three SMMs cut from one GPC?
Also, if there are performance variations based on die configuration, then how is SLI affected between cards using different variations? In theory it would add timing complications.

Whoa whoa whoa. You're really onto something here. Could be another reason to stick with the same card config when going SLI. I assume there would be no marketing materials showing how many ROPs, SMs, etc. are on each card, so I wonder if a utility like GPU-Z could help out. Though it may not make a difference if Nvidia simply cuts them as they please; as long as they end up as a "970," AIBs wouldn't know what they get. The performance delta between the two(?) types of 970s wouldn't be huge, but it would still be there.

I guess there's more than one way to skin a cat when it comes to GPUs...
 