Die size comparison: GT200, RV770, G92, G92b, G80

MrWizard6600

Supreme [H]ardness
Joined
Jan 15, 2006
Messages
5,791
After wondering just who's making more money than whom per card, I created this little picture to help myself (and anyone else wondering the same thing) understand:

[Image: freeforallG92G92bG80RV770GT200-1.jpg - die sizes of the G92, G92b, G80, RV770 and GT200 drawn to scale]


Just to clarify: the smallest die is the G92b, represented by the smallest red box there, not the RV770. The RV770 is the box right beside it, a mere 4% bigger.

edit: the size of the G92b has fallen into a little bit of controversy. PC Perspective, using a ruler, says 231mm²; users at Beyond3D say 270mm², which is more in line with the estimates that were circulating before the core was released. Either way, we know that the G92b should be very close (or perhaps identical) to the RV770 in terms of production costs.

And my margin for error is represented perfectly by the corners of the dies. If I were perfect, all four corners would line up in a perfectly straight line (since all the dies are drawn as squares); since they don't, I made some errors somewhere. I went with numbers given to me online (usually 3 sig figs).
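
For anyone who wants to redo the picture from the raw numbers instead of eyeballing it, here's a minimal Python sketch of the same idea; the die areas are just the figures floating around in this thread, so treat them as assumptions:

```python
import math
import matplotlib.pyplot as plt
import matplotlib.patches as patches

# Reported die areas in mm^2 (numbers circulating online; treat as estimates)
dies = {"GT200": 576, "G80": 484, "G92": 334, "RV770": 256, "G92b": 231}

fig, ax = plt.subplots()
colors = ["green", "blue", "orange", "purple", "red"]
for (name, area), color in zip(sorted(dies.items(), key=lambda d: -d[1]), colors):
    side = math.sqrt(area)  # treat each die as a square: side = sqrt(area)
    # anchor all squares at the same corner so relative size is easy to read
    ax.add_patch(patches.Rectangle((0, 0), side, side, fill=False,
                                   edgecolor=color, label=f"{name} ({area} mm²)"))
ax.set_xlim(0, 26); ax.set_ylim(0, 26)
ax.set_aspect("equal"); ax.legend()
plt.show()
```

Since every box is a square anchored at the same corner, the opposite corners all land on the diagonal - which is exactly why misaligned corners expose an error in the inputs.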
 
According to your information, the G92b is bigger than the RV770 (256mm² versus 276mm²) - or did you get those backwards?
 
Even if the GT200 were shrunk to 55nm it would still be the size of the G80, and would probably still be expensive to make. I am not talking out of my ass, lol - even AnandTech did a rough estimate of what it should be when shrunk to 55nm. So Nvidia needs to seriously refine their architecture if they want to make a profit and sell the cards at an affordable price.
 
According to your information, the G92b is bigger than the RV770 (256mm² versus 276mm²) - or did you get those backwards?

I got the G92 and the G92b mixed up; it's fixed now.

And lol @ unleashed, was that actually a slide ATI dared to show investors? Nvidia designed the GT200 as a flagship, nothing more - something entirely for marketing purposes. The GT200 means Nvidia can truthfully say "we have the fastest single GPU in the world". In the minds of the average consumer that means a lot. Nvidia has won the last two years in marketing, tied the years before that, and then (somehow) managed to win before that with the GeForce 6 series (imho, ATI had the superior hardware). So you've got half a decade of Nvidia dominating the marketing department. At this point superior hardware isn't enough to secure serious sales for ATI.

Also, let's not forget (and this is really jaw-dropping to me): the RV770 at full speed consumes as much power as the GT200 at full speed, only (right now) the GT200 has some superb power-saving software which brings its power consumption into the basement when it's not being used.
 
I don't think it's that simple; you also have to take into account such things as yields, PCB and memory-chip costs as well.

An RV770 board with GDDR3 RAM might be cheaper to make than a G92b board due to a simpler and less expensive PCB, for example.



Here's another comparison:


http://forum.beyond3d.com/showthread.php?t=48631&highlight=die+size&page=2


Well, a single 300mm wafer is $5000, so that's without a doubt the majority of your costs. The difference in price due to the complexity of the PCB isn't really too much of an issue, unless you're doing something hugely different from the previous generation. Generally speaking, more layers = more expensive, and the king of those right now is without a doubt the GT200. ATI actually had trouble fitting all the contacts for a 512-bit memory bus on the PCB; it surfaced somewhere that they were actually considering a 14- or 15-layer substrate.
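
To put rough numbers on that, here's a back-of-the-envelope sketch in Python - the $5000 wafer price is the figure above, the die areas are the ones from this thread, and the dies-per-wafer formula is the standard approximation (note it ignores yield entirely):

```python
import math

WAFER_DIAMETER_MM = 300   # standard wafer size; $5000/wafer is the figure quoted above
WAFER_COST_USD = 5000

def dies_per_wafer(die_area_mm2: float) -> int:
    """Classic approximation: usable wafer area minus edge loss along the rim."""
    r = WAFER_DIAMETER_MM / 2
    gross = math.pi * r**2 / die_area_mm2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

for name, area in [("GT200", 576), ("G92", 334), ("RV770", 256), ("G92b", 231)]:
    n = dies_per_wafer(area)
    print(f"{name}: ~{n} candidates/wafer, ~${WAFER_COST_USD / n:.0f} silicon cost per die")
```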

And in regards to that comparison, mine looks nicer (;)), and he's wrong about the G92b. He's basing that size on a 10% shrink when in fact it was slightly more than that. PC Perspective here busts out the ruler on a 9800GTX+ and comes up with the size I'm showing you there.

Well, he even went so far as to use the pixels to come up with a ratio and multiply that by the G92's die size to come up with his actual figure. I dunno, maybe he's right. Anyway, they're damn near identical in size, so both will probably cost the same to produce. But as we know, the RV770 is faster, though it consumes more power.

Once again, and this is good for consumers, it's a good old-fashioned shrink race! If ATI can shrink to the 40s before Nvidia, they'll be able to roll the RV770 into greater production at a lower price (assuming a smaller die) and seriously undercut the G92b while still maintaining a nice profit margin for ATI. On the other hand, if Nvidia shrinks the G92b before ATI gets their RV770 smaller, a new SKU will probably be made exploiting the extra clocks allowed by this new core (G92c? lol) - a new SKU which would probably be able to take down the HD4850. Of course, there's nothing stopping ATI from creating an HD4860. I believe no change in pricing or SKUs will result from a shrink of the GT200. I think Nvidia is currently only after market share and a nice flagship core, so a shrink of the GT200 would allow for higher yield and increased profit while maintaining the fastest single GPU out there; perhaps they'll consider a dual-GT200 package.

Good stuff. Nice to see competition again. After a two-year break, ATI's back in it.
 
Hmmm..........

231 is ~69% of 334, so G92 to G92b is a 31% shrink.

55 is ~84% of 65, so 65nm to 55nm is a 16% shrink.

From your numbers it would seem that Nvidia cut some corners with the G92b...
 
Well, a single 300mm wafer is $5000, so that's without a doubt the majority of your costs. The difference in price due to the complexity of the PCB isn't really too much of an issue, unless you're doing something hugely different from the previous generation. Generally speaking, more layers = more expensive, and the king of those right now is without a doubt the GT200. ATI actually had trouble fitting all the contacts for a 512-bit memory bus on the PCB; it surfaced somewhere that they were actually considering a 14- or 15-layer substrate.

That's what I was getting at - I seem to recall reading somewhere that the 9800GTX uses more layers.

Once again, and this is good for consumers, it's a good old-fashioned shrink race! If ATI can shrink to the 40s before Nvidia, they'll be able to roll the RV770 into greater production at a lower price (assuming a smaller die) and seriously undercut the G92b while still maintaining a nice profit margin for ATI. On the other hand, if Nvidia shrinks the G92b before ATI gets their RV770 smaller, a new SKU will probably be made exploiting the extra clocks allowed by this new core (G92c? lol) - a new SKU which would probably be able to take down the HD4850. Of course, there's nothing stopping ATI from creating an HD4860. I believe no change in pricing or SKUs will result from a shrink of the GT200. I think Nvidia is currently only after market share and a nice flagship core, so a shrink of the GT200 would allow for higher yield and increased profit while maintaining the fastest single GPU out there; perhaps they'll consider a dual-GT200 package.

Good stuff. Nice to see competition again. After a two-year break, ATI's back in it.

I think the G92 has reached the end of its line, really. The RV770 seems to be a more efficient design, so I'm expecting something else from Nvidia - perhaps an optimized GT200 with a 256-bit bus and something like ATI's memory hub?

I also think both will jump straight to 40nm next round. It will be very interesting to see who gets it right :) Might have to wait almost a year for that, though :(
 
The GT200 means Nvidia can truthfully say "we have the fastest single GPU in the world". In the minds of the average consumer that means a lot.
What are you smoking? LOL
In the minds of the average consumer the $/performance is king.
That's exactly why nVidia was forced to start dropping the prices 2 weeks after the introduction.
 
I also think both will jump straight to 40nm next round. It will be very interesting to see who gets it right :) Might have to wait almost a year for that, though :(
Yes both companies have designed the next gen parts around the 40nm process.
But TSMC is not expected to move onto 40nm parts any time soon - not until Feb/March at the earliest.
 
What are you smoking? LOL
In the minds of the average consumer the $/performance is king.
That's exactly why nVidia was forced to start dropping the prices 2 weeks after the introduction.

Having the fastest GPU means it's easier to sell the lower-end ones too.. Joe will see a "GeForce 8600 GT Ultra GS XT Pro Extreme" in the store and he'll remember that he read in some PC magazine, or maybe heard from a friend, that this "Nvidia" company makes the fastest GPU available.. That has got to mean that this 8600 card is also very good, because it's made by the same company. Especially since it's got all those letters in the product name that bring your thoughts to high-performance sports cars.. Oh, and it says "Pro" too.. it's a "professional" video card from the maker of the fastest GPU in the world! :)
 
Having the fastest GPU means it's easier to sell the lower-end ones too.. Joe will see a "GeForce 8600 GT Ultra GS XT Pro Extreme" in the store and he'll remember that he read in some PC magazine, or maybe heard from a friend, that this "Nvidia" company makes the fastest GPU available.. That has got to mean that this 8600 card is also very good, because it's made by the same company :)
We're talking about a GPU upgrader here.
An average Joe capable of upgrading his computer components will be able to figure out how to get the best bang for his buck.
 
We're talking about a GPU upgrader here.
An average Joe capable of upgrading his computer components will be able to figure out how to get the best bang for his buck.

Not really.. Anyone can pop the case open and jam a new card into the slot.. Making an informed purchasing decision, searching for info online etc., that's another thing. I've seen quite a few "dumb" users being told by equally dumb computer store employees what they should buy..
 
Hmmm..........

231 is ~69% of 334, so G92 to G92b is a 31% shrink.

55 is ~84% of 65, so 65nm to 55nm is a 16% shrink.

From your numbers it would seem that Nvidia cut some corners with the G92b...
If you're going to bring math into it, please do so all the way (math major here): 65nm -> 55nm is a 16% shrink along only one axis, but it is roughly squared, since the shrink occurs in both dimensions relative to the substrate layer. So, to give you the full degree of savings: 0.84 * 0.84 = 0.7056, yielding an almost exactly 30% reduction per core. No cut corners here, sorry! :p
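
In code form, the whole point is just that a linear shrink compounds across both axes:

```python
linear = 55 / 65              # ~0.846: each edge shrinks to ~84.6% of its length
area = linear ** 2            # ~0.716: area scales with the square of the edge
print(f"linear scale: {linear:.3f}, area scale: {area:.3f}")
print(f"expected G92b area: {334 * area:.0f} mm²")  # 334 mm² G92 figure from this thread
```

With the exact 55/65 ratio the expected G92b comes out around 239mm², close to the 231mm² PC Perspective measured - consistent with a near-full optical shrink.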
 
Having the fastest GPU means it's easier to sell the lower-end ones two.. Joe will see a "Geforce 8600 GT Ultra GS XT Pro Extreme" in the store and he'll remember that he read in some PC magazine or maybe heard from a friend that this "Nvidia" company makes the fastest GPU available.. That has got to mean that this 8600 card is also very good, because it's made by the same company. Especially since it's got all those letters in the product name that brings your thoughts to high performance sports cars.. Oh, and it sais "Pro" too.. it's a "professional" video card from the maker of the fastest GPU in the world! :)
This is true....

your "average" pc user probably never installed a videocard before. The pc that he's "upgrading" from is probably a dell or some sub $800 unit he bought from a local bestbuy or a local computer store. He sees that a 8800gtx is top dog so the 8600 is probably just a tad bad slower. Having the performance crown is still important in some sense.
 
What are you smoking? LOL
In the minds of the average consumer the $/performance is king.
That's exactly why nVidia was forced to start dropping the prices 2 weeks after the introduction.

Neither ATI nor Nvidia cares about Newegg or NCIX or ZipZoomfly sales; that's gotta just be icing. It's all about what Dell and Alienware do - that's 90% of sales. If Dell ships inferior Nvidia cards, the amazing price/performance of the RV770 means nothing.
 
The GT200 is bigger than two RV770s

Yup, you can derive that mathematically too:
576/256 = 2.25. You can fit two and a quarter RV770 dies on a wafer for every single GT200 you can fit on it. Using this logic you can bet even the HD4850 has a healthy profit margin on it.
edit: better double-check my numbers with the math major here :p
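
If anything the raw area ratio understates it, since edge loss at the wafer rim hits big dies harder. A self-contained version of the dies-per-wafer sketch from earlier in the thread:

```python
import math

def dies_per_wafer(area, d=300):
    # same back-of-the-envelope approximation as in the wafer-cost sketch above
    return int(math.pi * (d / 2) ** 2 / area - math.pi * d / math.sqrt(2 * area))

print(576 / 256)                                    # naive area ratio: 2.25
print(dies_per_wafer(256) / dies_per_wafer(576))    # ~2.5 once edge loss is counted
```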
 
After wondering just who's making more money than whom per card, I created this little picture to help myself (and anyone else wondering the same thing) understand:

You can't determine who's making more money per card by simply looking at die sizes. You need to know their total cost of production and average selling prices to make any sort of comparison. Case in point - RV670 vs G80: one is tiny and the other is huge by comparison, yet the smaller chip still lost money.
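
To make the shape of that comparison concrete, here's a trivial sketch - every number in it is a made-up placeholder, not a real bill-of-materials figure:

```python
def per_card_profit(asp, die, memory, board_and_cooler, other):
    """Profit per card = average selling price minus total build cost."""
    return asp - (die + memory + board_and_cooler + other)

# Hypothetical: a tiny die can still lose money if the ASP is driven low enough...
print(per_card_profit(asp=189, die=20, memory=45, board_and_cooler=40, other=95))    # -11
# ...while a huge die can be profitable at a high enough ASP.
print(per_card_profit(asp=599, die=110, memory=80, board_and_cooler=70, other=150))  # 189
```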
 
If they do not subtract anything from the GT200 in the die shrink, it will be a 412mm² GPU - quite a bit smaller than the G80, but larger than the G92. Personally, I think it will be more than a die shrink.
 
Yup, you can derive that mathematically too:
576/256 = 2.25. You can fit two and a quarter RV770 dies on a wafer for every single GT200 you can fit on it. Using this logic you can bet even the HD4850 has a healthy profit margin on it.
edit: better double-check my numbers with the math major here :p

A smaller die helps yield too. A single small defect can totally ruin a 576mm² GT200 die, but with a smaller die, AMD can still end up with one fully working 256mm² RV770 die and one faulty die.
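
A common first-order way to put numbers on that is a Poisson yield model, Y = e^(-D*A); the defect density below is an assumed placeholder, not a real TSMC figure:

```python
import math

D0 = 0.4 / 100  # assumed defect density: 0.4 defects per cm², converted to per mm²

def poisson_yield(area_mm2: float) -> float:
    """Fraction of dies with zero defects under a Poisson defect model."""
    return math.exp(-D0 * area_mm2)

for name, area in [("GT200", 576), ("RV770", 256)]:
    print(f"{name}: {poisson_yield(area):.0%} of candidates defect-free")
```

So the smaller die doesn't just fit more candidates per wafer - a much larger fraction of them comes out working.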
 
This is true....

your "average" pc user probably never installed a videocard before. The pc that he's "upgrading" from is probably a dell or some sub $800 unit he bought from a local bestbuy or a local computer store. He sees that a 8800gtx is top dog so the 8600 is probably just a tad bad slower. Having the performance crown is still important in some sense.

I work at a computer shop and this is the way my boss thinks. He tries to sell any computer with a video card in it as a gaming computer, and most of our business is in used parts, so the cards are GeForce 4-5 era.
 
OMG, talk about a rip-off to unknowing gamers. Well, most likely WoW players are gonna buy that anyway, so no matter.
 
I will try to make a new picture later unless someone else wants to do it. That one was just slapped together in 5 minutes.
 