GDDR5 & GTX 280?

3DChipset · Gawd · Joined Sep 3, 2007 · Messages: 670
Is the GTX 280 capable of supporting GDDR5 memory modules? Or is the design/fab not capable of using that type of memory?
 
I don't have a link, but seem to recall seeing a post stating the G200 doesn't support GDDR5.
 
Is the GTX 280 capable of supporting GDDR5 memory modules? Or is the design/fab not capable of using that type of memory?

I'm sure they could (with a chip revision), but why? If the chip can't even use all the bandwidth a 512-bit bus provides, what would GDDR5 accomplish? On the next round it would be cheaper (an easier board to make) and more cost-effective, but I don't see any performance increase coming from it. Clock for clock on the core, the 4870 is only ~10% faster than the 4850.
 
I don't have a link, but seem to recall seeing a post stating the G200 doesn't support GDDR5.
Thanks, I wasn't sure because I haven't heard too much regarding the possible GTX280b (whatever they name their 55nm card) with GDDR5.
 
Thanks, I wasn't sure because I haven't heard too much regarding the possible GTX280b (whatever they name their 55nm card) with GDDR5.

The 55nm die-shrink won't be so much faster than the 65nm 280 that it needs double the memory bandwidth!!
 
I might have seen it at Beyond3D's forum. They have some very savvy insiders over there.
 
Thanks, I wasn't sure because I haven't heard too much regarding the possible GTX280b (whatever they name their 55nm card) with GDDR5.
If it's still on a 512-bit bus then it won't use GDDR5, because there's no need for the additional bandwidth and cost. NVIDIA probably went with the large bus width because they were unsure what the availability and cost of GDDR5 would be when they were designing the GTX 280.
 
GDDR5 would be wasted on a 512-bit bus unless the GPU becomes much faster, which won't happen soon!
NVIDIA sorely needs a GT200 re-spin for competitive and performance purposes.
 
Read Evolucion8's post explaining how GDDR5 isn't just about memory bandwidth.
It lets the GPU offload certain operations that GDDR5 can handle faster, beyond the pure bandwidth gains.
 
Read Evolucion8's post explaining how GDDR5 isn't just about memory bandwidth.
It lets the GPU offload certain operations that GDDR5 can handle faster, beyond the pure bandwidth gains.
Still not going to help anything on the GTX 280, and it would only add cost. The 4850/4870, if run at the same core clocks, only show about a 5-7% performance gain going from 2000MHz GDDR3 to 3600MHz GDDR5.
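Quick back-of-envelope in Python to show what those numbers imply for peak bandwidth (the effective data rates below are the commonly quoted figures for each card, so treat the results as approximate):

```python
# Peak theoretical memory bandwidth = bus width in bytes x effective data rate.
# Effective clocks are the commonly quoted figures, so approximate.

def bandwidth_gbs(bus_bits: int, effective_mhz: float) -> float:
    """GB/s = (bus width / 8 bytes) * effective MHz / 1000."""
    return bus_bits / 8 * effective_mhz / 1000

print(f"HD 4850 (256-bit GDDR3 @ 2000 MHz): {bandwidth_gbs(256, 2000):6.1f} GB/s")  # ~64
print(f"HD 4870 (256-bit GDDR5 @ 3600 MHz): {bandwidth_gbs(256, 3600):6.1f} GB/s")  # ~115
print(f"GTX 280 (512-bit GDDR3 @ 2214 MHz): {bandwidth_gbs(512, 2214):6.1f} GB/s")  # ~142
```

So the 4870 carries ~80% more bandwidth than the 4850 yet only gains 5-7% clock-for-clock, and the GTX 280's 512-bit GDDR3 already sits at ~142 GB/s. That's the whole case against bolting GDDR5 onto the same chip.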
 
Yeah, you have to remember that with memory bandwidth, more is better only up to a point. You want sufficient bandwidth that the GPU (or CPU, if you're talking about a system) isn't waiting long on data. However, once the memory can deliver data as fast as the GPU wants it, there's no additional benefit. Having the RAM wait on the GPU is no more efficient than having the GPU wait on the RAM.
 
Poor memory management is also to blame... it feels like the GT200s take hits at higher AA, texture settings, and resolutions that their specs say they shouldn't, while the 48xx cards tend to cruise along.
 
It won't help.

The GT200 needs a revamp of how the whole architecture deals with logic units ("stream processors") in order to get to the next generation.

Plus, the memory controller on the GT200 hasn't been modified to support longer burst lengths yet, which means it won't support even GDDR4, let alone GDDR5. You think NVIDIA really didn't want to use GDDR4 initially for their top end and make the G92 (8800GT/GTS) the new top dog?

The MC redesign has been called for many times, and they're still rehashing the exact same old architecture from G80, just with double-precision units (slow ones at that) and extra cache space per shader to make geometry shaders faster.
 
I think the question is whether it would be cheaper to knock it down to a 256-bit bus using GDDR5 instead of the 512-bit with GDDR3, which I have no clue about.
 
In the long run, GDDR5 + 256-bit will be cheaper than 512-bit with GDDR3.
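For a rough sense of that trade-off, here's an illustrative Python sketch: match the GTX 280's published ~142 GB/s on a 256-bit bus, and count the memory chips each bus takes to route (32-bit-wide devices assumed, which both GDDR3 and GDDR5 use):

```python
# What effective GDDR5 rate would a 256-bit bus need to match the GTX 280's
# 512-bit GDDR3 bandwidth, and how many memory chips does each bus require?
# (32-bit-wide devices assumed; all figures are back-of-envelope.)

TARGET_GBS = 512 / 8 * 2214 / 1000                  # GTX 280: ~141.7 GB/s

needed_mhz = TARGET_GBS / (256 / 8) * 1000          # effective rate for 256-bit
print(f"256-bit bus needs ~{needed_mhz:.0f} MHz effective GDDR5")  # ~4428 MHz

for bus_bits in (512, 256):
    print(f"{bus_bits}-bit bus: {bus_bits // 32} chips to place and route")
```

Half the chips and half the traces is where the board savings come from; the open question is whether ~4.4GHz-effective GDDR5 is available in volume at a sane price.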
 
I think the question is whether it would be cheaper to knock it down to a 256-bit bus using GDDR5 instead of the 512-bit with GDDR3, which I have no clue about.

It doesn't automatically make the chip run cooler, though.

Heat/power use is the main problem here. Moreover, using GDDR5 = no power-saving abilities anymore (the GT200 turns off/super-low-clocks its GDDR3 RAM, which you can't do smoothly with GDDR5 because of link retraining) = idle power would climb back up.


Even a 4850 @ 800MHz/1100MHz would be extremely close to the 4870 in most sub-1920 benches. Going X2 also reduces the bandwidth tax, and you're not limited until ridiculous settings, which is why the 4850X2 could actually be quite impressive in the $400 league.
 
Plus, the memory controller on the GT200 hasn't been modified to support longer burst lengths yet, which means it won't support even GDDR4, let alone GDDR5. You think NVIDIA really didn't want to use GDDR4 initially for their top end and make the G92 (8800GT/GTS) the new top dog?

GDDR4 has no advantage over GDDR3 except a very slight decrease in power consumption. That doesn't warrant its use over GDDR3 in any way, all the more so because it was expensive.
 
I think the question is whether it would be cheaper to knock it down to a 256-bit bus using GDDR5 instead of the 512-bit with GDDR3, which I have no clue about.

Yes, the memory itself is more expensive, but it would reduce the manufacturing cost of the board considerably. The problem here is that the chip will still be too big for a 256-bit bus even after the die shrink; remember there are engineering concerns here as well as cost.
 
A 256-bit bus would lower the die size considerably, hence help yields a lot.
 
GDDR5 will eventually be cheaper than GDDR3. The reason it's expensive now is that supply can't keep up with demand, which means higher prices. When GDDR5 production matures it will be less expensive per memory chip, but I don't think that will happen in time for the 55nm GTX 280.
 
eh, how do you figure that?

Because yield losses from defects grow exponentially with die area.

So a 25% increase in area means a much larger percentage increase in defective dies, hence lower yields.

Put it this way: if the GT200 were 25% smaller, yields AND the number of chips per wafer would both increase, meaning each chip costs a lot less to make.

If a current wafer only yields 25 working GT200 GPUs, but you lowered die size by cutting the memory controller width (512-bit to 256-bit) and, say, shrunk the die 15%, you might yield 40 or so dies. That's a significant change in cost per GPU, which can offset any added GDDR5 cost.
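To put rough numbers on that, here's a sketch using the classic Poisson yield model and a standard gross-die estimate. The GT200's ~576mm² at 65nm is the published figure; the ~420mm² shrunk area and the defect density are assumptions for illustration only:

```python
import math

def poisson_yield(area_mm2: float, d0_per_mm2: float) -> float:
    """Poisson yield model: Y = exp(-D0 * A). Yield falls off
    exponentially as die area grows."""
    return math.exp(-d0_per_mm2 * area_mm2)

def gross_dies(wafer_mm: float, area_mm2: float) -> int:
    """Common gross-dies-per-wafer estimate: wafer area over die area,
    minus a simple edge-loss correction term."""
    return int(math.pi * (wafer_mm / 2) ** 2 / area_mm2
               - math.pi * wafer_mm / math.sqrt(2 * area_mm2))

D0 = 0.003  # assumed defect density: 0.3 defects per cm^2 (a guess)
for area_mm2 in (576, 420):   # published 65nm GT200 vs. assumed shrink
    g = gross_dies(300, area_mm2)          # 300mm wafer
    y = poisson_yield(area_mm2, D0)
    print(f"{area_mm2} mm^2: {g} gross dies, {y:.0%} yield, ~{g * y:.0f} good")
```

With those made-up inputs the good-die count roughly doubles (~17 to ~38 per wafer), which is the shape of the argument even if the real defect density differs.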

Fact of the matter is, ATI gambled on tying their cards to GDDR5 development, and this time their gamble drew a full house, while NVIDIA tried to play it conservatively and lost.
 
I don't think a 256-bit bus will let them decrease the memory controller size that much.
 