How about an 8800 GS???

honestjohn

Another 8800 card is coming, if you can believe this...

http://en.expreview.com/?p=151

So in no particular order we now have the following 8800s. Let me know if I missed any.

8800 Ultra 768 MB
8800 GTX 768 MB
8800 GTS 640 MB
8800 GTS 320 MB
8800 GTS 640 MB (revised 112sp)
8800 GT 512 MB
8800 GT 256 MB
8800 GT 1 GB
8800 GTS 512 MB
8800 GS 384 MB/768 MB
 
Another G92 card? Uh. This is getting old.
How about they stop wasting time and just bring us something faster than the GTX already?
 
Utterly disgusting... So how does this fall into the hierarchy of the existing 8800s? Is it better or worse than the GT?
 
Maybe these are being released to make up for the fact that the 8600s are crippled by their memory interface.
 
Maybe these are being released to make up for the fact that the 8600s are crippled by their memory interface.
I get so tired of hearing that. It's the 32 SP, not the 128-bit bus, that is killing the 8600GT/GTS. All the memory bandwidth in the world won't make a damn bit of difference when you only have 32 SP to begin with.
 
I think they're just trying to compete with the 3850, so ATI won't have the advantage at that price point... and like the article said, it'll probably cost them less money to make these cards, so it should be more profitable for NVIDIA.

I still wish they'd stop milking it though... I mean, we have high-end and great mid-range cards now... give it a rest and make something better already. And if they're gonna focus on more lower-end cards, why not focus on making integrated video that doesn't suck?

I'm seeing Q6600s in stores with 2-4GB of DDR2 RAM, 500GB hard drives and piece-of-shit Intel integrated graphics. The people buying those things think they're getting a "fast" computer when really they won't be able to run any games with it. Put at least 8600-level graphics in ALL motherboards if you're not gonna improve things in the high-end segment. Having a smaller difference between low-end and high-end would at least be useful to most people... but mid-range cards and slightly-above-mid-range then slightly-above-slightly-above-mid-range is just ridiculous.
 
Oh, I'm sure they haven't stopped working on their new top dog. They seem to be pre-empting ATi at every chance atm.
 
I get so tired of hearing that. It's the 32 SP, not the 128-bit bus, that is killing the 8600GT/GTS. All the memory bandwidth in the world won't make a damn bit of difference when you only have 32 SP to begin with.

Um, yeah. The 8600GT/S are crippled by the 128-bit memory bus width (not bandwidth). Having only 32 SP is hampering, but the texture fill rate is still high enough to have good performance, even at higher resolutions. However, once higher resolutions are reached, the 128-bit memory bus width will bottleneck the cards all to hell (I know, I've owned an 8500GT, an 8600GT and an 8600GTS).

My card is used for video editing in HD with H.264, so having a lot of memory helps. In games, I would be lucky to reach a resolution with settings good enough to eat up the 512MB of memory on my card. If the bus width were at least 256-bit (which it should have fucking been in the first place, rawr), these cards would have performed similarly to the 7900 cards (at higher resolutions with DX9).

Oh, as for memory bandwidth: it is important, but only if the bus width is big enough to support it. On the 8600GTS, 32GB/s memory bandwidth is great, but being on only a 128-bit bus kills it. I would rather have a 6800GT with 32GB/s on a 256-bit bus; it would perform better at higher resolutions (that is, in older games; obviously the 6800GT is beaten to a pulp).

I think a 128-bit bus width (from my experiences ranging from 32-bit up to 320-bit) is only good to 1280x1024 or 1440x900. After that, the bus bottlenecks so badly the gameplay isn't even worth it. In Hellgate: London (DX9), my 7900GT will outperform my 8600GTS by quite a bit at 1440x900 and beyond with ease. The texture fill rates are the exact same (32 SP vs 24 PP didn't make any difference in this category) and the shader power is much lower on the 7900GT, yet it still managed to outperform the 8600GTS at higher resolutions. Thus proving my point about memory bus width being the bottleneck.
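
(If you want to sanity-check that fill-rate claim, here's a rough sketch in Python. The clock speeds and texture unit counts are the commonly listed specs for these cards, not numbers from this thread, so treat them as assumptions:)

```python
# Texture fill rate = core clock (MHz) * texture units -> MTexels/s.
# Specs below are the commonly listed ones for each card (assumed,
# not taken from this thread).
cards = {
    "7900GT":  {"core_mhz": 450, "texture_units": 24},
    "8600GTS": {"core_mhz": 675, "texture_units": 16},
}

for name, c in cards.items():
    fill_mtexels = c["core_mhz"] * c["texture_units"]
    print(f"{name}: {fill_mtexels / 1000:.1f} GTexels/s")  # both: 10.8 GTexels/s
```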

...and so, I leave you with this message:

[image: pwned111za6.jpg]
 
Um, yeah. The 8600GT/S are crippled by the 128-bit memory bus width (not bandwidth). Having only 32 SP is hampering, but the texture fill rate is still high enough to have good performance, even at higher resolutions. However, once higher resolutions are reached, the 128-bit memory bus width will bottleneck the cards all to hell (I know, I've owned an 8500GT, an 8600GT and an 8600GTS).

My card is used for video editing in HD with H.264, so having a lot of memory helps. In games, I would be lucky to reach a resolution with settings good enough to eat up the 512MB of memory on my card. If the bus width were at least 256-bit (which it should have fucking been in the first place, rawr), these cards would have performed similarly to the 7900 cards (at higher resolutions with DX9).
You can't say it's limited by bus width but not bandwidth... that makes no sense. The whole point of having a larger bus width is to get more bandwidth. For example, a 320-bit bus with 1600MHz memory has the same memory bandwidth as a 256-bit bus with 2000MHz.

I agree with you that the 8600GT would definitely keep more of its performance at higher resolutions with more memory bandwidth. The problem is that the 8600GT, with only 32 SP, doesn't really have the horsepower to run higher settings anyway. It's a 1280x1024-and-below card no matter how much memory bandwidth you give it.
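
(A quick sanity check of that 320-bit/256-bit equivalence in Python, for anyone following along; the formula is just bus width times effective memory clock:)

```python
# Bandwidth = bytes per transfer * transfers per second.
def bandwidth_gb_s(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz / 1000  # bytes * MT/s -> GB/s

print(bandwidth_gb_s(320, 1600))  # 64.0 GB/s
print(bandwidth_gb_s(256, 2000))  # 64.0 GB/s -- same bandwidth, narrower bus
```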

The picture you just added shows that you are too immature to even discuss this like an adult.
 
You can't say it's limited by bus width but not bandwidth... that makes no sense. The whole point of having a larger bus width is to get more bandwidth. For example, a 320-bit bus with 1600MHz memory has the same memory bandwidth as a 256-bit bus with 2000MHz.


The picture you just added shows that you are too immature to even discuss this like an adult.

EDIT: Neva mind!

Oh, and if you were an adult, you would have laughed at the pic (it was meant as a joke).

Take some deep breaths, and every once in a great while, try to LOL.

[image: aqua-teen-hunger-force-20070413005543414.jpg]


"Don't be sucha beetcha. You come up to room, nude be me!"
- Travis of the Universe
 
Um, yeah. The 8600GT/S are crippled by the 128-bit memory bus width (not bandwidth). Having only 32 SP is hampering, but the texture fill rate is still high enough to have good performance, even at higher resolutions. However, once higher resolutions are reached, the 128-bit memory bus width will bottleneck the cards all to hell (I know, I've owned an 8500GT, an 8600GT and an 8600GTS).

My card is used for video editing in HD with H.264, so having a lot of memory helps. In games, I would be lucky to reach a resolution with settings good enough to eat up the 512MB of memory on my card. If the bus width were at least 256-bit (which it should have fucking been in the first place, rawr), these cards would have performed similarly to the 7900 cards (at higher resolutions with DX9).

Oh, as for memory bandwidth: it is important, but only if the bus width is big enough to support it. On the 8600GTS, 32GB/s memory bandwidth is great, but being on only a 128-bit bus kills it. I would rather have a 6800GT with 32GB/s on a 256-bit bus; it would perform better at higher resolutions (that is, in older games; obviously the 6800GT is beaten to a pulp).

I think a 128-bit bus width (from my experiences ranging from 32-bit up to 320-bit) is only good to 1280x1024 or 1440x900. After that, the bus bottlenecks so badly the gameplay isn't even worth it. In Hellgate: London (DX9), my 7900GT will outperform my 8600GTS by quite a bit at 1440x900 and beyond with ease. The texture fill rates are the exact same (32 SP vs 24 PP didn't make any difference in this category) and the shader power is much lower on the 7900GT, yet it still managed to outperform the 8600GTS at higher resolutions. Thus proving my point about memory bus width being the bottleneck.

...and so, I leave you with this message:

[snip]

But the 320-bit bus would be capable of sustaining higher resolutions with less bottleneck, even though the bandwidth is the same. Yeah, bandwidth is important, but memory bus width is equally important, if not more so. If a card had a 64-bit bus width with 200GB/s memory bandwidth, one couldn't game past 1024x768 max. The performance at that res would be awesome, but going beyond it would bottleneck the card so badly the game would be unplayable.

Oh, and if you were an adult, you would have laughed at the pic (it was meant as a joke).

Take some deep breaths, and every once in a great while, try to LOL.

[snip]

:rolleyes:

I'm sorry, you're wrong. Every single item you've stated as fact, I have grounds to attack. You don't seem to understand the correlation between memory bandwidth (aka throughput) and memory bus width. You also don't seem to understand the correlation between texture mapping horsepower and maximum playable resolution. I'll leave it at that.
 
But the 320-bit bus would be capable of sustaining higher resolutions with less bottleneck, even though the bandwidth is the same. Yeah, bandwidth is important, but memory bus width is equally important, if not more so. If I had a 64-bit bus width with 200GB/s memory bandwidth, I couldn't game past 1024x768 max. The performance at that res would be awesome, but going beyond it would bottleneck the card so badly the game would be unplayable.
If you're serious, stop and think about this for a minute. To get the bandwidth number, you multiply the bus width by the frequency. A 200GB/s bandwidth at 64 bits would need to run at 2560MHz (5120 DDR). This would outperform any 384 or 512-bit part today by a wide margin.

Another example is dual-channel system memory. Which do you think provides more bandwidth: single channel (64-bit) at 533MHz (1066 DDR) or dual channel (128-bit) at 200MHz (400 DDR)?
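
(Here's that dual-channel comparison worked out in Python, as a minimal sketch using the DDR rates from the post:)

```python
# Bandwidth = bytes per transfer * effective transfer rate.
def bandwidth_gb_s(bus_bits, effective_mt_s):
    return bus_bits / 8 * effective_mt_s / 1000  # bytes * MT/s -> GB/s

print(bandwidth_gb_s(64, 1066))   # single channel @ 1066 DDR: ~8.5 GB/s
print(bandwidth_gb_s(128, 400))   # dual channel @ 400 DDR: 6.4 GB/s
```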
 
If you're serious, stop and think about this for a minute. To get the bandwidth number, you multiply the bus width by the frequency. A 200GB/s bandwidth at 64 bits would need to run at 2560MHz (5120 DDR). This would outperform any 384 or 512-bit part today by a wide margin.

Another example is dual-channel system memory. Which do you think provides more bandwidth: single channel (64-bit) at 533MHz (1066 DDR) or dual channel (128-bit) at 200MHz (400 DDR)?

The numbers are a bit off, but it's on the right track.

Bus width (in bits) is bits/clock.
Frequency is clocks/sec (remember, frequency here is given in MEGAhertz, so 1MHz is 1,000,000 clocks/sec; if the speed is 200MHz, we use 200,000,000. ALSO, we use the higher DDR speed, not the lower SDR speed, because we want the effective throughput).
If we multiply the two, we get bits/sec.
We know there are 8 bits in a byte.
If we divide by 8, we get bytes/sec.
There are 1000 bytes in a kilobyte.
There are 1000 kilobytes in a megabyte.
There are 1000 megabytes in a gigabyte.
So we divide by 1000 thrice (best word ever) to give us GB/s.

Let's try, shall we? 8600GTS:

128 bits/clock
2000MHz, or 2,000,000,000 clocks/sec
Multiply the two: 256,000,000,000 bits/sec
8 bits/byte
Divide by 8: 32,000,000,000 bytes/sec
1000 bytes/kilobyte
1000 kilobytes/megabyte
1000 megabytes/gigabyte
Divide by 1000 three times...

We get 32GB/s! Check any source you like, that's accurate!

So, yes, if we had a smaller bus we would need to increase the frequency dramatically in order to maintain the same amount of memory throughput.

edit: Now I have confused myself a little. It might be 1000 because it's serial... but shouldn't it still be 1024 instead of 1000? I dunno... if you do the 1024 deal you get 29.8GB/s, which is only slightly less anyway.

Simple calculations, folks! The only complex thing there is the 8 bits/byte!
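
(The whole walkthrough fits in a few lines of Python; this is just the steps above repackaged into a function, including the 1024-vs-1000 question from the edit:)

```python
def mem_bandwidth(bus_bits, effective_mhz, binary_prefix=False):
    bits_per_sec = bus_bits * effective_mhz * 1_000_000  # bits/clock * clocks/sec
    bytes_per_sec = bits_per_sec / 8                     # 8 bits in a byte
    divisor = 1024**3 if binary_prefix else 1000**3      # GiB vs GB
    return bytes_per_sec / divisor

# 8600GTS: 128-bit bus, 2000MHz effective (DDR) memory clock
print(mem_bandwidth(128, 2000))        # 32.0 GB/s, as computed above
print(mem_bandwidth(128, 2000, True))  # ~29.8 GiB/s -- the "1024 deal"
```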

Also, texture mapping has a very complicated relationship with resolution; it isn't as simple as you make it out to be.

ANYWAYS, let's wheel this on over to the OP's post...

Looks like a decent card. Finally we might see something to bridge that massive gap.
 
Whoa!

Ok, I understand that now. Thanx everybody! :D


cannondale06, you could still get a sense of humor, though.
 
Ah, I missed a zero, so litigate me :p

All I have to say about the original topic is: another G92 derivative? I thought supply was tight on those chips. The only card really worth caring about is the first one that launched, and it's still priced pretty high even though demand has died down a bit. Me no get it.
 
Seems like these cards are meant to compete with ATI's 3850, assuming the MSRP is correct. I've always considered the ATI 3850 the best card for price/performance/watt. I hope I can change my tune when real benchies of these cards are revealed.

I expect G92 value cards to be released later in the year. Nvidia will make more GPUs per wafer with this core than with the G80 and hence make more money. There's a reason they're Forbes magazine's company of the year.

I remember back in the day when we had only three cards to choose from: Large, Medium and Small. Now it's like we have Xtra Large, Large, Large-Medium, Xtra Medium, Medium, Medium-Small, Xtra Small and Small... add fries and a coke.
 
My GF isn't a huge gamer but does like to play some games every now and again. She has a 7900GS right now and I may sidegrade her into one of these. I imagine it won't be a huge gaming card, but it will fit her needs better in the long term vs. her current card. For ~$200 the 768MB card could be a better buy than the 256MB 8800GT.
 
I agree. It seems logical that the 768MB version would definitely outperform the 256MB 8800GT, at least at higher resolutions or settings. It would depend on whether you could overclock it to the point where it can actually use that extra memory a little better. I wonder how well they will OC... Looks like this may be my next purchase, especially if they can stabilize the price at around $180 - $190 for the 768MB versions. Maybe I'll just save myself the cash and go with the 384MB version, since I won't be gaming above 1680x1050 anyways and I'm not a huge graphics whore.
 
Oddly enough, I had been pondering for the past few months whether a 192-bit GPU would see the light of day -- but I thought if I mentioned it I'd be laughed off the web. The reason is that there could be one possible use for a 192-bit interface -- low profile cards.

The continuing lack of 256-bit low-profile cards led a couple of us in the low profile thread to speculate that it may be too hard to squeeze a 256-bit bus onto the PCB. But 192-bit may be do-able. If so, a LP 8800GS with SPDIF-in + DVI/HDMI could be an attractive proposition for a slimline HTPC.
 