G80 Specs!

That's what SLI is for, lol. Jesus, it always costs more each refresh. I'll settle for one with 1920x1200 on my 24" Acer LCD.
 
spaceman said:
That's what SLI is for, lol. Jesus, it always costs more each refresh. I'll settle for one with 1920x1200 on my 24" Acer LCD.

I don't want to have to run SLI just so I can run things at a bare minimum!

No man....no.
I want the single card to handle 2560x1600 with at least some of the eye candy at decent frame rates.
THEN and only then would I consider SLI for some extra punch. That's what I'm hoping for from the next-gen stuff from Nvidia.
 
StalkerZER0 said:
I don't want to have to run SLI just so I can run things at a bare minimum!

No man....no.
I want the single card to handle 2560x1600 with at least some of the eye candy at decent frame rates.
THEN and only then would I consider SLI for some extra punch. That's what I'm hoping for from the next-gen stuff from Nvidia.

That's a pretty tall order.
 
Third, ATi seems to be either losing money or profiting very poorly on their GPUs. Nvidia always seems to come out with a stopgap card that forces ATi to lower the price on one of their mid- or high-end cards to compete, thus making them either lose money or earn half the profit they would have liked. It's not always the case, but it's what I've noticed lately.

ATI and Nvidia seem to have reversed focuses. ATI focuses on engineering first, profiting second, while Nvidia focuses on profiting first, engineering second. Not to knock Nvidia's engineering, though. For a chip with a good 80 million fewer transistors, it sure trades blows well with the X1950/1900XTX. Undoubtedly because of this, Nvidia must make much higher profit margins on their midrange/high-end cards, but we all know that the real breadwinners for each of these companies lie in the OEM market. This is one of the reasons 3dfx went under - not only did Nvidia have superior engineering, but they had a much stronger hold on the OEM market way back then.

I wish I'd had extra money back then to invest in Nvidia stock. Ah well, I was just a kid...still am. :)
 
Well, I know that; I'm just interested in hearing some rough estimates based on the so-called "leaked" specs.
 
StalkerZER0 said:
I don't want to have to run SLI just so I can run things at a bare minimum!

No man....no.
I want the single card to handle 2560x1600 with at least some of the eye candy at decent frame rates.
THEN and only then would I consider SLI for some extra punch. That's what I'm hoping for from the next-gen stuff from Nvidia.


I doubt that you'll ever have to have SLI just to run things at the bare minimum; that makes no sense at all.

Secondly, that IS a tall order, and I think you're going to be disappointed if you really think that those cards will be able to handle that res no problem.

Granted, the 7950GX2 can handle 2560x1600 without much hassle, but that is, after all, a single-slot SLI of sorts...

But we can dream, can't we? :D
 
I expect the G80 to be as powerful as, if not more powerful than, the GX2. I mean, Nvidia considers the GX2 a single card. Hell, at the PDXlan I just got back from, the representative who did a presentation kept calling the GX2 a "single GPU" card and saying that the GX2 is the "fastest single GPU card on the market." I know we all consider it 2 GPUs, because, well, it is. But if Nvidia is saying it's a single GPU and considers it so, then the G80 will probably be faster, because "single cards" are always faster than the last gen, right? Hope I make some sense. It's late...
 
Ranari said:
ATI and Nvidia seem to have reversed focuses. ATI focuses on engineering first, profiting second, while Nvidia focuses on profiting first, engineering second. Not to knock Nvidia's engineering, though. For a chip with a good 80 million fewer transistors, it sure trades blows well with the X1950/1900XTX. Undoubtedly because of this, Nvidia must make much higher profit margins on their midrange/high-end cards, but we all know that the real breadwinners for each of these companies lie in the OEM market. This is one of the reasons 3dfx went under - not only did Nvidia have superior engineering, but they had a much stronger hold on the OEM market way back then.

I wish I'd had extra money back then to invest in Nvidia stock. Ah well, I was just a kid...still am. :)

I've seen this type of thing written in other threads, and it still amazes me that you actually believe it...
So NVIDIA is in it for the money first and ATI isn't?
They are BOTH in it for the money. They don't have any other reason to be in this market.
 
ITSTHINKING said:
I expect the G80 to be as powerful as, if not more powerful than, the GX2. I mean, Nvidia considers the GX2 a single card. Hell, at the PDXlan I just got back from, the representative who did a presentation kept calling the GX2 a "single GPU" card and saying that the GX2 is the "fastest single GPU card on the market." I know we all consider it 2 GPUs, because, well, it is. But if Nvidia is saying it's a single GPU and considers it so, then the G80 will probably be faster, because "single cards" are always faster than the last gen, right? Hope I make some sense. It's late...

Although there's nothing to back that up (concerning the real specs of the G80), that is also my belief.
I think the G80 will be as fast as or faster than a GX2. The only thing really confirmed by an NVIDIA representative was the half a billion transistors of the G80, which is saying something: GX2: 2 x 279 million transistor GPUs; G80: ~500 million transistors.
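Put side by side, the numbers make the point (bearing in mind the ~500 million figure is the only one even loosely attributed to NVIDIA; this is just a sketch of the comparison):

```python
# Comparing the rumored G80 transistor budget with the GX2's two GPUs.
# The ~500M figure is the rumor; 279M per GPU is the figure quoted above.
gx2_total = 2 * 279_000_000   # two GPUs on the 7950GX2
g80_rumored = 500_000_000     # "half a billion" per the representative

print(f"GX2 combined: {gx2_total:,} transistors")    # 558,000,000
print(f"G80 rumored:  {g80_rumored:,} transistors")  # 500,000,000
print(f"Ratio: {g80_rumored / gx2_total:.2f}")       # ~0.90
```

So if the G80 really does match or beat the GX2, it would be doing it with roughly 10% fewer transistors in total.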
 
Silus said:
Although there's nothing to back that up (concerning the real specs of the G80), that is also my belief.
I think the G80 will be as fast as or faster than a GX2. The only thing really confirmed by an NVIDIA representative was the half a billion transistors of the G80, which is saying something: GX2: 2 x 279 million transistor GPUs; G80: ~500 million transistors.


It should stomp the GX2.

It should easily double (at least) the performance of the last highest-end card, the 7900GTX. That's common for a new card (not a fall refresh).

Since the GX2 is two underclocked GTXs slapped together, it's less than 2x a 7900GTX, so it's less than what the G80 will almost certainly be. Then throw in that G80 clocks are almost surely going to be 700-800MHz on the core (along with a huge number of pipelines, at LEAST 48).

Throw in SLI overhead holding back the GX2 further, and the G80 is just going to rip it apart. I would guess as much as 250%-350% as fast, or more.
 
Sharky974 said:
It should stomp the GX2.

It should easily double (at least) the performance of the last highest-end card, the 7900GTX. That's common for a new card (not a fall refresh).

Since the GX2 is two underclocked GTXs slapped together, it's less than 2x a 7900GTX, so it's less than what the G80 will almost certainly be. Then throw in that G80 clocks are almost surely going to be 700-800MHz on the core (along with a huge number of pipelines, at LEAST 48).

Throw in SLI overhead holding back the GX2 further, and the G80 is just going to rip it apart. I would guess as much as 250%-350% as fast, or more.

I'm sorry, but I don't agree. 250%-350%? That's wishful thinking only, IMHO. I'm sure the G80 will be as fast as or faster than two current GPUs, but that margin seems too big.
However, when it's out (and if you are right), you can quote me and say I was wrong, because I will be impressed if that margin turns out to be true.
 
I'm pretty sure the G80, a single GPU with ~500 million transistors, will be faster than the GX2 with 2 x 279 million transistors. The G80 has about twice the transistors of a single GTX, and the GX2's SLI is never double the speed, so I'd be pretty confident the G80 will be a good bit faster than the GX2. It has more memory on a 384-bit memory interface and a way higher core clock speed (scaled up to 1.5GHz!), along with GDDR4 at 2+GHz. The G80 is going to PWN the GX2!!! I'm glad I skipped out on the GX2, as my single GTX has pwned everything so far at max settings, 1680x1050, 4xAA/16xAF.
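For a rough sense of what that rumored memory setup implies, here's a back-of-the-envelope bandwidth calculation (the 384-bit bus and 2GHz effective GDDR4 are both unconfirmed rumors, so treat the inputs as assumptions):

```python
# Theoretical peak memory bandwidth from the *rumored* G80 specs.
# Both inputs are unconfirmed rumors, not official figures.
bus_width_bits = 384           # rumored memory interface width
effective_clock_hz = 2.0e9     # rumored GDDR4 effective data rate (2 GHz)

bytes_per_transfer = bus_width_bits / 8                        # 48 bytes per clock
bandwidth_gb_s = bytes_per_transfer * effective_clock_hz / 1e9

print(f"Theoretical peak bandwidth: {bandwidth_gb_s:.0f} GB/s")  # ~96 GB/s
```

For comparison, a single 7900GTX's 256-bit bus at 1.6GHz effective works out to 51.2 GB/s, so the rumored setup would be close to double the bandwidth of one GTX.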
 
Nirad9er said:
I'm pretty sure the G80, a single GPU with ~500 million transistors, will be faster than the GX2 with 2 x 279 million transistors. The G80 has about twice the transistors of a single GTX, and the GX2's SLI is never double the speed, so I'd be pretty confident the G80 will be a good bit faster than the GX2. It has more memory on a 384-bit memory interface and a way higher core clock speed (scaled up to 1.5GHz!), along with GDDR4 at 2+GHz. The G80 is going to PWN the GX2!!! I'm glad I skipped out on the GX2, as my single GTX has pwned everything so far at max settings, 1680x1050, 4xAA/16xAF.

Well, at this point we can't really say it "has" a 384-bit memory interface, because that's nothing but a rumor.
November is not very far away, though, and we'll probably see some final specs in mid-October.
 
Well, I love to speculate as much as the next guy.

So you'd think that because it's a new-gen card, it'd have to be at least as fast as the fastest last-gen card (the GX2). Now, the GX2 is fast. The 7900GT was as fast as, if not a touch faster than, the 7800GTX 256. The 7900GTX was faster than the 7800GTX 512.

I believe the G80 GT card will be as fast as or faster than a 7900GTX, and the G80 GTX card will be as fast as or faster than the GX2.

Something like that.
 
winston856 said:
I doubt that you'll ever have to have SLI just to run things at the bare minimum; that makes no sense at all.

Secondly, that IS a tall order, and I think you're going to be disappointed if you really think that those cards will be able to handle that res no problem.

Granted, the 7950GX2 can handle 2560x1600 without much hassle, but that is, after all, a single-slot SLI of sorts...

But we can dream, can't we? :D

Well, if a single 7950GX2 can handle 2560x1600 without having a heart attack even at minimum fps, then certainly a GX2 version of the high-end G80 card can do better. Then I would feel good buying a second card to run SLI, because then I'd get even better performance.
I don't just want to be able to play Crysis at 2560x1600...I want to ENJOY playing it at 2560x1600! :cool:
 
I think a better way of figuring out how much of a performance increase we're going to see out of the G80 is to look back at the difference we saw between the 7800GTX 256 and the 6800 Ultra, because the 7800 was a true next generation after the 6800 and not just a refresh. I have no idea what kind of performance increase we saw between the two, so if somebody could chime in about that, that would be great.
 
moto316 said:
I think a better way of figuring out how much of a performance increase we're going to see out of the G80 is to look back at the difference we saw between the 7800GTX 256 and the 6800 Ultra, because the 7800 was a true next generation after the 6800 and not just a refresh. I have no idea what kind of performance increase we saw between the two, so if somebody could chime in about that, that would be great.

I'm not sure that would be a totally accurate comparison...but then again, total accuracy isn't necessary. Keep in mind, though, that the G80 represents Nvidia's first DX10-capable card with unified shaders, so I'm not sure how that's going to factor into any estimates of its power over the 7950GX2.
 
Actually, it's a perfectly valid comparison, while the 7800 to 7900 series comparison is perfectly invalid. A new architecture has historically represented a much larger performance jump than a simple refresh. For instance, the 7800 GTX was faster than 2 6800 Ultras in SLI. The 6800 Ultra was more than twice as powerful as an FX 5950.

Similar stats for the ATI side. The X1800 XT stomps on the X850 XT PE, and the X800 XT stomps on the 9800 XT. The 9700 Pro kicked the snot out of the 8500, which beat the Radeon DDR to a pulp.

There are a few counterpoints to this "doubling of performance" idea going back a bit farther on the NVidia side, though. The FX 5800 Ultra only showed modest performance increases over the GeForce 4 Ti4600, and paled in comparison to the 9700 Pro. The GeForce 4 Ti4600 was only about 25-35% faster than the GeForce 3 Ti500. The GeForce 3 was only marginally faster than the GeForce 2 Ultra (although it supported many more features). However, aside from those anomalies, the picture is clear. The GeForce 2 GTS was a huge jump over the GeForce 256 DDR, and the GeForce 256 SDR's jump over the Riva TNT2 was perhaps the biggest generational leap ever seen in the history of video cards.

Given all that evidence, I'd predict that the 8800 GTX will indeed be faster than either the 7950 GX2 or 2 7900 GTX's in SLI, and by a significant, but not overpowering margin.
 
arthur_tuxedo said:
Actually, it's a perfectly valid comparison, while the 7800 to 7900 series comparison is perfectly invalid. A new architecture has historically represented a much larger performance jump than a simple refresh. For instance, the 7800 GTX was faster than 2 6800 Ultras in SLI. The 6800 Ultra was more than twice as powerful as an FX 5950.

Similar stats for the ATI side. The X1800 XT stomps on the X850 XT PE, and the X800 XT stomps on the 9800 XT. The 9700 Pro kicked the snot out of the 8500, which beat the Radeon DDR to a pulp.

There are a few counterpoints to this "doubling of performance" idea going back a bit farther on the NVidia side, though. The FX 5800 Ultra only showed modest performance increases over the GeForce 4 Ti4600, and paled in comparison to the 9700 Pro. The GeForce 4 Ti4600 was only about 25-35% faster than the GeForce 3 Ti500. The GeForce 3 was only marginally faster than the GeForce 2 Ultra (although it supported many more features). However, aside from those anomalies, the picture is clear. The GeForce 2 GTS was a huge jump over the GeForce 256 DDR, and the GeForce 256 SDR's jump over the Riva TNT2 was perhaps the biggest generational leap ever seen in the history of video cards.

Given all that evidence, I'd predict that the 8800 GTX will indeed be faster than either the 7950 GX2 or 2 7900 GTX's in SLI, and by a significant, but not overpowering margin.

Stupendous post, my friend. I also hope it's a winner over 7900GTX SLI.
 
They're going to come out with other cards, right? I mean, I always go one or two steps below the top of the line (around $300 or so), but that 8800GT is really, really expensive. Anyone hear of any other cards from this lineup yet?
 
What I can't understand from those specs is why they'd stop at 768MB of memory on the card. We've seen that 1GB is possible via SLI/Crossfire, so why don't ATi AND nVIDIA make the obvious jump and put 1GB of memory on their DX10 products right off the bat? To me, 768MB just feels like an odd, strange number to jump to after 512. I mean, with memory it went 128, 256, 512, and then 1GB. Just my nickel. Out!
 
768 is actually a good amount. We're coming "over the hill" in terms of texture data being used in current-gen games. We have color, normal, and diffuse maps, and we're simply at the point of increasing their resolutions. With DXT5 and 3Dc compression, we're talking about really maximizing available memory for texture storage (there's no way the G80 won't natively support 3Dc, rather than using the G70's "wrapper"). Any way you slice it, 768MB ends up being slightly excessive, but good in terms of future-proofing. I doubt we'll be seeing 1GB until we have some sort of part with separate PPUs/GPUs, as it would be generally unneeded. I'd peg UT2007 as maxing out, typically, at 400MB of texture data. That leaves adequate room on a 512MB card and more than enough on a 768MB card. As mentioned earlier, 768MB of memory is "perfect" for a 384-bit bus. Makes things real easy-like (see the sketch below).
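To spell out the "easy-like" part: a 384-bit bus divides evenly into twelve 32-bit channels, so with one memory chip per channel the capacity naturally comes in multiples of twelve chips. A quick sketch (the 64MB-per-chip density is my assumption, not a known spec):

```python
# Why 768MB pairs naturally with a 384-bit bus.
# Assumes one 32-bit chip per channel and 512Mbit (64MB) parts -- a guess.
bus_width_bits = 384
chip_width_bits = 32
chip_capacity_mb = 64                             # 512 Mbit GDDR chip

num_chips = bus_width_bits // chip_width_bits     # 12 chips
total_mb = num_chips * chip_capacity_mb           # 12 * 64 = 768 MB

print(f"{num_chips} chips x {chip_capacity_mb} MB = {total_mb} MB")
```

Getting to a round 1GB on the same bus would mean mixed chip densities or a wider bus, which is presumably why 768MB is the step after 512MB here.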

There's still some issue with moving this much data, however. Engines are going to need to take different directions to allow unobtrusive streaming of textures, or else we're going to be dealing with some nasty stutter. With upwards of 500MB of texture data, trying to dynamically load and purge data is going to be a very nasty process.
 
As others have noted above, the "odd" increases in bus bandwidth and memory size seem to make a lot more sense if they represent a separate 128-bit channel to a separate 256MB buffer for specialized uses. So much sense, in fact, that I can't help but think this is the real explanation. Although, of course, the idea of three banks of texture memory with a 128-bit channel to each also works.
 
RareAir23 said:
What I can't understand from those specs is why they'd stop at 768MB of memory on the card. We've seen that 1GB is possible via SLI/Crossfire, so why don't ATi AND nVIDIA make the obvious jump and put 1GB of memory on their DX10 products right off the bat? To me, 768MB just feels like an odd, strange number to jump to after 512. I mean, with memory it went 128, 256, 512, and then 1GB. Just my nickel. Out!

Right now it's not necessary to have a 512-bit connection for 1GB of RAM. But the RAM densities probably make it very expensive to have 1GB, and would probably push the price of the card up VERY high. As for a 512-bit connection, we may never see one unless RAM goes serial; the core has to be a certain size to allow for all the connections to the RAM. So a compromise between the two would be 384-bit with 768MB of RAM, even if it is a little weird.
 
scrawnypaleguy said:
They're going to come out with other cards, right? I mean, I always go one or two steps below the top of the line (around $300 or so), but that 8800GT is really, really expensive. Anyone hear of any other cards from this lineup yet?

Not cards, but more like chips. I mean, I think their budget and mid-range cards based on derivatives of the G80 are already in the works. I think one of the chips based on the G80 will be called the G85, but I'm not sure about the other one. But they will have budget and mid-range cards available...eventually.
But the flagship cards will debut first, I believe. LOL, the thing of it is, I'm not even interested in the 8800GTX.
I'm more interested in an 8800GX2 card. :p Which I'm sure will break my wallet in half. :(

Who cares though?! I want supreme power, dammit!
:D
 
StalkerZER0 said:
Not cards, but more like chips. I mean, I think their budget and mid-range cards based on derivatives of the G80 are already in the works. I think one of the chips based on the G80 will be called the G85, but I'm not sure about the other one. But they will have budget and mid-range cards available...eventually.
But the flagship cards will debut first, I believe. LOL, the thing of it is, I'm not even interested in the 8800GTX.
I'm more interested in an 8800GX2 card. :p Which I'm sure will break my wallet in half. :(

Who cares though?! I want supreme power, dammit!
:D

:)

I, on the other hand, don't intend to buy a G80 or R600 anytime soon, since I did a complete system overhaul about a year ago. But I sure am curious about these cards. If the rumored specs are any indication of the truth, they will be beasts at rendering the games we play.

I also share your enthusiasm regarding a possible 8800 GX2. IMHO, the current GX2 is the crowning achievement of the G70 architecture. Kudos to NVIDIA for it.
 
Silus said:
:)

I, on the other hand, don't intend to buy a G80 or R600 anytime soon, since I did a complete system overhaul about a year ago. But I sure am curious about these cards. If the rumored specs are any indication of the truth, they will be beasts at rendering the games we play.

I also share your enthusiasm regarding a possible 8800 GX2. IMHO, the current GX2 is the crowning achievement of the G70 architecture. Kudos to NVIDIA for it.

Yup. :)
But the GX2 isn't a pure dual core, though. I was hoping the G80 chip would be based on 2 cores, but it doesn't look like it. That woulda been great. I wonder what an 8800GX2 would be able to achieve. And imagine two of those cards in SLI! :D
 
StalkerZER0 said:
Yup. :)
But the GX2 isn't a pure dual core, though. I was hoping the G80 chip would be based on 2 cores, but it doesn't look like it. That woulda been great. I wonder what an 8800GX2 would be able to achieve. And imagine two of those cards in SLI! :D
Most GPUs that people on here would consider purchasing are multi-core already.
 
StalkerZER0 said:
Yup. :)
But the GX2 isn't a pure dual core, though. I was hoping the G80 chip would be based on 2 cores, but it doesn't look like it.
There is absolutely no real reason to take a "multi-core" approach to designing GPUs. GPUs have, for a long time, been built around a multi-tier, sequential-function parallel processing configuration (pipelines are still typically built in quad configurations). With a 1600x1200 display, we're talking about determining color values for 1.92 million pixels - to process and write them one at a time 60+ times per second would be, well, insane.
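For scale, here's the raw per-second pixel count behind that "insane" remark (final color writes only; overdraw, AA samples, and shader work would push the real number far higher):

```python
# Rough pixel throughput needed at 1600x1200, 60 fps.
# Counts only final color writes -- a lower bound on the real workload.
width, height, fps = 1600, 1200, 60

pixels_per_frame = width * height             # 1,920,000 pixels
pixels_per_second = pixels_per_frame * fps    # 115,200,000 pixels/s

print(f"{pixels_per_frame:,} pixels/frame -> {pixels_per_second:,} pixels/s")
```

Over a hundred million pixel writes per second is exactly the kind of embarrassingly parallel workload that wide pipelines handle and a serial processor can't.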

It's not really correct to think of GPUs as "multi-core". You're right about the GX2 - it is not dual core, but rather dual GPU. The GX2 has two completely separate GPU packages, just as servers typically have four or eight (or more) separate CPU packages operating in a (ideally) parallel configuration.
 
phide said:
It's not really correct to think of GPUs as "multi-core". You're right about the GX2 - it is not dual core, but rather dual GPU. The GX2 has two completely separate GPU packages, just as servers typically have four or eight (or more) separate CPU packages operating in a (ideally) parallel configuration.

Yeah, working on all those pixels is a highly parallel process. Having multiple GPU cores is good; however, due to the overhead of syncing two cores, increasing the number of pixel pipes and shaders on a single GPU has a greater positive effect.

I'm not an expert though.
 
Brent_Justice said:
The whole thing could be wrong.

It is all rumor at this point.

I did not know there was another thread out, as I can't read all the threads that come through.
 