spaceman said:that's what SLI is for lol. Jesus, it always costs more each refresh. I'll settle for one with 1920x1200 on my 24" Acer LCD.
StalkerZER0 said:I don't want to have to run SLI just so I could run things at a bare minimum!
No man....no.
I want the single card to handle 2560x1600 with at least some of the eye candy at decent frame rates.
THEN and only then would I consider SLI for some extra punch. That's what I'm hoping for from the next gen stuff from Nvidia.
MrFace said:That's a pretty tall order.
ATi seems to be either losing money or profiting very poorly on their GPUs. Nvidia always seems to come out with a stopgap card that forces ATi to lower the price on one of its mid or high end cards to compete, thus making them either lose money or profit half of what they would have liked. It's not always the case, but it's what I have noticed lately.
Darkatom said:So what kind of minimum power supply are we looking at for G80 SLI ?
Ranari said:ATI and Nvidia seem to have reversed focuses. ATI focuses on engineering first, profiting second, while Nvidia focuses on profiting first, engineering second. Not to knock Nvidia's engineering, though. For a chip that's a good 80 million transistors smaller, it sure trades blows well with the X1950/X1900XTX. Undoubtedly because of this, Nvidia must make much higher profit margins on their midrange/highend cards, but we all know that the real bread winners for each of these companies lie in the OEM market. This is one of the reasons that put 3dfx under - not only did Nvidia have superior engineering, but they had a much stronger hold on the OEM market way back then.
I wish I had extra money back then to invest in Nvidia stock. Ah well, I was just a kid...still am.
ITSTHINKING said:I expect the G80 to be as powerful, if not more so, than the GX2. I mean, Nvidia considers the GX2 a single card; hell, at the PDXLAN I just got back from, the representative who did a presentation kept calling the GX2 a "single GPU" card and was saying that the GX2 is the "fastest single GPU card on the market." I know we all consider it 2 GPUs because, well, it is. But if Nvidia is saying it's a single GPU and considers it so, then the G80 will probably be faster, because "single cards" are always faster than the last gen, right? Hope I make some sense. It's late...
Silus said:Although there's nothing to back that up (concerning the real specs of the G80), that is also my belief.
I think the G80 will be as fast as or faster than a GX2. The only thing really confirmed by an NVIDIA representative was the half a billion transistors of the G80, which is saying something: GX2: 2 x 279 million transistor GPUs vs. G80: ~500 million transistors.
Sharky974 said:It should stomp the GX2.
It should easily double (at least) the performance of the last highest-end card, the 7900GTX. That's common for a new card (not a fall refresh).
Since the GX2 is two underclocked GTXs slapped together, it's less than 2x a 7900GTX, so less than the G80 will almost certainly be. Then throw in that G80 clocks are going to be 700-800MHz on the core almost surely (along with a huge number of pipelines, at LEAST 48).
Throw in SLI overhead holding back the GX2 further, and G80 is just going to rip it apart. I would guess as much as 250%-350% as fast, or more.
Nirad9er said:I'm pretty sure the G80 single GPU with ~500 million transistors will be faster than the GX2 with 2 x 279 million transistors. The G80 has about twice the transistors, but the GX2 is SLI, which is never double the speed, so I'd be pretty confident the G80 will be a good bit faster than the GX2. It has more memory with a 384 bit memory interface and way higher core clockspeed (scaled up to 1.5GHz!) along with GDDR4 at 2+GHz. G80 is going to PWN the GX2!!! I'm glad I skipped out on the GX2, as my single GTX has pwned everything right now at max settings 1680x1050 4xAA/16xAF.
winston856 said:I doubt that you'll ever have to have SLi just to run things at the bare minimum, that makes no sense at all.
Secondly, that IS a tall order and I think you're going to be disappointed if you really think that those cards will be able to handle that res no problem.
Granted the 7950GX2 can handle 2560x1600 without much hassle, but that is after all a single slot SLi of sorts...
But we can dream, can't we?
moto316 said:I think a better way of finding out how much of a performance increase we're going to see out of the G80 is to look back on the differences we saw between the 7800GTX 256 and the 6800 Ultra, because the 7800 was a true next generation after the 6800 and not just a refresh. I have no idea what kind of performance increase we saw between the two, so if somebody could chime in about that, that would be great.
arthur_tuxedo said:Actually, it's a perfectly valid comparison, while the 7800 to 7900 series comparison is perfectly invalid. A new architecture has historically represented a much larger performance jump than a simple refresh. For instance, the 7800 GTX was faster than 2 6800 Ultras in SLI. The 6800 Ultra was more than twice as powerful as an FX 5950.
Similar stats for the ATI side. The X1800 XT stomps on the X850 XT PE, and the X800 XT stomps on the 9800 XT. The 9700 Pro kicked the snot out of the 8500, which beat the Radeon DDR to a pulp.
There are a few counterpoints to this "doubling of performance" idea going back a bit farther on the NVidia side, though. The FX 5800 Ultra only showed modest performance increases over the GeForce 4 Ti4600, and paled in comparison to the 9700 Pro. The GeForce 4 Ti4600 was only about 25-35% faster than the GeForce 3 Ti500. The GeForce 3 was only marginally faster than the GeForce 2 Ultra (although it supported many more features). However, aside from those anomalies, the picture is clear. The GeForce 2 GTS was a huge jump over the GeForce 256 DDR, and the GeForce 256 SDR's jump over the Riva TNT2 was perhaps the biggest generational leap ever seen in the history of video cards.
Given all that evidence, I'd predict that the 8800 GTX will indeed be faster than either the 7950 GX2 or 2 7900 GTX's in SLI, and by a significant, but not overpowering margin.
RareAir23 said:What I cannot understand from those specs is why stop at 768MB of memory on the card. We've seen that 1GB is possible via SLI/Crossfire, so why don't ATi AND nVIDIA make the obvious jump and give their DX10 products 1GB of memory on the card right off the bat? To me, 768MB just feels like an odd, strange number to jump to after 512. I mean, with memory it went 128, 256, 512, and 1GB. Just my nickel. Out!
scrawnypaleguy said:They're going to come out with other cards right? I mean I always go one or two steps below the top of the line (around $300 or so) but that 8800GT is really really expensive. Anyone hear of any other cards from this lineup yet?
StalkerZER0 said:Not cards, but more like chips. I mean, I think their budget and mid range cards based on derivatives of the G80 are already in the works. I think one of the chips based on the G80 will be called the G85, but I'm not sure about the other one. But they will have budget and mid range cards available....eventually.
But the flagship cards will debut first I believe. LOL thing of it is I'm not even interested in the 8800gtx.
I'm more interested in a 8800gx2 card. Which I'm sure will break my wallet in half.
Who cares though?! I want supreme power, dammit!
Silus said:
I, on the other hand, don't intend to buy a G80 or R600 anytime soon, since I did a complete system change about a year ago. But I sure am curious about these cards. If the rumored specs are any indication of the truth, they will be beasts while rendering the games we play.
I also share your enthusiasm regarding a possible 8800 GX2. IMHO, the current GX2 is the crowning achievement of the G70 architecture. Kudos to NVIDIA for it.
StalkerZER0 said:Yup.
But the GX2 isn't a pure dual core though. I was hoping that the G80 chip would be based on 2 cores, but it doesn't look so. That woulda been great. I wonder what the 8800GX2 would be able to achieve. And imagine two of those cards in SLI!
phide said:It's not really correct to think of GPUs as "multi-core". You're right about the GX2 - it is not dual core, but rather dual GPU. The GX2 has two completely separate GPU packages, just as servers typically have four or eight (or more) separate CPU packages operating in an (ideally) parallel configuration.
There is absolutely no real reason to take a "multi-core" approach to designing GPUs. GPUs, for a long time, have been built around a multi-tier, sequential-function parallel processing configuration (pipelines are still typically built in quad configurations). With a 1600x1200 display, we're talking about determining color values of 1.92 million pixels - to process and write one at a time 60+ times per second would be, well, insane. In that sense, most GPUs that people on here would consider purchasing are multi-core already.
Brent_Justice said:The whole thing could be wrong.
It is all rumor at this point.