Ok so what's next now? 8900GTX or 2950XTX?

Zorachus

Ok so two brand new video cards have been released in May 2007, the 8800 Ultra from nVidia and the 2900 XT from AMD, with the Ultra being the top dog as of today.

So looking at the near and far future, what will be coming out and hold the performance crown? I predict an 8900GTX from nVidia in a few months or less, and even before that the 2900XTX from AMD, which should be built on a 65nm process and clocked faster with 1GB of memory, which should make it equal to or better than the 8800 Ultra. Does this sound about right? Any release dates for these?

Now further ahead, like 2008/2009, I see the 3900XTX being top dog in Spring 2008 and the 9900 Ultra being top in Fall 2008. Come 2009, both nVidia and AMD/ATI will cry due to Intel bringing out their first video card, which could be the 9700 Pro of its generation. I see AMD getting really powerful because of ATI, and things will change by 2010, with video cards and CPUs being built a lot closer together; both Intel and AMD will be top dogs, and I'm not sure where nVidia will be at that point.


So my point is, what is your opinion on two things: first, the near future for nVidia and AMD video cards, like this year, and second, the far future, like 2-3 years from now... Thanks
 
Who knows. Both companies need to think outside the box.

What I'd like to see? A little far-fetched actually...

- Dual-core / quad-core GPUs would be a nice start.

- A graphics card designed so you could upgrade the memory. Sorta like a mini-mobo with specially designed memory modules, so you could, say, start with 2x 256MB and later upgrade to dual 512s or even dual gigs.

- How about a motherboard with 2 sockets (or 3 if running dual CPUs)?
One for the CPU, one for the GPU. Room could easily be made since you wouldn't need all those expansion slots. Instead of buying a card, just buy the chip and drop it in. Standard DDR/DDR2/DDR3 (?) memory modules could be used, albeit kept separate, not shared.
 
Who knows. Both companies need to think outside the box.

- How about a motherboard with 2 sockets (or 3 if running dual CPUs)?
One for the CPU, one for the GPU. Room could easily be made since you wouldn't need all those expansion slots. Instead of buying a card, just buy the chip and drop it in. Standard DDR/DDR2/DDR3 (?) memory modules could be used, albeit kept separate, not shared.


Now that is a really nice idea. Instead of plugging in a huge 12" long video card with a leaf blower, keep it simple, stupid, and have two sockets on the motherboard :cool: One for the 5GHz 8-core CPU, and a second, standard-size socket that all video card companies have to design to, and you just plug in the GPU = brilliant
 
I think ATI will come out with a new high-end card first - a 65nm R600/R650 (whatever you want to call it) with 1GB of GDDR4, maybe some architecture upgrades. It may come along as soon as September, going by how quickly the overdue R520 was refreshed. The onus is on them.

Nvidia should have a refresh out this Fall too but they are really sitting pretty now. I doubt either company will launch a new DX10 architecture until 2008.
 
Well, some kind of cache in the GPU, perhaps? Something like the 10MB in the Xbox 360 GPU. Anyway, I don't see many changes in the next couple of years. Probably the number of stream processors will increase: 320, 640, 1240, who knows... Maybe ATi will follow nVidia's method of clocking the stream processors separately. Then the usual bigger, faster memory. And maybe Microsoft will come up with DirectX 10b, DX10c...

But any big changes, like the transition from pixel/vertex shaders to unified shaders, won't occur until DirectX 11, IMO.

I'm waiting for the day we'll perhaps use ray tracing to render our graphics in real time:D
 
Well, some kind of cache in the GPU, perhaps? Something like the 10MB in the Xbox 360 GPU. Anyway, I don't see many changes in the next couple of years. Probably the number of stream processors will increase: 320, 640, 1240, who knows... Maybe ATi will follow nVidia's method of clocking the stream processors separately. Then the usual bigger, faster memory. And maybe Microsoft will come up with DirectX 10b, DX10c...

But any big changes, like the transition from pixel/vertex shaders to unified shaders, won't occur until DirectX 11, IMO.

I'm waiting for the day we'll perhaps use ray tracing to render our graphics in real time:D

GPU cache. I like that.
I also like what blackacidevil said.
Dual to quad core GPUs.
Expandable memory.

I personally think that the expandable memory would make the most money.
But a GPU cache sounds nice. Like, developers could put the textures that get used the most in a particular map, for example, into the cache.
I think that would provide a nice FPS increase. :)
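
Just to sketch the idea (everything below is made up for illustration; it's not from any real driver or SDK), the cache would basically be a small pinned pool that developers could hint their hottest textures into:

Code:
#include <cstddef>
#include <iostream>
#include <string>
#include <unordered_map>

// Hypothetical on-die texture store, sized like the 10MB mentioned above.
// All names and interfaces here are invented for the sketch.
class GpuTextureCache {
public:
    explicit GpuTextureCache(std::size_t capacityBytes) : capacity_(capacityBytes) {}

    // Developer hint: pin a texture the current map uses constantly.
    // Returns false if it doesn't fit, in which case it stays in plain VRAM.
    bool pin(const std::string& name, std::size_t sizeBytes) {
        if (used_ + sizeBytes > capacity_) return false;
        pinned_[name] = sizeBytes;
        used_ += sizeBytes;
        return true;
    }

    // A fetch that hits the pinned pool is served from the fast on-die store;
    // a miss goes out over the (much slower) external memory bus.
    bool isResident(const std::string& name) const { return pinned_.count(name) != 0; }

private:
    std::size_t capacity_;
    std::size_t used_ = 0;
    std::unordered_map<std::string, std::size_t> pinned_;
};

int main() {
    GpuTextureCache cache(10 * 1024 * 1024);                    // 10MB on-die store
    cache.pin("level1_ground_diffuse", 4 * 1024 * 1024);        // fits
    cache.pin("level1_wall_diffuse",   4 * 1024 * 1024);        // fits
    bool skyPinned = cache.pin("level1_sky", 4 * 1024 * 1024);  // doesn't fit, stays in VRAM
    std::cout << std::boolalpha << "sky pinned: " << skyPinned << "\n";
    std::cout << "ground resident: " << cache.isResident("level1_ground_diffuse") << "\n";
}

Obviously the real win or loss would depend on how often the pinned textures actually get sampled compared to everything else.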

As far as dual-core GPUs go, nVIDIA or ATI (I would hope ATI, so they could redeem themselves) should just try it, and then pull some kind of 7950GX2 spiel with it.
Dual dual cores. :)

But that goes along the lines of AMD's 4x4, which was a good idea but sucked.

nVIDIA, Intel, and AMD/ATI need to come to [H] and read up.
 
Who knows. Both companies need to think outside the box.

What I'd like to see? A little far-fetched actually...

- Dual-core / quad-core GPUs would be a nice start.

GPUs have been multicore for years. 128 unified processors, 32 TMUs, 24 ROPs, anyone?
 
Now that is a really nice idea. Instead of plugging in a huge 12" long video card with a leaf blower, keep it simple, stupid, and have two sockets on the motherboard :cool: One for the 5GHz 8-core CPU, and a second, standard-size socket that all video card companies have to design to, and you just plug in the GPU = brilliant

Doesn't look that brilliant to me. GPU manufacturers would be limited by the motherboard's GPU interface and layout, and that would reduce room for innovation. Instead of having to change the motherboard every now and then because of new chipset features for new CPUs (or a new socket), you'd have to change it every six months for new GPU interfaces, new GPU memory, etc. I think it's better to leave it as it is for the time being.
 
GPUs have been multicore for years. 128 unified processors, 32 TMUs, 24 ROPs, anyone?

Yes, thank you. Anyone who uses the word 'dual-core' in reference to a GPU has absolutely no idea what they're talking about.

All 'dual core' means is that there are two separate processing pipelines on one die. Video cards have been doing this since the TNT1 and the Voodoo2.
 
Now that is a really nice idea. Instead of plugging in a huge 12" long video card with a leaf blower, keep it simple, stupid, and have two sockets on the motherboard :cool: One for the 5GHz 8-core CPU, and a second, standard-size socket that all video card companies have to design to, and you just plug in the GPU = brilliant

I think that would be pretty much impossible. The cooling requirements of a high-end video card would be difficult to meet on a standard ATX motherboard. And what about the memory subsystem? That would have to be built on the motherboard too, and then you have the same memory subsystem no matter what GPU you have. This sounds nice, but it's just 'fok' or 'fusion of knowledge' and it doesn't make much practical sense.

Someday though, GPUs and CPUs will likely be brought together on the same die. But even then it'll probably only be for budget cards.
 
I think ATI will come out with a new high-end card first - a 65nm R600/R650 (whatever you want to call it) with 1GB of GDDR4, maybe some architecture upgrades. It may come along as soon as September, going by how quickly the overdue R520 was refreshed. The onus is on them.

Nvidia should have a refresh out this Fall too but they are really sitting pretty now. I doubt either company will launch a new DX10 architecture until 2008.

I think nVidia will try to ruin their party again and launch a refresh of G80 just before R650 comes out. Adding GDDR4 to R600 will only make it more expensive; it won't improve performance one drop. Adding GDDR4 to G80 might help though, since it has a narrower memory bus.

What should be interesting is to see who gains more clock speed from the transition to 65nm. nVidia has the whole 1.5GHz shader clock thing going on right now; how will that do on 65nm? Will it scale to 2GHz, or will it hit a brick wall?

We've already seen that R600 scales well with clock speed, so we kinda know what to expect out of R650: more clock, more performance, but with less power-draw and less heat.
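
To put rough numbers on both points (the stock bandwidth figures below are approximate from memory, and the 65nm clocks are pure guesses, so treat all of this as ballpark):

Code:
#include <iostream>

// Rough GB/s from bus width (bits) and effective memory data rate (MT/s).
double bandwidthGBs(int busBits, double effectiveMTs) {
    return busBits / 8.0 * effectiveMTs / 1000.0;
}

// Naive gain estimate: assumes performance tracks core clock linearly,
// which ignores bandwidth limits, so it's an upper bound at best.
double gainPercent(double baseMHz, double targetMHz) {
    return (targetMHz / baseMHz - 1.0) * 100.0;
}

int main() {
    // Bandwidth side: R600's 512-bit bus already has headroom, while G80's 384-bit
    // bus is the one that gains meaningfully from faster memory.
    std::cout << "2900 XT, 512-bit @ ~1650 MT/s:  " << bandwidthGBs(512, 1650) << " GB/s\n";  // ~106
    std::cout << "8800 GTX, 384-bit @ ~1800 MT/s: " << bandwidthGBs(384, 1800) << " GB/s\n";  // ~86
    std::cout << "8800 GTX w/ ~2200 MT/s GDDR4:   " << bandwidthGBs(384, 2200) << " GB/s\n";  // ~106

    // Clock side: what a 65nm bump might be worth if scaling stays roughly linear.
    std::cout << "R650 at 850 MHz vs ~740 MHz:    +" << gainPercent(740, 850) << "%\n";       // ~15%
    std::cout << "R650 at 950 MHz vs ~740 MHz:    +" << gainPercent(740, 950) << "%\n";       // ~28%
    std::cout << "G8x shaders at 2.0 vs 1.5 GHz:  +" << gainPercent(1500, 2000) << "%\n";     // ~33%
}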

Who knows what G81 will turn out to be...
 
GPU cache. I like that.
I also like what blackacidevil said.
Dual to quad core GPUs.
Expandable memory.

I personally think that the expandable memory would make the most money.
But a GPU cache sounds nice. Like, developers could put the textures that get used the most in a particular map, for example, into the cache.
I think that would provide a nice FPS increase. :)

As far as dual-core GPUs go, nVIDIA or ATI (I would hope ATI, so they could redeem themselves) should just try it, and then pull some kind of 7950GX2 spiel with it.
Dual dual cores. :)

But that goes along the lines of AMD's 4x4, which was a good idea but sucked.

nVIDIA, Intel, and AMD/ATI need to come to [H] and read up.

None of these things make any sense. Dual and quad core makes no sense, because video cards already have hundreds of 'cores'.

Expandable memory makes no sense because putting an expansion slot on the board would add too much to its cost, and by the time you're ready to upgrade the memory, there will be a faster video card out. That, and video cards are rarely made without enough memory. Typically they are limited by other things way before they run out of memory. And even when they do run out of memory, it would hardly be worth $100 to upgrade a 6-month-old card to get maybe 5% more performance.

Cache on a GPU, however, is already happening, and you can expect to see more of it in the future as GPUs become more generalized.
 
Doesn't look that brilliant to me. GPU manufacturers would be limited by the motherboard's GPU interface and layout, and that would reduce room for innovation. Instead of having to change the motherboard every now and then because of new chipset features for new CPUs (or a new socket), you'd have to change it every six months for new GPU interfaces, new GPU memory, etc. I think it's better to leave it as it is for the time being.

It's called a "standard". Are we not already limited to PCIe, AGP, and the forthcoming PCIe 2.0? Why not create a standard socket interface for graphics chips?

I think that would be pretty much impossible. The cooling requirements of a high-end video card would be difficult to meet on a standard ATX motherboard. And what about the memory subsystem? That would have to be built on the motherboard too, and then you have the same memory subsystem no matter what GPU you have. This sounds nice, but it's just 'fok' or 'fusion of knowledge' and it doesn't make much practical sense.

Impossible to cool? CPUs can get just as hot as GPUs, and we cool those just fine. What's so hard about using a CPU heatsink on a GPU?

None of these things make any sense. Dual and quad core makes no sense, because video cards already have hundreds of 'cores'.

I'm only drawing a parallel with what AMD and Intel have done with their CPUs - slapping 2 cores onto one die.

Expandable memory makes no sense because putting an expansion slot on the board would add too much to its cost, and by the time you're ready to upgrade the memory, there will be a faster video card out. That, and video cards are rarely made without enough memory.

Didn't know memory slots were so expensive. I bet the money saved by dropping the PCIe x16/PCIe x1/AGP/PCI slots that would no longer be needed would make no difference...

Typically they are limited by other things way before they run out of memory. And even when they do run out of memory, it would hardly be worth $100 to upgrade a 6-month-old card to get maybe 5% more performance.

Tell that to the people justifying their upgrade from a GTX to an Ultra...:rolleyes:

Remember, the OP wanted thoughts on what could be possible years down the road, not today. I doubt the limitations we see now will still apply in the future, let alone even exist.
 
Impossible to cool? CPUs can get just as hot as GPUs, and we cool those just fine. What's so hard about using a CPU heatsink on a GPU?
Yes please, I would love to hang an anvil off of a piece of PCB that's "supported" by only 1 screw.
 
Ok so two brand new video cards have been released in May 2007, the 8800 Ultra from nVidia and the 2900 XT from AMD, with the Ultra being the top dog as of today.

So looking at the near and far future, what will be coming out and hold the performance crown? I predict an 8900GTX from nVidia in a few months or less, and even before that the 2900XTX from AMD, which should be built on a 65nm process and clocked faster with 1GB of memory, which should make it equal to or better than the 8800 Ultra. Does this sound about right? Any release dates for these?

Now further ahead, like 2008/2009, I see the 3900XTX being top dog in Spring 2008 and the 9900 Ultra being top in Fall 2008. Come 2009, both nVidia and AMD/ATI will cry due to Intel bringing out their first video card, which could be the 9700 Pro of its generation. I see AMD getting really powerful because of ATI, and things will change by 2010, with video cards and CPUs being built a lot closer together; both Intel and AMD will be top dogs, and I'm not sure where nVidia will be at that point.


So my point is, what is your opinion on two things: first, the near future for nVidia and AMD video cards, like this year, and second, the far future, like 2-3 years from now... Thanks

IMHO, NVIDIA will not release anything new until November. By that time, G90 should be out the door.
My wild guess is a 192-256 stream processor card at 65nm, with a 512-bit memory bus and at least 1GB of video RAM. This should give it at least a 65-75% performance leap over the current GTX / Ultra.

Given this scenario and the fact that NVIDIA has no real competition in the high end at this moment, I really don't think we'll see an 8900 GTX, which, if it existed, would only be a die shrink from 90nm to 80nm with higher core and stream processor frequencies, plus less heat and power usage. Since that's basically what the Ultra already is (except for the die shrink), the 8900 GTX seems to be just a rumor.

AMD/ATI is in a different situation. They need to catch up, and do it fast. R600 was a fiasco and they need something to at least reduce the lead NVIDIA has over them. R650 should be that product. I think it's safe to say they'll aim to completely destroy the GTX / Ultra with it, and that's the only logical guess anyone can make at this point, since that's exactly what AMD/ATI needs.

Also, AMD is preparing the release of Barcelona, which, according to rumors, is shaping up nicely. I really hope this is the winning product that AMD needs to get back on their feet; otherwise I don't think AMD can withstand two failures in a row. One thing is for sure: neither NVIDIA nor Intel will give them any breathing room.
 
So my point is, what is your opinion on two things: first, the near future for nVidia and AMD video cards, like this year, and second, the far future, like 2-3 years from now... Thanks

Forgot to comment on the future. I don't think NVIDIA will be in a bad situation at all, and if the fact that they're sharing tech with Intel is any indication, my guess is that they'll just get stronger. At first (when I read the State of the Silicon Union article), I kind of agreed that, in the near future, NVIDIA was probably the one facing more problems than any other company. But given NVIDIA's excellent management decisions and great products over the last couple of years, I can only imagine they'll keep it up for a few years to come.

The only real variable in the graphics card market is Intel. I have no idea what they'll release, but if anyone other than NVIDIA and ATI has the capability to pull off an enthusiast card, it's Intel, especially now that NVIDIA seems to be sharing info with them. Guess we'll just need to wait and see.
 
Also, AMD is preparing the release of Barcelona, which, according to rumors, is shaping up nicely. I really hope this is the winning product that AMD needs to get back on their feet; otherwise I don't think AMD can withstand two failures in a row. One thing is for sure: neither NVIDIA nor Intel will give them any breathing room.

Kyle has given his predictions regarding R600 and Barcelona. So far the prediction for the former has come true. I think the one for the latter will turn out true too, since Kyle has more insider information than 99% of us on this board, or else AMD/ATI will be in deep shit.
 
Impossible to cool? CPUs can get just as hot as GPUs, and we cool those just fine. What's so hard about using a CPU heatsink on a GPU?

Right now a quad-core CPU consumes 75W of power. A 2900 XT consumes 215W. You do the math. Of course it *could* be done, but why? What would you gain from such an arrangement except less room on the motherboard for other stuff and a limited selection of video cards that would fit in the thing?
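
(Spelling out that math with the numbers above: 215 W ÷ 75 W ≈ 2.9, so the GPU socket alone would need nearly three times the cooling capacity of today's quad-core socket, and the board would have to feed and exhaust roughly 75 + 215 = 290 W from two sockets sitting right next to each other.)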

I'm only drawing a parallel with what AMD and Intel have done with their CPUs - slapping 2 cores onto one die.

Right, and that makes absolutely no sense. You're grossly misunderstanding what 'dual core' means. This is a really common misconception for some reason. Video cards have been dual core since 1998.

Didn't know memory slots were so expensive. I bet the money saved by dropping the PCIe x16/PCIe x1/AGP/PCI slots that would no longer be needed would make no difference...

Well, think about how big video card PCBs are, and think about how hard it is to get super-fast memory connected to the GPU. You'll notice that the memory is laid out right around the GPU to minimize the distance between the two, so you don't get a bunch of noise or have to run those traces the whole length of the board.

Now imagine you wanted to take a memory slot and stick it on the board somewhere. That takes up a lot of space. Now imagine you gotta run all those traces all the way down to where you put the memory slot. That's really expensive in terms of space on the PCB.

There's a reason this hasn't been done before for consumer video cards. After all of these problems, what have you really accomplished? By the time a video card needs more RAM, it's almost always obsolete and buying add-in memory (which would be really expensive) makes no sense.
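
A rough feel for why those trace lengths matter (ballpark physics only, to illustrate the point rather than nail it down):

Code:
#include <iostream>

int main() {
    // Approximate signal-integrity numbers for the sake of the argument.
    const double transfersPerSec = 1800e6;                 // ~1800 MT/s GDDR3, 8800 GTX-class
    const double bitPeriodNs     = 1e9 / transfersPerSec;  // time one bit is valid on the wire
    const double propCmPerNs     = 15.0;                   // rough signal speed on FR-4 (~0.5c)
    const double extraTraceCm    = 10.0;                   // detour to a hypothetical memory slot
    const double extraDelayNs    = extraTraceCm / propCmPerNs;

    std::cout << "bit period:  " << bitPeriodNs  << " ns\n";  // ~0.56 ns
    std::cout << "extra delay: " << extraDelayNs << " ns\n";  // ~0.67 ns
    // The detour alone costs more than a whole bit time, and every line of a
    // 384-bit bus would have to be length-matched over that distance.
}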

Tell that to the people justifying their upgrade from a GTX to an Ultra...:rolleyes:

Remember, the OP wanted thoughts on what could be possible years down the road, not today. I doubt the limitations we see now will still apply in the future, let alone even exist.

Well, anything is possible, but none of this stuff makes any sense. I'm sure it sounds nice, but it would only really work for midrange cards, and even then, it would work pretty badly. The whole idea of moving everything to the motherboard will just make it a lot harder to produce, a lot harder to upgrade because you'll be stuck with the same memory subsystem on your motherboard, and a lot harder to cool. And what would you gain in the end?
 
My guess is that the next top-end part will be the 8850, which will be an 80nm G80 core refresh.

==> G80 core refresh: 80nm part, about a 100MHz clock increase, 1750 to 1850MHz shader clock.
Then

==> G90 core: 65nm part, 160 SPs, 448-bit bus, 800 to 900MHz. (I guess this will be released in a year's time.)
 
IMHO, NVIDIA will not release anything new until November. By that time, G90 should be out the door.
My wild guess is a 192-256 stream processor card at 65nm, with a 512-bit memory bus and at least 1GB of video RAM. This should give it at least a 65-75% performance leap over the current GTX / Ultra.

Given this scenario and the fact that NVIDIA has no real competition in the high end at this moment, I really don't think we'll see an 8900 GTX, which, if it existed, would only be a die shrink from 90nm to 80nm with higher core and stream processor frequencies, plus less heat and power usage. Since that's basically what the Ultra already is (except for the die shrink), the 8900 GTX seems to be just a rumor.

AMD/ATI is in a different situation. They need to catch up, and do it fast. R600 was a fiasco and they need something to at least reduce the lead NVIDIA has over them. R650 should be that product. I think it's safe to say they'll aim to completely destroy the GTX / Ultra with it, and that's the only logical guess anyone can make at this point, since that's exactly what AMD/ATI needs.

Also, AMD is preparing the release of Barcelona, which, according to rumors, is shaping up nicely. I really hope this is the winning product that AMD needs to get back on their feet; otherwise I don't think AMD can withstand two failures in a row. One thing is for sure: neither NVIDIA nor Intel will give them any breathing room.

I don't think Nvidia is going to go for a 192-256 SP card any time soon, not until the 45nm process is available. A 192SP refresh of the G80 will probably have around 1 billion transistors. Such a chip will be tricky even on a 65nm process. (It's more likely they will go for a 160SP, 850 million transistor chip with higher clock speeds. It's a safer bet than a billion-transistor chip.)
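
A quick back-of-the-envelope on that transistor count, taking G80's roughly 681 million transistors for 128 SPs and naively assuming the shader array scales linearly (it doesn't exactly, but it's close enough to make the point):

Code:
#include <iostream>

int main() {
    // Naive scaling: assume transistor count grows in proportion to SP count.
    const double g80Transistors = 681e6;   // G80: roughly 681 million transistors
    const int    g80SPs         = 128;     // with 128 stream processors

    for (int sps : {160, 192, 256}) {
        double estimate = g80Transistors * sps / g80SPs;
        std::cout << sps << " SPs -> ~" << estimate / 1e6 << " million transistors\n";
    }
    // 160 -> ~850M, 192 -> ~1020M, 256 -> ~1360M: anything from 192 SPs up lands
    // around or above a billion transistors, which is the point above.
}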
 