NVIDIA GPU Conference Live

This looks like it could be quite evil!
With the ability to make decisions on chip, and with the CPU section having direct access to graphics memory, this can remove PCI-E latency issues, thus speeding up gaming functions.
As DanD mentioned, this will likely be a super boost for PhysX.

This chip is bristling with new tech; some of it will need to be coded for more directly, but any tech NVidia owns should see big performance gains.

As long as idle power use is low, I will consider this card.
If it's not, I may go ATI or hang on to my GTX260 a bit longer until a die-shrink refresh.
After all, I'm bored of Crysis and Batman plays very well :D
 
So, with all this super duper GPGPGPGPGPGU-with-fairies-on-top crammed into one card, they can't be bothered to support DX11?

Are you really serious nVIDIA?

I think they are underestimating how popular Windows 7 is going to be...
 
nVidia better have something up their sleeve...

On a side note...

You should head down to Michi's Sushi in Campbell. Best sushi in the world... get the Alex Smith on Fire roll...

Or you could just head over to St. John's in Sunnyvale and get the awesome Beg For Mercy Chicken Sandwich...

ya I'm hungry

What I like about St. John's is they have specials every weekday. 1/2 off aint no joke man! I prefer TGI Sushi's in Sunnyvale/Campbell.
 
I'm a gamer, and I'm excited about NV's direction.

Two choices:

a) Create a flexible, general purpose parallel super computer with good dev tools and lots of potential applications (including many in gaming), which btw can also run DX games at satisfactory speed.

b) Make yet another specialized DX9/10/11 GPU so you can raise your Crysis AA settings from 8x to 16x even though you don't even play the game anymore.

Times have changed, PC games are simply not pushing the envelope anymore.

Of course this all depends on how "satisfactory" is the gaming perf, and how much more flexible it really is compared to ATI's stuff for GPGPU.

While nvidia's execution still needs to be evaluated, their stance makes sense to me.
 
I'm a gamer, and I'm excited about NV's direction.

Two choices:

a) Create a flexible, general purpose parallel super computer with good dev tools and lots of potential applications (including many in gaming), which btw can also run DX games at satisfactory speed.

b) Make yet another specialized DX9/10/11 GPU so you can raise your Crysis AA settings from 8x to 16x even though you don't even play the game anymore.

Times have changed, PC games are simply not pushing the envelope anymore.

Of course this all depends on how "satisfactory" is the gaming perf, and how much more flexible it really is compared to ATI's stuff for GPGPU.

While nvidia's execution still needs to be evaluated, their stance makes sense to me.

This is exactly what I'm thinking... I can't understand how people aren't excited about this... nVidia is actually moving in the right direction by introducing new technology, and all people care about is how much their Crysis fps will increase. I'd much rather see virtual reality gaming or something like that (something NEW at least), and this is the right move by nVidia to actually advance graphics technology. Besides, if the card is as good as it looks to be for general purpose work, extreme gaming performance is sure to follow. Personally, I'm excited.
 
I am amazed how many seem to think that 3 billion transistors, with nVidia 'new tech' general purpose parallel processors, won't be fast at games. :)
 
I am amazed how many seem to think that 3 billion transistors, with nVidia 'new tech' general purpose parallel processors, won't be fast at games. :)

It's hard to say until we see hard benchmarks. Specs only give you an indication of theoretical performance, that's all. You can speculate, but at the end of the day only real-world performance matters. That said, I do think this new card will be fast, but we will see.
 
Still no working hardware shown yet?

Looks like this will be their flagship card, and their mainstream cards will just be some renamed GT200 cards. I doubt nVidia will be ready with mainstream "Fermi" cards any time soon, given all the architecture changes, and renamed GT200 cards will not win the mainstream market against the HD 5000 series.
 
So I'm guessing if you're at [H] you've run a benchmark with some sort of software rendering, where the CPU gets like 3-5 fps. Rendering for games is done in hardware because rasterizing triangles is so much faster.

Now, where I see this going: nVidia started with gaming graphics cards; they were cheap and eventually faster than SGI's expensive workstations. Now they have workstation cards that are faster at HPC tasks, which includes software rendering. Where this gets interesting is that all DirectX games are written in C++ or C#, so the question becomes: how fast will software run on it? Games could simply be recompiled without having to figure out how to port your libraries to CUDA... if they run faster in software than in hardware, all it would really take is for Epic to create a compiler for UE3, and any game that would be faster as software (or fast enough, but with things that are only possible in software) would make it very interesting.
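
To make the porting question concrete, here's a minimal sketch (all names here are hypothetical, not NVIDIA's, Epic's, or any engine's actual code) of what moving a per-pixel software-rendering loop onto the GPU with CUDA looks like: the inner-loop code stays ordinary C-style code, it just runs as one GPU thread per pixel instead of a nested CPU loop.

    // Hypothetical sketch of moving a per-pixel software loop to CUDA.
    // The shade() function and buffer names are made up for illustration.
    #include <cuda_runtime.h>

    // Stand-in for real per-pixel shading work: a simple gradient.
    __device__ unsigned int shade(int x, int y, int width, int height)
    {
        unsigned int r = 255u * x / width;
        unsigned int g = 255u * y / height;
        return (r << 16) | (g << 8) | 0xFFu;
    }

    // Each GPU thread handles one pixel instead of the CPU walking a loop.
    __global__ void renderKernel(unsigned int* framebuffer, int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;
        framebuffer[y * width + x] = shade(x, y, width, height);
    }

    int main()
    {
        const int width = 1280, height = 720;
        unsigned int* framebuffer;
        cudaMalloc(&framebuffer, width * height * sizeof(unsigned int));

        dim3 block(16, 16);
        dim3 grid((width + block.x - 1) / block.x,
                  (height + block.y - 1) / block.y);
        renderKernel<<<grid, block>>>(framebuffer, width, height);
        cudaDeviceSynchronize();

        cudaFree(framebuffer);
        return 0;
    }

Compiled with nvcc, this assumes nothing about Fermi specifically; it's just the general shape of the work involved in such a port.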

One of Larrabee's big selling points is ray tracing, which has issues with hardware (rasterized) rendering. The big question is whether they can do software rendering fast enough to give you 24 fps in real time, because at that point developers are going to have a choice between "holy crap" and "good enough". I figure most would go with good enough, and the few with money to throw around will go the software route just to be the next Crysis :)

Should be something to see: a major dog and pony show if it is too slow, or a new twist on an old game if they pull a rabbit out of their hat.
 
What I like about St. John's is they have specials every weekday. 1/2 off aint no joke man! I prefer TGI Sushi's in Sunnyvale/Campbell.

I'll have to check it out! And St. John's pitcher deal is awesome!
 
This is actually a good long-term move by nVidia. Moving away from a GPU dedicated to gaming is future-proofing the company. Since almost all games are made for consoles, there is no need to push the envelope to squeeze out another 5 frames. Since a console generation lasts 5+ years, games will be built to run on old hardware. There is just no need to upgrade. If this is the case, both nVidia and ATI need to come up with other business opportunities. There is a reason one card can easily power three 24-inch monitors without breaking a sweat: games are made to run on really old hardware, and there is no reason to make a game that few people can run.

By making the GPU multi-purpose, as nVidia is doing, they are opening up new markets (business solutions), which means huge profits. I find it quite exciting to think about what people will be able to do with these GPGPUs.
 
Lol all you Nvidia haters are pretty pathetically narrow-minded. This shit they're talking about - imaging and detecting breast cancer, doing engineering simulations - is actually really fascinating and useful. It may not pertain to gaming, but it's tremendously helpful to a lot of people.
This is nothing new. 3DFX had a chip that could do that years ago, but they instead used it for something else. :p
 
nVidia is going in another direction, but I don't know why many feel it is the "correct" one. I can't wrap my mind around this ... I buy a graphics card ... for graphics. Anything more and it feels like feature bloat.

Most of the time my CPU cores are sitting idle doing nothing ... how about someone deal with that, instead of offloading the little work I seem to be giving my CPU already onto a device I bought for a far different purpose.

It is interesting that nVidia is trying to make a GPU into a CPU at the same time AMD is trying to make a CPU into a GPU (fusion). I sort of remember a quote from way back in the TNT / TNT2 days where someone had said that Graphic cards will become faster and faster and then one day completely disappear.

Seems that this is the direction it is going.
 
Well, ah, I'm not sure what to make of this for the long term. Nvidia wants to make cGPUs or whatever for games and HPC. They'll have competition, at some point, in Larrabee. Intel has the pockets, the brains, and the time it needs to come up with an HPC solution. Add to that a possible AMD strategy: for all we know, AMD could have a pet HPC project in the works. Intel is basing theirs on hopped-up Pentiums, and I don't think it's that far a stretch that AMD could do similar things with their own chips.

That leaves us with games. We will have to see what Fermi brings to the table for gaming with price and power in mind. But, if it's only on par with ATI then I don't know why I would want to put a sub-computer in my computer to play games.

Though I do like competition as it works for us, the end users. I would like to see Fermi square up against ATI's 58's. But if Nvidia can't make this work then they may have no place in the market anymore.
 
The current situation in the games market seems to be that graphics companies offer little benefit to games simply by speeding up their tech, at least for this generation of games. Both companies seem to have realized this and have changed their strategy accordingly.

AMD is choosing to specialize in the gaming sector even more by focusing on the gaming experience via huge speed improvements with lower power requirements and eyefinity.

nVidia is choosing to broaden its market by making a more general purpose processor that can be used in fields outside of gaming.

I'm personally more intrigued by nVidia's strategy since eyefinity, which I'll admit is very cool, and its $700 buy in (card +2 24" monitors) don't appeal to me much. What I'm left hoping for is to see speed improvements via fermi in mundane applications I use every day. Even seeing something as lame as Excel using CUDA to execute several times faster would make me smile.
 
Hm, not worth cancelling my HD5870 order. Maybe it will be 10-20% more powerful, but you can also bet it will be at least $100 more expensive. And considering the "wait a little more" comment, Q1 2010 or Q2 2010 seems very real.

I did read somewhere that they launch on 27.11, but you can bet this will be the biggest paper launch ever.
 
Thing is, nVidia is only a GPU manufacturer, while AMD is both CPU and GPU, so AMD doesn't need or want their GPU to do CPU work. So with this announcement I think nVidia is going to fight AMD's and Intel's CPUs rather than ATI's GPUs, and it seems it won't be that much faster than the 5870; if it were, Jen would probably try to open some kind of can of whoop-ass again.
 
nVidia is going in another direction, but I don't know why many feel it is the "correct" one. I can't wrap my mind around this ... I buy a graphics card ... for graphics. Anything more and it feels like feature bloat.

Graphics are reaching the point where more polygons and pixel shaders, increasing resolution, more AA, etc. are bringing diminishing returns. To reach the next level of graphics, different technologies, such as ray-tracing and improved interactivity/physics modeling are needed.

Fermi could realistically be > 5x faster in things like ray-tracing and PhysX. So you could see major improvements in graphics that come from improvements in compute.
 
superbooga, that is cool, but absolutely useless until there is a DirectX version where raytracing is the way to go. Until then, it's useless feature bloat, as Taldren said.
 
superbooga, that is cool, but absolutely useless until there is a DirectX version where raytracing is the way to go. Until then, it's useless feature bloat, as Taldren said.

Why does there have to be a DirectX version where raytracing is the way to go? If you can calculate a ray-traced scene using the CPU and then display it using DirectX, what prevents the developer from moving that code to the GPU using the variety of GPU-computing standards available?
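
For what it's worth, here's a minimal sketch of that idea (hypothetical code, not from any real engine or NVIDIA sample): the same per-pixel ray-sphere test a CPU ray tracer runs in a nested loop, written as a CUDA kernel so each pixel becomes one GPU thread. The resulting buffer could then be handed off for display, for example as a DirectX texture.

    // Hypothetical sketch: one primary ray per pixel, testing a single unit
    // sphere at the origin. A real tracer would add materials, bounces, and
    // acceleration structures, but the per-pixel structure is the same.
    #include <cuda_runtime.h>

    struct Vec3 { float x, y, z; };

    __device__ float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Does the ray (origin o, direction d) hit the unit sphere at the origin?
    __device__ bool hitSphere(Vec3 o, Vec3 d)
    {
        float a = dot3(d, d);
        float b = 2.0f * dot3(o, d);
        float c = dot3(o, o) - 1.0f;
        return b*b - 4.0f*a*c >= 0.0f;   // discriminant test
    }

    __global__ void traceKernel(unsigned char* image, int width, int height)
    {
        int px = blockIdx.x * blockDim.x + threadIdx.x;
        int py = blockIdx.y * blockDim.y + threadIdx.y;
        if (px >= width || py >= height) return;

        // Camera sits at z = -3 looking down +z; one primary ray per pixel.
        Vec3 origin = { 0.0f, 0.0f, -3.0f };
        Vec3 dir    = { (px - width  * 0.5f) / width,
                        (py - height * 0.5f) / height,
                        1.0f };

        image[py * width + px] = hitSphere(origin, dir) ? 255 : 0;
    }

    int main()
    {
        const int width = 640, height = 480;
        unsigned char* image;
        cudaMalloc(&image, width * height);

        dim3 block(16, 16);
        dim3 grid((width + 15) / 16, (height + 15) / 16);
        traceKernel<<<grid, block>>>(image, width, height);
        cudaDeviceSynchronize();

        // In a real app you'd copy this back (or share it with DirectX)
        // and display it; here we just clean up.
        cudaFree(image);
        return 0;
    }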
 
What prevents the developer? Hm... the number of people with compatible cards? No sane company will develop for something with a small market share.
 
What prevents the developer? Hm... the number of people with compatible cards? No sane company will develop for something with a small market share.

Ray-tracing is a simple technique, not an API or anything hardware-specific. It actually doesn't even require GPU computing -- it can be done on earlier hardware. However, past GPUs were terribly inefficient at ray-tracing.
 
Working in 3d visual effects and animation and also playing games in my free time, I can definitely say that nvidia is heading in the right direction.
 
The title should be changed from "NVIDIA GPU Conference Live" to "NVIDIA GPGPU Conference".

It seems some people really don't get that this was for the GPGPU market, not gamers... I have even seen people post "what, no DX11??!!!" because of this... people are simply not getting "it"... as usual :p
 
NVIDIA has been pushing very efficient designs, in which the architecture can maintain its real-world performance very close to its theoretical FLOPS capacity. Fermi takes this to a new level, it seems. Some even speculate that GF100 doesn't have ROPs or TMUs and that the ALUs will rely on the L1 and L2 caches to do the ROP operations. This is truly a new vision for what a GPU can be. Its computing capabilities seem to be off the chart. Now let's wait to see what the benefits of this architecture are for graphics.
 
Ray-tracing is a simple technique, not an API or anything hardware-specific. It actually doesn't even require GPU computing -- it can be done on earlier hardware. However, past GPUs were terribly inefficient at ray-tracing.

Yes, and now reread my post. Having this feature in 1% of the market means it is an unusable feature.
 
Here's the thing: Nvidia isn't trying to compete with ATI anymore. The Intel Larrabee platform is going to be everyone's enemy, because if Intel can do all this and put it on a CPU die, then, my friends, we may well see the death of the GPU business! Nvidia doesn't have time to sit around and nit-pick what gamers are saying. I'll second that with this: if the game isn't coded/programmed for the features or hardware on the card, then it doesn't maximize the card's full ability! Now go back to your argument about ATI this and Nvidia that. Nvidia is going after the majors.
 
The current situation in the games market seems to be that graphics companies offer little benefit to games simply by speeding up their tech, at least for this generation of games. Both companies seem to have realized this and have changed their strategy accordingly.

AMD is choosing to specialize in the gaming sector even more by focusing on the gaming experience via huge speed improvements with lower power requirements and eyefinity.

nVidia is choosing to broaden its market by making a more general purpose processor that can be used in fields outside of gaming.

I'm personally more intrigued by nVidia's strategy since eyefinity, which I'll admit is very cool, and its $700 buy in (card +2 24" monitors) don't appeal to me much. What I'm left hoping for is to see speed improvements via fermi in mundane applications I use every day. Even seeing something as lame as Excel using CUDA to execute several times faster would make me smile.

I agree Eyefinity is a selling point for ATI, but it's not necessarily an ATI-only thing.

All Eyefinity requires is three display outputs and some driver tweaks. You can already buy specialist nVidia cards with lots of display outputs for driving those big walls of monitors you see in airports, so they have the tech.

They just need to put an extra display output on their future cards, although whether it will be on the first GF100s remains to be seen.
 
Here's the thing: Nvidia isn't trying to compete with ATI anymore. The Intel Larrabee platform is going to be everyone's enemy, because if Intel can do all this and put it on a CPU die, then, my friends, we may well see the death of the GPU business! Nvidia doesn't have time to sit around and nit-pick what gamers are saying. I'll second that with this: if the game isn't coded/programmed for the features or hardware on the card, then it doesn't maximize the card's full ability! Now go back to your argument about ATI this and Nvidia that. Nvidia is going after the majors.
Hmmm, majors, lol. I think they first must release that thing before ATI takes their market share. Major or not, nVidia is still behind ATI, and I don't think ATI just sits around not developing anything for their future release...
 
Hmmm, majors, lol. I think they first must release that thing before ATI takes their market share. Major or not, nVidia is still behind ATI, and I don't think ATI just sits around not developing anything for their future release...

It's a release cycle between both companies, and they are not aligned with each other. I think it's a good thing: you are always closer to a release of a card.
 
I'm surprised they limit the GPU to 6GB of memory? Or am I reading that wrong?

A lot of intensive computing applications require large memory sets; you see 32-64GB of RAM per processor, with some server boards coming with 256GB total across 4 processors.

Obviously this scale is not needed for the GeForce line, but you would think they would have the option to put something like 32GB on Tesla systems.
 
nVidia is going in another direction, but I don't know why many feel it is the "correct" one. I can't wrap my mind around this ... I buy a graphics card ... for graphics. Anything more and it feels like feature bloat.

Most of the time my CPU cores are sitting idle doing nothing ... how about someone deal with that, instead of offloading the little work I seem to be giving my CPU already onto a device I bought for a far different purpose.

It is interesting that nVidia is trying to make a GPU into a CPU at the same time AMD is trying to make a CPU into a GPU (fusion). I sort of remember a quote from way back in the TNT / TNT2 days where someone had said that Graphic cards will become faster and faster and then one day completely disappear.

Seems that this is the direction it is going.

I think the issue is that there's no real reason to believe, yet, that Fermi won't be just as good as (or better than) ATI's option. The fact that games aren't really pushing people to upgrade much anymore means people need another reason to buy the hardware. nVidia wants a new group of people to sell hardware to, a group whose demand has nothing to do with the gaming market.

As long as the card can compete in price/performance in games, I think this is a great move. Whether or not it's the right move? Time will tell.
 
Still no working hardware shown yet?

Looks like this will be their flagship card, and their mainstream cards will just be some renamed GT200 cards. I doubt nVidia will be ready with mainstream "Fermi" cards any time soon, given all the architecture changes, and renamed GT200 cards will not win the mainstream market against the HD 5000 series.

wow, thread crap often?
 
From everything I'm reading, I get the impression that the GT300 is meant to be a next-generation Nvidia Quadro card and NOT a GeForce. All the features they're building into this and advertising are obviously aimed at workstations, not consumer desktop machines.

http://img223.imageshack.us/img223/3123/picture48h.png
http://techtickerblog.com/wp-content/uploads/2008/11/nvidia-quadro-fx5800.jpg

The prototype on display even kind of resembles an FX5800 (admittedly, that's mostly because of the grill near the front of the card, which I don't believe I've seen on GeForces). The part about it supporting 6GB of VRAM is also a big hint. The FX5800 is the only card I know of that comes packed with 4GB of memory (consumer GeForces don't need more than 2GB max right now anyway). The people commenting on this not being meant for games are certainly correct. If this IS meant to be a Quadro, then gamers sure as hell won't be able to afford it anyway. However, that shouldn't stop Nvidia from releasing a derivative GeForce later. If that's the case, it's rather interesting, because they'd be following a pattern similar to AMD's CPU division: develop and release a server/workstation part first, then a desktop derivative later. For the sake of comparison, Intel seems to do the opposite: release a desktop derivative first and the workstation/server part later (at least, that's what they did with Nehalem).
 
I really hope Nvidia's CEO takes a serious look at the computer hardware market and realizes that it needs something for Q4 2009. I cringe every time I type NVDA to check the stock price. From what I gather, the GT300 architecture is a necessary development for Nvidia's long-term success; months ago they declared they weren't only going to be about graphics anymore. But that still doesn't mean they can look the other way and not stay competitive in the games market, a market their name is highly associated with.

All Fermi shows us right now is a bunch of vaporware. I feel about as confident about it as I did about Bitboys Oy at this point (which may change in Q2 next year, but that's Q2!!!).

It's like the current AMD vs. Intel scenario... AMD currently has nothing serious on the CPU front to compete against the Nehalems, so it focuses on what it can: economical and mainstream CPUs, and gamer-focused GPUs. AMD knows no one's going to believe their Phenom IIs can beat an i7, so it doesn't try to compete in that sector. This is also why AMD didn't release the 5870 at $499 and the 5850 at $350; it knows it can't in the current market conditions.

Nvidia should take a hint and come up with cheap alternatives for its current partners, so they can drop all GeForces by one market pricing segment, e.g. GTS-250 $75-100; GTX-260/275 $100-200; GTX-285 $200-275; GTX-295 $300-380. Moreover, they can play the "DX11 isn't widely implemented yet" card until the season is over. DX11, although very important, really won't be widely implemented until later next year. I'm sure it will be painful, but it's better than selling no GPUs now and having board partners flog obsolete parts off for half price next year.
 
From everything I'm reading, I get the impression that the GT300 is meant to be a next-generation Nvidia Quadro card and NOT a GeForce. All the features they're building into this and advertising are obviously aimed at workstations, not consumer desktop machines.

http://img223.imageshack.us/img223/3123/picture48h.png
http://techtickerblog.com/wp-content/uploads/2008/11/nvidia-quadro-fx5800.jpg

The prototype on display even kind of resembles an FX5800 (admittedly, that's mostly because of the grill near the front of the card, which I don't believe I've seen on GeForces). The part about it supporting 6GB of VRAM is also a big hint. The FX5800 is the only card I know of that comes packed with 4GB of memory (consumer GeForces don't need more than 2GB max right now anyway). The people commenting on this not being meant for games are certainly correct. If this IS meant to be a Quadro, then gamers sure as hell won't be able to afford it anyway. However, that shouldn't stop Nvidia from releasing a derivative GeForce later. If that's the case, it's rather interesting, because they'd be following a pattern similar to AMD's CPU division: develop and release a server/workstation part first, then a desktop derivative later. For the sake of comparison, Intel seems to do the opposite: release a desktop derivative first and the workstation/server part later (at least, that's what they did with Nehalem).

I read it as exactly the opposite, really. I read it that they are making more generalized hardware that, in the end, will be better for gaming. By not developing hardware specifically for DX11, but instead hardware that can be generalized to anything (including DX11 and, theoretically, DX12), they will end up with a much more robust product. The 6GB of RAM is clearly only for Quadro and Tesla cards, no question there, but 1.5GB is a great amount for a GeForce part.

I think ultimately it is a good long term move. I find it comical that people think that because Nvidia will be 3 months late to market the world is going to end, especially when the numbers suggest it will easily match a 5870.
 
Here's the thing: Nvidia isn't trying to compete with ATI anymore. The Intel Larrabee platform is going to be everyone's enemy, because if Intel can do all this and put it on a CPU die, then, my friends, we may well see the death of the GPU business! Nvidia doesn't have time to sit around and nit-pick what gamers are saying. I'll second that with this: if the game isn't coded/programmed for the features or hardware on the card, then it doesn't maximize the card's full ability! Now go back to your argument about ATI this and Nvidia that. Nvidia is going after the majors.

I think they're still competing with ATI, but they're also thinking about the future. Fermi seems to be the first step in answering Intel's Larrabee and AMD's Fusion. The industry thinks the future is the integration of CPU and GPU, and since Nvidia doesn't have an x86 license, if they didn't do anything about it they would be left in the dust. Fermi is definitely about a long-term plan, but the question (especially for the majority of the people on this forum) is: how much of Fermi is dedicated to graphics?

I think they are headed in the right direction (at least for the company in general), but they are also taking a significant risk; only time will tell if it pays off...
 
Nvidia will not be 3 months late, more like 6 months late, which is about 75% of a whole product cycle. No one said the world was ending. I'm looking at this situation as a consumer and a shareholder. If you follow Nvidia's stock, it's like it's on crack, which I sometimes think Jen-Hsun Huang is on as well. (Remember, he does tend to blame his best customers for design problems and offend the folks that make his chips.)

What I'm hoping for is for Nvidia to salvage the situation and be realistic. It can continue to develop the next latest and greatest, but it needs to keep a somewhat healthy financial situation while doing it. No one seems to realize that they're still hurting from the whole G80 laptop GPU substrate bullshit.

AMD went with an elegant chip design focused on the next MS API. It doesn't run CUDA or PhysX, but it will work very well with a piece of software that almost every modern computer will have installed. That's realistic.
 
Nvidia will not be 3 months late, more like 6 months late, which is about 75% of a whole product cycle. No one said the world was ending. I'm looking at this situation as a consumer and a shareholder. If you follow Nvidia's stock, it's like it's on crack, which I sometimes think Jen-Hsun Huang is on as well. (Remember, he does tend to blame his best customers for design problems and offend the folks that make his chips.)

What I'm hoping for is for Nvidia to salvage the situation and be realistic. It can continue to develop the next latest and greatest, but it needs to keep a somewhat healthy financial situation while doing it. No one seems to realize that they're still hurting from the whole G80 laptop GPU substrate bullshit.

AMD went with an elegant chip design focused on the next MS API. It doesn't run CUDA or PhysX, but it will work very well with a piece of software that almost every modern computer will have installed. That's realistic.

Doesn't Nvidia focus more on medical and other scientific apps?
 