NVIDIA Fermi White Paper

One thing that scares me is that so far they aren't really addressing what Fermi can do as a GPU in games. If all this computational power doesn't perform well in games, your average gamer and enthusiast isn't going to care what the hell it can do with CUDA or anything else.

I am really puzzled by Nvidia's new direction/marketing strategy. They are already late to market, and with no specifics on how Fermi will benefit gamers, why would you delay your buying decision waiting for it? They are conceding the GPU market to AMD/ATI before they bring their own products to market. This may turn out to be a case study in how to lose market share.
 
At this point I'm probably going to buy the 5870 X2 and some more monitors. If in five months NVIDIA has their shit together and their product is compelling, I'll buy that. All NVIDIA has really done this time is drive me (and I'm sure others as well) into the AMD camp this round. By the time NVIDIA's products hit the market, it may be refresh time for AMD.

I'm used to buying video cards every six months anyway, so it looks like I'm going with the red team this time.
 
I don't know why NV announced Fermi with such a high profile. To keep people from buying the ATI 5870? I think it only hurts their own business. Only the die-hard NV fans will wait for the G300 to arrive 3-4 months later. Normal people will buy the ATI 5870 or 5850 now if they need a new card. Die-hard NV fans will hold off on all their purchases and wait for the new Fermi, so who is going to buy the NV 280, 285, and 295 now?
 
I don't know why NV announced Fermi with such a high profile. To keep people from buying the ATI 5870?
No, it was to get the attention of the HPC market. Those weren't gamers in the audience.

Only the die-hard NV fans will wait for the G300 to arrive 3-4 months later. Normal people will buy the ATI 5870 or 5850 now if they need a new card. Die-hard NV fans will hold off on all their purchases and wait for the new Fermi, so who is going to buy the NV 280, 285, and 295 now?

Normal people don't know the 5870 exists. However, the enthusiast community will flock to AMD's new stuff. But really, the thing is a week old and people are already predicting the end of Nvidia. Funny.....
 
...so it looks like I'm going with the red team this time.

Agreed. I jumped in and bought an XFX 5850.
 
At this point I'm probably going to buy the 5870 X2 and some more monitors. ...

I wouldn't blame you for being a little gun shy after your issue with the 4870 X2 (though I blame the board more than the card). For myself, I am thinking more of just going with the 2GB card when it comes out (I think we may finally have a reason for a 2GB card) and a couple of extra 28" monitors :D

Also, what has come to mind is some of those 42" monitors TG got a while back.
 
I love how people want to hate on one company or another no matter what they do. Nvidia is obviously trying to push technology to another level and all people can say is "Yeah, but can it play Crysis at 60 FPS?" Are people really that single-minded?

Also, claims that Nvidia does not care about the average consumer are laughable at best. Nvidia has come out with a number of solid-performing cards for the average consumer. They have been launching their top-of-the-line enthusiast cards first for a while now and then making more mature, lower-end cards for the average consumer. I would say that they have been paying more attention to the average consumer than ATI (who, remember, had the overheating problem on their average-consumer card when it came out; remember the HD 4850?). But really, both companies are doing their best to put out solid products.

The funny thing is that Nvidia is doing exactly what ATI should have been doing when they were bought out by AMD: putting CPU and GPU functions together to do more. In an era where parallel computing, multi-core, multi-threading, virtualization, etc. are all at the forefront, what is ATI doing? They are making technology to output to multiple monitors at once. Okay... big deal. There are already dedicated solutions for that. What are they doing to further the advancement of virtualization, parallel computing, etc.? Nvidia started making quite a bit of money with CUDA in the data-modeling world and knew that is a profitable direction for them. So what do they do? They make a card that is even more enhanced for those applications. Also remember that DX11 is supposed to have more physics capability in it; I am sure Nvidia is also thinking about that, especially the tessellation features of DX11.

Really, I like how people make out Eyefinity to be this 'awesome' feature that will kill Nvidia, when nothing else in the 5000 series seems to be all that much different from the 4000 series. Sure, the performance is far better, and that in itself is significant (in fact I have also considered getting one, but I know it's a bit early in the race to pick a winner), but what other markets can they capture to help out the failing AMD side? Why isn't AMD/ATI making products to reach more markets? Why aren't they combining their two sides to produce a better product, one that can achieve far more? These are the questions people should be asking... Or perhaps they are, and just aren't able to do anything with that product, or don't know how to market it...
 
I love how people want to hate on one company or another no matter what they do. ...


TL;DR


Have you seen the benchmarks for the 5750? It beats the GTS 250...
 
I wouldn't blame you for being a little gun shy after your issue with the 4870 X2 (though I blame the board more than the card)...

True. That experience does make me a bit gun shy about buying an AMD card again. However, I believe the problem was a compatibility issue between the 4870 X2 and the D5400XS. I think AMD could have addressed the issue if they had wanted to, but didn't. Granted, it didn't make sense for them to put a whole lot of time and money into duplicating the issue and trying to resolve it, as very few people actually bought the D5400XS boards in the first place.
 
In an era where parallel computing, multi-core, multi-threading, virtualization, etc. are all at the forefront, what is ATI doing? ... What are they doing to further the advancement of virtualization, parallel computing, etc.?


First of all, AMD has the Firestream series of processors (built off of the 4xxx-series cards) that are designed for OpenCL and Brook+ applications. Secondly, the TFLOP count for the 5xxx series is ~2.5 TFLOPS SP and roughly ~1.3 TFLOPS DP. That is an increase of 2x over the last-generation cards, meaning that a Firestream processor based off of the 5xxx series will be at minimum 2x as fast in both single-precision FP and double-precision FP. I would say that is a pretty good leap in terms of processing power for DP FP science applications. We have absolutely no performance specs, even theoretical specs, on Fermi (at least, not that I have seen), except for estimations that it SHOULD be ~3 TFLOPS SP (which should roughly equate to 1.5 TFLOPS DP).
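As a rough sanity check on figures like these, peak throughput is usually estimated as ALUs x clock x FLOPs per ALU per clock, with DP some architecture-dependent fraction of SP. Below is a minimal sketch of that arithmetic; the ALU count, clock, and the 1/2 and 1/5 DP ratios are illustrative assumptions on my part, not vendor specs.

Code:
// peak_flops.cu -- back-of-the-envelope peak-throughput estimate.
// All numbers below are illustrative assumptions, not official specs.
#include <cstdio>

// Peak GFLOPS = ALUs * clock (GHz) * FLOPs issued per ALU per clock.
static double peak_gflops(int alus, double clock_ghz, int flops_per_clock) {
    return alus * clock_ghz * flops_per_clock;
}

int main() {
    // Hypothetical 5870-class part: 1600 ALUs at 0.85 GHz, 2 FLOPs/clock (MAD).
    double sp = peak_gflops(1600, 0.85, 2);
    // The right DP divisor depends on the architecture, hence the two guesses.
    printf("SP ~%.0f GFLOPS, DP ~%.0f (1/2 rate) or ~%.0f (1/5 rate) GFLOPS\n",
           sp, sp / 2.0, sp / 5.0);
    return 0;
}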

This card, while it appears to be a major architectural departure, is not going to be all that much faster than the ATI part, and if the prices are the usual, it will not beat it in price/performance/watt. Companies that need HPC will buy six cheaper cards instead of five more expensive ones to get the same performance in an HPC cluster.

Another thing: one of the major architectural differences of the card, the fused multiply-add (FMA) instructions, doesn't really even increase the performance of DP floating point relative to SP floating point. It is still roughly half. I was expecting at least a relative speed increase of 10% with that type of change, but it doesn't change it at all.
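For anyone curious, here is a minimal CUDA sketch contrasting a plain multiply-then-add with the fused intrinsic; the fused instruction's main guarantee is a single rounding step, and whether DP throughput goes up is a hardware question, exactly as argued above. The kernel and launch parameters are made up for illustration.

Code:
// fma_demo.cu -- separate multiply+add vs. fused multiply-add (one rounding).
// Illustrative only; buffers are left uninitialized because only the
// instruction difference matters here, not the numerical result.
#include <cstdio>

__global__ void mad_vs_fma(const float* a, const float* b, const float* c,
                           float* out_mad, float* out_fma, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out_mad[i] = a[i] * b[i] + c[i];           // may compile to MUL+ADD (two roundings)
        out_fma[i] = __fmaf_rn(a[i], b[i], c[i]);  // explicit fused multiply-add (one rounding)
    }
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c, *m, *f;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMalloc(&c, n * sizeof(float));
    cudaMalloc(&m, n * sizeof(float));
    cudaMalloc(&f, n * sizeof(float));
    mad_vs_fma<<<(n + 255) / 256, 256>>>(a, b, c, m, f, n);
    cudaDeviceSynchronize();
    printf("kernel finished\n");
    return 0;
}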

I am excited about the new ATI cards. The nVidia cards, with CUDA and '3D Graphics' (which I can't use anyway), don't interest me at all, as all of my applications are not using the CUDA wrapper but are being written to the OpenCL specs. I don't want the companies using my software to be locked into specific hardware; that makes my business smaller.

On a lighter note, nVidia's support for OpenCL is certainly welcome. (And I like the look of the CUDA DE for Visual Studio... if only CUDA worked on ATI cards, I would actually look into using it instead of Brook+; coding in C++ would be much more efficient.)
 
Ian McNaughton comes out against The Way It's Meant to Be Played
My point is you don't understand: nVidia offers an open standard (CUDA) on a closed platform. This is detrimental to the entire PC industry. Informed consumers might consider this when balancing immediate needs against long-term investments in hardware.

AMD offered an open standard on closed hardware with its earlier iterations of Brook+ and CAL support. It has now transitioned to supporting open standards on open hardware with OpenCL and DirectCompute. These APIs are open for anyone to use, require no special developer assistance to implement, and don't lock you in to any hardware other than what supports the standard.

If you can't understand why this is a rebuttal to 'AMD doesn't have good games and physics,' then perhaps you should be more reticent to proffer opinions (repeatedly...) on the subject.

You are in the minority among gamers in your hardware purchases; not many run SLI and 3D glasses. Consider the more mainstream gamer when you claim that AMD doesn't offer the best gaming experience: it doesn't for you, and you should be absolutely clear on that point every time you make it.

Consumers win with flexibility and choice. nVidia is offering some great choices, but they lack long-term flexibility and choice: when the industry moves to DirectX 11 and OpenCL (and it has already started to do so), all the guys with $500 invested in nVidia hardware will be left out in the cold (but this is nothing new, as new standards arise).

DirectX 11 has already changed the game, and AMD has responded by supporting open standards on hardware agnostic platforms. This is a good thing. nVidia is downplaying this... why? Because it is to their detriment.

GTX 285 SLI with a GTX 260 for PhysX is an awesome config, but it's one in a million, or more; your gaming experience is absolutely not the norm for the average gamer out there. Seriously, trade a GTX 280 for a 260 just to play one title? Not many people would consider that; that you do speaks to a different set of priorities (no judgement, just opinion :) ) than many people have.

Bottom line: AMD is not failing to offer you anything; game companies are, for some reason, not allowing the experience on all hardware.

What's that reason? Money. nVidia kickbacks? Game sales? Exclusivity agreements?
Why are TWIMTBP partners refusing to talk to AMD engineers? nVidia agreements? Bad smell?
It's odd... something doesn't add up.

No, this is detrimental to AMD and beneficial to Nvidia. Let's not forget that CUDA will always be ahead of OpenCL in implementation. I think when people see Nvidia's next-gen architecture they will understand why CUDA is important: it can open up hardware access to new features that take a long time to make it into an open standard like OpenCL. Because Nvidia has direct control over its own API, it can expose new features immediately as they arrive with new hardware. I think people will see why the ability to unlock new features, especially when major changes occur in GPU compute hardware, is beneficial to anyone interested in those features.

I think some people will very soon understand what I mean by changes to GPU architecture. And hardware without software is pointless. I will comment on this later when the information becomes available.



See, this is where I don't understand you. Nvidia supports OpenCL and DirectX Compute. In fact, Nvidia has openly embraced them: it has an OpenCL SDK available now and offers the tools to "port" CUDA over to OpenCL with minimal effort, and will offer the same for DirectX Compute.

The reason for this is simple: CUDA was a driving force behind OpenCL's standards, and I challenge anyone here to prove me wrong. OpenCL and DirectX Compute offer a nearly identical programming model.

Case in point (a short sketch illustrating these follows the list):

- Sharing of key abstractions: threads
- Thread blocks and grids of thread blocks
- Barrier synchronization
- Per-block shared memory
- Global memory
- Atomic operations
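To make the overlap concrete, here is a minimal CUDA sketch (the kernel, sizes, and the reduction it performs are illustrative, not taken from either spec) that touches each item in the list above: a grid of thread blocks, per-block shared memory, barrier synchronization, global memory, and an atomic operation. The OpenCL equivalents map almost one for one: work-groups, __local memory, barrier(), global buffers, and atomic adds.

Code:
// abstractions_demo.cu -- per-block sum using the abstractions listed above.
// Kernel and launch parameters are illustrative only.
#include <cstdio>

__global__ void block_sum(const int* in, int* total, int n) {
    __shared__ int tile[256];                        // per-block shared memory
    int gid = blockIdx.x * blockDim.x + threadIdx.x; // thread within a grid of thread blocks
    tile[threadIdx.x] = (gid < n) ? in[gid] : 0;     // read from global memory
    __syncthreads();                                 // barrier synchronization

    // Tree reduction within the thread block.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        atomicAdd(total, tile[0]);                   // atomic operation on global memory
}

int main() {
    const int n = 1024;
    int host[n];
    for (int i = 0; i < n; ++i) host[i] = 1;

    int *d_in, *d_total;
    cudaMalloc(&d_in, n * sizeof(int));
    cudaMalloc(&d_total, sizeof(int));
    cudaMemcpy(d_in, host, n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemset(d_total, 0, sizeof(int));

    block_sum<<<(n + 255) / 256, 256>>>(d_in, d_total, n); // grid of thread blocks
    int total = 0;
    cudaMemcpy(&total, d_total, sizeof(int), cudaMemcpyDeviceToHost);
    printf("sum = %d (expected %d)\n", total, n);

    cudaFree(d_in);
    cudaFree(d_total);
    return 0;
}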

This is the reason it is so easy to use Nvidia's own toolset to convert CUDA to OpenCL and vice versa, and eventually to DirectX Compute. In fact, a lot of people here are arguing about Bullet; Bullet was originally designed on CUDA and was ported to OpenCL using Nvidia's toolset and OpenCL SDK.

There is zero reason why CUDA and OpenCL can't coexist so long as they have similar functionality, and I see no reason why they won't continue to follow that trend, with CUDA always being a step ahead in feature support. Hence the reason CUDA will continue to exist.

Standards are great, and I fully support things such as OpenCL and believe they will be great for cross-platform support. But they don't always have the best interests of the hardware in mind when exposing features and utilizing the best your hardware has to offer.
 
Nope.



Heh, you said that like Nvidia's OpenCL support isn't miles ahead of AMD's.

Wow... ok, I retract that part of my comment. I found the calcs; it comes out to ~0.5 DP TFLOPS... that's depressing. Thankfully I don't code apps that take advantage of DP, but that is still rather depressing.

I am pretty sure they are on pretty even footing OpenCL wise. I have seen nothing in technical documents to make me assume otherwise.
 
Edit: never mind, that doesn't add up to anything significant.


On paper this should be a great card, but I don't know if I trust Q1 market availability at this point. With massive changes like this you'd almost expect a 2009 launch to be a year ahead of development, so props to NV on that at least. It will be interesting to see what Nvidia does to stall AMD in the months ahead, because there's certainly no reason to avoid buying a 4850/4870 this holiday season.
 
Can anyone shed some light on the shared memory? It is placed alongside the L1 cache in the diagram, so does it perform at L1 cache speed?
http://www.hardocp.com/image.html?i...XbFpOVmxwV1YydGFVRlpyU2xOVlJsRjNVRkU5UFE9PQ==
Hopefully that link works; it's MASSIVE in length!!
If not, it's page 15 of the Fermi white paper:
http://www.hardocp.com/article/2009/09/30/nvidias_fermi_architecture_white_paper

Will the GeForce variants support ECC memory?

The chips are capable but I doubt desktop cards will get it.


...This card, while it appears to be a major architectural departure, is not going to be all that much faster than the ATI part, and if the prices are the usual, it will not beat it in price/performance/watt. Companies that need HPC will buy six cheaper cards instead of five more expensive ones to get the same performance in an HPC cluster...

Fermi's new caching system will keep the threads loaded more of the time; it remains to be seen how fast the caches are, though.
 
I don't know all of the details of either architecture, but from what I've gathered:

The shared memory is also L1; the only difference is how it can be used. For GT300, it says that L1 is for caching local and global memory operations, and shared memory is shared between all of the threads in an SM. This is used a bit differently than in the previous architecture.


In GT200 there are 30 SMs with 8 shader cores each (240 shader cores total). I think that is up to 960 threads scheduled per clock.
L1 is a 16 KB shared/texture cache per SM; some CUDA memory operations couldn't use it.

In GT300 there are 16 SMs with 32 shader cores each (512 shader cores total). That should be up to 1024 threads scheduled per clock, and each double-precision operation takes 2 threads.
L1 is either 16 KB cache and 48 KB shared memory, or 48 KB cache and 16 KB shared memory, per SM. The 768 KB L2 (load, store, texture) is shared across the chip.
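Assuming the 16 KB / 48 KB split really is selectable from software (the CUDA runtime does expose a per-kernel hint for this, cudaFuncSetCacheConfig), a minimal sketch might look like the following; the kernels, sizes, and access patterns are purely illustrative, and the hint can be ignored on hardware without a configurable L1.

Code:
// cache_config_demo.cu -- hinting the L1 / shared-memory split per kernel.
// Names and sizes are illustrative only.
#include <cstdio>

// A kernel that leans on per-block shared memory (wants the 48 KB shared config).
__global__ void shared_heavy(const float* in, float* out) {
    __shared__ float tile[8192];                      // 32 KB of shared memory
    int i = threadIdx.x;
    tile[i] = in[blockIdx.x * blockDim.x + i];
    __syncthreads();
    out[blockIdx.x * blockDim.x + i] = tile[i] * 2.0f;
}

// A kernel with scattered global loads (would rather have the 48 KB L1 config).
__global__ void cache_heavy(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[(i * 37) % n];             // irregular reuse, helped by L1
}

int main() {
    // Per-kernel preferences; the runtime treats these as hints, not guarantees.
    cudaFuncSetCacheConfig(shared_heavy, cudaFuncCachePreferShared); // 48 KB shared / 16 KB L1
    cudaFuncSetCacheConfig(cache_heavy, cudaFuncCachePreferL1);      // 16 KB shared / 48 KB L1
    printf("cache preferences set\n");
    return 0;
}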


Keep in mind that threads per clock are not equal to operations per clock with the architecture changes. It says GT200 does 240 MAD ops/clock (30 DP MAD ops/clock) and GT300 does 512 FMA ops/clock (256 double-precision FMA ops/clock). It also says the SM is 8 times faster at double precision, which it is, but they don't tell you the real-world performance.
 
TL;DR


Have you seen the benchmarks for the 5750? It beats the GTS 250...

I am sorry, did you have a point there? You mean the newest generation of ATI cards beats an nVidia card two generations older? Wow... I am shocked. Remember, the GTS 250 is just a rebranded 9800 GTX, which is a rebranded 8800 GTX. So you are saying that a 5750 beats a souped-up 8800 GTX and I am supposed to be impressed? Hardly.

First of all, AMD has the Firestream series of processors (built off of the 4xxx-series cards) that are designed for OpenCL and Brook+ applications. ...

Okay, so where exactly is the big new breakthrough in technology? Like I said, increased performance with little else to show for it. The only changes they made were to be compliant with DX11 standards; no breakthroughs of their own. Meanwhile, Nvidia is at least trying to push the envelope and make something that can do far more.

As far as the price/performance arguments go, we all know that Nvidia has also put several cards in competition with ATI cards and has been on par in many of the recent price/performance battles, so really all those arguments are ridiculous in my mind. Nvidia is known to put out their GTX line first and then their more budget cards. Their budget cards tend to be better and more mature than ATI's, which is exactly what Nvidia should be doing. Someone buying a budget card is not going to go, "Oh, hey, I need the latest and greatest right now!" They will wait until they find something they like, at the price they like, and that is exactly how Nvidia markets.

And you are all missing the point of my post. This isn't about who is better; all competition is good. This is about who is making strides to push the envelope. ATI should be able to push it more with all the engineers over at AMD, but they don't. They haven't designed anything to compete in that arena at all. In the days where more and more people are doing console gaming, and the computer, console, and mobile worlds are becoming more blurred, Nvidia is the one trying to capture that market. The technologies they are producing are geared toward an all-purpose chip that can perform many different tasks, much like where computing is going. I am just shocked that ATI is not trying to do the same thing. The AMD/ATI merger is one of the biggest failures in mergers I have seen in a long time. They really don't bring anything new, different, or inventive to the table to compete with other companies. They produce decent products, but they are not future-seeking. It is already hurting AMD; how much time before the same starts happening to ATI?
 
I am sorry, did you have a point there? ... This is about who is making strides to push the envelope. ... They produce decent products, but they are not future-seeking. It is already hurting AMD; how much time before the same starts happening to ATI?

Excuse me if I may have misread your post, but I feel what you're trying to get at is that Nvidia's main focus with its Fermi architecture has not simply been "gamer" relations but GPGPU technologies? I somewhat grasp (but don't agree with) your statements about AMD lacking "breakthrough" technologies, but I can't hate on AMD for not focusing so heavily on GPGPU either (which they actually have been). I don't run any advanced server clusters or dedicated PCs doing scientific research on global climate change, etc., for that matter. I do have a minor interest in supercomputers and their related functions, and Nvidia may make a good name for themselves in that market if Fermi's white papers hold true; I can't argue against that. But I buy a GPU to play games on (PhysX aside :rolleyes: ). And to say they are not future seeking, as you put it, is one hell of a bold statement to make. They were the first to DX9, the first to DX10.1, and now the first to DX11. They have dealt with tessellation for years now, and from what I've seen they've played a pretty good "future seeking" role in GPU technologies. Let's not forget which GPUs power the Wii (#1 in console sales) and the Xbox 360 (#2 in console sales). AMD/ATI does plan for the future.
 
...And to say they are not future seeking, as you put it, is one hell of a bold statement to make. They were the first to DX9, the first to DX10.1, and now the first to DX11. ... AMD/ATI does plan for the future.

ATI was the only one to do DX10.1, something very few game developers have used. And your reasoning for not being on board with Fermi is that you personally don't use it. You, like many other people, seem to be stressing that you buy GPUs for games. Does Nvidia not make GPUs for games? GPUs that have again and again been the highest performing, while still offering numerous low-budget and price/performance options? And while accomplishing that, they are still pushing the envelope and creating new arenas where they can compete and sell products, which is extremely important in this recession-ridden era. That is my point; ATI is not doing the same thing. Let us not forget that originally it was Nvidia going into the console market, then Nvidia going into the physics market, and then Nvidia going into the budget supercomputer market. My point is that Nvidia is pushing the envelope and making new technologies that can be used for many different tasks, not just one task. Again, ATI is merged with AMD and I see very few breakthroughs from that merger. You suggest they have a number of breakthroughs, but what are they? Being the first to a standard that is already laid out for them and that anyone can use?

I don't hate ATI; I hate that they let AMD bring them down and don't utilize the opportunity to merge technologies. I hate that they continue to "play it safe" rather than pushing to develop something new. Even their "Eyefinity" is nothing new; other companies have been doing the same thing for years. What I am afraid of here is that ATI, while making solid products, is wasting away a lot of their potential, and that progress is being stunted by their merger with AMD. It is not that I am some huge fan of Nvidia; it is rather that I am a huge fan of competition and progress, and I feel that ATI is only competing enough to stay in business and not enough to progress and grow.
 
ATI was the only one to do DX10.1, something very few game developers have used. And your reasoning for not being on board with Fermi is that you personally don't use it.

Yes! ATI was the only one to go the DX10.1 route, and I give them applause for it.
Your "personally don't use it" statement :rolleyes: you're right, CAUSE IT'S NOT EVEN OUT YET! I CAN'T USE IT! :eek:

Are you bashing ATI because the 5000 series was somehow below your expectations, or because you think they must follow Nvidia as their leader? Just because the green team decides they want to put a strong focus on GPGPU doesn't mean the red team has to follow, even though the ATI 2000 through 5000 series can all do GPGPU work. So I suppose ATI should tear their 5000-series architecture apart, go back to the drawing board, and release a new chip from the ground up? They don't have to!
 
To: NoOhter

AMD has done exactly what they had to this time around with the 5800 series, and they've done it well. Ranting about a "lack of breakthrough technologies" does little to mask the fact that they released new cards (powerful ones at that) with full DX11 support, not to mention idle power draw that is simply mind-blowing. I play on Nvidia cards as well as ATI ones. They launched a new series of cards, simple as that. Nvidia brags up these "new" technologies they're introducing in their next-gen cards, and somehow ATI should get the shaft? Don't forget the other series of Nvidia GPUs (8000, 9000, 200 series) that shared the same architecture with no significant new technologies incorporated from one series to the next. This happens with hardware. Many people (including myself) are worried about what impact all this GPGPU design focus in Nvidia's Fermi architecture will have on its graphics rendering performance.
 
That all sounds really nice, but I am not sure who thought it would be a good idea to more than double the number of transistors from one generation to the next. Let's pray that nVidia gets good yields :/
 