nVidia's decision to drop GPGPU?

Well, I've been following both AMD and nVidia for years now. Both have made some great leaps in the GPU field. I can't help but feel that nVidia has been slipping recently. We know that AMD tends to dominate the mid-range, but I feel they are now starting to catch up on the high end too.

I've also never really supported nVidia's use of PhysX as a proprietary technology. Yeah, from a business standpoint it does make sense, but I've always felt it to be dickish, especially considering that PhysX still doesn't have multi-threaded support on CPUs; they are only now adding it. I feel this was deliberate, to force consumers to either go nVidia or be left in the dust when it comes to PhysX. Fortunately, most developers offered us some more options.

Now, we all know that nVidia has decided to drop GPGPU in an effort to cut costs and reduce heat on their GPUs. I can't help but feel this is a cop-out. I feel we'll now end up with a massive divide that goes way beyond just fanboyism; there's far more than personal taste at play now.

But as it is now, I don't see any real net benefits for developers and consumers coming from this decision. Their prices might go down, but now developers are left with a choice: CUDA or GPGPU. Either that, or they need to spend more time and money doing both. Of course, they don't have to use either if they so choose, and I really think that's the most sensible decision they could make.

The PC has always been a relatively open platform. Proprietary technology is so Apple...

What are your thoughts? Constructive posts only, please. I am admittedly posting this as more of an AMD consumer than an nVidia one, so correct me if I'm wrong anywhere.
 
PhysX
I place no fault on nVidia for using PhysX to their benefit; after all, they paid millions for the technology from Ageia. If I were running nVidia, I believe I'd do the same thing in their shoes. From a business standpoint it makes sense.

What I do blame them for is not making it more usable for AMD users. That is the screwed-up part. They should offer a cheap PPU card that can be bought by those who choose to use AMD graphics, make sure that it's compatible with AMD, and even offer help to make sure it all works. This way everyone wins. Also, it should be multi-threaded and use newer CPU technology. I remember reading it was on x87 or something like that, which is very antiquated. They are definitely at fault in a way for not updating it sooner, but I also read that the code was very poor when they bought it. I'm looking forward to seeing what they bring when they optimize the code further in the new release they plan to drop early next year. Should be some interesting gains.
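To illustrate the "newer CPU technology" point, here's a rough, hypothetical sketch (not PhysX code, just a toy example I made up) of the difference between plain scalar math and the SSE-vectorized math a modern CPU physics path can use; put multiple threads on top of this and the CPU path gets a lot more competitive:

Code:
// Hypothetical comparison of scalar vs SSE-vectorized position updates.
// This is NOT PhysX code; it only illustrates why moving off old x87-style
// scalar math onto SSE matters for a CPU physics path.
#include <cstdio>
#include <xmmintrin.h>  // SSE intrinsics

// Scalar path: one position updated per loop iteration.
void update_scalar(float* pos, const float* vel, float dt, int n) {
    for (int i = 0; i < n; ++i)
        pos[i] += vel[i] * dt;
}

// SSE path: four positions updated per loop iteration.
void update_sse(float* pos, const float* vel, float dt, int n) {
    __m128 vdt = _mm_set1_ps(dt);
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 p = _mm_loadu_ps(pos + i);
        __m128 v = _mm_loadu_ps(vel + i);
        _mm_storeu_ps(pos + i, _mm_add_ps(p, _mm_mul_ps(v, vdt)));
    }
    for (; i < n; ++i)  // leftover elements
        pos[i] += vel[i] * dt;
}

int main() {
    float pos[8] = {0};
    float vel[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    update_sse(pos, vel, 0.5f, 8);
    printf("pos[3] = %f\n", pos[3]);  // expect 2.0
    return 0;
}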

GPGPU
I think Nvidia made the right decision to remove or minimize it on their GPUs. After all, these cards are purchased by gamers, and they were suffering because of it. Frankly, the GPGPU hardware was taking up too much die space on the GPU, and the landscape is so competitive that every mm² of die space counts. I anticipate they will simply strengthen their Tesla products; they may make those even bigger CUDA monsters and perhaps beef up the GPGPU hardware on those GPUs.

Gaming competition is a race that will probably never end, and they need to take any advantage they can get there.

These two changes may mean they have to manufacture separate GPUs for Tesla, GeForce, and Quadro, but it's probably a decision that will make them even more competitive in all three markets and better for consumers in all three.
 
Now, we all know that nVidia has decided to drop GPGPU in an effort to cut costs and reduce heat on their GPUs.

o.0

No, they haven't. Even in the GTX 460, where they removed some of the double precision units, they didn't remove GPGPU abilities at all (unless you need DP, of course).

Nvidia isn't removing GPGPU from *any* of their cards, nor do I see them doing that in the short term. Down the line we *might* see Tesla spin off to use a separate GPU, but even that I kind of doubt.

Their prices might go down, but now developers are left with a choice: CUDA or GPGPU.

CUDA *IS* GPGPU. You can't have support for CUDA but not GPGPU. GPGPU is an idea; it just means you can do general-purpose work on a GPU. CUDA is an implementation of the GPGPU idea. OpenCL and DirectCompute are other implementations. Nvidia fully supports all three and will continue to do so. Developers are free to choose whichever GPGPU implementation they want.
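For anyone fuzzy on the terminology, here's a minimal sketch of what "GPGPU via CUDA" means in practice, a toy vector add made up purely for illustration (not taken from any real application); OpenCL and DirectCompute express exactly the same idea through different APIs:

Code:
// Toy general-purpose computation on the GPU via CUDA: plain array math,
// nothing graphics-related. OpenCL or DirectCompute could do the same thing.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    float ha[n], hb[n], hc[n];
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    float *da, *db, *dc;
    cudaMalloc(&da, n * sizeof(float));
    cudaMalloc(&db, n * sizeof(float));
    cudaMalloc(&dc, n * sizeof(float));
    cudaMemcpy(da, ha, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, n * sizeof(float), cudaMemcpyHostToDevice);

    add<<<(n + 255) / 256, 256>>>(da, db, dc, n);  // one thread per element
    cudaMemcpy(hc, dc, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("c[10] = %f\n", hc[10]);  // expect 30.0
    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}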

So it sounds to me like this thread is based on something that isn't remotely true, plus a misunderstanding of terms.

PhysX
I place no fault on nVidia for using PhysX to their benefit; after all, they paid millions for the technology from Ageia. If I were running nVidia, I believe I'd do the same thing in their shoes. From a business standpoint it makes sense.

What I do blame them for is not making it more usable for AMD users. That is the screwed-up part. They should offer a cheap PPU card that can be bought by those who choose to use AMD graphics, make sure that it's compatible with AMD, and even offer help to make sure it all works. This way everyone wins

I mostly agree except for one thing: Nvidia refusing to allow me to use an AMD card for graphics and an Nvidia card for PhysX. That asinine restriction lost them at least one sale from me. I love the idea of GPU physics. I love the idea of better game physics. I hate the way Nvidia has taken that idea and raped the shit out of it while expecting customers to like it. Fuck. That.
 
Now that AMD has been taking the lead in performance and power consumption for a while, Nvidia has been forced to take all necessary measures to recoup the performance crown. One such measure is abandoning the high performance computing (HPC) and GPGPU capability which was one of the cornerstones of the Fermi architecture which powered the GTX 480 family.

This is coming from Megagames. Now, is it possible that they've taken that course of action only for the GTX 580?
 
I mostly agree except for one thing: Nvidia refusing to allow me to use an AMD card for graphics and an Nvidia card for PhysX. That asinine restriction lost them at least one sale from me. I love the idea of GPU physics. I love the idea of better game physics. I hate the way Nvidia has taken that idea and raped the shit out of it while expecting customers to like it. Fuck. That.

LOL!!! :D Agreed!
 
This is coming from Megagames. Now, is it possible that they've taken that course of action only for the GTX 580?

No.

Megagames either means that they will be stripping out many of the DP units and cache, or that they are stripping GPGPU completely. If they really mean the former, then that is quite possible, perhaps even likely; Nvidia basically did that with the GTX 460. If they mean the latter, then the site is clueless and you should ignore whatever they are saying.
 
All they are doing is removing some of the double precision units on the GTX 580. The reason for this is that barely anything uses double precision, and double precision has much higher power usage. It's why AMD only has double precision support on the 5800/5900 and 6900 series cards; the 6800, 5700, and lower are single precision only. This is the only way Nvidia can keep the GTX 580 within the 300W PCI-E 2.0 spec, which is why I'm still saying the 250W TDP Nvidia is claiming is a load of shit.
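For what it's worth, whether a given card can do double precision at all shows up in its CUDA compute capability (1.3 was the first revision with native FP64). A quick throwaway check, my own snippet rather than anything official, looks like this:

Code:
// Report whether GPU 0 supports double precision, based on compute capability.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // query the first CUDA device
    bool has_fp64 = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
    printf("%s: compute capability %d.%d, double precision %s\n",
           prop.name, prop.major, prop.minor,
           has_fp64 ? "supported" : "not supported");
    return 0;
}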
 
This is coming from Megagames. Now, is it possible that they've taken that course of action only for the GTX 580?

When Nvidia designed the GF100, it was supposed to do everything under the sun; in fact, they artificially limited the GPGPU performance on the non-Tesla lines. The problem was that all this extra crap contributed nothing (or very little) to gaming at the cost of a lot of die space, which makes them very uncompetitive against AMD cards that didn't forget their main market (consider that a 5870 performs just a little under a GTX 480 and costs far less to manufacture). The new GF110 is supposedly just the GF100 cut down a little to make it competitive with the AMD cards; in other words, they removed some of the hardware that didn't contribute to game performance. While this does diminish the card in GPGPU computing, it doesn't render it incapable. In fact, given how much the GF100 was artificially limited in GPGPU, I doubt we are going to see any real difference at the consumer level. Of course, this is all speculation, but I would not think of this as Nvidia leaving the GPGPU market, but rather coming out with a gamer's version of the GF100. The one chip to do it all couldn't, and I suspect we will see a divergent line start here.
 
I mostly agree except for one thing: Nvidia refusing to allow me to use an AMD card for graphics and an Nvidia card for PhysX. That asinine restriction lost them at least one sale from me. I love the idea of GPU physics. I love the idea of better game physics. I hate the way Nvidia has taken that idea and raped the shit out of it while expecting customers to like it. Fuck. That.

+1. Took the words right out of my mouth (saved me some typing, in other words).

If nVidia wants to completely disable PhysX when non-nVidia cards are being used, that does not sit well with me, especially when a simple driver modification proves that, without any deliberate upkeep, other graphics cards can be made to interface with nVidia GPUs to receive PhysX information.

nVidia chooses to hinder the proliferation of dedicated physics processing on graphics cards for gaming by the world at large. nVidia chooses to hinder the adoption of PhysX as a physics engine that can see general use on multi-core processors. I choose not to buy an nVidia GPU for graphics or PhysX.

This has left a rotten taste in my mouth ever since I heard about and witnessed it.

/endrant
 
Glide was good, but it still failed. PhysX has been shit since day one; it failed even more.
 
Stop spreading FUD. All the same GPGPU features, such as CUDA, ECC, and caches, are still in the GTX 580.
 
Yeah, if I can agree with anything here, it's that the way Nvidia manages PhysX is selfish; at the very least, they shouldn't have it auto-disable when an AMD card is the primary renderer.

I know that they offered AMD the technology so that AMD could run PhysX on their cards, but AMD declined. I'm not that mad at AMD for declining, because I'm glad they're sticking to their guns with open source alternatives (Havok, OpenGL)
 
+1. Took the words right out of my mouth (saved me some typing, in other words).

If nVidia wants to completely disable PhysX when non-nVidia cards are being used, that does not sit well with me, especially when a simple driver modification proves that, without any deliberate upkeep, other graphics cards can be made to interface with nVidia GPUs to receive PhysX information.

nVidia chooses to hinder the proliferation of dedicated physics processing on graphics cards for gaming by the world at large. nVidia chooses to hinder the adoption of PhysX as a physics engine that can see general use on multi-core processors. I choose not to buy an nVidia GPU for graphics or PhysX.

This has left a rotten taste in my mouth ever since I heard about and witnessed it.

/endrant

Actually, there was a bug recently where, with an NVIDIA and an AMD GPU present, the driver would go and suck up 13% CPU. Very few people had this problem, and the only common factor was that they all had both an NVIDIA and an AMD GPU.

This is because the NVIDIA driver assumed it was the primary GPU in certain cases where it wasn't. So your assertion that there is no upkeep involved in supporting exotic configurations is unfortunately inaccurate.

I'd say that while it's true it wasn't an AMD bug causing issues here, it is definitely a rational example of why NVIDIA chooses not to support this and actively disallows it.
 
Yeah, if I can agree with anything here, it's that the way Nvidia manages PhysX is selfish; at the very least, they shouldn't have it auto-disable when an AMD card is the primary renderer.

I know that they offered AMD the technology so that AMD could run PhysX on their cards, but AMD declined. I'm not that mad at AMD for declining, because I'm glad they're sticking to their guns with open source alternatives (Havok, OpenCL)

Um... you know that OpenCL is not a physics library, right? Therefore it is not an open source alternative.

Also, Havok is not open source; in fact, it is owned by their biggest competitor, Intel. So I fail to see a valid alternative here.
 
Stop spreading FUD. All the same GPGPU features, such as CUDA, ECC, and caches, are still in the GTX 580.

It's still too early to tell. It's also possible that this info was based on an early press model of the GTX 580, which may indeed have forgone GPGPU.
 
Actually, there was a bug recently where, with an NVIDIA and an AMD GPU present, the driver would go and suck up 13% CPU. Very few people had this problem, and the only common factor was that they all had both an NVIDIA and an AMD GPU.

This is because the NVIDIA driver assumed it was the primary GPU in certain cases where it wasn't. So your assertion that there is no upkeep involved in supporting exotic configurations is unfortunately inaccurate.

I'd say that while it's true it wasn't an AMD bug causing issues here, it is definitely a rational example of why NVIDIA chooses not to support this and actively disallows it.

The NVIDIA driver assumed it was the primary GPU in certain cases where it wasn't, despite the card being delegated to PhysX? Is that not supporting AMD cards, or blatantly ignoring that they exist? That doesn't wash. If there were some basic incompatibility (other than marketing) I could see this, but it really is nothing more than Nvidia taking a punitive measure against AMD owners who wish to run both. If you can enable PhysX with a very simple hack, then there wasn't much wrong to start with.
 
GPU accelerated PMEMD has been implemented using CUDA and thus will only run on NVIDIA GPUs at present. Due to accuracy concerns with pure single precision the code makes use of double precision in several places. This places the requirement that the GPU hardware supports double precision meaning only GPUs with hardware revision 1.3 or 2.0 and later can be used. At the time of Amber 11's release this comprises the following NVIDIA cards (* = untested):

Gamers having cheap video cards >> science and understanding human biology

http://ambermd.org/gpus/

I find it somewhat ironic that [H] is among the biggest for Folding, but (and maybe I am misunderstanding something here) there seems to be a lot of hatred towards Nvidia for making Fermi cards that support FP64, since it does not offer tangible benefits for gamers but increases cost and temperature. Meanwhile, gamers will spend a lot of money on electricity running Folding... seems strange to me.

I use NAMD, so FP64 doesn't matter so much for me. I talked with the developers of NAMD at the TCBG @ UIUC, and for the force calculations in NAMD there are no destructive data concerns from using single precision.

But realistically, there could be a massive speedup for all kinds of MD simulations via GPUs if, and only if, the GPUs support FP64. Otherwise, the performance hit is too much to make running a lot of the sim on the GPU worth it.
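As a toy illustration of the precision point (contrived, nothing to do with real MD code): accumulate millions of tiny contributions in single precision and the total drifts visibly, while double precision stays put. That's roughly why some accumulations in these codes want FP64:

Code:
// Contrived demo: the same sum carried in float vs double.
#include <cstdio>

int main() {
    const long steps = 10000000;   // stand-in for many simulation timesteps
    const double dv = 1.0e-7;      // tiny per-step contribution; exact total is 1.0

    float  sum_f = 0.0f;
    double sum_d = 0.0;
    for (long i = 0; i < steps; ++i) {
        sum_f += (float)dv;
        sum_d += dv;
    }
    printf("float  total: %.9f\n", sum_f);
    printf("double total: %.9f\n", sum_d);
    printf("float drift:  %.3e\n", (double)sum_f - sum_d);
    return 0;
}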
 
They suck at double precision, but they typically still dominate in DirectCompute and OpenCL benches, AFAIK.
 
Actually, there was a bug recently where, with an NVIDIA and an AMD GPU present, the driver would go and suck up 13% CPU. Very few people had this problem, and the only common factor was that they all had both an NVIDIA and an AMD GPU.

This is because the NVIDIA driver assumed it was the primary GPU in certain cases where it wasn't. So your assertion that there is no upkeep involved in supporting exotic configurations is unfortunately inaccurate.

I'd say that while it's true it wasn't an AMD bug causing issues here, it is definitely a rational example of why NVIDIA chooses not to support this and actively disallows it.

$100 says that "bug" was intentionally put there, just like the reverse gravity "bug" when using an AMD GPU for graphics and an Nvidia GPU for PhysX.

There really isn't a technical reason for the decision at all. It was 100% a business decision, not a technical one. They lied about the compatibility/support reason. If they support using a dedicated card for PhysX (which they do), then they've already done all the work needed. People never seem to grasp that PhysX does *not* talk directly to DirectX/OpenGL. PhysX never does anything rendering-related. PhysX and rendering are completely independent loops that never directly communicate (they don't even store data the same way). The game takes the PhysX results and then renders them. The drivers don't do anything to help with that; it's all done by the game. That is why the support claim is utter bullshit: the two technologies don't talk to each other at all. That is also why PhysX can run on just about everything, from the PS3 with Cell to a PC with a CPU or a GPU. PhysX doesn't give a shit how the results are rendered. It simply does math, nothing more.
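To make that separation concrete, here's a hypothetical, stripped-down sketch of a game's main loop. The PhysicsWorld/Renderer names are made up and no real PhysX or DirectX/OpenGL calls appear; the point is just that the physics step produces transforms, the game hands them to the renderer afterwards, and neither side ever calls into the other's API:

Code:
// Hypothetical sketch of a decoupled physics + rendering loop. Not PhysX code.
#include <vector>

struct Transform { float pos[3]; float rot[4]; };

struct PhysicsWorld {  // stand-in for a physics engine's scene
    std::vector<Transform> bodies;
    void step(float dt) {
        // Pure math: integrate, resolve collisions, etc.
        // No DirectX/OpenGL calls ever happen in here.
        for (auto& t : bodies)
            t.pos[1] -= 9.81f * dt;  // toy placeholder for real integration
    }
};

struct Renderer {  // stand-in for the game's rendering layer
    void draw(const std::vector<Transform>& poses) {
        // The *game* would hand these poses to its graphics API here.
        // The physics engine is never involved in this call.
        (void)poses;
    }
};

int main() {
    PhysicsWorld physics;
    physics.bodies.resize(100);
    Renderer renderer;

    for (int frame = 0; frame < 3; ++frame) {  // simplified main loop
        physics.step(1.0f / 60.0f);            // 1) simulate (results are just data)
        renderer.draw(physics.bodies);         // 2) the game passes results to rendering
    }
    return 0;
}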
 
Um... you know that OpenCL is not a physics library, right? Therefore it is not an open source alternative.

Also, Havok is not open source; in fact, it is owned by their biggest competitor, Intel. So I fail to see a valid alternative here.
I meant (and amended my post) to say OpenGL, but I thought Havok was non-proprietary?
 
I meant (and amended my post) to say OpenGL, but I thought Havok was non-proprietary?

It's proprietary, but it has several companies backing it: Intel, AMD, and such. I think you're thinking of Bullet, which is open source.
 
I meant (and amended my post) to say OpenGL, but I thought Havok was non-proprietary?

Nope, Havok is owned by Intel. Perhaps you were thinking of Bullet, which is open source and can use OpenCL (doesn't use OpenCL for much at the moment, though).
 
It's proprietary, but it has several companies backing it: Intel, AMD, and such. I think you're thinking of Bullet, which is open source.
Nope, Havok is owned by Intel. Perhaps you were thinking of Bullet, which is open source and can use OpenCL (doesn't use OpenCL for much at the moment, though).
Yeah, it must have been Bullet I was thinking of.
 
Glide was good, but it still failed. PhysX has been shit since day one; it failed even more.

Glide failed because competition arrived both in hardware (nVidia, ironically) and software (OpenGL, DirectX). People realized that vendor-locked solutions always suck, and even with 3dfx opening up Glide, it was already on track for the grave.
 
The NVIDIA driver assumed it was the primary GPU in certain cases where it wasn't, despite the card being delegated to PhysX? Is that not supporting AMD cards, or blatantly ignoring that they exist? That doesn't wash. If there were some basic incompatibility (other than marketing) I could see this, but it really is nothing more than Nvidia taking a punitive measure against AMD owners who wish to run both. If you can enable PhysX with a very simple hack, then there wasn't much wrong to start with.

It was a bug, which is why it's been fixed for an upcoming driver, and why the conspiracy theory is uncalled for.

The point is, look... this shit happens, and NVIDIA is never going to test their products in conjunction with AMD's, and vice versa. So they blocked it, partially for support reasons, partially for business reasons.
 
This is coming from Megagames. Now, is it possible that they've taken that course of action only for the GTX 580?

This is absolutely and completely false. The guys are full of shit. GF110 has all the same features as GF100. I think some people laugh in their basements making shit like this up.
 
$100 says that "bug" was intentionally put there, just like the reverse gravity "bug" when using an AMD GPU for graphics and an Nvidia GPU for PhysX.

There really isn't a technical reason for the decision at all. It was 100% a business decision, not a technical one. They lied about the compatibility/support reason. If they support using a dedicated card for PhysX (which they do), then they've already done all the work needed. People never seem to grasp that PhysX does *not* talk directly to DirectX/OpenGL. PhysX never does anything rendering-related. PhysX and rendering are completely independent loops that never directly communicate (they don't even store data the same way). The game takes the PhysX results and then renders them. The drivers don't do anything to help with that; it's all done by the game. That is why the support claim is utter bullshit: the two technologies don't talk to each other at all. That is also why PhysX can run on just about everything, from the PS3 with Cell to a PC with a CPU or a GPU. PhysX doesn't give a shit how the results are rendered. It simply does math, nothing more.

Right, it was intentionally put there, because NVIDIA has all day to sit around and think of convoluted ways of intentionally making their drivers appear buggier. :rolleyes: Nice conspiracy theory, although patently false.
 
Well, this is 100% false :p, at least the part about dropping GPGPU. If they drop GPGPU, the card can no longer be classified as a DX11 part, since DX11 requires DirectCompute, which is a GPGPU implementation.

DirectX 11 features include:

* Tessellation – Tessellation is implemented on the GPU to calculate a smoother curved surface, resulting in more graphically detailed images, including more lifelike characters in the gaming worlds that you explore.
* Multi-Threading – The ability to scale across multi-core CPUs will enable developers to take greater advantage of the power within multi-core CPUs. This results in faster framerates for games, while still supporting increased visual detailing.
* DirectCompute – Developers can utilize the power of discrete graphics cards to accelerate both gaming and non-gaming applications. This improves graphics, while also enabling players to accelerate everyday tasks, like video editing, on their Windows 7 PC.
 
It was a bug, which is why it's been fixed for an upcoming driver, and why the conspiracy theory is uncalled for.

The point is, look... this shit happens, and NVIDIA is never going to test their products in conjunction with AMD's, and vice versa. So they blocked it, partially for support reasons, partially for business reasons.

If it was a bug and it's been fixed, then it had nothing to do with AMD cards? Sorry, but that makes it an even weaker argument.
 