PhysX: a dying tech?

Wolf-R1

So from the bit of research I've done, it looks like Havok is used in more games than PhysX, which is a shame considering the greater performance potential granted by dedicated physics hardware. Is this true, or have I found incorrect data to back up my assertions?

My other thread about adding an 8800GTS 512 that I have lying around for dedicated PhysX processing is stagnant, and I can't find much that supports PhysX other than random, mostly older titles.

Initial searches on current or not-so-old titles yield quite a few big-name titles over the last few years that support Havok, while the PhysX list is short and filled with older games.

Am I not seeing the right stuff here? Is it even worth adding dedicated physics hardware?
 
It is only worth adding a dedicated PhysX card if you want to play games that support hardware accelerated PhysX.

You could also use that 8800GTS to add support for more monitors, which is about 99.9% of what my GTS250 gets used for.

I can't stand using less than 2 monitors anymore... and even then, only 2 monitors annoys me. I currently have 3 but would like to add another at some point.
 
CPU PhysX is used in many more of the more recent games than Havok. But.....
GPU accelerated PhysX is almost never used at all, and usually only if Nv codes it in for the devs.
In AAA titles it still looks like Havok has a lead on PhysX, however....
Many AAA titles use a proprietary physics solution and do not bother with PhysX or Havok.

While I don't think it will ever disappear, Nv could give PhysX a brighter future. If Nv decided to try to make money with PhysX by charging for it, instead of using it as a marketing tool to sell graphics cards (a task at which it utterly failed), it could have a much brighter future. But they would have to stop using it as a marketing tool and embrace a hardware-agnostic, OpenCL approach for that to happen, as well as un-cripple CPU-side PhysX. I am not sure they would, or even should, do that. I am sure they have bean counters who will tell them what is what with that.
 
CPU PhysX is used in many more of the more recent games than Havok. But.....
GPU accelerated PhysX is almost never used at all, and usually only if Nv codes it in for the devs.
In AAA titles it still looks like Havok has a lead on PhysX, however....
Many AAA titles use a proprietary physics solution and do not bother with PhysX or Havok.

PhysX will be around for a while; it is free, after all. GPU PhysX, on the other hand, will prolly remain seldom used.

Yeah, though the fact that Starcraft 2 & Halo Reach (and 3, and ODST) are in the Havok camp probably means they have outsold nearly every PhysX game in existence.


heh, j/k, though I wouldn't discount PhysX seeing at least a few more game devs using it if the SSE optimizations are all they're cracked up to be (vs. x87).

Hell, Frostbite is the only other major physics solution I can think of, and it is nothing more than a premade list of destructibles, not a real physics solution like PhysX or Havok are.
 
Graphics-card-based physics never had much of a future; too much communication delay relaying calculations back to the CPU means only pretty particle effects with no impact on gameplay. The important stuff still has to be done on the CPU, where the results are available immediately.
 
DICE's Frostbite Engine proves definitively that PhysX is worthless fluff. Quite simply, case closed.
 
DICE's Frostbite Engine proves definitively that PhysX is worthless fluff. Quite simply, case closed.

Since when has quality had anything to do with whether or not people use something?
Cheap-ass bastards love free shit, regardless of how good or bad something is.
Frostbite and Havok may both be better than CPU PhysX, but neither is free.
 
Since when has quality had anything to do with whether or not people use something?
Cheap-ass bastards love free shit, regardless of how good or bad something is.
Frostbite and Havok may both be better than CPU PhysX, but neither is free.

Well, "cheap-ass bastards" typically aren't the ones developing AAA titles, or even semi-decent games.

It's rare for a small studio to make it big.
 
Well, "cheap-ass bastards" typically aren't the ones developing AAA titles, or even semi-decent games.

It's rare for a small studio to make it big.

True, but because it is free it will stick around.
 
It is only worth adding a dedicated PhysX card if you want to play games that support hardware accelerated PhysX.

You could also use that 8800GTS to add support for more monitors, which is about 99.9% of what my GTS250 gets used for.

I can't stand using less than 2 monitors anymore... and even then, only 2 monitors annoys me. I currently have 3 but would like to add another at some point.

I'm guessing you don't have matching resolutions, or don't actually game on two monitors at once, because wouldn't one have lower quality with the 250?
 
SoftTH is obsolete and incompatible or problematic with almost every game. But yeah, he's obviously using monitors of different sizes.
 
The only game that had decent PhysX was Batman: AA, and really it didn't enhance or decrease the realism of the game for me. It was just unnecessary eye candy.
 
The only game that had decent PhysX was Batman: AA, and really it didn't enhance or decrease the realism of the game for me. It was just unnecessary eye candy.

And the better-selling console version didn't even have PhysX... lol
 
With multi-core CPUs growing and growing now, I think devs should let engines have a couple of cores/threads and some RAM just for processing these sorts of effects, e.g. 2 cores (4 threads) dedicated to physics. I'm sure that'd help out and expand possibilities.
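Something like the rough sketch below is what I'm picturing: park the simulation on its own thread at a fixed rate and let the game/render thread read the results. Just a toy C++ example; PhysicsWorld and step() are made-up placeholders, not any real engine's API.

// Toy sketch: physics stepped on a dedicated thread at ~60 Hz with a fixed
// timestep, while the main thread keeps rendering. Names are placeholders.
#include <atomic>
#include <chrono>
#include <thread>

struct PhysicsWorld {
    void step(float dt) { /* integrate bodies, resolve contacts, etc. */ }
};

int main() {
    PhysicsWorld world;
    std::atomic<bool> running{true};

    std::thread physicsThread([&] {
        using clock = std::chrono::steady_clock;
        const auto period = std::chrono::microseconds(16667);  // ~60 updates/sec
        auto next = clock::now();
        while (running.load()) {
            world.step(1.0f / 60.0f);   // fixed timestep keeps the sim stable
            next += period;
            std::this_thread::sleep_until(next);
        }
    });

    // ... game/render loop would run here on the main thread ...
    std::this_thread::sleep_for(std::chrono::seconds(1));

    running.store(false);
    physicsThread.join();
}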
 
I had a talk about GPU-accelerated physics with a graphics engine programmer friend of mine who's experimented with it a ton.

The biggest problem he mentioned is that if you really do crank up super huge particle counts, fluid bodies, etc... there's a TON more to draw on the screen, and the needed GPU power goes WAY up. For state-of-the-art, top-of-the-line PCs it's loosely doable. But the problem is everyone else's PC. Finding an investor willing to fund a multi-million-dollar, multi-year development for a game only the top 1% of PCs can play well is like finding a needle in a haystack.

There was more to it than that, I could tell, but he was breaking it down for me in layman's terms. I try not to bug him too much, because when I do ask him for technical 3D engine information, he gives it to me, and it's all WAY over my head the way he talks about it. Guy's a full-on mathlete.
 
NV just needs to bite the bullet and turn PhysX loose for anyone to use: un-cripple it for CPU use and sell it like Havok does, so that devs actually have a reason to use it and integrate it more tightly into their games. The problem with PhysX is that devs of AAA titles don't want to alienate or exclude the huge numbers of people using AMD/ATI GPUs, so they maybe add a little PhysX eye candy here and there, or exclude it entirely. They can't really integrate it into the gameplay, because that would mean half of your potential install base is just gone.

The other half of the equation is consoles. God, I hate them for what they've done to PC gaming... they lack the hardware that would let them run intensive physics calculations, and no AAA title would be complete without a console version, so again the PC user is limited to some eye-candy physics or whatever other sprinkles they throw on top of the stinking console-port turd the devs try to serve us.

Without integration into the actual gameplay, dynamic, real-time physics will remain nothing but eye candy, meaning it's pretty much expendable. Either that, or we're left with pre-rendered and pre-determined nonsense like Frostbite.
 
I had a talk about GPU-accelerated physics with a graphics engine programmer friend of mine who's experimented with it a ton.

The biggest problem he mentioned is that if you really do crank up super huge particle counts, fluid bodies, etc... there's a TON more to draw on the screen, and the needed GPU power goes WAY up. For state-of-the-art, top-of-the-line PCs it's loosely doable. But the problem is everyone else's PC. Finding an investor willing to fund a multi-million-dollar, multi-year development for a game only the top 1% of PCs can play well is like finding a needle in a haystack.

There was more to it than that, I could tell, but he was breaking it down for me in layman's terms. I try not to bug him too much, because when I do ask him for technical 3D engine information, he gives it to me, and it's all WAY over my head the way he talks about it. Guy's a full-on mathlete.

That's basically what it comes down to, yes. People seem to forget that to get real, proper, useful physics in a game, you'd need to add the equivalent of a GTX460 or better to the existing GPU. What one can see at the moment in games is just a tiny trickle of what's possible, further complicated by people with GPUs which are incapable of physics calculations, like all ATi/AMD cards, and which, even if the software support for it did exist, would perform in an inferior manner to nVidia cards (which are better at GPGPU, and CUDA steals OpenCL's lunch money performance-wise).

Basically adding real, proper GPU-accelerated physics support to a game is made utterly unappealing by the market itself as well as company politics (nVidia and AMD could work something out, API-wise). Then again, ten years ago people were still denouncing GPU-accelerated graphics and look where we are now :)
 
Elledan said:
Then again, ten years ago people were still denouncing GPU-accelerated graphics and look where we are now

Only a small step forward from where we were then? I mean, all this talk, all this work, all this time and all we get is a few extra waving flags or bouncing boxes.
 
Only a small step forward from where we were then? I mean, all this talk, all this work, all this time and all we get is a few extra waving flags or bouncing boxes.

I think he was talking about GPU graphics, not PhysX. GPU-accelerated graphics have jumped a lot in 10 years.


For Nvidia GPU PhysX to really become popular, Nvidia needs to somehow make it so you can run both graphics and PhysX on the same card with minimal impact on the framerate. Your average consumer has a budget graphics card at best and likely does not have a GTX 470 for graphics and a GTS 450 for PhysX (just as an example). Right now the market that can effectively use PhysX is a small one.
 
I think he was talking about GPU graphics, not PhysX. GPU-accelerated graphics have jumped a lot in 10 years.
That's literally what I said, yes :)

For Nvidia GPU PhysX to really become popular, Nvidia needs to somehow make it so you can run both graphics and PhysX on the same card with minimal impact on the framerate. Your average consumer has a budget graphics card at best and likely does not have a GTX 470 for graphics and a GTS 450 for PhysX (just as an example). Right now the market that can effectively use PhysX is a small one.

That's kind of ridiculous... the impact of physics will always be pretty severe if properly implemented (i.e. more than a basic physics engine). The reason graphics rendering was off-loaded to the GPU was because the GPU could do it much faster and more efficiently than the CPU ever could. There was a real drive during the late 90s to get better graphics, yet Half-Life 1 was software-rendered by default, with D3D (buggy) or Glide (better) as options. Not until GPUs became common did game graphical quality really begin to soar.

The problem with physics is that it's so poorly understood. People see something move and they're like 'that's physics'. No, that's just some preprogrammed motion. Real physics means that objects have real mass, can really break or be deformed. A rag-doll isn't really physics as it's implemented today, as a human body has actual mass, but to incorporate that together with soft-body physics calculations into a physics engine requires some serious computation.

Stuff like inverse kinematics for people walking and such would also be possible with proper use of the GPU for the calculations. This would make game development a heck of a lot easier: just tell the game 'I want this character to move from this point to over there in this way', and the character in question will walk totally naturally. No more messy motion capture or key-framing.
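The core of a single joint solve is really just trig; the cost comes from doing it for whole skeletons, with constraints, every frame. Here's a toy two-bone IK solve in C++ (think hip/knee planting a foot on a target point); purely an illustrative sketch with made-up names, not code from any actual engine:

// Solve a planar two-bone chain (lengths l1, l2, rooted at the origin) so the
// tip reaches (tx, ty). Outputs: a1 = root joint angle, a2 = middle joint
// angle relative to the first bone, both in radians.
#include <algorithm>
#include <cmath>
#include <cstdio>

void solveTwoBoneIK(float l1, float l2, float tx, float ty, float& a1, float& a2) {
    float d = std::sqrt(tx * tx + ty * ty);
    // Clamp the target into the reachable ring so acos() stays in range.
    d = std::max(std::fabs(l1 - l2) + 1e-4f, std::min(d, l1 + l2 - 1e-4f));

    float cosA2 = (d * d - l1 * l1 - l2 * l2) / (2.0f * l1 * l2);
    a2 = std::acos(cosA2);                                   // bend at the "knee"
    a1 = std::atan2(ty, tx)
       - std::atan2(l2 * std::sin(a2), l1 + l2 * std::cos(a2));
}

int main() {
    float hip = 0.0f, knee = 0.0f;
    solveTwoBoneIK(0.45f, 0.45f, 0.3f, -0.6f, hip, knee);    // plant a foot
    std::printf("hip %.2f rad, knee %.2f rad\n", hip, knee);
}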

I will keep blaming ignorance for the lack of demand for real physics in games. At least graphics had the easily understood 'oh, shiny!' factor to it. Swinging a few thousand boxes around in a primitive physics engine like with that Ghostbusters game does not real physics make.

Just saying :)

(also, I'm a girl ;) )
 
That's literally what I said, yes :)



That's kind of ridiculous... the impact of physics will always be pretty severe if properly implemented (i.e. more than a basic physics engine). The reason graphics rendering was off-loaded to the GPU was because the GPU could do it much faster and more efficiently than the CPU ever could. There was a real drive during the late 90s to get better graphics, yet Half-Life 1 was software-rendered by default, with D3D (buggy) or Glide (better) as options. Not until GPUs became common did game graphical quality really begin to soar.

The problem with physics is that it's so poorly understood. People see something move and they're like 'that's physics'. No, that's just some preprogrammed motion. Real physics means that objects have real mass, can really break or be deformed. A rag-doll isn't really physics as it's implemented today, as a human body has actual mass, but to incorporate that together with soft-body physics calculations into a physics engine requires some serious computation.

Stuff like inverse kinematics for people walking and such would also be possible with proper use of the GPU for the calculations. This would make game development a heck of a lot easier: just tell the game 'I want this character to move from this point to over there in this way', and the character in question will walk totally naturally. No more messy motion capture or key-framing.

I will keep blaming ignorance for the lack of demand for real physics in games. At least graphics had the easily understood 'oh, shiny!' factor to it. Swinging a few thousand boxes around in a primitive physics engine like with that Ghostbusters game does not real physics make.

Just saying :)

(also, I'm a girl ;) )


I'll fine-tune my position on this :)

For GPU PhysX to become popular,

either

a) a PhysX standard needs to be set on both AMD and Nvidia cards; being available only on Nvidia is very limiting compared to the entire market,

or

b) AMD simply has to die so that everyone has a PhysX-capable GPU; then game developers would not feel that GPU PhysX is such a risk.


I think it comes down to business decisions. Just look at how little graphics have jumped since Crysis came out; it's still the benchmark for all new cards. Your average consumer games on a console and has a budget PC. We (you and I) are the small portion of enthusiasts who will spend the extra dollar for better performance and GPU PhysX.


And my mistake:
I always assume we're all nerdy males :)
 
That's literally what I said, yes :)



That's kind of ridiculous... the impact of physics will always be pretty severe if properly implemented (i.e. more than a basic physics engine). The reason graphics rendering was off-loaded to the GPU was because the GPU could do it much faster and more efficiently than the CPU ever could. There was a real drive during the late 90s to get better graphics, yet Half-Life 1 was software-rendered by default, with D3D (buggy) or Glide (better) as options. Not until GPUs became common did game graphical quality really begin to soar.
First, let me say that I completely misread your post because I was half asleep, and yes, you're totally right about GPU-accelerated graphics. Unfortunately, the games market is no longer driven by PC gaming sales, so the impetus is from a different angle, one where you can't upgrade the hardware within a reasonable time. It's one thing to say "oh, your GPU doesn't cut it anymore, get another" because that's only a piece of the whole system, but with a console you'd have to design a completely new console to integrate new functionality. On top of that, with both major consoles being sold initially at a loss, they need to be on the market for a long time before the redesign process will start.


The problem with physics is that it's so poorly understood. People see something move and they're like 'that's physics'. No, that's just some preprogrammed motion. Real physics means that objects have real mass, can really break or be deformed. A rag-doll isn't really physics as it's implemented today, as a human body has actual mass, but to incorporate that together with soft-body physics calculations into a physics engine requires some serious computation.

Stuff like inverse kinematics for people walking and such would also be possible with proper use of the GPU for the calculations. This would make game development a heck of a lot easier: just tell the game 'I want this character to move from this point to over there in this way', and the character in question will walk totally naturally. No more messy motion capture or key-framing.

That's the real beauty of physics, and one reason why I'm not sure the argument for requiring greater GPU power is entirely valid. I will defer ultimate judgment to the game devs who know 10,000x more than I do on such matters, but there are many applications in which physics calculations do not need to increase the number of drawn objects. In my mind I'm picturing the physics analog of something like tessellation, where you can get much more realistic models without using intensive calculations to flesh out detail. Like you say, they could use physics calculations within player models and such to improve movement realism, etc., which doesn't require more things being rendered since it's all internal to a "skin". If you've got a normally drawn "skin" over a fancy physics "skeleton", you don't need to draw more things to get the full benefit of physics. Again, the problem is that this would have to be integrated into the engine, which would require a PhysX-enabled GPU in order to play.

I will keep blaming ignorance for the lack of demand for real physics in games. At least graphics had the easily understood 'oh, shiny!' factor to it. Swinging a few thousand boxes around in a primitive physics engine like with that Ghostbusters game does not real physics make.
So true: say the word "physics" and everyone falls asleep, but say "graphics" and you've got everyone salivating. The sad thing is that at the rate we're going, physics would be a much bigger improvement (goddamn DX9 consoles again).


(also, I'm a girl ;) )
Good on ya, not too many of your kind around here ;)
 
I'm guessing you don't have matching resolutions, or don't actually game on two monitors at once, because wouldn't one have lower quality with the 250?

I only game on my 23" 1920x1080 monitor. The two side monitors are used for other stuff when I am working, doing research, etc.
 
I would love to see someone adopt PhysX, but I don't see it; too much money wanted for licensing, I am sure, as with everything.
 
I would love to see someone adopt PhysX, but I don't see it; too much money wanted for licensing, I am sure, as with everything.

I heard Nvidia pays developers, at least AAA ones, to use PhysX.

Doesn't mean it's cheaper than free though, for reasons listed above.
 
I heard Nvidia pays developers, at least AAA ones, to use PhysX.

Doesn't mean it's cheaper than free though, for reasons listed above.


Pay is prolly not the right term. It appears that Nv will code GPU PhysX in for certain AAA games by certain devs, depending on the situation.
But direct cash payments do not seem to be the norm. We would have seen all sorts of sensationalized headlines and all manner of threads raging over the topic if they did that. Nv is publicly traded, so while it is possible, it would be rather unlikely that they could hide direct cash payments of the size that would prolly be required to bribe a dev to switch from whatever they were using to GPU PhysX.
 
Supposedly they are sticking with the in-house one they made for Crysis, and are not going to switch to PhysX, Havok, or some other 3rd-party solution.

http://www.gamephys.com/2010/02/06/...-engine-no-havok-physics-or-physx-middleware/

and the thread about it:

http://hardforum.com/showthread.php?t=1493022

Guess I missed that news... thanks. When I heard the game was being pushed back to early 2011 because of Nvidia paying them 2 million, I assumed it also had something to do with adding PhysX.
 
So, has no one heard yet of the open-source alternative to PhysX/Havok?

http://bulletphysics.org/wordpress/

It's the #3 most used physics library, so I'm sure at least some people have heard of it :)

What I want to see is the OpenCL-based GPU-accelerated build of Bullet they have been promising for a while now, together with AMD, IIRC, since Havok (being owned by Intel, who doesn't do real GPUs) will get GPU acceleration shortly after Hell freezes over and pigs take to the skies.
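For anyone who hasn't poked at it: getting a minimal Bullet world up and stepping really is only a handful of lines with the standard C++ API (the falling sphere here is just a throwaway example):

// Minimal Bullet setup: one world, one dynamic sphere, stepped for one second.
#include <btBulletDynamicsCommon.h>
#include <cstdio>

int main() {
    // Standard Bullet boilerplate: collision config, dispatcher, broadphase, solver.
    btDefaultCollisionConfiguration config;
    btCollisionDispatcher dispatcher(&config);
    btDbvtBroadphase broadphase;
    btSequentialImpulseConstraintSolver solver;
    btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);
    world.setGravity(btVector3(0, -9.81f, 0));

    // A 0.5 m sphere with 1 kg of mass, dropped from 10 m.
    btSphereShape sphere(0.5f);
    btVector3 inertia(0, 0, 0);
    sphere.calculateLocalInertia(1.0f, inertia);
    btDefaultMotionState state(btTransform(btQuaternion::getIdentity(),
                                           btVector3(0, 10, 0)));
    btRigidBody body(btRigidBody::btRigidBodyConstructionInfo(1.0f, &state, &sphere, inertia));
    world.addRigidBody(&body);

    // Step at 60 Hz for one simulated second and see how far it has fallen.
    for (int i = 0; i < 60; ++i)
        world.stepSimulation(1.0f / 60.0f);

    btTransform t;
    body.getMotionState()->getWorldTransform(t);
    std::printf("sphere height after 1 s: %.2f m\n", t.getOrigin().getY());

    world.removeRigidBody(&body);
}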
 
OpenCL has gone through a lot of changes, so it was probably hard for the devs of Bullet to keep up in the past.
 
Yeah, I haven't seen any reason for PhysX; I sold the dedicated card I had for it. It was too much of a driver hassle with my 5750. I really hope for an open GPU-accelerated physics engine that works across Nvidia and AMD GPUs.
 
The "game changer" (lol...) will be if Microsoft implements some sort of "Direct Physics" component in DirectX 12/13, likely concurrently with a new console.
 
The "game changer" (lol...) will be if Microsoft implements some sort of "Direct Physics" component in DirectX 12/13, likely concurrently with a new console.

Maybe, if it were a cross CPU/GPU component. And you would still have to get Sony, Nintendo, & whoever else to agree, just to get enough attention.


By virtue of it being in DX, though, if it doesn't suck badly, it would get serious attention from the get-go.

Though if it's just a fragment of DirectCompute, it's dead before it starts, IMO.
 
I don't know, I feel Microsoft is strong enough to push physics gaming themselves. Though I am really approaching this from a "market" perspective rather than a "technological" one.

For the Windows platform, both Nvidia and AMD would conform to their standard, as having DirectX as a checklist feature is important to their markets marketing-wise (even Intel to some degree). Though I suppose someone could possibly cry "anti-trust" in this case, as this would essentially kill PhysX, Havok, etc.

Of course for the next Xbox, Microsoft can set the specifications themselves, so no doubt they would support their own standard.

From a developer standpoint you would then have 2 major platforms capable of utilizing physics, which would make implementation in games more likely. I also find it unlikely that Sony would not have some sort of physics solution present for their next console. That would mean all 3 "high-end" gaming platforms, which currently share a lot of cross-platform titles, would have strong hardware physics capability.

Physics to me seems like it would also have good synergy with "motion"-based gaming, which seems to be the new direction in terms of user interaction for the consoles.
 
GPU-accelerated PhysX is very much a chicken & egg situation. Very few people focus on PhysX hardware because it's used so little in software (both in the number of games that use it and its usage in those games). Software devs don't want to use it too much (as in requiring it or having it drastically change the game) because so many people without PhysX hardware would be excluded. People don't care about it because nothing uses it because people don't care about it...

I'd really like to see OpenCL take off. One good thing about it is that it can run on anything - Nvidia, ATI, Intel, AMD. Imagine something like PhysX, but it can run on either brand of video card, or even your CPU. Performance might be horrible in some cases, but I really think it would be a good way to ease into having some sort of physics processor as a fairly standard option. You could use your old video card for physics, regardless of model, so people might even be more likely to upgrade (knowing that they can still take advantage of the old GPU's power for something, rather than selling or trashing it). If you've got a beefy CPU but a weak GPU, you could offload it there instead. Games could add advanced features that require physics-processing, and which take advantage of a GPU's strengths, while not actually requiring a second GPU (especially only one specific brand of GPU) to be able to run the game. It should provide all the benefits of GPU-accelerated PhysX, but with basically none of the drawbacks that people complain about.
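That "runs on anything" part is baked right into the host API: discovering whatever compute devices are in the box takes only a few calls. A minimal sketch using the standard OpenCL C API (error handling mostly stripped for brevity):

// Enumerate every OpenCL platform (Nvidia, AMD, Intel, ...) and its devices,
// GPUs and CPUs alike -- the same discovery step a cross-vendor physics
// library could use to pick whatever hardware happens to be available.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, nullptr, &numPlatforms);
    if (numPlatforms == 0) {
        std::printf("No OpenCL platforms found.\n");
        return 0;
    }

    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    for (cl_platform_id platform : platforms) {
        char name[256] = {};
        clGetPlatformInfo(platform, CL_PLATFORM_NAME, sizeof(name), name, nullptr);
        std::printf("Platform: %s\n", name);

        cl_uint numDevices = 0;
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 0, nullptr, &numDevices) != CL_SUCCESS)
            continue;

        std::vector<cl_device_id> devices(numDevices);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, numDevices, devices.data(), nullptr);

        for (cl_device_id device : devices) {
            char devName[256] = {};
            clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(devName), devName, nullptr);
            std::printf("  Device: %s\n", devName);
        }
    }
    return 0;
}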


... like all ATi/AMD cards, and which, even if the software support for it did exist, would perform in an inferior manner to nVidia cards (which are better at GPGPU, and CUDA steals OpenCL's lunch money performance-wise).

You have more experience in this area than I do, but I'm wary of claims that any Nvidia card will simply outperform the ATI equivalent. It's obviously not exactly apples to apples, but dnetc actually ran almost six times as fast on my HD5870 as on my GTX285. When I had others run it on their Fermi cards, my 5870 was still 2-3x as fast. Generally speaking, as it relates to gaming, the 5870 is usually a little slower than the GTX480, yet it still managed to completely stomp it in dnetc. I'm not sure if that's due to Stream vs. CUDA, the type of calculations done by dnetc for RC5, some combination of factors, or something completely different. However, the fact remains that the 5870 simply destroyed even the 480.
 
I'd really like to see OpenCL take off. One good thing about it is that it can run on anything - Nvidia, ATI, Intel, AMD. Imagine something like PhysX, but it can run on either brand of video card, or even your CPU.
CUDA isn't limited to nVidia hardware either. It's just nVidia keeping it to itself. PhysX can work with CUDA, Brook+ (old Stream), OpenCL (new Stream), and nVidia is more than willing to let AMD, Intel, etc. use the API. Yay for company politics.

You have more experience in this area than I do, but I'm wary of claims that any Nvidia card will simply outperform the ATI equivalent. It's obviously not exactly apples to apples, but dnetc actually ran almost six times as fast on my HD5870 as on my GTX285. When I had others run it on their Fermi cards, my 5870 was still 2-3x as fast. Generally speaking, as it relates to gaming, the 5870 is usually a little slower than the GTX480, yet it still managed to completely stomp it in dnetc. I'm not sure if that's due to Stream vs. CUDA, the type of calculations done by dnetc for RC5, some combination of factors, or something completely different. However, the fact remains that the 5870 simply destroyed even the 480.

It's really hard to say what is going on exactly without having all the details. With F@H the ATi client is just miserable, but supposedly it's pretty poorly coded as well. A 6870 performs at about the level of an 8800GTS 320. There is also a reason why only nVidia cards make an appearance in HPC applications including supercomputers (Roadrunner, etc.), though: as I said, nVidia hardware is just plain better at GPGPU, plus their CUDA language/runtime and tools are mature and efficient.

ATi just never gave a damn about GPGPU, and it seems like AMD is set to continue that tradition.
 