Batman Arkham Asylum on AMD/ATI = blurry graphics?

guy_uoft

n00b
Joined
Jul 9, 2004
Messages
50
I've discovered something that's really quite odd to me and I'm just wondering if anyone can shed any light on what's going on.

I'm running a 4870X2 with Cat 9.9, all options set to App. Vista 64, Q6600 @ 3.3GHz.

Everything in game has this soft-filter look to it, like there is some sort of blurring post-processing going on. It actually makes the lack of AA look better, but everything's blurry, like I need glasses. Then I went into UserEngine.cfg and turned on PhysX, went back to the game, and now every texture is ultra crisp. The jaggies are more noticeable now, but damn, the picture clarity is night and day. All I did was turn PhysX on. I went back and forth several times to confirm, and it's 100% repeatable.

This is not a demo-only issue; I've heard from people who have the D2D full version that it does the same thing.

Does anyone know why it does this? If I don't want to use PhysX, is there still some way I can turn off that blurring crap?

Thanks in advance.
 
How are you able to turn Physx on with an ATI card? Cos technically only Nvidia has physx licensed for their cards.
 
Probably because you cannot run PhysX on ATi cards. The game is probably freaking out because of it.
 
Probably because you cannot run PhysX on ATi cards. The game is probably freaking out because of it.

lol, what a load of crap. I should add that PhysX is near worthless here; sure, it adds things like movable paper on the ground (they could have done this without PhysX too, it's just clearly easier to implement with it).

PhysX has nothing to do with the blurriness. I do think the graphics look a little odd; I would not quite call them blurry, but some people might.
 
Sounds like the shitty DoF in Gears of War and UT3, which just makes everything more than six feet away look like you were viewing it through half an inch of Vaseline. For both of those, you just had to change a line in the Engine.ini from "DepthOfField=TRUE" to "DepthOfField=FALSE". If it's not there you could try adding it.
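For anyone who wants to try it, the edit described above looks something like this (the [SystemSettings] section name is how other UE3 games group it; the exact file and section in Batman may differ, so treat this as a sketch and search your ini for the setting):

[SystemSettings]
DepthOfField=False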
 
How are you able to turn Physx on with an ATI card? Cos technically only Nvidia has physx licensed for their cards.

PhysX is just a physics library. It runs on CPUs just fine, so it should fall back to the CPU if hardware acceleration isn't available; it's just that Nvidia likes having devs force-disable most of the effects if an Nvidia card isn't present, regardless of whether your CPU is powerful enough to drive them.

Why devs don't just make it a game option with various physics quality levels like everything else, I have no idea.
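Purely as a sketch of what "various physics quality levels" could look like from the developer side (every name and number below is invented for illustration; this is not PhysX's actual API):

# Hypothetical physics quality presets: scale the same effects down (fewer
# particles, fewer solver sub-steps) instead of switching them off entirely
# when there is no GPU accelerator. Names and numbers are invented.
PHYSICS_PRESETS = {
    "low":    {"max_substeps": 1, "cloth_segments": 8,  "debris_particles": 100},
    "medium": {"max_substeps": 3, "cloth_segments": 16, "debris_particles": 500},
    "high":   {"max_substeps": 5, "cloth_segments": 32, "debris_particles": 2000},
}

def apply_preset(engine, name):
    """Push the chosen preset's limits into a physics engine (stand-in API)."""
    preset = PHYSICS_PRESETS[name]
    engine.set_max_substeps(preset["max_substeps"])
    engine.set_cloth_detail(preset["cloth_segments"])
    engine.set_particle_budget(preset["debris_particles"])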
 
If you turned PhysX on, it's running on your CPU since you have an ATI card. The game would be basically unplayable on "normal" PhysX and a slideshow on "high" PhysX. I have no idea what is going on as far as the blurry graphics, though.
 
Probably because you cannot run PhysX on ATi cards. The game is probably freaking out because of it.

How are you able to turn Physx on with an ATI card? Cos technically only Nvidia has physx licensed for their cards.

Because a Core i7 is more than capable of handling PhysX by itself (this is what the tweak does). It allows ALL cores to be used for PhysX (nVIDIA, in an attempt to claim CPUs can't run PhysX, generally relegate it to a single core). In fact, a Core i7 is nearly as powerful (GFLOPS-wise) in double precision as an nVIDIA GT200b graphics card: http://www.realworldtech.com/page.cfm?ArticleID=RWT090909050230. A Nehalem-architecture CPU is capable of around 102 GFLOPS in single precision (stock) and 55 GFLOPS in double precision, while a GT200b-based nVIDIA GPU can handle 622-933 GFLOPS in single precision and 77 GFLOPS in double precision. My understanding of PhysX is that it uses quite a lot of double-precision calculations (to calculate physical interactions), so on that front they're quite close in performance. This also explains why enabling CPU PhysX (using the tweak) results in all of the PhysX effects while remaining playable.
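Taking those figures at face value, the gap works out to roughly:

622 / 102 ≈ 6.1x (up to 933 / 102 ≈ 9.1x) in favor of the GPU in single precision
77 / 55 ≈ 1.4x in favor of the GPU in double precision

which is the basis of the "quite close in double precision" claim above.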

I'm playing the game with PhysX turned on (since nVIDIA won't allow me to use the 9800GT for dedicated PhysX anymore) and it's entirely playable (I haven't seen it dip below 30 FPS with ALL the PhysX candy turned on). To prove the point I will upload an HD video to YouTube... keep in mind I am ALSO recording while handling PhysX and the game, so the CPU is under a lot of pressure, yet it does it all effortlessly.

HD Version should be up soon: http://www.youtube.com/watch?v=AUOr4cFWY-s
 
2/ Due to the Securom DNA protection in the D2D version, the game will crash at some points with PhysX on; Securom actually protects the PhysX effects! So when you encounter the crash problem, just quit the game to the desktop, modify UserEngine.ini, change "PhysXLevel=1" to "PhysXLevel=0" as described in step 5, save the UserEngine.ini file, and run BmStartApp.exe to start the game again. Play until there's a circle spinning at the lower-right corner of the screen, which means it is autosaving; after the circle disappears, you can quit the game, do step 5 again to re-enable PhysX, then run BmStartApp.exe to start the game with PhysX on.

Sounds like a lot of trouble to go through to get around gpu PhysX, and your game will STILL crash at certain points. Wow, talk about user-unfriendly.


Looks nice. But it would have been more useful if you had FRAPS running so we could have seen the actual framerates throughout that walkthrough.
 
Sounds like a lot of trouble to go through to get around gpu PhysX, and your game will STILL crash at certain points. Wow, talk about user-unfriendly.



Looks nice. But it would have been more useful if you had FRAPS running so we could have seen the actual framerates throughout that walkthrough.


You don't get GPU acceleration, you get SCALED down CPU PhysX:
http://forum.beyond3d.com/showpost.php?p=1332253&postcount=10

"Hardly "on Radeons". :D

I don't have the game/demo either, but I assume the original value for maxPhysicsSubsteps is 5. Changing this to 1 means physics is calculated up to 5 times less often, hence the FPS gain on CPU PhysX.

Physics is typically calculated on a fixed time step basis, usually 60 steps per second, regardless of the actual frame rate. So -- for example -- if the FPS is only 10, the physics engine needs to iterate 6 sub-steps per frame to keep the physics animation in sync with real-world timing. The number of sub-step iterations needs to be limited to avoid the problem where physics calculation time (per step/iteration) is longer than the frame time. This is what 'maxPhysicsSubsteps' is all about.

The effect of cutting off the maximum sub-steps would be a physics animation that appears to move too slowly."

http://forum.beyond3d.com/showpost.php?p=1332676&postcount=13

"Sounds likely. The only way you would get PhysX to actually run on Radeons is to have either a complete PhysX implementation for ATi Stream, or have some kind of translator from nVidia's PTX instructions to ATi's instructionset.
In which case you wouldn't have to bother hacking all sorts of suspicious ini file settings, and the fix would work for any PhysX game. So it doesn't seem like they have either.

Sounds a bit like the 3DMark Vantage thing they had a while ago. They claimed it 'ran on Radeon', but nobody has actually seen any binary or even a video or screenshots of the actual physics part running on Radeon. Sounded to me like they just created a dummy PhysX library, which made sure that 3DMark Vantage would run and produce a score, but wouldn't actually produce good results, let alone use the Radeon to do the calculations."
http://forum.beyond3d.com/showpost.php?p=1333290&postcount=16

"I don't think this allows PhysX to run on the GPU. PhysX is currently delivered with a CUDA backend that definitely isn't compatible with an ATI GPU.

A hack would only be feasible when NVIDIA starts to run PhysX on an OpenCL or DirectCompute backend."
PhysX can fall back to the CPU (and a whole range of other processors), so what is happening here is that they disable GPU PhysX and HACK the game to run CPU physics... and what happens is poor performance and crashes.

This is (and really in a comical way) a poster example of why GPU physics is so much better than CPU physics :D
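To make the quoted sub-step explanation concrete, here is a minimal sketch of a fixed-timestep loop with a cap in the spirit of maxPhysicsSubsteps (illustrative Python, not the engine's actual code; the names are made up):

# Minimal fixed-timestep physics loop with a sub-step cap, in the spirit of
# the 'maxPhysicsSubsteps' setting discussed above (names are invented).

PHYSICS_DT = 1.0 / 60.0    # physics advances in fixed 60 Hz steps
MAX_SUBSTEPS = 5           # cap on physics steps per rendered frame

_accumulator = 0.0

def simulate_one_step(dt):
    """Stand-in for the real physics solver."""
    pass

def advance_physics(frame_time):
    """Run at most MAX_SUBSTEPS fixed steps to cover 'frame_time' seconds."""
    global _accumulator
    _accumulator += frame_time
    steps = 0
    while _accumulator >= PHYSICS_DT and steps < MAX_SUBSTEPS:
        simulate_one_step(PHYSICS_DT)
        _accumulator -= PHYSICS_DT
        steps += 1
    if steps == MAX_SUBSTEPS:
        # Cap hit: drop the leftover time. The simulation now lags real time,
        # which is why forcing the cap down to 1 makes motion look too slow.
        _accumulator = 0.0

At 10 FPS a frame needs 6 sub-steps, so with the cap at 5 one step is dropped each frame; forcing the cap down to 1 does only a sixth of the required work, which is why the animation slows down while the frame rate goes up.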
 
You don't get GPU acceleration, you get SCALED down CPU PhysX:


This is (and really in a comical way) a poster example of why GPU physics is so much better than CPU physics :D

I really don't see how GPU physics is actually worth anything until it can maintain FPS at a stable level, especially when it's just for flying paper and a few objects colliding with each other...

Havok, on the other hand, uses the CPU, and it simply offers way better performance without a massive FPS drop...

Or maybe we can put it this way:
PhysX became a GPU hog after nVidia got their hands on it, and it was later designed to run on the GPU rather than the CPU; that is why we see PhysX on the GPU doing better than on the CPU.
And that does not mean or prove that GPU physics is better than CPU physics in any way.
 
You don't get GPU acceleration, you get SCALED down CPU PhysX:
http://forum.beyond3d.com/showpost.php?p=1332253&postcount=10



http://forum.beyond3d.com/showpost.php?p=1332676&postcount=13


http://forum.beyond3d.com/showpost.php?p=1333290&postcount=16


PhysX can fall back to the CPU (and a whole range of other processors), so what is happening here is that they disable GPU PhysX and HACK the game to run CPU physics... and what happens is poor performance and crashes.

This is (and really in a comical way) a poster example of why GPU physics is so much better than CPU physics :D

The crashes are for people using illegal versions of the game. I bought the game... no crashes.

As for the performance, you're looking at frames per second in the 40s and 50s, with dips as low as 30 FPS.

Take into consideration that I was RECORDING using FRAPS during that HD YouTube clip. I was recording at full 1920x1200 at 30 FPS, and it looks quite smooth, no?

The PhysX runtimes are just hogs. Havok can do cloth simulations and fluid simulations that look much more realistic using fewer resources (as the OpenCL demos show).
 
The crashes are for people using illegal versions of the game. I bought the game... no crashes.

As for the performance, you're looking at frames per second in the 40s and 50s, with dips as low as 30 FPS.

Take into consideration that I was RECORDING using FRAPS during that HD YouTube clip. I was recording at full 1920x1200 at 30 FPS, and it looks quite smooth, no?

The PhysX runtimes are just hogs. Havok can do cloth simulations and fluid simulations that look much more realistic using fewer resources (as the OpenCL demos show).

No, if you followed that guide you are running REDUCED CPU physics...
Like running 2xAA and claiming it looks just like 8xAA
 
Yes, you are running reduced PhysX effects; the CPU is not better than the GPU at PhysX. However, in that game a half-assed hack (no offense meant to the creators of said hack) looks far better than the title does with regular CPU PhysX. The hack was done practically the day the game launched as well. This will invariably lead to people believing that CPU PhysX was intentionally gimped or that the coders were just incredibly lazy. With the exclusionary marketing moves Nv has made of late regarding PhysX and CUDA, I am inclined to believe the former.
 
PhysX can fall back to the CPU (and a whole range of other processors), so what is happening here is that they disable GPU PhysX and HACK the game to run CPU physics... and what happens is poor performance and crashes.

This is (and really in a comical way) a poster example of why GPU physics is so much better than CPU physics :D

No, it is a poster example of Nvidia trying to ruin people's experiences to get them to buy their GPUs by purposely limiting the CPU fallback, and it is an example of why I am refusing to buy their GPUs.

Besides, the physics in Crysis are still miles beyond the ones in Batman Arkham Asylum (even with GPU PhysX enabled).
 
No, it is a poster example of Nvidia trying to ruin people's experiences to get them to buy their GPUs by purposely limiting the CPU fallback, and it is an example of why I am refusing to buy their GPUs.

Besides, the physics in Crysis are still miles beyond the ones in Batman Arkham Asylum (even with GPU PhysX enabled).
yeah, how about this stupid PhysX video for GRAW 2: http://www.youtube.com/watch?v=5KHakdkKQH4


most of that is hardly a big deal. some of it, like the unrealistic little chunks that fly everywhere during explosions or bullet impacts, actually looks awful. it sure as hell doesn't take physx for most of that crap in other games. the trees sway and environments are destructible just fine without it. do people really think that trees don't sway in Far Cry 2 just as well as in that game? do people think that Red Faction doesn't have things blow up and look just as good as that? besides the cloth stuff, it's clear they gimp the normal version of the game and then only apply these effects with physx. the idiotic consumer thinks... wow, I can have swaying trees and interactive environments now that I have physx. thank you nvidia.
 
Well, the physics effects in this Batman game were hardly impressive to begin with, so it is no surprise they can be run on the CPU with a hack. Clearly the developers went out of their way to gimp the physics on non-Nvidia GPUs, which is not surprising in the least. That was already what I thought when I found out that the smoke/steam effects needed GPU PhysX, which is complete BS, as games have been using smoke for many years with no problem on the CPU. This hack just confirms the obvious: GPU PhysX is bogus.
 
Well, the physics effects in this Batman game were hardly impressive to begin with, so it is no surprise they can be run on the CPU with a hack. Clearly the developers went out of their way to gimp the physics on non-Nvidia GPUs, which is not surprising in the least. That was already what I thought when I found out that the smoke/steam effects needed GPU PhysX, which is complete BS, as games have been using smoke for many years with no problem on the CPU. This hack just confirms the obvious: GPU PhysX is bogus.
hopefully hacks like this will get more exposure and show physx for what it truly is, which is basically a scam. I am sure there are three or four people on every forum who will defend this nonsense to the very end. of course I like most of the effects, but it's clear that the way they are being done and advertised is misleading.
 
Let us recap:
http://forum.beyond3d.com/showpost.php?p=1332253&postcount=10

Hardly "on Radeons". :D

I don't have the game/demo either, but I assume the original value for maxPhysicsSubsteps is 5. Changing this to 1 means physics is calculated up to 5 times less often, hence the FPS gain on CPU PhysX.

Physics is typically calculated on a fixed time step basis, usually 60 steps per second, regardless of the actual frame rate. So -- for example -- if the FPS is only 10, the physics engine needs to iterate 6 sub-steps per frame to keep the physics animation in sync with real-world timing. The number of sub-step iterations needs to be limited to avoid the problem where physics calculation time (per step/iteration) is longer than the frame time. This is what 'maxPhysicsSubsteps' is all about.

The effect of cutting off the maximum sub-steps would be a physics animation that appears to move too slowly.

http://forum.beyond3d.com/showpost.php?p=1332461&postcount=12

So the net effect is running on the CPU but only with 1/5th of the workload? Hmmmm.....
 

|
v

Well, the physics effects in this Batman game were hardly impressive to begin with, so it is no surprise they can be run on the CPU with a hack. Clearly the developers went out of their way to gimp the physics on non-Nvidia GPUs, which is not surprising in the least. That was already what I thought when I found out that the smoke/steam effects needed GPU PhysX, which is complete BS, as games have been using smoke for many years with no problem on the CPU. This hack just confirms the obvious: GPU PhysX is bogus.
 
Let us recap:
http://forum.beyond3d.com/showpost.php?p=1332253&postcount=10

Quote:
Hardly "on Radeons".

I don't have the game/demo either, but I assume the original value for maxPhysicsSubsteps is 5. Changing this to 1 means physics is calculated up to 5 times less often, hence the FPS gain on CPU PhysX.


Physics is typically calculated on a fixed time step basis, usually 60 steps per second, regardless of the actual frame rate. So -- for example -- if the FPS is only 10, the physics engine needs to iterate 6 sub-steps per frame to keep the physics animation in sync with real-world timing. The number of sub-step iterations needs to be limited to avoid the problem where physics calculation time (per step/iteration) is longer than the frame time. This is what 'maxPhysicsSubsteps' is all about.

The effect of cutting off the maximum sub-steps would be a physics animation that appears to move too slowly.
http://forum.beyond3d.com/showpost.php?p=1332461&postcount=12

Quote:
So the net effect is running on the CPU but only with 1/5th of the workload? Hmmmm.....


Again, we are not saying it is as good as when done with a GPU, but it is far better than the game is with default CPU PhysX. Meaning the game is gimped, intentionally or not. The non-Nv experience with this game is far below what it could have been, which is obvious from the simple little hack that has been available pretty much since the day the game shipped. I don't honestly believe the devs were that lazy, so that really only leaves intentional gimping. While I have no evidence that Nv asked the devs to do this, it does seem quite odd that the devs would gimp their own game unless there were outside influences. Your decision regarding this was predetermined, it seems. With the info out on the internet of late, and having tried some of the hacks and workarounds myself with my 4870 and 9800GTX+, I have reached a conclusion as well.
At this point I doubt Nv views PhysX, and probably CUDA as well, as anything other than marketing tools. Which leaves me wondering just how long Nv-only PhysX, and PhysX being free to devs, will last once Havok and the others have OpenCL solutions available. We will have to see.
 
Again, we are not saying it is as good as when done with a GPU, but it is far better than the game is with default CPU PhysX. Meaning the game is gimped, intentionally or not. The non-Nv experience with this game is far below what it could have been, which is obvious from the simple little hack that has been available pretty much since the day the game shipped. I don't honestly believe the devs were that lazy, so that really only leaves intentional gimping. While I have no evidence that Nv asked the devs to do this, it does seem quite odd that the devs would gimp their own game unless there were outside influences. Your decision regarding this was predetermined, it seems. With the info out on the internet of late, and having tried some of the hacks and workarounds myself with my 4870 and 9800GTX+, I have reached a conclusion as well.
At this point I doubt Nv views PhysX, and probably CUDA as well, as anything other than marketing tools. Which leaves me wondering just how long Nv-only PhysX, and PhysX being free to devs, will last once Havok and the others have OpenCL solutions available. We will have to see.


*taken from another post of mine*

furthermore, we don't know if whatever gpu physx effects were implemented in a game like mirror's edge or batman aa were ever intended to be included in the game in the first place. usually it's a case of the game just being designed on a multiplatform basis, using consoles as the "l.c.d." (lowest common denominator). then perhaps the devs had a bit of extra time towards the end of development to include the gpu-accelerated physx effects as an afterthought (without even bothering to delay the game further to enable cpu fallback effects), which may partially explain the delays of the aforementioned games in comparison to the console versions. this may be similar to dirt2 being delayed on the pc for dx11 support. imagine what could be done if more time and painstaking effort were taken to make them more meaningful to the game as a whole.

so i don't think it's a matter of, well this could have been done on the cpu with less particles or static effects or scripted animations, but more of the fact that perhaps the intention was never to have included them at all from the get-go when development was already in full swing with all assets in place. as a result, one can still wish to speculate that nvidia threw money to push a dev to integrate gpu physx effects in a game - which is fine - but that doesn't mean that the dev would have bothered in the first place to integrate said effects at all, whether via software or otherwise.

consider older games like graw1/2 and their accelerated physics effects. the games were designed to run on a lowly p4 cpu in terms of system requirements (oddly enough, the system requirements for batman aa only call for a single-core cpu as well). so it may have been a similar situation where the game was designed for the "l.c.d." and the accelerated effects were added later on in development. could some of the effects have been made scalable? of course! but one can't deny this is a relatively new arena for game developers, and it will certainly take some time for the technology to mature even further.

*end*

(in regards to your last statement)

i'm not saying that nvidia blocking ati users from taking advantage of gpu-accelerated physx is justified, but it's their technology, and if they want to lose potential sales as a result, well then it's their loss(es). furthermore, maybe they wish to optimize the performance of their hardware with it when they know they are dealing with an nvidia gpu as the primary graphics renderer, and can therefore balance graphics and physics loads in that capacity (along with a secondary card if available). or maybe that isn't really the case and they just don't want to share with ati users. would that be retarded of them? yeah. otherwise, this kind of stuff doesn't surprise me at all. companies do this all the time. apple blocks my palm pre from syncing with itunes. i just shrug my shoulders and move on. no big deal. there are other options available. in this case, just wait until physx and/or havok and/or bullet are made into available, fully operational opencl solutions; that should change the way things are now. until then, why doesn't anyone boycott/petition like the l4d2 group until your voices are heard and demands met? if enough complaints are made, then perhaps the company will do something about it. at any rate, one can just stick with the older drivers that work. it's not like drastic changes have been made to the physx system software for quite a while now.
 
And that video proves what exactly?

That PhysX has come to stay and won't die...even if AMD GPU owners hate it.


Like the fact that Batman:AA can be played with 1/5 scaled-down physics PhysX rendering enabled without an Nvidia card or a PPU?

Fixed...And it wasn't smooth sailing from what I hear...the hack makes the game a lot more unstable.
 
Yeah it would help a lot if the PhysX haters actually educated themselves on the topic before mouthing off. But that's too much to ask I suppose.....
 
That PhysX has come to stay and won't die...even if AMD GPU owners hate it.




Fixed...And it wasn't smooth sailing from what I hear...the hack makes the game a lot more unstable.

I once thought the same thing about Glide.
If Nv continues the Nv-only CUDA path for PhysX after Havok and the other physics middleware makers get up and running on OpenCL for Intel and AMD, if they continue to gimp CPU implementations, and if they continue to stymie users of their own products (like myself), PhysX could easily go the way of Glide. This has all happened before, and can easily happen again. I can't wait to see what actually happens. As it stands, it will take a year or more to find out.

Edit: Actually, the more I think about it, didn't 3dfx tighten its grip on Glide a year or two before OpenGL and DirectX made it mostly irrelevant? Stamping out Glide emulation on my TNT2, badmouthing everyone, giving incentives to devs that only used Glide, and that sort of thing? Well, at least they never stopped me from using a Voodoo 2 for Glide-only games with a GeForce in the same system by way of a driver update.
 
I once thought the same thing about Glide.
If Nv continues the Nv-only CUDA path for PhysX after Havok and the other physics middleware makers get up and running on OpenCL for Intel and AMD, if they continue to gimp CPU implementations, and if they continue to stymie users of their own products (like myself), PhysX could easily go the way of Glide. This has all happened before, and can easily happen again. I can't wait to see what actually happens. As it stands, it will take a year or more to find out.


It's not happening now:
http://www.hardforum.com/showthread.php?t=1451856
 
I once thought the same thing about Glide.
If Nv continues the Nv-only CUDA path for PhysX after Havok and the other physics middleware makers get up and running on OpenCL for Intel and AMD, if they continue to gimp CPU implementations, and if they continue to stymie users of their own products (like myself), PhysX could easily go the way of Glide. This has all happened before, and can easily happen again. I can't wait to see what actually happens. As it stands, it will take a year or more to find out.


i think he is saying that just because someone is on top doesn't mean it will always be that way. just because havok was #1 for a long time until physx came out of nowhere to take its place doesn't mean it is impossible for something else to come along and topple physx. though the fact that physx did it in only a few years is still rather impressive. competitive forces will determine who will be the long-term victor. it could be that the top 3 engines continue to hold the same positions, or one could continue to gain market share over the others. when opencl becomes more prevalent, things will probably become more apparent. as of now, physx still has a small lead that may grow with ea, sega, eidos, capcom and other publishers supporting it, and being integrated into ue3 and gamebryo certainly helps its cause. and nvidia is of course marketing the heck out of it. i don't think we can count havok and bullet (or any other?) out of the race just yet though. it is still way too early to tell who will win this race. just like sony realizes now with the playstation brand, you're not guaranteed first place; you have to continue to earn it.
 
i think he is saying that just because someone is on top doesn't mean it will always be that way. just because havok was #1 for a long time until physx came out of nowhere to take its place doesn't mean it is impossible for something else to come along and topple physx. though the fact that physx did it in only a few years is still rather impressive. competitive forces will determine who will be the long-term victor. it could be that the top 3 engines continue to hold the same positions, or one could continue to gain market share over the others. when opencl becomes more prevalent, things will probably become more apparent. as of now, physx still has a small lead that may grow with ea, sega, eidos, capcom and other publishers supporting it, and being integrated into ue3 and gamebryo certainly helps its cause. and nvidia is of course marketing the heck out of it. i don't think we can count havok and bullet (or any other?) out of the race just yet though. it is still way too early to tell who will win this race. just like sony realizes now with the playstation brand, you're not guaranteed first place; you have to continue to earn it.

And that is where TWIMTBP comes in:
NVIDIA's tight relations with game developers.

And I consider it no small feat to topple the previous leader (Havok) in just 3 years...
 
And that is where TWIMTBP comes in:
NVIDIA's tight relations with game developers.

And I consider it no small feat to topple the previous leader (Havok) in just 3 years...

ATI do help game developers; it's just that they don't have the tag on it...

a good example would be Crysis; they did mention that in their developer journal.

also, TWIMTBP does not mean it will run better on nVidia, it's just a tag...

but hey, I loved the TWIMTBP logo back in UT2 and other old games... I want something like that back :S
 
And I consider it no small feat to topple the previous leader (Havok) in just 3 years...


Check the Glide timeline: Voodoo and Glide hit the PC stage in 1995 or 1996 I think, irrelevancy by 1998 or 1999, and death in 2K. Ascendancy to decline and then death for Glide was only 4 or 5 years. OpenGL and Direct3D eliminated Glide, which only ran on a 3dfx card, in that little bit of time. I know it seems like it takes forever for this stuff to come out, but this shit moves fast.

If OpenGL and Direct3D can do that, Havok, Bullet, etc. on OpenCL and/or DirectCompute can do it again. I'm not saying it's a guarantee or anything of the sort, but the signs are similar. We are definitely OT here; I thought we were in another thread. I'll stop now.
 
ATI do help game developers; it's just that they don't have the tag on it...

a good example would be Crysis; they did mention that in their developer journal.

also, TWIMTBP does not mean it will run better on nVidia, it's just a tag...

but hey, I loved the TWIMTBP logo back in UT2 and other old games... I want something like that back :S

ati seems to have close ties with valve. my first ati card came bundled with a hl2 voucher, which i didn't even get to play until a year later, lol.
 
Yeah it would help a lot if the PhysX haters actually educated themselves on the topic before mouthing off. But that's too much to ask I suppose.....

Many of those that "mouth off" are educated consumers. They are not programmers (like yourself). What they see (and I see as well) is a consumer-hostile implementation of something some like to call middleware in games, and they discuss it with those who blindly defend the implementations. I can't recall anyone saying that GPU-accelerated physics is bad, but many are saying that the way this (PhysX) is done is more harmful than good to the gaming community.

Would you call today's PhysX implementation consumer friendly?
I am referring to the active blocking of Nvidia GPUs as PPUs if they are not the main renderer. Is this consumer friendly to you?
Or the lack of scaling of PhysX: do you, as a programmer, find PhysX hard to scale?

PhysX is becoming more and more something to push hardware, and it is just limiting consumers' choices while games become less optimized for everyone. Let's just pray that PhysX doesn't turn into anything worthwhile in games, since it will limit people's hardware choices this way.

I'm one of those who can buy a new GFX card anytime without making too big a dent in my wallet. I'm also one of those who loves new features. Still, I don't cheer for PhysX; due to the way it's done, I consider it more a limitation than something good. Havok on OpenCL or Bullet on OpenCL is something I would appreciate much more, regardless of whether my next card is Nvidia or not. Should Nvidia go the route of making PhysX more hardware-agnostic and selling it as middleware instead of just pushing hardware on people, then I would probably be more cheerful about it.

Havok and Bullet are at least not going out of their way to limit hardware choices for consumers (like Nvidia PhysX does), so they are IMHO more consumer friendly and therefore deserve more support from consumers.
 
Check the Glide timeline: Voodoo and Glide hit the PC stage in 1995 or 1996 I think, irrelevancy by 1998 or 1999, and death in 2K. Ascendancy to decline and then death for Glide was only 4 or 5 years. OpenGL and Direct3D eliminated Glide, which only ran on a 3dfx card, in that little bit of time. I know it seems like it takes forever for this stuff to come out, but this shit moves fast.

If OpenGL and Direct3D can do that, OpenCL and DirectCompute can do it again. I'm not saying it's a guarantee, but the signs are similar.

you also need to factor in that 3dfx failed in their execution of products... From the "Banshee" onward, they tripped over themselves...
 
Many of those that "mouth off" are educated consumers. They are not programmers (like yourself). What they see (and I see as well) is a consumer-hostile implementation of something some like to call middleware in games, and they discuss it with those who blindly defend the implementations. I can't recall anyone saying that GPU-accelerated physics is bad, but many are saying that the way this (PhysX) is done is more harmful than good to the gaming community.

Would you call today's PhysX implementation consumer friendly?
I am referring to the active blocking of Nvidia GPUs as PPUs if they are not the main renderer. Is this consumer friendly to you?
Or the lack of scaling of PhysX: do you, as a programmer, find PhysX hard to scale?

PhysX is becoming more and more something to push hardware, and it is just limiting consumers' choices while games become less optimized for everyone. Let's just pray that PhysX doesn't turn into anything worthwhile in games, since it will limit people's hardware choices this way.

I'm one of those who can buy a new GFX card anytime without making too big a dent in my wallet. I'm also one of those who loves new features. Still, I don't cheer for PhysX; due to the way it's done, I consider it more a limitation than something good. Havok on OpenCL or Bullet on OpenCL is something I would appreciate much more, regardless of whether my next card is Nvidia or not. Should Nvidia go the route of making PhysX more hardware-agnostic and selling it as middleware instead of just pushing hardware on people, then I would probably be more cheerful about it.

Havok and Bullet are at least not going out of their way to limit hardware choices for consumers (like Nvidia PhysX does), so they are IMHO more consumer friendly and therefore deserve more support from consumers.

just to clarify, gpu physx is being referred to specifically, not the physx middleware as a whole, which still works as standard middleware for devs and which most games coming out will have. standard physx won't prevent anyone from playing the game. neither will gpu physx, since it's strictly optional (just like you don't need vista/7 and a dx10 card to play a dx10-supported game). gpu physx is still a niche. it will remain that way unless and until it becomes widespread (meaning ati support), so there's nothing to worry about. devs aren't going to stop supporting ati hardware and, more importantly, consoles to make exclusive gpu-physx-only games. regardless of what one thinks, gpu physx is still a competitive advantage for nvidia, so of course they are going to push it for all it's worth - it's a business. just like eyefinity will be a competitive advantage for ati. so until there are competitive forces that cause devs/pubs to shift support away (opencl havok/bullet?) and make nvidia take notice, things are unlikely to change. at that point, nvidia will either support gpu physx under opencl (and possibly continue with cuda if performance is still better), or it will let it diminish while another takes its place, if that is what the market chooses and it can no longer push it. to recap, there are only 3 (actually 4) things that nvidia can do with physx at this point:

1) support physx in software alone; stop gpu physx support completely
2) support cpu/ gpu physx & port gpu physx to opencl for ati/ intel/ nvidia hardware
3) continue to support physx in software; gpu physx only on nvidia hardware
4) let physx die completely

#1 is unlikely since they are a business and will utilize competitive advantages to the fullest. #2 is unlikely until competitive forces take place. #4 is not going to happen since they want a return on their investment. therefore, #3 is going to be the most logical option.
 
to recap, there are only 3 (actually 4) things that nvidia can do with physx at this point:

1) support physx in software alone; stop gpu physx support completely
2) support cpu/ gpu physx & port gpu physx to opencl for ati/ intel/ nvidia hardware
3) continue to support physx in software; gpu physx only on nvidia hardware
4) let physx die completely

#1 is unlikely since they are a business and will utilize competitive advantages to the fullest. #2 is unlikely until competitive forces take place. #4 is not going to happen since they want a return on their investment. therefore, #3 is going to be the most logical option.

Or alternative 5:
Support GPU PhysX on Nvidia cards without blocking them as PPU cards when they are not the main renderer, instead of screwing their own customers.
Let developers scale PhysX instead of making it on/off for showcase purposes; perhaps even Nvidia users want to use their shader power for something else. I'd prefer that with an Nvidia card. As Batman proved, PhysX wasn't all it could be for CPU users. Let game developers focus on optimizing their games for consumers rather than being a marketing tool for Nvidia.

Alternative 2 is the most consumer friendly and, what's more, actual middleware. Still, alternative 5 is at least less consumer hostile than the current implementation.
 