Batman Arkham Asylum on AMD/ATI = blurry graphics?

Or alternative 5:
Support GPU PhysX on Nvidia cards without blocking them as PPU cards when they are not the main renderer, instead of screwing their own customers.
Let developers scale PhysX instead of leaving it On/Off for showcase purposes; perhaps even Nvidia users want to use their shader power for something else. I'd prefer that even with an Nvidia card. As Batman proved, PhysX wasn't all it could be for CPU users. Let game developers focus on optimizing their games for consumers rather than on being a marketing tool for Nvidia.

Alternative 2 is the most consumer-friendly and more in keeping with what middleware should be. Still, alternative 5 is at least less consumer-hostile than the current implementation.

STOP
THE
FUD
THX

All that is shown is that when scaling the physics down to 1/5th accuracy, the CPU can pretend to run the physics...unstably.

How many times does this need to be explained to you (and the rest of the anti-progress gang) before you take notice and stay true to the facts?
 
Or alternative 5:
Support GPU PhysX on Nvidia cards without blocking them as PPU cards when they are not the main renderer, instead of screwing their own customers.
Let developers scale PhysX instead of leaving it On/Off for showcase purposes; perhaps even Nvidia users want to use their shader power for something else. I'd prefer that even with an Nvidia card. As Batman proved, PhysX wasn't all it could be for CPU users. Let game developers focus on optimizing their games for consumers rather than on being a marketing tool for Nvidia.

Alternative 2 is the most consumer-friendly and more in keeping with what middleware should be. Still, alternative 5 is at least less consumer-hostile than the current implementation.

hmm, guess i could have made a 3-A and 3-B, lol. but yeah, i would agree with that first statement. as far as scaling goes and utilizing cpu physx more efficiently/ effectively, i would like to reply with my earlier post above.

*taken from another post of mine*

furthermore, we don't know if whatever gpu physx effects that were implemented in a game like mirrors edge or batman aa were ever intended to be included in the game in the first place. usually it's a case of the game just being designed on a multiplatform basis, using consoles as the "l.c.d." (lowest common denominator). then perhaps the devs may have had a bit extra time towards the end of development to include the gpu accelerated physx effects as an afterthought (and without even bothering to delay the game further to enable cpu fallback effects), which may partially explain the delays of the aforementioned games in comparison to the console versions. this may be similar to dirt2 being delayed on the pc for dx11 support. imagine what could be done if more time and painstaking effort were taken to make them more meaningful to the game as a whole.

so i don't think it's a matter of, well this could have been done on the cpu with less particles or static effects or scripted animations, but more of the fact that perhaps the intention was never to have included them at all from the get-go when development was already in full swing with all assets in place. as a result, one can still wish to speculate that nvidia threw money to push a dev to integrate gpu physx effects in a game - which is fine - but that doesn't mean that the dev would have bothered in the first place to integrate said effects at all, whether via software or otherwise.

consider older games like graw1/2 and the accelerated physics effects. the games were designed to run on a lowly p4 cpu in terms of system requirements (oddly enough, the system requirements for batman aa only requires a single core cpu to run as well). so it may have been a similar situation where the game was designed for the "l.c.d." and the accelerated effects were added later on in development. could some of the effects have been made scaleable? of course! but one can't deny this is a relatively new arena for game developers, and it will certainly take some time for the technology to mature even further.

*end*

(in regards to your last statement)

i'm not saying that nvidia blocking ati users from taking advantage of gpu accelerated physx is justified, but it's their technology and if they want to lose potential sales as a result, well then it's their loss(es). furthermore, maybe they wish to optimize the performance of their hardware with it when they know they are dealing with an nvidia gpu as the primary graphics renderer, and can therefore optimize graphics and physics loads in that capacity (along with a secondary if available). or maybe that isn't really the case and they just don't want to share with ati users. would that be retarded of them? yeah. otherwise, this kind of stuff doesn't surprise me at all. companies do this all the time. apple blocks my palm pre from syncing with itunes. i just shrug my shoulders and move on. no big deal. there are other options available. in this case, just wait until physx and/or havok and/or bullet are made into fully operational opencl solutions, which should change the way things are now. until then, why doesn't anyone boycott/petition like the l4d2 group until your voices are heard and demands met? if enough complaints are made, then perhaps the company will do something about it. at any rate, one can just stick with the older drivers that work. it's not like drastic changes have been made to the physx system software for quite a while now.

so overall, i see where you're coming from and respect your opinion. only thing is, why not attempt to communicate that directly to the decision makers at nvidia? you may be ignored but at least you can make your voice heard. otherwise, it's just like preaching to the choir, man.
 
<snip>

All that is shown is that when scaling the physics down to 1/5th accuracy, the CPU can pretend to run the physics...unstably.

How many times does this need to be explained to you (and the rest of the anti-progress gang) before you take notice and stay true to the facts?

I love how you constantly tell people to stop spreading FUD but you keep ignoring the FUD coming from Nvidia.

People aren't complaining that CPU PhysX may be less than GPU PhysX. They're complaining that Nvidia has artificially restricted the amount of PhysX that runs on the CPU to a single core. Elmo proved that with his video of Batman using the PhysX hack. While it may not be stable or the same as GPU PhysX, it shows that PhysX has been artificially neutered. That's the crux of the complaint.

Hell I think making PhysX run at even 50% using a CPU would be a good thing for NV because you have plenty of people out there who will get a taste of accelerated physics and want to be able to run it at 100% and without an FPS hit. Instead we get jack and shit and the uninformed people go...oh PhysX doesn't work with AMD and it slows down my computer. Which then translates into PhysX = Bad.

I wouldn't even be bothered by that if I could slap an NV card into my system with my Crossfired HD4870s and use that for PhysX. Again the consumer has been F'd in the A by Nvidia because they want to push their own hardware. In reality they're only stopping many people like myself from buying an NV card to use as a dedicated PPU.

Then again when your ass is being kicked by the competition (and even more so tomorrow) you have to market something to seem special. :p

I don't hate PhysX. I just hate what Nvidia is doing with it.
 
I love how you constantly tell people to stop spreading FUD but you keep ignoring the FUD coming from Nvidia.

People aren't complaining that CPU PhysX may be less than GPU PhysX. They're complaining that Nvidia has artificially restricted the amount of PhysX that runs on the CPU to a single core. Elmo proved that with his video of Batman using the PhysX hack. While it may not be stable or the same as GPU PhysX, it shows that PhysX has been artificially neutered. That's the crux of the complaint.

Hell I think making PhysX run at even 50% using a CPU would be a good thing for NV because you have plenty of people out there who will get a taste of accelerated physics and want to be able to run it at 100% and without an FPS hit. Instead we get jack and shit and the uninformed people go...oh PhysX doesn't work with AMD and it slows down my computer. Which then translates into PhysX = Bad.

I wouldn't even be bothered by that if I could slap an NV card into my system with my Crossfired HD4870s and use that for PhysX. Again the consumer has been F'd in the A by Nvidia because they want to push their own hardware. In reality they're only stopping many people like myself from buying an NV card to use as a dedicated PPU.

Then again when your ass is being kicked by the competition (and even more so tomorrow) you have to market something to seem special. :p

I don't hate PhysX. I just hate what Nvidia is doing with it.

Own goal...
 
In English, please? And I'm talking about the hack being unstable. You can't tell me Nvidia doesn't know how to get PhysX running stably on a CPU.

Or the CPU can't handle 1/5th of the load in Batman AA without exceeding its limits.
It's not like PhysX isn't stable on the CPU.
UT3, Darkest of Days and NFS: Shift run fine on the CPU using PhysX as middleware.

But then again, those games are not trying to hold a flower in the mouth and whistle at the same time.

But some physics comes at a much higher cost than simple collision physics.
Interactive liquids, smoke and (tearable) cloth are some of these types.
Back in 2006 when CellFactor came out, something like this happened too.
People "hacked" CellFactor, claiming to run the same level of physics.
Looking at cloth tanked the game to 1-2 FPS.
The physics was off at times.
The game got unstable.

Now it's 2009.
Someone has "hacked" Batman AA, claiming to run the same level of physics.
At 1/5th accurate physics rendering.
The game is unstable.

You don't think this instability has something to do with the choice, alongside the fact that this instability only nets 1/5th accurate physics?

It might not be a big issue to you, but if that is the bar that gets set, you will see a lot more flak towards NVIDIA than right now...and it would be justified.
Unlike this "hoax" claiming that "Batman AA PhysX runs fine on the CPU", posting YouTube videos...but leaving out the nitty-gritty details...1/5th of the accuracy...unstable.

From 2006 to now, close to 2010, the arguments haven't changed...still just as flawed.
You would think that people would learn.
 
In English, please? And I'm talking about the hack being unstable. You can't tell me Nvidia doesn't know how to get PhysX running stably on a CPU.

I can run (render the graphics of) Crysis on my CPU. It's unstable, the frame rate sucks, and I have to turn down all the graphics settings. You can't tell me that Intel doesn't know how to get Crysis running (rendering the graphics) on their i7s.

This is just as relevant as what you just said.
 
I can run (render the graphics of) Crysis on my CPU. It's unstable, the frame rate sucks, and I have to turn down all the graphics settings. You can't tell me that Intel doesn't know how to get Crysis running (rendering the graphics) on their i7s.

This is just as relevant as what you just said.

Spot on!
 
I can run (render the graphics of) Crysis on my CPU. It's unstable, the frame rate sucks, and I have to turn down all the graphics settings. You can't tell me that Intel doesn't know how to get Crysis running (rendering the graphics) on their i7s.

This is just as relevant as what you just said.

Except that PhysX already runs on the CPU, but it's stuck on a single core, thus causing huge performance hits for a much smaller amount of physics than with a PPU.

A hack shows that if you push it to multi-core you get far better performance and far more PhysX calculations, but people then go, oh well, it's only "1/5th accuracy" or "unstable", blah blah blah. It's a hack. I don't expect it to be perfect. However, it does show that PhysX can do far more on the CPU than NV wants you to believe.

Oh and nobody ever comments about NV not allowing consumers to use their cards as a PPU with AMD cards. That's always glossed over by the fanboys. Instead they just put focus back on the hack and its stability.

For the record, Intel is actually doing exactly what you said about Crysis with Larrabee. It's nothing but a bunch of x86 cores working together.
 
Except that PhysX already runs on the CPU, but it's stuck on a single core, thus causing huge performance hits for a much smaller amount of physics than with a PPU.

The only game I know of that used interactive smoke on the PPU was Cell Factor.
Care to name the others...or is this just another bogus claim?

A hack shows that if you push it to multi-core you get far better performance and far more PhysX calculations, but people then go, oh well, it's only "1/5th accuracy" or "unstable", blah blah blah. It's a hack. I don't expect it to be perfect. However, it does show that PhysX can do far more on the CPU than NV wants you to believe.

We don't all like our games to BSOD.
And your conclusion doesn't fit the data:
Lower FPS than GPU physics, while doing 1/5th of the workload, and unstable.
Let me repeat that:
Lower FPS than GPU physics, while doing 1/5th of the workload, and unstable.
And a third time, just to make sure the key point isn't lost in bogus claims:
Lower FPS than GPU physics, while doing 1/5th of the workload, and unstable.

Oh and nobody ever comments about NV not allowing consumers to use their cards as a PPU with AMD cards. That's always glossed over by the fanboys. Instead they just put focus back on the hack and its stability.

We? Sorry, we don't fall for false "PR".
~20%...until crash != 100%
What if a company claimed:
"Our product delivers 100 FPS, stable"
but actually only delivered ~20 FPS...when not crashing?

I would call that a lie, just as I would call the description and title of this video deliberately false:
http://www.youtube.com/watch?v=AUOr4cFWY-s


It's pretending to do the SAME as GPU PhysX, without mentioning the scaled-down workload...or the instability.

For the record, Intel is actually doing exactly what you said about Crysis with Larrabee. It's nothing but a bunch of x86 cores working together.

That is hardly a fitting description; in fact, it's a lousy description of Larrabee.
By that definition the R800 is nothing but a bunch of SIMD cores working together.

And how many CPU cores do you need to render the graphics in Crysis?
The CPU doesn't "magically" perform better at SIMD when you change the label from "graphics" to "physics".
 
hmm, guess i could have made a 3-A and 3-B, lol. but yeah, i would agree with that first statement. as far as scaling goes and utilizing cpu physx more efficiently/ effectively, i would like to reply with my earlier post above.

so overall, i see where you're coming from and respect your opinion. only thing is, why not attempt to communicate that directly to the decision makers at nvidia? you may be ignored but at least you can make your voice heard. otherwise, it's just like preaching to the choir, man.

Thank you. :)

Communicating in the forums often is communicating directly with Nvidia. Nvidia Focus Group members are doing just that.

It's a good place for consumers to speak up.
 
Thank you. :)

Communicating in the forums often is communicating directly with Nvidia. Nvidia Focus Group members are doing just that.

It's a good place for consumers to speak up.

Many of the people voicing "concern" in this thread are not people who would buy NVIDIA...so...
 
Except that PhysX already runs on the CPU, but it's stuck on a single core, thus causing huge performance hits for a much smaller amount of physics than with a PPU.
Except that running on every thread of an i7 is only 8 threads. Running on every "thread" of a GTX 280 will give you 240 threads. When Gulftown hits, you are going to be able to run on 12 threads; with the 380 you'll be able to run on 480 threads. An i7 in all its glory is not the right ARCHITECTURE to run physics. Sorry.



A hack shows that if you push it to multi-core you get far better performance and far more PhysX calculations, but people then go, oh well, it's only "1/5th accuracy" or "unstable", blah blah blah. It's a hack. I don't expect it to be perfect. However, it does show that PhysX can do far more on the CPU than NV wants you to believe.
So I can play it on my graphics card at 30 fps, but instead I should pull it to my CPU and play it at 5 fps. That's 1/5th the graphical accuracy.



Oh and nobody ever comments about NV not allowing consumers to use their cards as a PPU with AMD cards. That's always glossed over by the fanboys. Instead they just put focus back on the hack and its stability.

For the record, Intel is actually doing exactly what you said about Crysis with Larrabee. It's nothing but a bunch of x86 cores working together.
And the last demonstration of the Larrabee architecture couldn't beat a three-year-old mid-range graphics card. I think you are proving my point quite well. Furthermore, it was originally slated to release this year at 2 TFLOPS with a die size of over 900 mm^2. Well, have a look at the rumored 380 specs and the 5870 specs. 2 TFLOPS isn't anything fancy, and furthermore, they are doing it in a LOT less silicon. It's not like die size affects cost and yield rates or anything.
 
The only game I know of that used interactive smoke on the PPU was Cell Factor.
Care to name the others...or is this just another bogus claim?



We don't all like our games to BSOD.
And your conclusion doesn't fit the data:
Lower FPS than GPU physics, while doing 1/5th of the workload, and unstable.
Let me repeat that:
Lower FPS than GPU physics, while doing 1/5th of the workload, and unstable.
And a third time, just to make sure the key point isn't lost in bogus claims:
Lower FPS than GPU physics, while doing 1/5th of the workload, and unstable.



We? Sorry, we don't fall for false "PR".
~20%...until crash != 100%
What if a company claimed:
"Our product delivers 100 FPS, stable"
but actually only delivered ~20 FPS...when not crashing?

I would call that a lie, just as I would call the description and title of this video deliberately false:
http://www.youtube.com/watch?v=AUOr4cFWY-s


It's pretending to do the SAME as GPU PhysX, without mentioning the scaled-down workload...or the instability.



That is hardly a fitting description; in fact, it's a lousy description of Larrabee.
By that definition the R800 is nothing but a bunch of SIMD cores working together.

And how many CPU cores do you need to render the graphics in Crysis?
The CPU doesn't "magically" perform better at SIMD when you change the label from "graphics" to "physics".

Oh, for fuck's sake. Quit drinking the green Kool-Aid for one damn minute.

It's a hack. The point is it proves CPU PhysX can do more calculations without a huge performance hit than NV wants you to believe. Never did I say it could do the same amount of physics as a GPU. If I did, please show me where. I simply said if you turn on CPU PhysX (which is an option in Batman) you take a huge performance hit. The hack gives back some of that performance.

As for it being unstable: "IT'S A F*CKING HACK." It's not meant to be stable. It's meant to prove a point.

And how is NV disabling AMD/NV GPU/PPU combinations false PR? It works with earlier driver sets. People are using it. Now with newer drivers it doesn't work. Well there's only one common denominator...those who write the drivers.

Many of the people voicing "concern" in this thread are not people who would buy NVIDIA...so...

Actually PhysX will never factor into a purchase of mine. If I go Nvidia with my next build it won't be because of PhysX. It'll be because they have the best card for the money.

Except that running on every thread of an i7 is only 8 threads. Running on every "thread" of a GTX 280 will give you 240 threads. When Gulftown hits, you are going to be able to run on 12 threads; with the 380 you'll be able to run on 480 threads. An i7 in all its glory is not the right ARCHITECTURE to run physics. Sorry.



So I can play it on my graphics card at 30 fps, but instead I should pull it to my CPU and play it at 5 fps. That's 1/5th the graphical accuracy.

Once again...where did I ever say CPU PhysX would be the same as GPU PhysX? See above...

But I'm done with this thread, as the fanboys are out in force.
 
Many of the people voicing "concern" in this thread are not people who would buy NVIDIA...so...

What do you know about that? Many people in this thread own Nvidia hardware and would consider both ATI and Nvidia before making a purchase. We are consumers, and though we don't like how companies play sometimes, we do like the products.
 
It's a hack. The point is it proves CPU PhysX can do more calculations without a huge performance hit than NV wants you to believe. Never did I say it could do the same amount of physics as a GPU. If I did, please show me where. I simply said if you turn on CPU PhysX (which is an option in Batman) you take a huge performance hit. The hack gives back some of that performance.

Paradox detected.
It's a hack all right, one that (if you understand the hack and the data) only proves that GPU physics is WAY more powerful than CPU physics.

Despite the false spin it's given.

As for it being unstable: "IT'S A F*CKING HACK." It's not meant to be stable. It's meant to prove a point.

If that point is to show why CPU PhysX is disabled in Batman AA...it's doing its job just fine.
Otherwise it's a failure...pimped up by deceit and/or ignorance.

And how is NV disabling AMD/NV GPU/PPU combinations false PR? It works with earlier driver sets. People are using it. Now with newer drivers it doesn't work. Well there's only one common denominator...those who write the drivers.

Actually, NVIDIA also bumped up the requirements for GPU physics.
I am not denying that NVIDIA has disabled GPU PhysX if there is an AMD GPU present.
Just like I am not denying that AMD rejected PhysX.
Just like I am not denying that Havok is not open source...but proprietary.
Just like I am not denying that OpenCL is not a physics middleware.

There are a lot of things that I am not denying, but those are irrelevant.

What is relevant is that false conclusions are being spread, based on flawed data (reduced workload, reduced accuracy, instability, lower FPS) presented as doing the same.

This hack is evidence that the physics in "Batman - AA" cannot run on the CPU, in fidelity, performance or stability.
And evidence that lies/deceit are the tools used against PhysX...either on purpose or due to ignorance.

That's all I have gotten from this "hack"...sorry if my view isn't red enough for you...but the data simply isn't there to support your claims.



But I'm done with this thread, as the fanboys are out in force.

One less now it seems...
 
Except that PhysX already runs on the CPU, but it's stuck on a single core, thus causing huge performance hits for a much smaller amount of physics than with a PPU.

A hack shows that if you push it to multi-core you get far better performance and far more PhysX calculations, but people then go, oh well, it's only "1/5th accuracy" or "unstable", blah blah blah. It's a hack. I don't expect it to be perfect. However, it does show that PhysX can do far more on the CPU than NV wants you to believe.

Oh and nobody ever comments about NV not allowing consumers to use their cards as a PPU with AMD cards. That's always glossed over by the fanboys. Instead they just put focus back on the hack and its stability.

For the record, Intel is actually doing exactly what you said about Crysis with Larrabee. It's nothing but a bunch of x86 cores working together.

just wanted to comment on the first part. batman is designed to run on, at minimum, a single core cpu per the system requirements. the most logical scenario could be that the devs already designed the game with that lowest common denominator in mind with whatever cpu physx they had already intended in the development cycle prior to tacking on gpu physx at the end (via a patch), hence the 3 week "delay" behind the console versions. so the real question may be, is it possible to have a bit more scaleable cpu physx that won't afffect the ability of someone to play the exact same game on a single core cpu as someone on a multicore cpu? given that the gpu physx effects were again, more than likely implemented at the end of the development cycle as an afterthought - already delaying the game from a simultaneous release date as the consoles - i don't think one can reasonably expect the devs to delay the game even further to go back and add in some lower fallback cpu physics effects in an attempt to make the game look comparable to the game with gpu physx effects on; that may not even be feasible on a single core cpu, as per the base system requirements. if they could optimize some of the gpu physx effects into the software based physx for quad cores, while still running acceptably on a single core system, then that would be an amazing feat in and of itself. otherwise, i think some people might have been happier if the devs didn't put any gpu physx effects in the game at all so everyone plays the exact same looking game no matter what. just like all dx10 supported games in vista/7 should look exactly the same in dx9 in xp so everyone has the same exact visual experience, right? just my two cents.
 
I can run (render the graphics of) Crysis on my CPU. It's unstable, the frame rate sucks, and I have to turn down all the graphics settings. You can't tell me that Intel doesn't know how to get Crysis running (rendering the graphics) on their i7s.

This is just as relevant as what you just said.

or maybe it's just that your rig sucks and can't handle it :rolleyes:

while mine can handle it smoothly :eek:
 
just wanted to comment on the first part. batman is designed to run on, at minimum, a single core cpu per the system requirements. the most logical scenario could be that the devs already designed the game with that lowest common denominator in mind with whatever cpu physx they had already intended in the development cycle prior to tacking on gpu physx at the end (via a patch), hence the 3 week "delay" behind the console versions. so the real question may be, is it possible to have a bit more scaleable cpu physx that won't afffect the ability of someone to play the exact same game on a single core cpu as someone on a multicore cpu? given that the gpu physx effects were again, more than likely implemented at the end of the development cycle as an afterthought - already delaying the game from a simultaneous release date as the consoles - i don't think one can reasonably expect the devs to delay the game even further to go back and add in some lower fallback cpu physics effects in an attempt to make the game look comparable to the game with gpu physx effects on; that may not even be feasible on a single core cpu, as per the base system requirements. if they could optimize some of the gpu physx effects into the software based physx for quad cores, while still running acceptably on a single core system, then that would be an amazing feat in and of itself. otherwise, i think some people might have been happier if the devs didn't put any gpu physx effects in the game at all so everyone plays the exact same looking game no matter what. just like all dx10 supported games in vista/7 should look exactly the same in dx9 in xp so everyone has the same exact visual experience, right? just my two cents.

It hurts my eyes to read that.

A few things you can do to help:
Use paragraphs to separate your thoughts.
Capitalize the first letter of your sentences.
Limit yourself to fewer than 50 words per sentence.


otherwise, i think some people might have been happier if the devs didn't put any gpu physx effects in the game at all so everyone plays the exact same looking game no matter what. just like all dx10 supported games in vista/7 should look exactly the same in dx9 in xp so everyone has the same exact visual experience, right?
DX10-supported games in Vista don't look the same in DX9. Go look at STALKER: Clear Sky and tell me the lighting goodness that is DX10 in that game looks the same as DX9. The whole point of DX11 vs DX10 vs DX9, etc. is adding to the feature set so you can have better graphics.
 
It hurts my eyes to read that.

A few things you can do to help:
Use paragraphs to separate your thoughts.
Capitalize the first letter of your sentences.
Limit yourself to fewer than 50 words per sentence.

Red herring...you should try arguments; his nationality, taste or spelling is not the topic....Need more pointers?



DX10-supported games in Vista don't look the same in DX9. Go look at STALKER: Clear Sky and tell me the lighting goodness that is DX10 in that game looks the same as DX9. The whole point of DX11 vs DX10 vs DX9, etc. is adding to the feature set so you can have better graphics.

DX10 is about performance, not visuals...show me graphics that can be rendered in DX10 but not in DX9.
(Hint: DX10 lets you do stuff with less work)

http://www.techspot.com/vb/all/wind...-10--What-it-Is-What-It-Takes-To-Have-It.html

There are plenty of games that look the same in DX9 vs DX10...
 
Red herring...you should try arguments; his nationality, taste or spelling is not the topic....Need more pointers?
It has to be an argument for it to be a red herring. Saying "it hurts my eyes to read that" is a statement, not an argument. If I had said "you suck because you can't spell" or "your point is invalid because you can't communicate well", that would be a red herring. Please read what is written.

DX10 is about performance, not visuals...show me graphics that can be rendered in DX10 but not in DX9.
(Hint: DX10 lets you do stuff with less work)

http://www.techspot.com/vb/all/wind...-10--What-it-Is-What-It-Takes-To-Have-It.html

There are plenty of games that look the same in DX9 vs DX10...

http://www.hardocp.com/article/2008/10/01/stalker_clear_sky_gameplay_perf_iq

The game is still very scalable on low-end hardware, but has also been upgraded to scale with high-end hardware. Some of the new features include "Sun Rays" / "God Rays", wet surfaces, volumetric light and smoke, depth of field blurring, and screen space ambient occlusion (SSAO) lighting. Two of the new features, volumetric smoke and wet surfaces, require the use of the DirectX 10 API included with Windows Vista.

Here is a look at this article on DX11 to see some of the new feature set of DX11.
 
Would you call today's PhysX implementation consumer friendly? I am referring to the active blocking of Nvidia GPUs as PPUs if they are not the main renderer. Is this consumer friendly to you?

I think you've just summed up one of the basic problems plaguing this discussion. It's very easy to oversimplify the situation and blame the evil corporation. You might consider it unfriendly but have you put any thought into what it would mean for Nvidia to support mixed-vendor setups like that? What happens to QA and validation procedures? If something doesn't render properly on an Nvidia+ATi setup how do they debug it? Should they retest all PhysX games every time ATi updates their drivers? If a problem is ATI's fault how do they avoid people blaming PhysX? There are ATi users blaming PhysX for their problems in NFS:Shift and that's not even GPU PhysX! See, complaining is all well and good but unfortunately for them they actually have to run a business.

Or the lack of scaling of PhysX: do you, as a programmer, find PhysX hard to scale?

PhysX itself scales just fine. Actually it scales very easily since you're talking about systems of thousands of particles being simulated. That number can be trivially scaled up or down. The problem is that there isn't some smooth gradient of performance between CPUs and GPUs. There's a cliff where they meet. That's why some effects aren't even worth trying on a CPU because the horsepower just isn't there.
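
To make that concrete, here is a rough, self-contained C++ sketch of what scaling an effect means in practice; the names (SimTarget, particleBudget, spawnDebrisEffect) are made up for illustration and are not real PhysX API calls. The solver and the effect stay the same; only the particle budget changes with the target:

#include <cstddef>
#include <cstdio>
#include <initializer_list>
#include <vector>

struct Particle { float pos[3]; float vel[3]; };

enum class SimTarget { SingleCoreCpu, MultiCoreCpu, Gpu };

// Hypothetical budgets: the particle count is just a tunable knob.
std::size_t particleBudget(SimTarget t) {
    switch (t) {
        case SimTarget::SingleCoreCpu: return 500;
        case SimTarget::MultiCoreCpu:  return 2500;
        case SimTarget::Gpu:           return 12000;
    }
    return 500;
}

// Same effect, same update code; only the number of simulated particles differs.
std::vector<Particle> spawnDebrisEffect(SimTarget t) {
    return std::vector<Particle>(particleBudget(t));
}

int main() {
    for (SimTarget t : {SimTarget::SingleCoreCpu, SimTarget::MultiCoreCpu, SimTarget::Gpu}) {
        std::printf("debris particles simulated: %zu\n", spawnDebrisEffect(t).size());
    }
    return 0;
}

The point of the sketch is the cliff mentioned above: the knob turns smoothly, but the hardware underneath it doesn't.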

In terms of consumers being educated, I can't agree with you there. This thread is full of people making ridiculous comparisons between Glide and PhysX. Glide is an API that competed with DirectX. PhysX is not an API; CUDA is. If they need to make a comparison, it should be between CUDA and OpenCL. PhysX is a library, just like any other library, and can be implemented in any programming language or API. Currently, there are implementations for x86, PowerPC, GeForce, Cell, the iPhone, etc.
 
It hurts my eyes to read that.

A few things you can do to help:
Use paragraphs to separate your thoughts.
Capitalize the first letter of your sentences.
Limit yourself to fewer than 50 words per sentence.

lol, sorry that's my chosen writing style for posting on forums. and let's not exaggerate - i only used 45 words max :D

DX10-supported games in Vista don't look the same in DX9. Go look at STALKER: Clear Sky and tell me the lighting goodness that is DX10 in that game looks the same as DX9. The whole point of DX11 vs DX10 vs DX9, etc. is adding to the feature set so you can have better graphics.

my point exactly. both gpu physx effects options and higher dx10 graphic options are meant to look different/ better from the "regular" versions of their respective games.

I'm not sure if this is sarcasm, misunderstanding, or just facepalm. :(

look through past posts and my earlier response and you have your clues there
 
I think you've just summed up one of the basic problems plaguing this discussion. It's very easy to oversimplify the situation and blame the evil corporation. You might consider it unfriendly but have you put any thought into what it would mean for Nvidia to support mixed-vendor setups like that? What happens to QA and validation procedures? If something doesn't render properly on an Nvidia+ATi setup how do they debug it? Should they retest all PhysX games every time ATi updates their drivers? If a problem is ATI's fault how do they avoid people blaming PhysX? There are ATi users blaming PhysX for their problems in NFS:Shift and that's not even GPU PhysX! See, complaining is all well and good but unfortunately for them they actually have to run a business.



PhysX itself scales just fine. Actually it scales very easily since you're talking about systems of thousands of particles being simulated. That number can be trivially scaled up or down. The problem is that there isn't some smooth gradient of performance between CPUs and GPUs. There's a cliff where they meet. That's why some effects aren't even worth trying on a CPU because the horsepower just isn't there.

In terms of consumers being educated, I can't agree with you there. This thread is full of people making ridiculous comparisons between Glide and PhysX. Glide is an API that competed with DirectX. PhysX is not an API; CUDA is. If they need to make a comparison, it should be between CUDA and OpenCL. PhysX is a library, just like any other library, and can be implemented in any programming language or API. Currently, there are implementations for x86, PowerPC, GeForce, Cell, the iPhone, etc.

some good points i didn't think about
 
lol, sorry that's my chosen writing style for posting on forums. and let's not exaggerate - i only used 45 words max :D

I didn't exaggerate.:(
the most logical scenario could be that the devs already designed the game with that lowest common denominator in mind with whatever cpu physx they had already intended in the development cycle prior to tacking on gpu physx at the end (via a patch), hence the 3 week "delay" behind the console versions
53 words :(
given that the gpu physx effects were again, more than likely implemented at the end of the development cycle as an afterthought - already delaying the game from a simultaneous release date as the consoles - i don't think one can reasonably expect the devs to delay the game even further to go back and add in some lower fallback cpu physics effects in an attempt to make the game look comparable to the game with gpu physx effects on; that may not even be feasible on a single core cpu, as per the base system requirements.
97 words (dashes don't end sentences) :D

my point exactly. both gpu physx effects options and higher dx10 graphic options are meant to look different/ better from the "regular" versions of their respective games.
:) Sorry, I guess I missed the sarcasm.
 
lol, okay you got me. i looked more carefully now. still, they seem to be structurally sound sentences to me, even if they are fairly lengthy. furthermore, it's not even remotely close to what this guy wrote:

http://newliterarymagazine.wordpress.com/2007/12/10/longest-sentence-new-literature-vol17/

hehe :). You can structure some stupidly long sentences. That doesn't mean it is good writing to do so. :p

No matter how good the information is, people will tend to ignore it if it is not well written. It may not be right, but that is what happens. :(
 
hehe :). You can structure some stupidly long sentences. That doesn't mean it is good writing to do so. :p

No matter how good the information is, people will tend to ignore it if it is not well written. It may not be right, but that is what happens. :(

this is very true since it is easy for something to be miscommunicated or misconstrued if the message is not written in an effective manner. but in my case, i'm much too lazy to modify everything i write to communicate more effectively. this makes my writing habits even more transparent in an internet forum such as this.

o wellz; may.Bee i weel larn 2 right bet:ter 4 u?
 
I think you've just summed up one of the basic problems plaguing this discussion. It's very easy to oversimplify the situation and blame the evil corporation. You might consider it unfriendly but have you put any thought into what it would mean for Nvidia to support mixed-vendor setups like that? What happens to QA and validation procedures? If something doesn't render properly on an Nvidia+ATi setup how do they debug it? Should they retest all PhysX games every time ATi updates their drivers? If a problem is ATI's fault how do they avoid people blaming PhysX? There are ATi users blaming PhysX for their problems in NFS:Shift and that's not even GPU PhysX! See, complaining is all well and good but unfortunately for them they actually have to run a business.

I know exactly what it would mean for Nvidia to support mixed-vendor setups like that. Not a damn thing. Silus loves to make claims like yours as well, but they simply aren't true. There seems to be this notion that the PhysX library somehow relies upon the rendering graphics driver, which is completely false. The PhysX library does *not* talk to the rendering graphics driver at all. Why do people think that PhysX does? Physics engines have never been dependent upon the graphics driver. Hell, PhysX itself works perfectly fine with an ATI card rendering when it is run on the CPU. Doing the calculations on an Nvidia GPU instead of the CPU doesn't change a damn thing.

The flow is pretty straightforward:

Game code -> PhysX -> CUDA -> Nvidia driver -> Nvidia GPU -> Nvidia driver -> CUDA -> Physx -> Game code -> DirectX/OpenGL -> Graphics driver -> GPU -> Monitor

PhysX doesn't care or need to know *ANYTHING* about the driver or card rendering the image. It makes no difference whatsoever what ends up rendering the image. The results of PhysX calculations don't automatically go straight into the graphics driver; they go back to the game code. Nvidia made an *entirely* business decision to *artificially* restrict it to Nvidia-only systems. It has *NO* technical reasons backing it, *NO* support reasons, etc... None of that bullshit. Honestly I was considering picking up a 9600GT or something for PhysX, but because of the artificial restrictions I'm not going to, as I don't want to give up my 4850.

And if you don't believe me, download the PhysX SDK (which is free) and look through the samples yourself. You will quite clearly see that PhysX knows *nothing* about how to render a scene or anything like that. It quite simply says where in 3d a box should be, but the developer must explicitly invoke the calls to OpenGL or DirectX to actually render that box. Meaning no translation between drivers, no understanding ATI's driver APIs, no support issues, none of that crap. Which is also why until they explicitly prevented people from doing it, it worked just fine.
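
If you want a rough picture of that flow without downloading the SDK, here's a bare-bones C++ sketch; PhysicsWorld, Pose and drawBox are stand-ins I made up, not the real PhysX, OpenGL or Direct3D API. The physics step hands transforms back to the game code, and the game code alone issues the rendering calls against whatever display driver happens to be installed:

#include <vector>

struct Pose { float position[3]; float rotation[4]; };  // translation + quaternion

// Stand-in for the physics middleware: it simulates (on CPU or GPU, the caller
// doesn't care) and returns results. It never touches the renderer.
class PhysicsWorld {
public:
    void step(float dt) { (void)dt; /* advance the simulation here */ }
    const std::vector<Pose>& boxPoses() const { return poses_; }
private:
    std::vector<Pose> poses_;
};

// Stand-in for the game's own rendering path: this is where OpenGL or Direct3D
// calls would be issued, against whichever vendor's display driver is installed.
void drawBox(const Pose& p) { (void)p; /* glDrawElements / DrawIndexed here */ }

void gameFrame(PhysicsWorld& world, float dt) {
    world.step(dt);                        // game code -> physics -> results back
    for (const Pose& p : world.boxPoses())
        drawBox(p);                        // game code -> graphics API -> GPU
}

int main() {
    PhysicsWorld world;
    gameFrame(world, 1.0f / 60.0f);
    return 0;
}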
 
Trying to look at the PhysX SDK to figure out the flow is like trying to understand Nvidia's drivers by looking at DirectX. Bottom line is that unless you work for Nvidia you know nothing about their back-end implementation.

And if you're insinuating that PhysX and display drivers can't clash, then you obviously haven't been using computers very long. Completely unrelated software causes conflicts, let alone the scenarios we're discussing here. And I see you conveniently avoided the issue of troubleshooting/debugging issues with ATI hardware.
 
Trying to look at the PhysX SDK to figure out the flow is like trying to understand Nvidia's drivers by looking at DirectX. Bottom line is that unless you work for Nvidia you know nothing about their back-end implementation.

No, it isn't. DirectX calls into Nvidia's drivers to render the image. PhysX doesn't call into OpenGL/DirectX directly, thus never goes into the rendering driver. Your analogy doesn't work. I KNOW this because when you use PhysX you have to make that call yourself. Just like Havok, Velocity, etc... don't call into OpenGL/DirectX either. People here are confusing "physics engine" with "game framework". The physics engine just computes physics calculations and *returns* the results. It does *not* know *anything* about how to render them, nor does it care.

Again, you can very easily verify this by looking at how to use PhysX. PhysX *never* calls into the rendering driver. It *never* makes *any* rendering calls whatsoever. You don't need to work at Nvidia to figure that out. If you were a developer you would know that the API of a library tells you a lot about how it works and its architecture.

And if you're insinuating that PhysX and display drivers can't clash, then you obviously haven't been using computers very long. Completely unrelated software causes conflicts, let alone the scenarios we're discussing here. And I see you conveniently avoided the issue of troubleshooting/debugging issues with ATI hardware.

I didn't avoid the issue at all. Does Havok troubleshoot/debug issues with ATI or Nvidia hardware? Of course not. It isn't their problem. Just like Nvidia doesn't need to diagnose PhysX games on ATI hardware. If the game renders incorrectly, it's a rendering issue. If you jump and suddenly skyrocket into the air, it's a physics engine problem. When was the last time you updated your display driver and the physics in a game broke? I'm going to go out on a limb and say "never". But again, if PhysX *did* rely on the rendering driver (which it doesn't), Nvidia would *still* need to provide support since the CPU fallback works with ATI cards. The only difference between GPU-accelerated PhysX and CPU PhysX is where the calculations are being done - neither of which relies on the rendering driver *at all*.

And completely unrelated software causes conflicts when library names with different versions collide. The PhysX library and ATI's drivers don't collide - at least, they won't until Nvidia decides to start naming PhysX DLLs after ATI's driver components (which honestly wouldn't surprise me, given their recent behavior).
 
The physics engine just computes physics calculations and *returns* the results. It does *not* know *anything* about how to render them, nor does it care.

You keep referring to the API. The API is just a bunch of interfaces. It tells you nothing about how data is actually manipulated by the driver.

If you were a developer you would know that the API of a library tells you a lot about how it works and its architecture.

I am a developer. I write APIs and libraries for a living. And I can guarantee you that an interface tells you nothing about the implementation (that's the whole point of interfaces).

I didn't avoid the issue at all. Does Havok troubleshoot/debug issues with ATI or Nvidia hardware? Of course not. It isn't their problem. Just like Nvidia doesn't need to diagnose PhysX games on ATI hardware.

So who should diagnose it? The consumer? Nvidia's and ATi's drivers are buggy on their own, and you don't think mixing them will cause even more problems? If you think unrelated software doesn't conflict, you should talk to this guy whose instant messenger application was causing stuttering on his 4890.
 
You keep referring to the API. The API is just a bunch of interfaces. It tells you nothing about how data is actually manipulated by the driver.

My point is that it doesn't matter how the data is manipulated. The API is extremely clear in that the data does *not* go directly to the rendered output, thus it does *not* go to the rendering graphics driver. PhysX does not render anything. It doesn't go to the display output. It goes back to the game code. There is no need for Nvidia to translate from their data type to ATI's as that translation is *already being done by the DEVELOPERS*. It is not an automatic translation.

Just go LOOK AT THE SAMPLES. If you know C++ and OpenGL at all, you will very quickly see what I am saying. It will make sense. It is logical. And it is the same as every other physics engine.

And again, why would CPU PhysX work fine with an ATI driver but GPU accelerated ones suddenly require display driver access? The answer is a quite obvious "it doesn't".

I am a developer. I write APIs and libraries for a living. And I can guarantee you that an interface tells you nothing about the implementation (that's the whole point of interfaces).

Then, as a developer, you should know that for large and complex APIs like PhysX, Havok, DirectX, OpenGL, etc., knowing the basic flow is a requirement for using them. If you make your calls in the wrong order, it doesn't work.

So who should diagnose it? The consumer? Nvidia's and ATi's drivers are buggy on their own, and you don't think mixing them will cause even more problems? If you think unrelated software doesn't conflict, you should talk to this guy whose instant messenger application was causing stuttering on his 4890.

Diagnose WHAT? If it is rendering wrong, it's a problem with the game and/or rendering driver. If it's a physics problem, it's a problem with the game and/or PhysX. There is a very clear and well-defined line between where PhysX's code ends and the rendering code begins. They are two very distinct and separate libraries that do *not* talk to each other. The change Nvidia made also doesn't prevent two drivers from working at once or fix any issue with conflicts (as that is managed by Windows).

And as for the situation you linked to, that wasn't a conflict or a bug, it was a resource-sharing problem. Completely different and has no bearing on this. We are talking about two libraries that each have their own resources and that don't talk to each other. Not only that, but problems like that still exist regardless of whether you are using a GPU to accelerate it or the CPU. If anything, it is MORE likely that there will be issues like that when using a CPU and LESS likely if using the GPU, as Nvidia maintains complete control over its GPU. The CPU, on the other hand, is shared by both Nvidia's AND ATI's drivers! OMG OMG OMG! CONFLICTS ARE BOUND TO HAPPEN! :rolleyes:
 
Your logic is meaningless. Nv is the king; you should weep, beg forgiveness, and never again challenge the king's supremacy.


;)
 
I don't go for the compatibility argument either. It's a 100% bottom-line-driven business/mindshare decision. The real question, though, is whether they made the right call. I personally think it was the wrong decision, but that's just me.
 