Future of physics, PhysX?

YamahaAlex37

[H]ard|Gawd
Joined
Jun 23, 2005
Messages
1,775
I need a new motherboard, and in order to future-proof (I am eligible for EVGA Step-Up, so I may have a 470 or 480 in the near future), I am curious whether I should be interested in PhysX, SLI, or neither.

My options would be a 750a SLI board for around $60, or an MSI 790X for the same.

In the first case, SLI with a 470 or 480 would be possible (with a 1000W PSU), but in the second case I could never run SLI, only a dedicated PhysX card.

I am more interested in the direction that technology is moving than in help choosing between these two. So, is having a dedicated card for physics going to be the future, or do you see things heading back towards physics being handled by the GPU? Also, if having a dedicated physics card is the future, does ATI plan to have anything to rival PhysX?
 
PhysX adoption has continued to be extremely slow.

It hasn't helped that Nvidia intentionally crippled the performance of CPU PhysX to make their GPU PhysX look more impressive. That has most developers shying away from it, as they don't want the gameplay experience to be unacceptable when using a non-Nvidia card.
 
PhysX adoption has continued to be extremely slow.

It hasn't helped that Nvidia intentionally crippled the performance of CPU PhysX to make their GPU PhysX look more impressive. That has most developers shying away from it, as they don't want the gameplay experience to be unacceptable when using a non-Nvidia card.

Rubbish. CPU PhysX is the most popular physics platform out there for newer games; it's GPU PhysX adoption that is slow. This is almost completely down to the fact that most games are console games, and the consoles can't do GPU PhysX (and also only want it to run in a single thread).

Hence all the PC ports come through as single-threaded CPU PhysX games, unless Nvidia puts dev effort in to get some extra GPU effects added. No one is putting dev effort into multi-threaded CPU PhysX (which works fine but has to be coded that way), which is why it stays single-threaded.
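
To illustrate what "has to be coded that way" means, here's a minimal sketch of the idea in C++. This is generic illustration code, not the actual PhysX SDK API, and every name in it is hypothetical; the point is just that the fan-out across cores has to be written explicitly by the developer, it doesn't happen for free.

Code:
    #include <functional>
    #include <thread>
    #include <vector>

    struct RigidBody { float pos[3]; float vel[3]; };
    // An island is a group of bodies that only interact with each other,
    // so separate islands can be stepped on separate cores safely.
    using Island = std::vector<RigidBody>;

    // Step one island (placeholder integration standing in for a real solver).
    void stepIsland(Island& island, float dt) {
        for (RigidBody& b : island)
            for (int i = 0; i < 3; ++i)
                b.pos[i] += b.vel[i] * dt;
    }

    // What a straight console port effectively does: one thread, one core.
    void stepSceneSerial(std::vector<Island>& islands, float dt) {
        for (Island& island : islands) stepIsland(island, dt);
    }

    // The multi-threaded version: identical work, but the thread fan-out
    // and join have to be coded deliberately -- the dev effort nobody spends.
    void stepSceneParallel(std::vector<Island>& islands, float dt) {
        std::vector<std::thread> workers;
        for (Island& island : islands)
            workers.emplace_back(stepIsland, std::ref(island), dt);
        for (std::thread& w : workers) w.join();
    }

    int main() {
        std::vector<Island> islands(8, Island(100));
        stepSceneSerial(islands, 1.0f / 60.0f);
        stepSceneParallel(islands, 1.0f / 60.0f);
    }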
 
Rubbish. CPU PhysX is the most popular physics platform out there for newer games.
If you want your games to run at 3 FPS, maybe. PhysX is crippled for CPUs. There are much better, platform-independent physics engines for CPUs, like Havok. The flip side of the coin in "the way it's meant to be played" is "and we'll make sure it doesn't play on yours". You don't see Intel trying to shut out AMD on Havok processing, which is why it has been such popular middleware--it runs on everything. PhysX will never be widely implemented in games until Nvidia makes it run on hardware that isn't Nvidia's, instead of requiring an Nvidia-only configuration.
 
If you want your games to run at 3 FPS, maybe. PhysX is crippled for CPUs. There are much better, platform-independent physics engines for CPUs, like Havok. The flip side of the coin in "the way it's meant to be played" is "and we'll make sure it doesn't play on yours". You don't see Intel trying to shut out AMD on Havok processing, which is why it has been such popular middleware--it runs on everything. PhysX will never be widely implemented in games until Nvidia makes it run on hardware that isn't Nvidia's, instead of requiring an Nvidia-only configuration.

Havok is also proprietary (it's owned by Intel). Intel has shut Havok out of any GPU acceleration on AMD, Nvidia, or anything else - and yes, that was in the plans, as AMD was showing off GPU-accelerated Havok years and years ago.

Hence Havok doesn't run on a GPU at all, and it also doesn't run on some platforms that PhysX runs on (e.g. the iPhone). Havok's continued popularity mostly comes from game engines that were built in Havok's prime but are still used today (in particular the HL2 engine, which is still used for many new games).

As for PhysX being gimped on the CPU - well, devs use PhysX in their games, and they are a lot better informed than you are. If it were rubbish they'd use something else.
 
PhysX is the only API which runs on both CPUs and GPUs. It's also completely free. Bullet comes closest, being free as well, but it doesn't do GPU. Plans to have it OpenCL-accelerated haven't materialized yet, with the awful state of OpenCL support on AMD's side not helping matters. Havok is just bloody expensive, which scares all the indie developers away and makes the larger development houses reconsider.

In the end it means that PhysX really isn't a bad deal at all :)
 
Nvidia is always made out to be the monster when it comes to PhysX, but if you read about the gathering they held recently to show off CUDA, they talked about PhysX quite a bit during it, and it's got quite a future in front of it.

I think developers need time, and really it's up to them to find where it's most useful. "Planting" physics in video games has always been hard. It would be nice if more developers took full advantage of it, but let's face it, that would be quite a task... regardless, there are games on the way that will take better advantage of it.
 
I think developers need time, and really it's up to them to find where it's most useful. "Planting" physics in video games has always been hard. It would be nice if more developers took full advantage of it, but let's face it, that would be quite a task... regardless, there are games on the way that will take better advantage of it.

Don't you think the market for GPU-accelerated PhysX is too small for it to be of any value to developers (who earn their living off selling what they develop)? It kills the performance of mainstream and low-end hardware (most of the market) and is only available to the Nvidia part of the high-end market.
 
PhysX adoption has continued to be extremely slow.

It hasn't helped that Nvidia intentionally crippled the performance of CPU PhysX to make their GPU PhysX look more impressive. That has most developers shying away from it, as they don't want the gameplay experience to be unacceptable when using a non-Nvidia card.

EXACTLY. God...you know what? This is the first time I've ever felt anyone else has actually got this, no one ever seems to speak out on it!

Not only is CPU performance completely crippled, but they also deliberately prevent people with ATI cards from using Nvidia cards for PhysX.

Not only that, but Nvidia is incredibly dishonest in portraying the benefits. Sure, you need some kind of additional hardware acceleration for things like highly accurate cloth and liquid physics, but a lot of the videos displaying the difference between CPU and GPU show things like breakable objects and simple particle physics being left out of the CPU version for no good reason.

I wrote about dishonest PhysX on my blog a while back.
 
Sadly it's only cloth simulation that's being accelerated :( It still needs particles and soft bodies. Still, it's a step in the right direction :)

Cloth and particle acceleration. I'd say it's more than a step in the right direction, considering it's supported by both AMD and Nvidia. In this respect, it's already come further than PhysX. ;)
 
Cloth and particle acceleration. I'd say it's more than a step in the right direction, considering it's supported by both AMD and Nvidia. In this respect, it's already come further than PhysX. ;)

Oh sure, once AMD gets its OpenCL implementation fixed. So far that's the main thing holding OCL back. Well, that and the huge investment companies have in CUDA :)

Another issue is of course that Bullet is being used by almost no games. It's the #3 physics engine, and decidedly so. My company hasn't fully evaluated Bullet yet, but the chances of us changing from PhysX to Bullet within the next two years are slim to none.
 
Rubbish. CPU PhysX is the most popular physics platform out there for newer games; it's GPU PhysX adoption that is slow. This is almost completely down to the fact that most games are console games, and the consoles can't do GPU PhysX (and also only want it to run in a single thread).

Hence all the PC ports come through as single-threaded CPU PhysX games, unless Nvidia puts dev effort in to get some extra GPU effects added. No one is putting dev effort into multi-threaded CPU PhysX (which works fine but has to be coded that way), which is why it stays single-threaded.

The majority of the GPU bloatsx games are console ports.

When Nvidia puts its "dev effort" (the GPU bloatsx) into games, the performance hit is almost as big as your mom's ass, so most people just disable that rubbish to get the game running properly. That is where bloatsx fails.
 
Oh sure, once AMD gets its OpenCL implementation fixed. So far that's the main thing holding OCL back. Well, that and the huge investment companies have in CUDA :)

Bullet uses both DirectCompute from DX11 and OpenCL, so I don't think there is any hard dependency on OpenCL here. When you talk about the huge investment companies have in CUDA, do you mean the single game with a bokeh filter for water effects in CUDA, or are you talking about something unrelated to gaming and game physics?

Another issue is of course that Bullet is being used by almost no games. It's the #3 physics engine, and decidedly so. My company hasn't fully evaluated Bullet yet, but the chances of us changing from PhysX to Bullet within the next two years are slim to none.

I would assume that Bullet's GPU-accelerated physics doesn't apply to existing games, so it's kind of a pointless subject so far (though it would have been cool to get a Bullet physics patch for GTA IV!). GPU-accelerated PhysX isn't used in more than a few games, and only in games where the developers seem to have had some co-advertising campaign with Nvidia and where Nvidia has participated heavily in development.

GPU-accelerated PhysX is still nothing more than a FUD tool to scare people into believing that they're missing out on something crucial when they're not, and thus sell graphics cards.

Considering Bullet has broader hardware support, I believe it has a stronger chance of survival in the long run.
 
There is a physics section of the forum that this belongs in, you know :p

Anyways, GPU PhysX has a major flaw in that currently the CPU decides what needs to be drawn, and the GPU draws it. For GPU physics to be used, first the CPU makes a physics request to the GPU; the GPU does it, then sends the results back to the CPU; the CPU then has to use them to determine what the frame should look like, and then sends the render request back to the GPU. Problem? Yes! Instead of CPU->GPU to get one frame done, it is now CPU->GPU->CPU->GPU. Result? Massive performance hit.
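
To put rough numbers on that round trip, here's a toy frame-time model in C++. Every figure and function in it is invented purely for illustration (real costs depend on the game, the driver, and the bus), but it shows how the extra GPU->CPU readback stalls the frame:

Code:
    #include <cstdio>

    // Pretend per-frame costs in milliseconds (made-up numbers).
    const double physics_on_gpu_ms = 4.0;   // run the physics step on the GPU
    const double readback_ms       = 2.0;   // copy the results back to the CPU
    const double game_logic_ms     = 3.0;   // CPU decides what needs to be drawn
    const double render_on_gpu_ms  = 12.0;  // GPU draws the frame

    int main() {
        // Normal frame: CPU -> GPU, one trip.
        double plain = game_logic_ms + render_on_gpu_ms;

        // GPU gameplay physics: CPU -> GPU -> CPU -> GPU. The CPU sits idle
        // waiting for the readback before it can even start building the frame.
        double gpu_physics = physics_on_gpu_ms + readback_ms
                           + game_logic_ms + render_on_gpu_ms;

        std::printf("plain frame:       %.1f ms (~%.0f FPS)\n", plain, 1000.0 / plain);
        std::printf("GPU physics frame: %.1f ms (~%.0f FPS)\n", gpu_physics, 1000.0 / gpu_physics);
    }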
 
There is a physics section of the forum that this belongs in, you know :p

Anyways, GPU PhysX has a major flaw in that currently the CPU decides what needs to be drawn, and the GPU draws it. For GPU physics to be used, first the CPU makes a physics request to the GPU; the GPU does it, then sends the results back to the CPU; the CPU then has to use them to determine what the frame should look like. Problem? Yes! Instead of CPU->GPU to get one frame done, it is now CPU->GPU->CPU->GPU. Result? Massive performance hit.

Gabe Newell noted this about GPU/PPU based physics a long time ago in interviews, saying that making the CPU wait while the work is done on another piece of hardware wasn't speeding anything up.

So far only effects that cannot interact with the environment are quicker. As soon as you want to decide whether one of those chunks of rock flung out of an explosion has collided with something important, such as a player or some kind of trigger in the game, you have problems.

So far the GPU can only really make the effects prettier. The only reason the CPU isn't providing more detailed physics is that there are a lot of older single-core and slow dual-core CPUs still around; fast quad cores and above can handle a very large amount of complex CPU-based physics. See Ghostbusters, Crysis, or Red Faction: Guerrilla for some good examples of CPU-based physics.

PhysX on the CPU is very slow; for me it averages only about 30-40% CPU usage, and that includes the game code as well. Really pathetic.
 
The main thing standing in the way of PhysX is that it doesn't work on ATI hardware, which means anything more than superficial, afterthought-style PhysX wouldn't work.

I'm not convinced better CPU PhysX performance is the way to go either; then it would be crippled by being limited to what a CPU can manage. We just need GPU physics that works on both ATI and Nvidia and is accessible to the masses; then it'll become mainstream.
 
The main thing standing in the way of PhysX is that it doesn't work on ATI hardware, which means anything more than superficial, afterthought-style PhysX wouldn't work.

I'm not convinced better CPU PhysX performance is the way to go either; then it would be crippled by being limited to what a CPU can manage. We just need GPU physics that works on both ATI and Nvidia and is accessible to the masses; then it'll become mainstream.


That's not the problem; PhysX is crippled on the CPU because Nvidia intentionally crippled it. Even if ATI/AMD supported it, it would still be crippled because of how GPU physics works (read posts #17 and #18). Gameplay physics has to be done on the CPU, and modern CPUs are plenty fast enough to do it.
 
That's not the problem; PhysX is crippled on the CPU because Nvidia intentionally crippled it. Even if ATI/AMD supported it, it would still be crippled because of how GPU physics works (read posts #17 and #18). Gameplay physics has to be done on the CPU, and modern CPUs are plenty fast enough to do it.

Yeah, I know CPU PhysX is crippled, but the fact is that even if it weren't, physics processing is better suited to a GPU. So even if you made it work as well as it could on CPUs and Nvidia GPUs, whilst still excluding ATI GPUs, you'd be limiting the potential of "PhysX" to being nothing more than any other engine that runs on the CPU.

The special thing about PhysX is that it's the closest thing to mainstream GPU physics, which has more potential than CPU physics... what's holding it back is that it doesn't work with ATI.
 
Yeah, I know CPU PhysX is crippled, but the fact is that even if it weren't, physics processing is better suited to a GPU. So even if you made it work as well as it could on CPUs and Nvidia GPUs, whilst still excluding ATI GPUs, you'd be limiting the potential of "PhysX" to being nothing more than any other engine that runs on the CPU.

The special thing about PhysX is that it's the closest thing to mainstream GPU physics, which has more potential than CPU physics... what's holding it back is that it doesn't work with ATI.

I'm also very much hoping for some sort of mainstream physics option. Havok is too expensive, and I've only just heard about Bullet. The reason there are games out there that use Nvidia's PhysX isn't that it is superior, but rather that Nvidia invested heavily to advertise PhysX in a few games by sending their own programmers to help make those games PhysX-compatible. However, even when those programmers, who probably know more about PhysX and how to program it efficiently than any developer could hope to, implemented it, the performance hit was a massive 40% in fps. This screams out that GPU gameplay physics is unsound as an idea; it adds too much communication latency between the GPU and CPU.

This is unacceptable in any developer's eyes.

I also don't have any hope for GPU OpenCL physics, for the same reason; the best GPU physics can do is make cloth flap in the wind.

With the upcoming Sandy Bridge/Bulldozer, there will be more cores than ever, and a multi-core-optimized physics engine will win this fight. Maybe not this year, but in a few years, when everyone has 4, 8, 16, or 32 CPU cores. High-end games will always use discrete cards; it costs too much to integrate GPU cores when a minority of people use them, and GPU physics in future games will always be hampered by 3x the normal CPU-GPU communication latency plus many, many wasted cycles.

edit: I think OpenCL and PhysX on the CPU could both do the job; they just have to be optimized properly (and deliberately) to take advantage of extra cores. Oh, and also be reasonably priced, i.e. not like Havok.
 
I'm also very much hoping for some sort of mainstream physics option. Havok is too expensive, and I've only just heard about Bullet. The reason there are games out there that use Nvidia's PhysX isn't that it is superior, but rather that Nvidia invested heavily to advertise PhysX in a few games by sending their own programmers to help make those games PhysX-compatible. However, even when those programmers, who probably know more about PhysX and how to program it efficiently than any developer could hope to, implemented it, the performance hit was a massive 40% in fps. This screams out that GPU gameplay physics is unsound as an idea; it adds too much communication latency between the GPU and CPU.

Perhaps. From what I understand, the GPU is far better suited to physics calculations than the CPU. I guess the problems are that you are then asking the GPU to do two tasks at once, plus the extra time to communicate between the GPU and CPU if the GPU doesn't have enough RAM to deal with it.

I know it takes my computer a day to solve around 1 second's worth of real-life fluid flow; it'll be nice when we can solve fluid flows in real time ;) (probably another 100 years off, haha).
 
Here's the thing about physics: it's VERY multicore-friendly. An i7 920 has roughly 1/20th the GFLOPS of a 480, which means that for intense physics, it's simply better to do it on your GPU.
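
That 1/20th figure is roughly in line with a back-of-the-envelope peak-throughput estimate. The sketch below uses the published clock and core counts for the two parts, but the FLOPs-per-cycle values are idealized assumptions, so treat the result as an order-of-magnitude check rather than a benchmark:

Code:
    #include <cstdio>

    int main() {
        // Core i7 920: 4 cores @ 2.66 GHz; assume 8 single-precision
        // FLOPs per cycle per core (SSE multiply + add).
        double cpu_gflops = 4 * 2.66 * 8;        // ~85 GFLOPS peak

        // GTX 480: 480 CUDA cores @ 1.401 GHz shader clock; 2 FLOPs per
        // cycle per core (fused multiply-add).
        double gpu_gflops = 480 * 1.401 * 2;     // ~1345 GFLOPS peak

        std::printf("CPU ~%.0f GFLOPS, GPU ~%.0f GFLOPS, ratio ~1/%.0f\n",
                    cpu_gflops, gpu_gflops, gpu_gflops / cpu_gflops);
    }

On paper that comes out to about 1/16; since a CPU rarely sustains its SSE peak in practice, 1/20th is a fair shorthand.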

What this translates to in the real world is: how much physics do you want? If you want just a little bit, then the CPU is fine; most quad cores have an extra core or two lying around unused. However, if you want to do the kind of stuff we saw in Batman, then the CPU simply can't do enough calculations. It's not a matter of optimization; it's a matter of raw power.

Now, with most games being bottlenecked by the GPU already, people really start to whine when they hear someone wants to use their GPU for something besides graphics. Well, I've got news for you: fog and debris flying around IS graphics, so guess what, it's going to hurt your frame rates just like AA or anything else. Remember, this is something that by our above definition requires more processing power than the CPU can supply, so it's only logical for it to consume a large portion of GPU resources.

The other problem is when they want to include physics in "interactive environments". Then you can't really turn it off like you can turn off shadows or AA. This is what stops full-scale adoption, because Nvidia (rightly or wrongly) has denied the ability for PhysX to run on ATI/AMD cards.

So until something changes, it will remain something that is better done on the GPU but is only extra eye candy.
 
Perhaps. From what I understand, the GPU is far better suited to physics calculations than the CPU. I guess the problems are that you are then asking the GPU to do two tasks at once, plus the extra time to communicate between the GPU and CPU if the GPU doesn't have enough RAM to deal with it.

I know it takes my computer a day to solve around 1 second's worth of real-life fluid flow; it'll be nice when we can solve fluid flows in real time ;) (probably another 100 years off, haha).

Oh yeah, for non-real-time physics it's the GPU all the way! Massively parallel stuff :D

Games, unfortunately, have to take FPS (the time it takes to calculate and render a frame) into consideration. It's not as if developers don't have enough to do anyway, but to add one set of physics code for gameplay-influencing CPU physics and another set for eye-candy GPU physics... I forgive them for not even trying; I don't want to pay $100 for games :)
 
Don't you think the market for GPU-accelerated PhysX is too small for it to be of any value to developers (who earn their living off selling what they develop)? It kills the performance of mainstream and low-end hardware (most of the market) and is only available to the Nvidia part of the high-end market.

I think it's small but growing, and it's not something that should be outright ignored. Way more people have physics processors than they did when the PPU first came out as a standalone card, thanks to Nvidia buying out PhysX and porting it to the GPU.

And yes, it is a performance beast, but so is graphics processing. If developers are creative enough in how they implement it in games, then it'll be something that garners more interest.

What is going to be required is a killer app of some kind, some game that makes use of it in a way that impacts the gameplay in a meaningful manner.
 
I think it's small but growing, and it's not something that should be outright ignored. Way more people have physics processors than they did when the PPU first came out as a standalone card, thanks to Nvidia buying out PhysX and porting it to the GPU.

And yes, it is a performance beast, but so is graphics processing. If developers are creative enough in how they implement it in games, then it'll be something that garners more interest.

What is going to be required is a killer app of some kind, some game that makes use of it in a way that impacts the gameplay in a meaningful manner.

Well, something else to consider is how much GPUs are catching up to and passing games. It wasn't that long ago that 1600x1200 was considered "high resolution" and running AA at that resolution was challenging. Now most people are running at something near 1080p with AA, all for around $200. If the choice is between PhysX + 4x AA and no PhysX + 16x AA, it becomes much easier to sacrifice the graphics power than if the choice is a lower detail level and no AA with PhysX vs. higher graphics and AA without.

When was the last time you saw a [H]ard article reviewing a card at anything less than 1920x1200? How often did you see cards able to run that resolution just 4 years ago? Now we're seeing [H]ard articles at 5760x1200.
 
Well, something else to consider is how much GPUs are catching up to and passing games. It wasn't that long ago that 1600x1200 was considered "high resolution" and running AA at that resolution was challenging. Now most people are running at something near 1080p with AA, all for around $200. If the choice is between PhysX + 4x AA and no PhysX + 16x AA, it becomes much easier to sacrifice the graphics power than if the choice is a lower detail level and no AA with PhysX vs. higher graphics and AA without.

When was the last time you saw a [H]ard article reviewing a card at anything less than 1920x1200? How often did you see cards able to run that resolution just 4 years ago? Now we're seeing [H]ard articles at 5760x1200.

I don't disagree, but graphics are reaching an apex where you can get all the eye candy and high resolution without breaking the bank, as you pointed out. However, where do you go from there? Monitor resolution starts to become a limiting factor, so they added multi-monitor gaming. How many people can afford 3 monitors and have the proper space to use them? Not that many.

On this forum there are tons of multi-monitor users, but this is an enthusiast forum, so that's expected. The real-world figure for triple-monitor gamers is likely a tiny fraction of the user base.

However, many people use Nvidia cards, and 8800-series cards on up support PhysX quite well. As I said before, PhysX needs a killer app and needs to be applied in a way that makes you want to have it, as a feature that has become useful and practical instead of just making clothing move more naturally. They have the installed base of users that can take advantage of it, but until it's done right it'll just hang in limbo like it's currently doing.
 
I don't disagree, but graphics are reaching an apex where you can get all the eye candy and high resolution without breaking the bank, as you pointed out. However, where do you go from there? Monitor resolution starts to become a limiting factor, so they added multi-monitor gaming. How many people can afford 3 monitors and have the proper space to use them? Not that many.

On this forum there are tons of multi-monitor users, but this is an enthusiast forum, so that's expected. The real-world figure for triple-monitor gamers is likely a tiny fraction of the user base.

However, many people use Nvidia cards, and 8800-series cards on up support PhysX quite well. As I said before, PhysX needs a killer app and needs to be applied in a way that makes you want to have it, as a feature that has become useful and practical instead of just making clothing move more naturally. They have the installed base of users that can take advantage of it, but until it's done right it'll just hang in limbo like it's currently doing.

I don't think multi-monitor is going to be anywhere near mainstream. But as graphics get cheaper and cheaper, I think we'll have a big enough install base of people who have both "PhysX capable" AND "excess GPU power". We've got the first one, but we need the second. When that happens (say, 3 years?), we'll have a developer who will spend the money to take advantage of that market and give us something better than flowing hair or flags. Fully destructible environments in the next Battlefield game or something, like completely destructible ground and everything; reshaping landscapes with airstrikes. Yeah, it'll be a killer "must have" game that'll push it over the edge.

Or consoles will do it for us. A next-gen console comes out with something like an Nvidia 580, we'll see them pushing PhysX on the console market, and we'll see it trickle back to the PC market in the form of ports.
 
I don't think multi-monitor is going to be anywhere near mainstream. But as graphics get cheaper and cheaper, I think we'll have a big enough install base of people who have both "PhysX capable" AND "excess GPU power". We've got the first one, but we need the second. When that happens (say, 3 years?), we'll have a developer who will spend the money to take advantage of that market and give us something better than flowing hair or flags. Fully destructible environments in the next Battlefield game or something, like completely destructible ground and everything; reshaping landscapes with airstrikes. Yeah, it'll be a killer "must have" game that'll push it over the edge.

Or consoles will do it for us. A next-gen console comes out with something like an Nvidia 580, we'll see them pushing PhysX on the console market, and we'll see it trickle back to the PC market in the form of ports.

Couldn't agree more, dude. It might take console progression to make developers see what kinds of games can be creatively built around PhysX in general. However, there won't be any new consoles for a while; none of the game companies see a huge demand for them. I recently read a poll from a gaming site (can't remember which) where people were asked if they were ready for another generation of consoles, and the numbers were something like 82 percent NO and 18 percent YES. I know Microsoft is working on the next Xbox, but right now they are having huge sales and I just don't see them announcing a new system anytime soon.

Point being, I hope something bucks the trend and makes it a more worthwhile feature.
 
I need a new motherboard, and in order to future-proof (I am eligible for EVGA Step-Up, so I may have a 470 or 480 in the near future), I am curious whether I should be interested in PhysX, SLI, or neither.

My options would be a 750a SLI board for around $60, or an MSI 790X for the same.

In the first case, SLI with a 470 or 480 would be possible (with a 1000W PSU), but in the second case I could never run SLI, only a dedicated PhysX card.

I am more interested in the direction that technology is moving than in help choosing between these two. So, is having a dedicated card for physics going to be the future, or do you see things heading back towards physics being handled by the GPU? Also, if having a dedicated physics card is the future, does ATI plan to have anything to rival PhysX?

Um, why that motherboard? Not what I would pick up. Nvidia does not provide good AMD chipset support anymore, all other issues aside. Just get a good AMD one if you want to keep your CPU. You will not be able to SLI, but you might save yourself a lot of other grief.

As far as PhysX goes, it's nice in a handful of games, nothing more. Right now it's just not a make-or-break deal on anything. It's a great marketing tool, but really, it doesn't do anything that developers can't achieve in other ways, hence the slow adoption.
 
PhysX adoption has continued to be extremely slow.

It's funny how people still can't make a distinction between "PhysX" and "GPU PhysX". The former is doing just fine and isn't slower than any other CPU-based library, such as Havok. PhysX is slow on the CPU when you try to run the advanced stuff meant for GPUs, as Havok would be if you tried to run those things on it. For the run-of-the-mill stuff that we're accustomed to seeing, there's no problem as far as I know... unless someone has evidence to the contrary, which would be interesting to see.

By default, every UE3 game on PC and console hardware is a PhysX game; I'm not sure how that could be considered slow adoption.
 
If you want your games to run at 3 FPS, maybe. PhysX is crippled for CPUs. There are much better, platform-independent physics engines for CPUs, like Havok. The flip side of the coin in "the way it's meant to be played" is "and we'll make sure it doesn't play on yours". You don't see Intel trying to shut out AMD on Havok processing, which is why it has been such popular middleware--it runs on everything. PhysX will never be widely implemented in games until Nvidia makes it run on hardware that isn't Nvidia's, instead of requiring an Nvidia-only configuration.

Not true; the way Nvidia does physx is different. It makes it seem as if CPU physx is crappy, which is not entirely true. I have seen games where CPU physx is just as impressive as Nvidia's, but it takes effort to do both. Example: Crysis. It doesn't use Nvidia PhysX, yet the physx is accelerated by the CPU and it looks pretty damn impressive. Yes, hardware-based physx is going to have its advantages; there are some effects that might be too taxing on the CPU, such as the effects done in Batman, but as long as shit breaks apart nicely I am happy. I don't need smoke that reacts to me when I walk through it.

Nvidia will drag its PhysX along with some exclusive titles, but more games will always use Havok, because not everyone has a dedicated physx card or a GTX 480.
 
Don't you think the market for GPU-accelerated PhysX is too small for it to be of any value to developers (who earn their living off selling what they develop)? It kills the performance of mainstream and low-end hardware (most of the market) and is only available to the Nvidia part of the high-end market.

You could say the exact same thing about DirectX 11.
 
Not true; the way Nvidia does physx is different. It makes it seem as if CPU physx is crappy, which is not entirely true. I have seen games where CPU physx is just as impressive as Nvidia's, but it takes effort to do both. Example: Crysis. It doesn't use Nvidia PhysX, yet the physx is accelerated by the CPU and it looks pretty damn impressive. Yes, hardware-based physx is going to have its advantages; there are some effects that might be too taxing on the CPU, such as the effects done in Batman, but as long as shit breaks apart nicely I am happy. I don't need smoke that reacts to me when I walk through it.

Nvidia will drag its PhysX along with some exclusive titles, but more games will always use Havok, because not everyone has a dedicated physx card or a GTX 480.

PhysX is an Nvidia library and technology.

Physics is what it is simulating.

Two different words, people.
 
Um, why that motherboard? Not what I would pick up. Nvidia does not provide good AMD chipset support anymore, all other issues aside. Just get a good AMD one if you want to keep your CPU. You will not be able to SLI, but you might save yourself a lot of other grief.
Which board are you talking about not getting?

Overall, I am still confused about my decision :confused: Lol, sorry...

I'm wondering if I need a second PCIe slot on my motherboard at all, whether ATI or Nvidia. Assuming a separate GPU for physics is the future of this technology, I'm sure ATI would come out with some cards for GPU physics...

If physics technology is going to be CPU-based in the future, I can just look at single-PCIe motherboards, or possibly an Nvidia SLI board.
 
Which board are you talking about not getting?

Overall, I am still confused about my decision :confused: Lol, sorry...

I'm wondering if I need a second PCIe slot on my motherboard at all, whether ATI or Nvidia. Assuming a separate GPU for physics is the future of this technology, I'm sure ATI would come out with some cards for GPU physics...

If physics technology is going to be CPU-based in the future, I can just look at single-PCIe motherboards, or possibly an Nvidia SLI board.

Any board; only Intel is really supporting SLI now, so if you want to run green cards you're either going to have to use that 750a or the like (obsolete boards) or go to an Intel platform. If you're just looking for a good replacement motherboard for your current CPU, then look at a 790/890GX, or go better and get a 790/890FX board. You will not be able to run SLI on these without hacked drivers. You will be able to run a dedicated PhysX card if you wish; I would not bother unless your favorite game in the world uses it.
 
Not true, the way nvidia does physx is different. It makes it seem as if cpu physx is crappy. which is not enitrely true. I have seen games where cpu physx is just as impressive as nvidia, but it takes effort to do both. Example Crysis, it doesn't use Nvidia Physx, yet the physx is accelerated by cpu and it looks pretty damn impressive. Yes hardware based physx is going to have it's advantages. there are some effects that might be too taxin on cpu such as the effects done in batman, but to as long as shit breaks apart nice I am happy, don't need no smoke flying that reacts to me when I walk through it.

Nvidia will drag along its Physx, with some exclusive titles but more games will always use havok because not everyone has a dedicated physx card or a gtx 480.

I'm sorry, but Crysis physics is shit. The trees jiggle in an unrealistic, predetermined fashion; buildings break apart into predefined chunks; and there's nothing dynamic about fluid flows, mists, smoke, etc. The only cool things about Crysis physics are that some (but not all) of the trees take progressive damage, and the way trees bend in the wake of a rocket when you fire one past them. I've seen videos of people using boxes, barrels, and tornadoes to create some pretty cool effects in Crysis, but those all run at terrible framerates (far below playable).

BFBC2 is a step better in the destruction department, but it has completely stationary foliage, its destruction, while more progressive, is still predefined, and rubble just vanishes.

I haven't seen a game that uses CPU physics in any sort of impressive way; nothing beyond simple rigid-body physics and simplistic destruction.
 