Does 4870x2 have onboard Physics?

mr_zen256

2[H]4U
Joined
Dec 29, 2005
Messages
2,612
Well, I realise nVidia owns PhysX, but does ATI have any specialised physics features like nVidia does? I'm trying to figure out if I should get the GTX 280 or a 4870x2, and the only thing keeping the 280 in the game is its PhysX support. Aside from that, the features of the 4870x2 are much more enticing, e.g. 2GB of GDDR5 RAM, DirectX 10.1 support, and raw performance.
 
I hear ATI has Havok physics, which is going to be used in Starcraft 2 and Diablo 3.
 
I hear ATI has Havok physics, which is going to be used in Starcraft 2 and Diablo 3.

Havok physics is offloaded to the CPU, so it doesn't matter whether you have an ATI or nVidia GPU. AMD has publicly stated that it presently sees no need for physics to be processed by the GPU. In any event, I recall reading that the PhysX API is being ported to work on ATI GPUs as well.
 
nvidia is helping Eran Badit from NGOHQ.com to port it over. http://www.tgdaily.com/html_tmp/content-view-38283-135.html

It's going to be awesome when they have a release ready. :D

Wow, that's awesome! So it looks like all AMD and ATI have to do is clear it for development, since nVidia is essentially opening the doors for the technology to be implemented on other GPUs. I guess that seals the deal on a 4870x2 then. Sorry nVidia :p
 
Actually, AMD is not supporting development, and is actually opposed to it, AFAIK.
 
Actually, AMD is not supporting development, and is actually opposed to it, AFAIK.

That seems really counterproductive. I'm not sure I understand why they would oppose it, as it would only strengthen the feature set of their cards. Maybe it's a pride thing and they don't want anything from team green. Man, I hate politics.
 
That seems really counterproductive. I'm not sure I understand why they would oppose it.

Because it's not possible (at least on R6x0 and R5x0). nVidia is not actually "helping"; they're just going along with it for good PR. If anyone thinks NGOHQ can actually bring PhysX support to AMD cards, I've got some land on the moon I want to sell you.
 
Because it's not possible (at least on R6x0 and R5x0). nVidia is not actually "helping"; they're just going along with it for good PR. If anyone thinks NGOHQ can actually bring PhysX support to AMD cards, I've got some land on the moon I want to sell you.
Why do you think it's not possible? Any (working) SM4.0 hardware is capable of running the shaders that CUDA compiles code to. All that's left is the driver run-time (CUDA support) to interface with GPU PhysX. Both are well within possibility, especially with support from nVidia.

Anyways, Eran Badit has run GPU PhysX on a RV670 card here: http://www.ngohq.com/news/14219-physx-gpu-acceleration-radeon-hd-3850-a.html and updated here: http://www.ngohq.com/news/14254-physx-gpu-acceleration-radeon-update.html and here: http://www.ngohq.com/news/14254-physx-gpu-acceleration-radeon-update-8.html (same thread, but shows what hardware is supported or not supported so far).

I can't really hold my breath longer than a few minutes, so don't worry about me. ;)
 
Why do you think it's not possible?

The whole point (one of the biggest benefits) of CUDA is shared memory, which R6x0/R5x0 lack (remember the ring bus?). Somehow I don't think Eran has the ability to emulate this functionality.

Anyways, Eran Badit has run GPU PhysX on a RV670 card here

Well if he says it works it must be true. I know a CUDA developer (who obviously works for Nvidia) and he has already called shens on the project. Sorry.
 
The whole point (one of the biggest benefits) of CUDA is shared memory, which R6x0/R5x0 lack (remember the ring bus?). Somehow I don't think Eran has the ability to emulate this functionality.



Well if he says it works it must be true. I know a CUDA developer (who obviously works for Nvidia) and he has already called shens on the project. Sorry.

My bubble has officially been burst... :(
 
That seems really counterproductive. I'm not sure I understand why they would oppose it, as it would only strengthen the feature set of their cards. Maybe it's a pride thing and they don't want anything from team green. Man, I hate politics.
If those rumors are true and ATI cards can run PhysX faster than nVidia's, you'd better believe they would allow it. What better selling point could you have than that? It's not like they have a stake in the physics war the way Intel does.
 
The whole point (one of the biggest benefits) of CUDA is shared memory, which R6x0/R5x0 lack (remember the ring bus?). Somehow I don't think Eran has the ability to emulate this functionality.
That does not make any sense, and shared memory is not "one of the biggest benefits" of CUDA. Please download and read the introduction to CUDA programming, because you're way off-base.

Your anonymous friend who may or may not work for nvidia should probably explain to you why he thinks it's shens. My first reaction (as a novice CUDA programmer) was shens, but that was before any further details were available. My second reaction was that he had possibly captured the shaders and/or discarded the output, making it just a Vantage hack to get back at nvidia's "cheating." I'm still leaning towards that since no screenshots of the CPU test 2 have been shown using Radeon GPU PhysX, only the overall score, and it was supposed to be available in some form 2 weekends ago.

It does look like it *could* be legitimate to me, but I was interested in how Eran added CUDA support to the ATI drivers (hacking nvidia's nvcuda.dll?) and about a couple of other things. I won't be totally convinced until he starts distributing it as initially promised over 2 weeks ago.
 
If those rumors are true and ATI cards can run PhysX faster than Nvidia,
Where did you hear that rumor? Regeneration doesn't even claim to have support for the X2 cards (only in single GPU mode) and the HD 4800 series isn't supported either. Anyways, the output of the shaders in GPU PhysX will not be tuned for ATI's cards so it's pretty unlikely that ATI would run those faster.
 
shared memory is not "one of the biggest benefits" of CUDA.

Sure it's not.

Advantages said:
* It uses the standard C language, with some simple extensions.
* Scattered writes – code can write to arbitrary addresses in memory.
* Shared memory – CUDA exposes a fast shared memory region (16KB in size) that can be shared amongst threads. This can be used as a user-managed cache, enabling higher bandwidth than is possible using texture lookups.
* Faster downloads and readbacks to and from the GPU
* Full support for integer and bitwise operations
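The shared-memory bullet is the one being argued over, so here's what it buys you. A CUDA kernel stages a small tile of data on-chip, reuses it many times, then goes back to device memory. As a hedged illustration (this is not CUDA, just the same tiling idea reproduced on the CPU in Python; the tile size is invented for the example), the pattern looks like this:

```python
import numpy as np

TILE = 16  # stands in for the small on-chip staging buffer

def tiled_matmul(a, b):
    """Block-wise matmul over square matrices whose size is a
    multiple of TILE. Each (TILE x TILE) tile of `a` and `b` is
    loaded once and reused TILE times, mimicking how a CUDA kernel
    stages tiles in shared memory to cut global-memory traffic."""
    n = a.shape[0]
    c = np.zeros((n, n), dtype=a.dtype)
    for i in range(0, n, TILE):
        for j in range(0, n, TILE):
            acc = np.zeros((TILE, TILE), dtype=a.dtype)
            for k in range(0, n, TILE):
                a_tile = a[i:i + TILE, k:k + TILE]  # "load into shared memory"
                b_tile = b[k:k + TILE, j:j + TILE]
                acc += a_tile @ b_tile              # many reuses per load
            c[i:i + TILE, j:j + TILE] = acc
    return c
```

Each tile is fetched once and reused TILE times; that traffic saving is why a 16KB user-managed cache matters so much more than the raw size suggests.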


And if you want to read up on it.


Your anonymous friend who may or may not work for nvidia should probably explain to you why he thinks it's shens.

Ask him yourself.
 
The ring bus has nothing to do with the local memory in each cluster; that memory is accessed differently when used directly. The R600 series also has local memory in each SPU cluster, but only 8KB in size. CUDA "gives" control over that memory through shader instructions. It could cause problems for ATI cards if the shader programs use more than 8KB of that space for data.
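That ">8KB" concern lends itself to a quick back-of-the-envelope check. Purely as an illustration (the tile sizes below are invented; only the 16KB and 8KB figures come from the posts above), here's the arithmetic for a kernel that stages two square float tiles on-chip:

```python
# On-chip memory budgets quoted in the thread:
G80_SHARED = 16 * 1024  # 16KB shared memory per multiprocessor (G80)
R600_LOCAL = 8 * 1024   # 8KB local memory per cluster (R600)

def tile_footprint_bytes(tile, n_tiles=2, elem_size=4):
    """Bytes needed to stage n_tiles square float tiles on-chip."""
    return n_tiles * tile * tile * elem_size

for tile in (16, 32, 40):
    needed = tile_footprint_bytes(tile)
    print(f"{tile}x{tile} tiles: {needed} B "
          f"(fits G80: {needed <= G80_SHARED}, "
          f"fits R600: {needed <= R600_LOCAL})")
```

A kernel sized toward G80's budget (the 40x40 case needs 12,800 bytes) still fits in 16KB but overflows an 8KB local store, which is exactly the mismatch being described.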

I can't find the thread where tmurray "called shens" on Radeon PhysX. He only has 40 posts, but I couldn't find where he even mentioned it. Care to link where you read it?

edit: I'm still not seeing where you came to this conclusion: "The whole point (one of the biggest benefits) of CUDA is shared memory". Care to elaborate?
 
I can't find the thread where tmurray "called shens" on Radeon PhysX. He only has 40 posts, but I couldn't find where he even mentioned it. Care to link where you read it?

In a private IRC channel, but if you don't believe me (as I expect you don't) ask him what he thinks about PhysX on AMD hardware with NGOHQ and see what he answers you. ;) Maybe you can ask him about shared memory on R6x0 too.
 
Like I said, ask him, shouldn't be a problem right?
Yeah, I'll get right on that.

tmurray, can you confirm that "this d00d named M0of45a~~++ sez you sez that Rad30n PhysX0rz is totally shens. AMIRITE?!?!?! LOLZ"

You're the one who made the claim. Back it up or withdraw it.
 
tmurray, can you confirm that "this d00d named M0of45a~~++ sez you sez that Rad30n PhysX0rz is totally shens. AMIRITE?!?!?! LOLZ"

"Hey Tim, I was wondering your thoughts about NGOHQ's project on porting PhysX to AMD. Even if the project is fake, do you think it's at least feasible on the R7x0 and R6x0 generation of cards?"

Yeah you're right, that was hard...

You're the one who made the claim.

I believe you are the one who claims that Eran can port CUDA/PhysX to AMD cards.
 
Hmmm, I'll be very interested to see how this pans out. If PhysX is indeed enabled on the R7x0 GPUs, ATI will be getting my money for sure.
 
I believe you are the one who claims that Eran can port CUDA/PhysX to AMD cards.
LOL

And I linked to the places where he claimed that he ran it, the updated status of what hardware works and doesn't work, and where he also claims that nvidia is assisting him. See how that works?

Your turn.
 
And I linked to the places where he claimed that he ran it, the updated status of what hardware works and doesn't work, and where he also claims that nvidia is assisting him. See how that works?

Oh because Eran has such a great track record doesn't he?
 
Oh because Eran has such a great track record doesn't he?
Not really one way or the other as far as I know. But I had a good time ridiculing him when he came crying to [H] after nvidia cut him off for distributing the SLI hack.

But an ad hom attack doesn't really address his claims, now does it?
 
That seems really counterproductive. I'm not sure I understand why they would oppose it, as it would only strengthen the feature set of their cards. Maybe it's a pride thing and they don't want anything from team green. Man, I hate politics.

They're opposing it because in the end, PhysX is an nVidia technology and it would be beneficial to ATI for it not to gain dominance. Imagine if ATI supports it on their cards and all the new games that are coming out start using PhysX exclusively, and a few generations down the line nVidia decides to stop allowing ATI to perform hardware PhysX acceleration by adding some proprietary components to their GPUs or performing a driver check to disallow ATI cards from running it. I imagine ATI thinks something like this will happen and that's why they're reticent about allowing PhysX on their cards.
 
It makes no sense for AMD to support PhysX on their cards, because it's an nVidia product now, since nVidia bought up Ageia...

If nVidia wanted to, they could let them port it to the ATI cards, then ship botched drivers/software for it next gen, after the project is fully going...

But that's just how I see it. -.-
 
Ageia may have had some potential if they weren't bought up by nVidia. Now there will never be a true PPU. :(

Err... Ageia was on their own for what, two years with the PPU? It got absolutely nowhere. nVidia acquiring them was the best thing to happen to them.
 
The RV770 is a very capable chip in terms of instruction sets. I would be surprised if CUDA couldn't run on the RV770, but AMD is backing the Havok camp, so that's what will come first, at least officially.
 
Stop arguing and tell me what games you want to play that support PhysX. Then tell me why only a few developers adopted PhysX before, and what is going to make this change now.
 
That seems really counterproductive. I'm not sure I understand why they would oppose it, as it would only strengthen the feature set of their cards. Maybe it's a pride thing and they don't want anything from team green. Man, I hate politics.

Simple, really: game developers will only use a technology like physics on the GPU if all gamers can run it. Imagine if something like Doom 4 were released and required PhysX done on the GPU, and only nVidia cards supported that; it would alienate all the AMD owners. id would never release an AAA game like something from the Doom series while alienating a huge chunk of their user base; it means much less profit for them.

nVidia's goal is to get lots of people using PhysX and then make a boatload of cash out of it. All AMD has to do is not officially support PhysX, and nVidia is going to find it practically impossible to sell the engine to any developers, at least for the purpose of GPU physics.

Ageia may have had some potential if they weren't bought up by nVidia. Now there will never be a true PPU.

Huh?

We've had "true" PPUs for a while now; they were what Ageia originally tried to promote, but the business model was too hard to sell to developers/users. I'm fairly sure you can still buy PPUs, and they outperform the GPUs at high settings in games like UT3.
 
Simple, really: game developers will only use a technology like physics on the GPU if all gamers can run it. Imagine if something like Doom 4 were released and required PhysX done on the GPU, and only nVidia cards supported that; it would alienate all the AMD owners. id would never release an AAA game like something from the Doom series while alienating a huge chunk of their user base; it means much less profit for them.

Well... there is always the option of not making it a REQUIREMENT, much the same way it is with the handful of games that support the PPU, which is most likely how it will turn out. Let's not forget, not all nVidia customers have 8-series or higher cards either, so you can be pretty sure no AAA titles are going to have that kind of requirement.
 
Huh?

We've had "true" PPUs for a while now; they were what Ageia originally tried to promote, but the business model was too hard to sell to developers/users. I'm fairly sure you can still buy PPUs, and they outperform the GPUs at high settings in games like UT3.

Frostex, it seems every time I run into you in a thread I have to disagree with you :cool:

No, we don't. Ageia's 128MB PPU could have been just like the Voodoo 2MB. It would have been the first step into making video games more like reality through something other than audio and video, and would have marked the stepping stone into further physics development. In much the same way the graphics market evolved from simple pixel mapping into dynamic polygons, lighting effects, dynamic texture rendering, etc., the PPU could have grown from simple collision detection to force-vector detection and multiplication, pressure calculations, momentum maps and energy equilibria. Instead it was a one-time deal, its architects bought and merged into nVidia, who seem to have no intention of doing anything with it. I'd wager the designers behind the PPU were true visionaries, in that they really did dream of all the things I mentioned above (just like I'm sure Jen-Hsun dreamed of the capabilities of today's GTX 280s).

The PPU doesn't process physics; it makes sure things don't bump together. nVidia's PhysX through CUDA doesn't process physics; it does "geometry". Nothing does physics today. Physics isn't inherently an obvious job for the GPU or the CPU; it's just a task, one which so far has infant APIs, and so far everyone who's said they can do it has immensely disappointed me.
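The distinction drawn here, detection versus actual dynamics, can be made concrete with a toy sketch (hedged: a made-up 1-D example, not how any PPU or GPU engine actually implements it):

```python
def spheres_overlap(x1, r1, x2, r2):
    """Collision *detection*: pure geometry, no forces involved --
    the "make sure things don't bump together" part."""
    return abs(x2 - x1) < r1 + r2

def elastic_collision(m1, v1, m2, v2):
    """Collision *response*: a 1-D elastic impulse exchange that
    conserves momentum and kinetic energy -- the force/momentum/
    energy side the post says nothing ships today."""
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

# Two unit-radius spheres closing on each other:
if spheres_overlap(0.0, 1.0, 1.5, 1.0):
    v1, v2 = elastic_collision(2.0, 1.0, 1.0, -1.0)
```

Detection alone answers a yes/no geometric question; the response step is where conserved quantities live (here total momentum is 2*1 + 1*(-1) = 1 both before and after the exchange).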
 
Where did you hear that rumor? Regeneration doesn't even claim to have support for the X2 cards (only in single GPU mode) and the HD 4800 series isn't supported either. Anyways, the output of the shaders in GPU PhysX will not be tuned for ATI's cards so it's pretty unlikely that ATI would run those faster.
I thought I had read that somewhere, but after double-checking it I guess I was mistaken. You are correct about the 4800 series.

But I still think AMD/ATI will at least consider it. It may be in their interest to support both standards and let nVidia and Intel fight amongst themselves. That's more what I was getting at.
 
Stop arguing and tell me what games do you want to play that support PhysX.
Sadly, almost nothing. The PhysX mod levels for UT3 are pretty nice, but those will get old. It does work very well, even on a single 8800GT.

The hope, for me at least, is that some of the dozen+ PhysX games coming out by the end of the year will be interesting and support hardware PhysX. The best thing about it is that PhysX acceleration on my nvidia cards will not cost a single extra cent, and if it can also run on my ATI cards, that's even better.

Then tell me why only a few developers adopted PhysX before and what is going to make this change now?
Here's a slightly outdated list of games that support hardware PhysX: http://www.hardforum.com/showthread.php?t=1141844

What's changed? I've seen estimates that Ageia only sold around 100,000 PhysX PPU cards. The installed base of G80 and G92/G94 cards is many, many times higher, and nVidia has a much larger developer-relations operation than Ageia had.

The problem of course is that it's still a small segment overall (the Steam survey shows ~9.4% share overall for the 8800 series). Even adding in ATI HD 3800 cards doesn't change it much (~1% share overall for the HD3850 + HD3870 in the same Steam survey). The other ATI cards are currently even less of a blip.

There is more of a reason now than before for developers to support it, plus the (PC) PhysX SDK went free if the developers enable HW support. Nothing is guaranteed of course, and other than effects physics that many "hardware" PhysX titles support, not many games are based around physics interaction anyways. The games have to support the widest audience even for the ones that do have interactive physics gameplay, not that there are many of those.
 
There is one important difference between Ageia physics and nVidia physics:

With the original PPU, most everyone adopted a wait-and-see approach. It was a new concept from a new company, and it required an additional investment by the end user, so most people were going to wait and see how the support turned out. Unfortunately, developers took that same approach: they didn't want to develop a game that used hardware no one had, so support from both end users and developers stagnated and never really materialized.

With nVidia it's different. Anyone who buys or already owns an 8-series or higher card has the hardware already. The user base is already established, and it's quite a large user base at that, so developers are more likely to adopt the technology than they were previously.
 