ATI Physics

I thought I read somewhere that the 1900-series cards had some sort of physics engine. Am I way off on this?
 
There's a physics processing forum.

Check there for more details.
 
No, it's like SLI Physics. You need CrossFire to use it. I think it's a waste of GPU power; this is why we have dedicated physics cards. GPUs are meant to draw our sexy-looking graphics, not calculate our physics...
 
Lazy_Moron said:
No, it's like SLI Physics. You need CrossFire to use it. I think it's a waste of GPU power; this is why we have dedicated physics cards. GPUs are meant to draw our sexy-looking graphics, not calculate our physics...

That is not true; you don't need CrossFire to use it.
 
Brent_Justice said:
That is not true; you don't need CrossFire to use it.

...And, if ATi accelerates the same functions as Ageia, the ATi card is faster than Ageia by a factor of three or more.
 
BBA said:
...And, if ATi accelerates the same functions as Ageia, the ATi card is faster than Ageia by a factor of three or more.

I don't follow your reasoning. Could you elaborate please?
 
No, it's like SLI Physics. You need CrossFire to use it. I think it's a waste of GPU power; this is why we have dedicated physics cards. GPUs are meant to draw our sexy-looking graphics, not calculate our physics...

Incorrect, you don't need CrossFire enabled for it to work; you can have a secondary GPU do the physics computations entirely, with no impact on video performance.

The secondary GPU can be almost anything. I'm not sure how far back support goes, but I think it might scale all the way back to the R420 cores. Either way, the setup ATi has built would work with an X1900XTX and an X1300 Pro: have the X1300 work as the physics GPU and the X1900 be your main GPU.
 
Exactly...

But you need an X1600 or higher; the X1300 doesn't support the functions required.

I have several of these I could use: an X1800XL, X1800XT, or X1900XT to pair up with my X1900XTX.

But I am most probably going to ALSO buy an Ageia card as soon as they're available. I got 3K back from taxes I overpaid... so yeah.

No CrossFire though. I find it useless, as I game on a 19" LCD @ 1280x1024.
 
Cadaver said:
I don't follow your reasoning. Could you elaborate please?


Sure, from here: http://www.megagames.com/news/html/hardware/atiageiaandnvidia-itsphysics.shtml


...in-game physics relies heavily on floating-point arithmetic, and its R580 architecture is ideally suited to it since it features 48 pixel shaders; ATI suggests it has 375 GFLOPS per card available for such calculations. This number compares favorably to the 10 GFLOPS available in the fastest widely available CPUs and the 100 Ageia will offer. Other aspects of ATI's architecture, such as dedicated branching logic, unified shader units and a 3:1 shader/pipeline ratio, also offer advantages when performing physics calculations.
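For what it's worth, the "three times" figure floating around this thread falls straight out of the article's peak numbers. A quick sanity check (figures taken from the quote above; these are theoretical marketing peaks, not measured physics throughput):

```python
# Back-of-the-envelope check using only the peak figures quoted above.
ati_gflops = 375     # claimed for the R580 (X1900 series)
ageia_gflops = 100   # claimed for the Ageia PhysX PPU
cpu_gflops = 10      # "fastest widely available CPUs", per the article

print(f"ATI vs Ageia: {ati_gflops / ageia_gflops:.2f}x")  # 3.75x, the "3x or more" claim
print(f"ATI vs CPU:   {ati_gflops / cpu_gflops:.1f}x")    # 37.5x
```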
 
I'm thinking this is going to be much more useful when everyone upgrades their video cards down the road. You can get a brand-new video card, stick the old 1800/1900 or whatever in a spare slot (possibly a 4x slot), and use it as a physics processor. It would still be significantly faster than the CPU at a lot of physics operations, and it gives you something to do with the old card.
 
OK, so I really want to get a PPU, but I am not going there till Vista is out and things are rolling smoothly.
My main concern is getting good frames in GRAW with all the bells and whistles at 1680x1050. That's all; I hope I ain't asking too much.

Man, I really can't wait for this game. Ghost Recon is the game that started it all for me; after the NES I stopped playing video games till I saw Recon, and now I am a game whore! And Recon coming back is making me itchy.
 
Cadaver said:
Ah, I see. However, that wouldn't necessarily mean an X1900 would be 3 times faster than an Ageia PPU, as the GPU would have to expend most of its efforts generating graphic content. Still, I'm very interested to see the direction that Ati will take. :D

Yup. This is why you don't have CPUs handle game graphics any more, despite the fact that your average CPU is lightyears more powerful than your average GPU. :D
 
Cadaver said:
Ah, I see. However, that wouldn't necessarily mean an X1900 would be 3 times faster than an Ageia PPU, as the GPU would have to expend most of its efforts generating graphic content. Still, I'm very interested to see the direction that Ati will take. :D


ATi is not just pushing balancing physics and graphics on the same card; they are also pushing letting you dedicate a graphics card to physics, such as when the X1900XTX is old and you have something like an X2900XTX as your primary DX10 card under Vista...

At that point, yes, it will produce three times the physics calculations of the Ageia PPU.
 
BBA said:
ATi is not just pushing balancing physics and graphics on the same card; they are also pushing letting you dedicate a graphics card to physics, such as when the X1900XTX is old and you have something like an X2900XTX as your primary DX10 card under Vista...

At that point, yes, it will produce three times the physics calculations of the Ageia PPU.

You are still talking wanna-be physics...
Havok FX has NOTHING to do with REAL gameplay physics...

Unless you can show me how the pixel pipes can relay physics information back to the CPU, which is REQUIRED in order to have GAMEPLAY physics, you're not making any sense...
Pixel pipes are for post-processing, not feedback...
You tried the same BS in this thread:
PPU - Forum: Let me explain the business model here...
But you seem to have left your own thread to come here instead and continue down the same path of false information.

Why don't you come back and make that claim there?
You are more than welcome, and I promise you that you will get qualified feedback...
Instead of playing an ATI PR machine...

This thread should be moved to the PPU/Physics forum.

Terra - Stop spreading your FUD, please...
 
BBA said:
ATi is not just pushing balancing physics and graphics on the same card; they are also pushing letting you dedicate a graphics card to physics, such as when the X1900XTX is old and you have something like an X2900XTX as your primary DX10 card under Vista...

At that point, yes, it will produce three times the physics calculations of the Ageia PPU.

Where do you get that 3X number?
 
Digital Viper-X- said:
Where do you get that 3X number?

He is taking raw GFLOPS numbers and comparing them across technologies/platforms :rolleyes:

Terra...
 
Digital Viper-X- said:
Where do you get that 3X number?

Did you not read the article I linked? Physics is based on raw floating-point calculation, and since ATi is currently over three times faster than Ageia, it implies better than 3X physics capability. Whether that pans out in the end... well, it's a waiting game.

ATi = 375 GFLOPs
Ageia = 100 GFLOPs

The question remains: does ATi want to support Ageia's physics SDK or make their own? And that depends on whether Ageia wants to let ATi license it, or whether Ageia wants ATi to crush them in the marketplace with better hardware AND better physics. (It may not actually be better, but it would become a standard as soon as ATi includes it in the drivers.)
 
Fellas, fellas, please don't argue; I by no means wanted to start bickering with this thread... man, now I don't even want a PPU.
 
BBA said:
Did you not read the article I linked? Physics is based on raw floating-point calculation,...
That is one of the largest oversimplifications I have ever heard...

and since ATi is currently over three times faster than Ageia, it implies better than 3X physics capability. Whether that pans out in the end... well, it's a waiting game.

Again, one of the worst oversimplifications I have heard...

ATi = 375 GFLOPs
Ageia = 100 GFLOPs

Again, comparing raw GFLOPS numbers across platforms/technologies is stupid.
Hell, even when comparing CPUs it's stupid:
Microsoft's Xbox 360 & Sony's PlayStation 3 - Examples of Poor CPU

Secondly, both floating point power numbers refer to the whole system, CPU and GPU. Obviously a GPU's floating point processing power doesn't mean anything if you're trying to run general purpose code on it and vice versa. As we've seen from the graphics market, characterizing GPU performance in terms of generic floating point operations per second is far from the full performance story.

And:
Another way to look at this comparison of flops is to look at integer add latencies on the Pentium 4 vs. the Athlon 64. The Pentium 4 has two double pumped ALUs, each capable of performing two add operations per clock, that's a total of 4 add operations per clock; so we could say that a 3.8GHz Pentium 4 can perform 15.2 billion operations per second. The Athlon 64 has three ALUs each capable of executing an add every clock; so a 2.8GHz Athlon 64 can perform 8.4 billion operations per second. By this silly console marketing logic, the Pentium 4 would be almost twice as fast as the Athlon 64, and a multi-core Pentium 4 would be faster than a multi-core Athlon 64.

Any AnandTech reader should know that's hardly the case. No code is composed entirely of add instructions, and even if it were, eventually the Pentium 4 and Athlon 64 will have to go out to main memory for data, and when they do, the Athlon 64 has a much lower latency access to memory than the P4. In the end, despite what these horribly concocted numbers may lead you to believe, they say absolutely nothing about performance. The exact same situation exists with the CPUs of the next-generation consoles; don't fall for it.

I could turn this around and use the same "logic" as you:
ATI = 49.6 GB/sec bandwidth
AGEIA = 2 TB/sec bandwidth

AGEIA = 40x faster than ATI...

But that would be stupid... since it's cross-platform/technology...

The question remains: does ATi want to support Ageia's physics SDK or make their own? And that depends on whether Ageia wants to let ATi license it, or whether Ageia wants ATi to crush them in the marketplace with better hardware and better physics.

Again, I would loooooove to hear how pixel pipelines can feed collision data back to the CPU :)

Terra...
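To make the point concrete, the arithmetic from both halves of the post above can be checked in a few lines of Python. The numbers come out exactly as quoted, which is precisely why peak figures alone prove nothing:

```python
# Reproducing the peak-ops arithmetic from the AnandTech passage quoted above,
# plus the deliberately silly bandwidth "ratio" from the post. Both are
# correct arithmetic, and both say nothing about real performance.

# Peak integer adds per second (adds/clock x clock speed):
p4_adds  = 4 * 3.8e9   # Pentium 4: 2 double-pumped ALUs = 4 adds/clock @ 3.8 GHz
a64_adds = 3 * 2.8e9   # Athlon 64: 3 ALUs = 3 adds/clock @ 2.8 GHz
print(f"P4:  {p4_adds / 1e9:.1f} billion adds/s")   # 15.2; "wins" on paper
print(f"A64: {a64_adds / 1e9:.1f} billion adds/s")  # 8.4; yet faster in practice

# The cross-platform bandwidth comparison from the post:
ageia_internal_bw = 2e12    # 2 TB/s, AGEIA's quoted on-chip figure
ati_memory_bw     = 49.6e9  # 49.6 GB/s, X1900 memory bandwidth
print(f"AGEIA/ATI: {ageia_internal_bw / ati_memory_bw:.0f}x")  # ~40x, equally meaningless
```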
 
Fistandantilis said:
fellas, fellas, please dont argue, I by no means wanted to start bickering with this thread... man no I dont even want a PPU.

I am not arguing, I am making arguments.
But someone in this thread has been feeding a lot of misinformation,
and that's not good for the thread... or you... or any of us ;)

Terra...
 
Terra, I agree with you, but ATi is not single-handedly supporting only Havok FX.

You are still talking wanna-be physics...
Havok FX has NOTHING to do with REAL gameplay physics...

It might not be as fast as the SLI implementation when the Havok engine is used, but it should work across all platforms.

I'm just waiting for it to come out :D
 
Trimlock said:
Terra, I agree with you, but ATi is not single-handedly supporting only Havok FX.

It might not be as fast as the SLI implementation when the Havok engine is used, but it should work across all platforms.

I'm just waiting for it to come out :D

I still don't understand how they are going to violate the laws of physics (pun intended) and make the pixel pipelines feed collision data back to the CPU ;)

Terra...
 
I'm still waiting for something other than a demo for a game that's arriving in late 2k7 to tell me that this is all worth it...
 
quadnad said:
I'm still waiting for something other than a demo for a game that's arriving in late 2k7 to tell me that this is all worth it...

It's worth it :p

Terra - Done :D
 
quadnad said:
lol, newegg, HERE I COME!

Sorry, couldn't resist ;)
But just think how geared up people were about the modest physics in HL2...
Multiply that by a VERY big factor... and that is what PPUs will bring to the table ;)

Terra...
 
I still don't understand how they are going to violate the laws of physics (pun intended) and make the pixel pipelines feed collision data back to the CPU ;)

Hehe, either way I'm sure they can handle it through some emulation, which would bring performance down. But this isn't replacing a PPU; it's pretty much just creating one in theory. If it does the job, even at lower speed, through whichever process they come up with, I'm still a happy camper.
 
Trimlock said:
Hehe, either way I'm sure they can handle it through some emulation, which would bring performance down. But this isn't replacing a PPU; it's pretty much just creating one in theory. If it does the job, even at lower speed, through whichever process they come up with, I'm still a happy camper.

Emulation?
That won't cut it.
For gameplay physics to work, collision data has to be fed back to the CPU.
The problem is that once data enters the pixel pipelines, it's out of reach of the CPU.
You can't "emulate" a new GPU structure that is bi-directional...
The pipelines are a one-way street...

Terra...
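To make the data flow being argued about concrete, here is a toy sketch in plain Python/NumPy. It is illustrative only: it models the pattern, not ATi's or Havok's actual implementation. A data-parallel integration pass stands in for what the pixel shaders would compute, and an explicit copy stands in for the GPU-to-CPU readback step Terra is asking about. All names here are hypothetical.

```python
import numpy as np

DT = 1.0 / 60.0                        # one 60 FPS frame
GRAVITY = np.array([0.0, -9.81, 0.0])

def gpu_integrate(positions, velocities):
    """Data-parallel Euler step: the kind of per-object math a shader could do."""
    velocities = velocities + GRAVITY * DT
    positions = positions + velocities * DT
    # Crude ground-plane collision: clamp y and zero the downward velocity.
    hit = positions[:, 1] < 0.0
    positions[hit, 1] = 0.0
    velocities[hit, 1] = 0.0
    return positions, velocities, hit

# "Video memory": flat arrays, one entry per object (one "pixel" each).
rng = np.random.default_rng(0)
pos = rng.random((10_000, 3))            # positions in a unit box
vel = np.zeros((10_000, 3))
vel[:, 1] = -rng.random(10_000) * 10.0   # falling at assorted speeds

pos, vel, collided = gpu_integrate(pos, vel)

# The contested step: gameplay code (AI, sound, scoring) runs on the CPU and
# needs the collision results back. On a real GPU of this era that meant
# rendering results to a texture and reading it back across the bus;
# possible, but never free.
cpu_collision_events = np.flatnonzero(collided)   # stands in for the readback
print(f"{cpu_collision_events.size} objects hit the ground this frame")
```

The design point the sketch makes visible: the parallel update itself maps naturally onto shader hardware, but the per-frame results still have to cross the bus before any gameplay logic can react to them.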
 
Emulation?
That won't cut it.
For gameplay physics to work, collision data has to be fed back to the CPU.
The problem is that once data enters the pixel pipelines, it's out of reach of the CPU.
You can't "emulate" a new GPU structure that is bi-directional...
The pipelines are a one-way street...

I'm not trying to pretend to know what ATi or NVIDIA have in store, but they wouldn't exactly put development time into it if it didn't produce ;)
 
Terra said:
That is one of the largest oversimplifications I have ever heard...

Again, I would loooooove to hear how pixel pipelines can feed collision data back to the CPU :)

Terra...

OWNED!
Nice counterpoints, Terra.

But I have to take this in a different direction:
you're SOOOO WRONG
P4 > A64
Xenos > Cell
X1800 > X1900XTX
nVidia AF > ATI AF
Green > Blue
Horse > Dog
69 > 68
At > The

I think you know your role!


Majin - Shaking my head at Terra for 6+ Months!
 
Forgive me if I'm being an idiot, but doesn't the current PhysX have to feed data back to the CPU over the PCI bus? 2 TB/s of internal bandwidth is all well and good, but the PCI bus only offers ~132 MB/s (some of which is already eaten up). PCI-E x16 gives a more respectable ~4 GB/s each way, and the bus itself is capable of double that (PCI-E mobos normally have 32 lanes afaik). Am I missing something, or does PhysX have a major Achilles' heel right there? The data can be moved internally at light speed, but the most it can afford to transmit is a meagre 4 megs per frame (to maintain 30 FPS).
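The per-frame arithmetic above checks out. A quick sketch using the post's figures (with first-generation PCI-E x16 taken as roughly 4 GB/s each way; these are theoretical bus peaks, so real budgets would be smaller):

```python
# Per-frame readback budget implied by the figures in the post above.
FPS = 30
buses = {
    "PCI":       132e6,  # ~132 MB/s, shared with everything else on the bus
    "PCI-E x16": 4e9,    # ~4 GB/s each way, first-generation PCI-E
}
for name, bw in buses.items():
    print(f"{name}: {bw / FPS / 1e6:.1f} MB per frame at {FPS} FPS")
# PCI:       4.4 MB/frame   -> the "meagre 4 megs" in the post
# PCI-E x16: 133.3 MB/frame
```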
 
Terra said:
I could turn this around and use the same "logic" as you:
ATI = 49.6 GB/sec bandwidth
AGEIA = 2 TB/sec bandwidth

AGEIA = 40x faster than ATI...

But that would be stupid... since it's cross-platform/technology...



Again, I would loooooove to hear how pixel pipelines can feed collision data back to the CPU :)

Terra...


Think about what you said, then explain how a PCI-connected card is going to have higher bandwidth back to the CPU than a PCI-E-connected card does. The video card has a great advantage here as well, now that you bring it up. :D

The 'internal to the GPU/PPU' memory bandwidth means nothing; what counts is the processing, and getting the data back to the CPU and video card.

That's why a reprogrammed X1800 is able to process video compression/rendering in seconds where the P4 takes almost an hour. (You can find the article link here if you look.) All it takes is reprogramming the shaders, and ATi is already writing the drivers to do it.
 
rincewind said:
Forgive me if I'm being an idiot, but doesn't the current PhysX have to feed data back to the CPU over the PCI bus? 2 TB/s of internal bandwidth is all well and good, but the PCI bus only offers ~132 MB/s (some of which is already eaten up). PCI-E x16 gives a more respectable ~4 GB/s each way, and the bus itself is capable of double that (PCI-E mobos normally have 32 lanes afaik). Am I missing something, or does PhysX have a major Achilles' heel right there? The data can be moved internally at light speed, but the most it can afford to transmit is a meagre 4 megs per frame (to maintain 30 FPS).

Exactly my point.

...looks like Terra has been owned once again.
 
Terra said:
Emulation?
That won't cut it.
For gameplay physics to work, collision data has to be fed back to the CPU.
The problem is that once data enters the pixel pipelines, it's out of reach of the CPU.
You can't "emulate" a new GPU structure that is bi-directional...
The pipelines are a one-way street...

Terra...

The thing you are not recognizing is that the GPU does not emulate... the algorithms are reprogrammed and run in hardware.

By your logic, a GPU emulates video rendering as well...

Chew on that a while.
 
And... how is anyone mentioning Havok even relevant when we are discussing real physics?

But, if you were curious, both ATi and NVIDIA are working Havok support in as an added selling point. (Havok is nothing but sequenced events that look like physics, not real interaction calculations like true physics are.)
 