ATI Physics

BBA said:
Why? What else will you do with the X1900 after you upgrade? Sell it? I tend to keep older video cards myself, this makes nice use of it.

'Cause the PhysX is cheaper.
eBay that X1900, buy a PhysX...and get real physics...

Terra - And even make some money...
 
BBA said:
Get real man...who in their right mind will buy a PPU if they are not already using a top of the line video card on a PCI-E system board?

I am using an AGP 7800GS (full 7800GT core).
I would be one of them.
Look at the minimum requirements, dammit...a P4 1.4GHz (or AMD equivalent).
The PPU doesn't have anything to do with the GPU.

You should stop working for Ageia and see what comes out in real life before taking your hard stand.

1. Why didn't you answer my question?!
I'll repeat then:
Do you have any affiliations with ATI?

2. I know you better than you think:
http://www.hardforum.com/showpost.php?p=1029333760&postcount=49

And let me go on the record and say that I have NO affiliations with AGEIA...whatsoever...
Other than I will be buying a PPU when it's released.

Terra - As I suspect that "argument" will surface soon...



Terra - Way ahead of you...so why don't you answer my question now?
 
MrNasty said:
And this is an approximate calculation - but sorry BBA, your BS is called: 25 watts doesn't go nowhere.

so, let's look at your comparison now:

3.5 * 96 = 336

6.5 * 48 = 312

And that, as I mentioned earlier but you so carefully ignored, doesn't take into account that in physics any vector calculations performed usually have to be fed back into each other - something the dispatch processor will have to handle, and since it's already handling the instructions it will be pretty tied up, I imagine.

Oh, and who'd you get that number off? a booth babe? :D

Nice freaking post! :D

Terra - This is why I wanted the thread out of the ATI subforum...real answers about physics ;)
 
This is getting insane. We have the Ageia PPU for one physics engine, and both ATI and NVIDIA's physics solution for another physics engine...

I wonder if you could have an SLI setup, and both an Ageia PhysX card and some random ATi card all in the same system? SLI can work to its fullest because the ATi card will handle one physics engine instead of the second SLI card, and the PhysX card can do its thing in all the games that use it…best of both worlds if you have a motherboard with enough slots for the two extra physics cards plus whatever other crap you already have in there :p
 
MrNasty said:
Oh look, another fallacy: nVidia's solutions will no more "share physics with graphics" than ATI's will.

nVidia have also jumped on the hype bandwagon (along with ATI) in saying that the most interesting application of HavokFX will be on a 1 GPU system.

The only difference between their dual-card physics system and ATI's is that with nVid's you have to use 2 identical cards, whereas ATI's (like crossfire) allows you to use any two cards. (ATI gets my vote here :) )

I speculate that ATI is consistently denying that you will need to run the cards in "crossfire" mode because they want people to associate Crossfire with graphics; in actual fact, two graphics cards operating in this physics/graphics mode will be indistinguishable from how they work in Crossfire, just sans dongle.

nVidia has been far more forthcoming with details than ATI has about their "unnamed, unknown, undeveloped" physics API.


ATi is definitely working on shared physics and GPU processing in single card use...that is straight from ATi when they talked about the Vista driver. I'm pretty sure I read similar about nvidia.

ATi is also going to support dedicated card use for physics in a two card system. nvidia may not, but they will probably do something to counter ATi so you never know.




You're really missing some data there, and stop talking about Havok; you guys seem to be getting stuck on something that is really nothing more than an irrelevant side note.
 
MrNasty said:
Ageia has been keeping its clock speed details under HEAVY wraps - I know, I have a copy of the patent and even that doesn't have full specs, and in a lot of interviews Ageia have said they don't want to give the competition a heads-up.

Let's examine what you've spouted anyway:

Power consumption is directly related to fab technique and clock speed.

A 125 million transistor processor dissipating 25 watts, on 130nm:

25/125 = 0.2 W/million transistors

A 384 million transistor processor dissipating nearly 100 watts, on 90nm:

100/384 ≈ 0.26 W/million transistors

OK, so let's see here. Scale the 130nm figure to 90nm, based on the fab shrink decreasing power consumption roughly linearly with feature size (which is almost right - it should reduce by the square but never gets there due to inefficiencies):

(90/130) * 0.2 ≈ 0.14 W/million transistors

So, ATI's clock is 650MHz, and assuming power per transistor scales roughly linearly with clock speed, Ageia's PPU must be close to:

650 * (0.14/0.26) ≈ 350 MHz

And this is an approximate calculation - but sorry BBA, your BS is called: 25 watts doesn't go nowhere.

so, let's look at your comparison now:

3.5 * 96 = 336

6.5 * 48 = 312

And that, as I mentioned earlier but you so carefully ignored, doesn't take into account that in physics any vector calculations performed usually have to be fed back into each other - something the dispatch processor will have to handle, and since it's already handling the instructions it will be pretty tied up, I imagine.

Oh, and who'd you get that number off? a booth babe? :D

I actually got the number from the Ageia representative. Remember, this was during QuakeCon, and a lot has probably changed since then; the first NDA hardware PPU release did not happen until March 15 - that's over half a year to ramp up clocks, so you may be right.


Anyway...let's say for argument's sake you really know the current clock speed of the PPU; then by your own statement above, the performance difference is what? Worst case it's almost negligible. Don't you think the GPU would be a better choice, since you can still program it for better/different algorithms as needed, instead of a hardwired PPU?
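For anyone who wants to check MrNasty's back-of-the-envelope numbers, here they are spelled out as a minimal sketch. It assumes, as he does, that power per transistor scales roughly linearly with both feature size and clock speed; the 25W/125M-transistor, 100W/384M-transistor/650MHz, and 96-vs-48 unit figures are simply the ones quoted in the post, nothing more.

Code:
// Rough reconstruction of the clock-speed estimate quoted above; illustrative only.
#include <cstdio>

int main() {
    double ppu_w_per_mt = 25.0 / 125.0;    // Ageia PPU: ~0.20 W per million transistors @ 130nm
    double gpu_w_per_mt = 100.0 / 384.0;   // ATI X1900: ~0.26 W per million transistors @ 90nm, 650 MHz

    // Assumption 1: power per transistor shrinks roughly linearly with feature size.
    double ppu_at_90nm = ppu_w_per_mt * (90.0 / 130.0);           // ~0.14

    // Assumption 2: power per transistor scales roughly linearly with clock speed,
    // so compare against the X1900's 650 MHz to estimate the PPU clock.
    double ppu_clock_mhz = 650.0 * (ppu_at_90nm / gpu_w_per_mt);  // ~350 MHz

    // The comparison from the post, using its rounded figures (3.5 ~ 350 MHz / 100, 6.5 = 650 MHz / 100).
    double ppu_score = 3.5 * 96.0;                                // 336
    double gpu_score = 6.5 * 48.0;                                // 312

    std::printf("estimated PPU clock: ~%.0f MHz, score %.0f vs %.0f\n",
                ppu_clock_mhz, ppu_score, gpu_score);
    return 0;
}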
 
Unknown-One said:
This is getting insane. We have the Ageia PPU for one physics engine, and both ATI and NVIDIA's physics solution for another physics engine...

I wonder if you could have an SLI setup, and both an Ageia PhysX card and some random ATi card all in the same system? SLI can work to its fullest because the ATi card will handle one physics engine instead of the second SLI card, and the PhysX card can do its thing in all the games that use it…best of both worlds if you have a motherboard with enough slots for the two extra physics cards plus whatever other crap you already have in there :p


So you're saying have two graphics cards plus one for physics, AND also add a PPU just to make sure all angles are covered?

Sounds good to me. Of course motherboard layout might complicate that a little, but if the PPU goes PCI-E it might actually be workable.
 
BBA said:
Don't you think the GPU would be a better choice, since you can still program it for better/different algorithms as needed, instead of a hardwired PPU?
NO! Doing software calculations is ALWAYS slower, and requires a faster clock just to catch up with hardware acceleration. I have said it multiple times: the GPU will more than likely need MULTIPLE clock cycles to do the same calculation a chip dedicated to physics processing can do in one, simply because it's a hardware instruction. Everything you gain with a faster clock is lost when you need to do 3 instructions just to do what a PPU does in 1. This is why we have 3D acceleration: because hardware acceleration is faster than having the CPU do rendering, even when the clock speeds are so different.
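To make that concrete - a purely hypothetical sketch, since neither Ageia nor ATI has published instruction-level details: the argument is that an operation a dedicated unit exposes as one fused instruction has to be emulated with several generic instructions elsewhere.

Code:
// Hypothetical illustration of "one fused op vs several generic ops" - not the
// actual PPU or GPU instruction set, which is not public.
#include <cstdio>

struct Vec3 { float x, y, z; };

// Imagine a dedicated unit exposing "position += velocity * dt" as a single fused operation.
Vec3 fused_integrate(Vec3 p, Vec3 v, float dt) {
    return { p.x + v.x * dt, p.y + v.y * dt, p.z + v.z * dt };
}

int main() {
    Vec3 p{0.0f, 10.0f, 0.0f}, v{1.0f, -9.8f, 0.0f};
    const float dt = 0.016f;

    // The same update emulated with generic scalar instructions: a separate multiply
    // and add per component - six steps where the fused version needs one, which is
    // the "3 instructions to do what a PPU does in 1" point.
    Vec3 q;
    q.x = p.x + v.x * dt;
    q.y = p.y + v.y * dt;
    q.z = p.z + v.z * dt;

    Vec3 r = fused_integrate(p, v, dt);
    std::printf("emulated: (%.3f, %.3f, %.3f), fused: (%.3f, %.3f, %.3f)\n",
                q.x, q.y, q.z, r.x, r.y, r.z);
    return 0;
}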
 
YOUR: The possessive form of you. "Your boots are in the hallway."

YOU'RE: Contraction of you are. "You're going down to the store today?"

I just had to.
 
ATI's GPU physics is Havok FX, not something different. There are two competing physics APIs right now, Havok FX and Ageia's PhysX (formerly NovodeX).

Havok FX is not used for gameplay physics at all, and it costs the developer an additional license fee to use, over and above Havok's regular physics products (which use the CPU).

From the Havok FX website:
So, for acceleration of game-play physics, along with the core game-logic, special purpose physics hardware is not the answer. Multi-core CPU architectures are the path to speed.

For effects physics, however, the GPU offers the most compelling and sustainable promise to added visual fidelity in games. Relative to proprietary hardware, GPUs also have a clear advantage as a pre-existing technology familiar and readily available to consumers and game developers, providing other benefits such as wide-spread availability, commodity pricing, and mature standards for hardware and software interfaces.

What Is The Difference Between Game-Play Physics And Effects physics?

Game-Play Physics affect how a game is played from moment-to-moment, and is generally computed on a computer’s central processing unit (CPU). Physical changes that you cause in the game or that happen to you or around you– like knocking over a box, and then climbing up on it - change what you may want to do in each instant of game play. Both game-play physics and game logic demand instant access and tolerate no detectible latency to preserve the game-play experience. The close proximity between physics, game logic, and memory, defines game play and generally demands that these systems execute together on a game system’s CPU. Effects physics is an emerging domain that promises to deliver an increasing array of visually impressive effects that are based on physical principles – but which place far fewer demands on the game’s logic.

Effects physics – a close cousin to visual effects now computed on GPUs – add to the visual complexity of a game and help increase a player’s immersive experience. As visual phenomena, effects physics need to be convincingly real but do not profoundly affect game play. They can merely fill in the player’s view of the game, creating a richer, more convincing environment- but may not affect the choices a player can make from moment-to-moment.

Does Havok FX Cost Extra?

Yes. Havok FX is not part of any existing product or product bundle. It is an optional add-on product that game developers will be able to license for their platform or console based games.

Will Havok FX Be Included As A Part Of Havok Complete™, Havok Physics™ Or Havok Complete XS™?

No. Havok FX will be an optional product than may be licensed separately, and used in conjunction with any of Havok’s products and product bundles.
The developer not only has to shell out cash for a regular Havok Physics license (CPU physics), they are also forced to purchase a Havok FX license (GPU physics) separately.

In contrast, if the developer chooses to use Ageia's PhysX software, they get both effects and gameplay licenses for free (as long as they support the PPU).

How many upcoming games will be using Havok FX? As of now, zero.
 
BBA said:
I would expect no less from you...because you're so good at BS yourself. :D

I take it that you are affiliated with ATI, since you won't answer my question about it.
I seem to remember that there is some rule that if you are affiliated with a company, you have to make that clear in your profile.
Or be banned/deleted...

Terra - Am I mistaken?
 
Low Roller said:
ATI's GPU physics is Havok FX, not something different. There are two competing physics APIs right now, Havok FX and Ageia's PhysX (formerly NovodeX).

Havok FX is not used for gameplay physics at all, and it costs the developer an additional license fee to use, over and above Havok's regular physics products (which use the CPU).

From the Havok FX website: The developer not only has to shell out cash for a regular Havok Physics license (CPU physics), they are also forced to purchase a Havok FX license (GPU physics) separately.

In contrast, if the developer chooses to use Ageia's PhysX software, they get both effects and gameplay licenses for free (as long as they support the PPU).

How many upcoming games will be using Havok FX? As of now, zero.

Well, take into consideration that BBA thinks AGEIA is not a hardware company, but only out to sell software licenses - why do you think he would believe you?:
http://www.hardforum.com/showthread.php?t=1037928

BBA said:
Ageia is NOT a hardware company

Terra...
 
BBA said:
I actually got the number from the Ageia representative. Remember, this was during QuakeCon, and a lot has probably changed since then; the first NDA hardware PPU release did not happen until March 15 - that's over half a year to ramp up clocks, so you may be right.

So we can take your post and ignore it as another example of you using fictional numbers? :)


Anyway...let's say for argument's sake you really know the current clock speed of the PPU; then by your own statement above, the performance difference is what? Worst case it's almost negligible. Don't you think the GPU would be a better choice, since you can still program it for better/different algorithms as needed, instead of a hardwired PPU?

You, my friend, are very misinformed.
The PPU doesn't have its functions "hardwired" so that they can't be changed/programmed.
Let's look at what they are doing with the drivers (a rough sketch of what driving that SDK looks like follows the feature list below):
http://www.ageia.com/developers/downloads.html

What’s New in v2.4:

- Acceleration of jointed rigid bodies on AGEIA PhysX processor
- Additional fluid collision types (spheres/capsules) accelerated on AGEIA PhysX processor
- Hardware Scene Manager for the AGEIA PhysX processor with automatic connection between software and hardware scenes, particularly for instantiating particle fluids
- Heightfields support in AGEIA PhysX software runtime engine, including a new shape type mainly for terrain; offers reduced memory consumption and increased speed
- User control of threads [cross-platform]: a common interface to support multithreading on different platforms, which gives developers more flexible control over the threading performed by the SDK
- Enhanced collision detection support in AGEIA PhysX software runtime engine: collision detection by dynamic versus dynamic rigid bodies
- Visual remote debugger (VRD) [cross-platform]: a development tool to visualize physics data when optimizing game titles
- PML import/export layer in the SDK build component: a common library for import/export included as part of the SDK build component; supports binary and COLLADA physics formats
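For context on what "programming the PPU" means in practice, here is a rough sketch of how a game drives the 2.x-era PhysX SDK, reconstructed from memory of period sample code - exact class, member and flag names may differ, so treat it as an approximation rather than official Ageia sample code. The point is that titles talk to the SDK, and the SDK/driver decides what runs on the PPU, which is why updates like the ones above can add features (joints, fluids, cloth) without a hardware change.

Code:
// Approximate PhysX 2.x usage pattern (names reconstructed from memory; may differ).
#include "NxPhysics.h"

NxPhysicsSDK* gSDK   = 0;
NxScene*      gScene = 0;

void initPhysics()
{
    gSDK = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);

    NxSceneDesc sceneDesc;
    sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.simType = NX_SIMULATION_HW;               // ask for a hardware (PPU) scene if one is available
    gScene = gSDK->createScene(sceneDesc);

    // A simple dynamic box - the same code path whether the scene runs in software or on the PPU.
    NxBodyDesc     bodyDesc;
    NxBoxShapeDesc boxDesc;
    boxDesc.dimensions = NxVec3(0.5f, 0.5f, 0.5f);

    NxActorDesc actorDesc;
    actorDesc.shapes.pushBack(&boxDesc);
    actorDesc.body         = &bodyDesc;
    actorDesc.density      = 10.0f;
    actorDesc.globalPose.t = NxVec3(0.0f, 10.0f, 0.0f);
    gScene->createActor(actorDesc);
}

void stepPhysics(float dt)
{
    gScene->simulate(dt);                               // start the step (on the PPU or the CPU)
    gScene->flushStream();
    gScene->fetchResults(NX_RIGID_BODY_FINISHED, true); // block until results are ready
}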

Or this:
http://www.pcper.com/article.php?aid=225&type=expert&pid=3

In fact, they have already seen the opposite occurring; upgraded firmware and drivers for the PPU has opened up new features and options to programmers on the same hardware. Cloth simulation is the first example of this as it wasn’t ready initially when the SDK went out to software developers but has since been perfected and added into physics engine, without a need for a hardware upgrade.

So the answer to your very misguided question must be:
No, the GPU will not be better/faster for real physics!


Terra - And the saga continues...
 
I find this whole thing rather odd, because if the last million or so years of evolution in biology, or the last 70 or so years in microprocessors, or the last 10,000 years in human warfare have taught us anything, it's that specialization will always make you better at a task as opposed to general use. Why is this even a debate? We have one processor that has been designed from the ground up to do one function. It's like BBA has never heard of the DSP. Specialized chips will invariably be faster at the functions they have been designed for, as opposed to ones that have been designed for general tasks or adapted from one task to another. What is even more troubling about his meritless argument is that it ignores the most important point. The whole point of physics is to ultimately change the gameplay; that's why it's a big deal. Since the return of the analog stick there hasn't been anything new that has allowed games to really go new places, gameplay-wise. Not only is the ATI/Nvidia solution not as efficient, it doesn't do the physics we really want it to do in games. I can only hope people see this red herring for what it is and desist. The PPU needs to be supported so we can finally move beyond the pretty lights.
 
ikarinokami said:
I find this whole thing rather odd, because if the last million or so years of evolution in biology, or the last 70 or so years in microprocessors, or the last 10,000 years in human warfare have taught us anything, it's that specialization will always make you better at a task as opposed to general use. Why is this even a debate? We have one processor that has been designed from the ground up to do one function. It's like BBA has never heard of the DSP. Specialized chips will invariably be faster at the functions they have been designed for, as opposed to ones that have been designed for general tasks or adapted from one task to another. What is even more troubling about his meritless argument is that it ignores the most important point. The whole point of physics is to ultimately change the gameplay; that's why it's a big deal. Since the return of the analog stick there hasn't been anything new that has allowed games to really go new places, gameplay-wise. Not only is the ATI/Nvidia solution not as efficient, it doesn't do the physics we really want it to do in games. I can only hope people see this red herring for what it is and desist. The PPU needs to be supported so we can finally move beyond the pretty lights.

And this is why people's "rank" means squat.
Great post, hit the nail right on the head :)

Terra - Welcome to [H]ard, ikarinokami :)
 
Xipher said:
NO! Doing software calculations is ALWAYS slower, and requires a faster clock just to catch up with hardware acceleration. I have said it multiple times: the GPU will more than likely need MULTIPLE clock cycles to do the same calculation a chip dedicated to physics processing can do in one, simply because it's a hardware instruction. Everything you gain with a faster clock is lost when you need to do 3 instructions just to do what a PPU does in 1. This is why we have 3D acceleration: because hardware acceleration is faster than having the CPU do rendering, even when the clock speeds are so different.

The whole last sentence of your argument here negates the argument when you consider that 3D acceleration is done by reprogramming the GPU in the same fashion. In your comparison, you would say that all 3D in a GPU is handled in software.

My point is that since the GPU is not hardwired and the PPU is, the PPU can become obsolete.

It only makes sense, because if hardwired, non-programmable 3D processing were so much better, then GPUs would not be programmable...and they have been programmable since the days of the GeForce 1 for one reason: flexibility. The PPU has none.
 
ikarinokami said:
I find this whole thing rather odd, because if the last million or so years of evolution in biology, or the last 70 or so years in microprocessors, or the last 10,000 years in human warfare have taught us anything, it's that specialization will always make you better at a task as opposed to general use. Why is this even a debate? We have one processor that has been designed from the ground up to do one function. It's like BBA has never heard of the DSP. Specialized chips will invariably be faster at the functions they have been designed for, as opposed to ones that have been designed for general tasks or adapted from one task to another.

Not true. Specialized single-purpose chips are only made for economies of scale. They may be more efficient at the ONE thing they do...but the fastest chips are typically more advanced and programmable in today's reality. Besides, in nature, only the adaptable survive; evolution has made many creatures extremely adaptable, and many that were not have perished (the last figure I read was that over 90% of all species that have ever existed on earth are now extinct, mostly because they could not adapt to the changing world).


You need to read my response above as well. Your evolution analogy does not reflect reality.


What is even more troubling about his meritless argument is that it ignores the most important point. The whole point of physics is to ultimately change the gameplay; that's why it's a big deal. Since the return of the analog stick there hasn't been anything new that has allowed games to really go new places, gameplay-wise. Not only is the ATI/Nvidia solution not as efficient, it doesn't do the physics we really want it to do in games. I can only hope people see this red herring for what it is and desist. The PPU needs to be supported so we can finally move beyond the pretty lights.

The whole point of physics is one of two things: change gameplay or increase visual effects, and you can limit the degree of both in either case. There will be no successful multiplayer games whose physics produce different gameplay with or without the PPU, simply because of the alienation of non-PPU customers, but there may very well be single-player games that depend on the PPU for certain types of gameplay.
 
BBA said:
The whole last sentence of your argument here negates the argument when you consider that 3D acceleration is done by reprogramming the GPU in the same fashion. In your comparison, you would say that all 3D in a GPU is handled in software.

My point is that since the GPU is not hardwired and the PPU is, the PPU can become obsolete.

It only makes sense, because if hardwired, non-programmable 3D processing were so much better, then GPUs would not be programmable...and they have been programmable since the days of the GeForce 1 for one reason: flexibility. The PPU has none.

Again, the PPU is not "hardwired" so that new functions & features can't be added!
STOP LYING!

Terra...
 
Terra said:
Again, the PPU is not "hardwired" so that new functions & features can't be added!
STOP LYING!

Terra...


So now you're saying the PPU is capable of being programmed to do more than just physics? That blows most of your arguments.

Or are you saying it is advanced and programmable like a GPU?
 
I'm sorry to say, I do not have any desire to continue wasting my valuable time in this pointless thread that Terra has taken so far from reality.

Believe what you want...it will not matter to me, so have fun.
 
BBA said:
So now you're saying the PPU is capable of being programmed to do more than just physics? That blows most of your arguments.

Or are you saying it is advanced and programmable like a GPU?

You answer me whether you are affiliated with ATI or not, and I will give you an answer ;)

The [H]ard|FORUM Rules

(12) Do not IMPERSONATE other individuals or falsely represent yourself.

(19) ADVERTISING, site pimping, contests or any type of business promotion is not permitted. Soliciting for fraternal organizations, humanitarian causes or personal rewards is prohibited unless approved prior to posting.

And again, your earlier statements about the PPU being "hardwired" and unable to get new features etc. via driver/firmware updates were still a flat-out lie!

Terra...
 
See, now it's reached the stage of arguing for the sake of arguing.

BBA - you need to read up more on the definitions of "hard-wired" and "programmable" in relation to this topic, in addition to what makes a micro-architecture good at what it does. I wouldn't want you to be ridiculed or look plain ill-read any more than you already have been, but it's your call. I'm just glad you don't make chips - they'd be a mess. :(

Please leave that "I know it's a 100MHz part" off the board - I've googled it, and any such speed rating, were it given out freely, would have been shit-hot news right now, yet nothing's around. Sorry dude, but that was bull or you were misled. Also, your explanation of nVidia's physics solution was incorrect, and your assertion that PPUs are not programmable was incorrect too - PPUs are programmable straight through Ageia's PhysX API, while GPUs' pixel-shading units are programmable through shader programs interpreted by the D3D API or, in ATI's case, an as yet unconfirmed/undefined API. Bear in mind this last bit of info, as it is the crux of this discussion: can ATI pull off interactive physics through their GPU? If so, it will have to be through ATI's own API.

Terra - stop trying to report him, I doubt he works for ATI - he seems to be a deluded ATI fan who has taken one article from an ATI representative as gospel. Sad but true. And don't back yourself into a corner for the sake of a rebuttal :p

The article BBA takes a lot of this from also states that Havok on an nVidia 7900GTX runs at about 5% of the speed of an X1900XTX - I think it's safe to call the info in this article marketing bumf.

Let's get this discussion back on topic - Xipher had some really intriguing points earlier and there were some good points made elsewhere too (BBA, yes, you too - earlier).

Fact: we have hard performance numbers for ATI's part and (thanks to me, thank ye very much ;) ) some approximate performance numbers for Ageia's part. Incidentally, that calculation for clock speed shouldn't be too far off.

The problem is these numbers aren't a great comparison. Not only will ATI use a physics API that is completely different to Ageia's, but the architecture each is using is substantially different.

What is funny about this situation is that there is a huge amount known about the PhysX API but little about the PPU, and nothing known about the ATI API and lots known about the ATI hardware.

Coming back to this - edit later.
 
BBA said:
The whole last sentence of your argument here negates the argument when you consider that 3D acceleration is done by reprogramming the GPU in the same fashion. In your comparison, you would say that all 3D in a GPU is handled in software.

My point is that since the GPU is not hardwired and the PPU is, the PPU can become obsolete.

It only makes sense, because if hardwired, non-programmable 3D processing were so much better, then GPUs would not be programmable...and they have been programmable since the days of the GeForce 1 for one reason: flexibility. The PPU has none.

That's the thing - what I am talking about is that the HARDWARE INSTRUCTIONS in a GPU are made to handle graphics-manipulation calculations. I'm not saying one OpenGL or D3D API call is a single instruction, but the instructions that ARE implemented in hardware revolve around the kinds of calculations used for graphics manipulation. The PPU is similar, it just has them oriented toward physics. They can add more features later on, using the instruction set currently included, and if needed, use the CPU for certain calculations if something is missing (but they could add that to the next-gen card, remap the API call to the new hardware-accelerated version, and the game wouldn't need to change a thing to use it :))

The GPU's vertex and pixel shaders and the PPU architecture are probably similar, but it's the instruction-set differences, and the fact that the PPU has dedicated memory that won't be taken up by texture and geometry data, that matter. Don't forget, current-gen games are using massive textures and high-polygon models to look so good; that isn't exactly small, data-wise. Doom 3 can fill up a 256MB card on Ultra (no compression for the textures) and make it stutter (the recommended minimum for Ultra is a 512MB video card).
 
The whole point of the PPU is to specialize in a single task. That it's cheaper and not as advanced as a GPU is meaningless; all it has to do is be better at doing physics. This is the whole point of specialization. It does not need to do a lot of different things, it only needs to do what it does well.

I believe the CellFactor guys disagree with this proposition. Currently their demo is only possible over a LAN connection, but technology moves forward. As for alienation, that's what they said when games requiring 3D accelerators came out. The same can be said of flight simulator games that really only work well with flight sticks. Progress is not always good, but that's not the case here.

I am still puzzled why you would prefer the limited and inferior technology of ATI/Nvidia, which doesn't bring anything new to the table. It's purely technology for the sake of technology. I could understand if this were at least a battle of equals, but it's not; other than the comfort of familiarity that Nvidia or ATI offer, I don't see any net gain in adopting either of these technologies.

P.S. You are not applying evolution correctly, but you're not to be faulted for that; most people outside the biological sciences are very misinformed about the mechanics of evolutionary theory.
 
PWMK2 said:
Yup. This is why you don't have CPUs handle game graphics any more, despite the fact that your average CPU is lightyears more powerful than your average GPU. :D

Um, I hardly think a CPU is light years ahead of a GPU...? Why do you think that? If that is the case, why are CPUs often the bottleneck at higher resolutions? And why aren't developers putting more work on the CPU if it is so much better than a GPU...?
 
I am really interested in whether BBA is ATI-affiliated or just trying to make people think he is by not answering that question.

My take on the GPU vs. PPU battle boils down to which one brings something new to the table. The GPU method will be used just for eye-candy physics, while the PPU will be used for eye-candy/gameplay physics. What would you rather have? An exploding object sending out bits of useless debris everywhere, or an exploding object sending out bits of environment/enemy-destroying debris everywhere? Also, who wants to waste an expensive video card on eye-candy physics when you can buy a $200 PPU?
 
this "ati physics" is still on the 'idea board' in ati's office. and thats a fact
 
But isn't everything just eye candy? I mean, isn't the whole premise behind new graphics cards just the introduction of more and better eye candy? Otherwise, we'd all be playing Ultima and Doom 1 non-stop. Realistically, I don't see how the ATI solution would fail at giving more candy and being cheaper, as you can toss your old card in or, by that time, buy an X1600 for less than a PPU.

So, with a GPU to do physics I get a better-looking game than with a PPU, while software can sufficiently direct the card to render physics-like effects - for less money? Count me in for this, if it works. Just think about it: all the PPU is doing is helping direct the graphics card on what to render. If software + hardware can create the same effect, then they are, effectively, PPUs, regardless of whether or not the physics are "real."
 
mrjminer said:
But isn't everything just eye candy? I mean, isn't the whole premise behind new graphics cards just the introduction of more and better eye candy? Otherwise, we'd all be playing Ultima and Doom 1 non-stop. Realistically, I don't see how the ATI solution would fail at giving more candy and being cheaper, as you can toss your old card in or, by that time, buy an X1600 for less than a PPU.

Because it only adds "eye candy"...not all the sweet stuff of real gameplay physics.
And no, the PPU is about a hell of a lot more than "eye candy"...

So, with a GPU to do physics I get a better-looking game than with a PPU, while software can sufficiently direct the card to render physics-like effects - for less money? Count me in for this, if it works. Just think about it: all the PPU is doing is helping direct the graphics card on what to render. If software + hardware can create the same effect, then they are, effectively, PPUs, regardless of whether or not the physics are "real."

Someone here claimed the PPU would get creamed by GPUs...but ran off.
And don't even get me started on how much FASTER the PPU is at physics than a CPU.
Try looking around at this page:
http://personal.inet.fi/atk/kjh2348fs/ageia_physx.html


Terra...
 
Using graphics for physics is dumb.
Why?
Because it will be using up power that is meant to go on graphics, so you will have to turn your settings down.
Moreover, you will have to turn settings down even further, as the extra eye candy (and that is all it can be, since there can be no collision detection if it is done by the GPU) will require more graphics horsepower to show all of the rocks and smoke etc.

Therefore doing it via the GPU is dumb!!!
However, at the moment so is buying a PPU. I would wait till second gen to see the benefits etc.; as with most things, the true early adopters usually get a bit screwed. Of course, if I had the cash I would definitely buy one!
f
 
freddiepm61 said:
Using graphics for physics is dumb.
Why?
Because it will be using up power that is meant to go on graphics, so you will have to turn your settings down.
Moreover, you will have to turn settings down even further, as the extra eye candy (and that is all it can be, since there can be no collision detection if it is done by the GPU) will require more graphics horsepower to show all of the rocks and smoke etc.

Therefore doing it via the GPU is dumb!!!
However, at the moment so is buying a PPU. I would wait till second gen to see the benefits etc.; as with most things, the true early adopters usually get a bit screwed. Of course, if I had the cash I would definitely buy one!
f

Yeah, you bring up some good points about having to lower settings and such. Also, I suppose that quite an awesome power supply would be needed to run two cards in Crossfire + a GPU to do physics, which would cost money to upgrade. Personally, I'm probably going to do what many are doing and just run a DX10 card with an older ATI card for physics.

Anyways, I guess we'll all see what the benchmarks bring when they come out, but I'm probably going to have to go this route anyway, because I couldn't afford to buy a PPU if it turns out to be better :rolleyes:
 
Interesting thread to say the least.

After reading the whole thing, I have one question that keeps popping into my mind.

How is the new Vista API going to be incorporated into this PPU thing?

If I was a betting man, I would say that ATI's and NV's approach to the PPU is the way to go, as the Vista API is going to address all of the game's code, including physics, and treat it all the same. No more optimizations for either ATI or NV. I don't know how this will affect the PPU.

The only thing I am sure of is that ATI is working very closely with MS. So I will put my money on ATI.
 
mrjminer said:
Yeah, you bring up some good points about having to lower settings and such. Also, I suppose that quite an awesome power supply would be needed to run two cards in Crossfire + a GPU to do physics, which would cost money to upgrade. Personally, I'm probably going to do what many are doing and just run a DX10 card with an older ATI card for physics.

Anyways, I guess we'll all see what the benchmarks bring when they come out, but I'm probably going to have to go this route anyway, because I couldn't afford to buy a PPU if it turns out to be better :rolleyes:


If what I read about DX10 is correct, I really don't see how a non-DX10 card is going to take instructions from the Vista API using DX10. DX10 requires unified shaders.

How and if it can offload physics to a DX9 card remains to be seen. Just a thought.
 
$BangforThe$ said:
Interesting thread to say the least.

After reading the whole thing, I have one question that keeps popping into my mind.

How is the new Vista API going to be incorporated into this PPU thing?

If I was a betting man, I would say that ATI's and NV's approach to the PPU is the way to go, as the Vista API is going to address all of the game's code, including physics, and treat it all the same. No more optimizations for either ATI or NV. I don't know how this will affect the PPU.

The only thing I am sure of is that ATI is working very closely with MS. So I will put my money on ATI.


I don't think Vista the GUI will use a PPU, because it's not like when you click My Computer it explodes into many pieces and affects whether you can click something else or not...
 
OMG wow, I have to print this thread out, as it is only getting WARM and soon we will all know the truth ;)
 