AMD On Game Physics, Talks about NVIDIA/PhysX

So does that mean we'll now have two categories, PhysX and Bullet? Much like Ageia and Havok FX back then?

I don't think it's going to work out well like that, unless for some reason game developers start supporting both (rather than just one).
 
So does that mean we'll now have two categories, PhysX and Bullet? Much like Ageia and Havok FX back then?

I don't think it's going to work out well like that, unless for some reason game developers start supporting both (rather than just one).

Bullet, though, uses OpenCL for its GPU acceleration, meaning it's an open standard, unlike PhysX. Developers should be backing things like Bullet because any modern GPU can make use of it.
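
For anyone wondering what "backing Bullet" actually looks like on the developer side, here's a minimal sketch of a plain CPU Bullet rigid-body world, assuming the standard Bullet SDK headers (btBulletDynamicsCommon.h); the OpenCL-accelerated path being talked about in this thread is, as far as I know, an experimental backend layered on the same library. Nothing in it is vendor-specific.

Code:
#include <btBulletDynamicsCommon.h>
#include <cstdio>

int main() {
    // Standard Bullet boilerplate: collision configuration, dispatcher,
    // broadphase and constraint solver feed the dynamics world.
    btDefaultCollisionConfiguration config;
    btCollisionDispatcher dispatcher(&config);
    btDbvtBroadphase broadphase;
    btSequentialImpulseConstraintSolver solver;
    btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);
    world.setGravity(btVector3(0, -9.81f, 0));

    // A 1 kg box dropped from 10 m.
    btBoxShape box(btVector3(0.5f, 0.5f, 0.5f));
    btVector3 inertia(0, 0, 0);
    box.calculateLocalInertia(1.0f, inertia);
    btDefaultMotionState motion(
        btTransform(btQuaternion::getIdentity(), btVector3(0, 10, 0)));
    btRigidBody body(btRigidBody::btRigidBodyConstructionInfo(
        1.0f, &motion, &box, inertia));
    world.addRigidBody(&body);

    // Step the simulation at 60 Hz for one simulated second.
    for (int i = 0; i < 60; ++i)
        world.stepSimulation(1.0f / 60.0f);

    btTransform t;
    body.getMotionState()->getWorldTransform(t);
    std::printf("box height after 1 s: %.2f m\n", t.getOrigin().getY());

    world.removeRigidBody(&body);
    return 0;
}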
 
Normally, whatever is easiest/most cost-effective to work with becomes the standard. In this case, it looks like Bullet might just be that.
 
A lot of talk and no action in my opinion...

You heard the man. AMD is keeping their physics API open because it's in the consumer's best interest; what in the world is wrong with that? And why isn't Nvidia doing anything of the sort?
 
Sorry, no it doesn't. NVidia holds that title with its Quadros.

Nope, Quadros get handed by the Teslas (which, according to ATi, get handed by a 5970 in terms of GFLOPS - but we all know those are calculated differently, and more factors than that ambiguous number come into play).

But we aren't talking about those rackmount GPGPUs.
 
Well, even a Tesla is just the same thing as the Quadro and the consumer high-end cards. They're all the same GPU; the Tesla just has no display outputs and more RAM, the Quadro has more RAM and driver optimizations, and the consumer cards have neither.

So, yes, ATi's 58x0 and 5970 are the most powerful cards on the planet.
 
You obviously don't understand the difference between a consumer graphics card and a professional workstation card.

Jeremyshaw worded it as "graphics card," not gaming card or workstation card. Watch your mouth.

Regarding the Tesla: oops, I always seem to forget that new line, but I'll bunch it in with the Quadro. I should have said that Nvidia holds the title with their cards.
 
Jeremyshaw worded it as "graphics card," not gaming card or workstation card. Watch your mouth.

Regarding the Tesla: oops, I always seem to forget that new line, but I'll bunch it in with the Quadro. I should have said that Nvidia holds the title with their cards.

Sorry sir, I'll watch my mouth; I wouldn't want you to scold me again. Also, the 5870/5970 are the fastest cards, sorry. The chips in the workstation cards are the same ones used in our gaming cards, just on different boards with different drivers.
 
Jeremyshaw worded it as "graphics card," not gaming card or workstation card. Watch your mouth.

Regarding the Tesla: oops, I always seem to forget that new line, but I'll bunch it in with the Quadro. I should have said that Nvidia holds the title with their cards.

They don't, actually.

They hold the real-world usage title, but for actual power, the 5970 trumps even the Tesla S1075 in usable power (well... according to AMD, that is :p).

The S1075 is sort of like 4-way SLI GTX 295s (4x GT200b cores - called GF100L initially; workstation parts have retained the early working name), in terms of actual processing power.

SixtyWattMan is correct - while the Quadro can play games, that is not its main task. The Tesla... is not supposed to play games. While some of them do have a (single) DVI output, they are designed for CAD assistance, CUDA, and the like.
 
They don't, actually.

They hold the real-world usage title, but for actual power, the 5970 trumps even the Tesla S1075 in usable power (well... according to AMD, that is :p).

The S1075 is sort of like 4-way SLI GTX 295s (4x GT200b cores - called GF100L initially; workstation parts have retained the early working name), in terms of actual processing power.

SixtyWattMan is correct - while the Quadro can play games, that is not its main task. The Tesla... is not supposed to play games. While some of them do have a (single) DVI output, they are designed for CAD assistance, CUDA, and the like.

I completely understand what Nvidia's workstation solutions are for. As for the 5970 being faster, I'm not quite sure. AMD can say whatever they want. Are there results from an independent source?
 
I completely understand what Nvidia's workstation solutions are for. As for the 5970 being faster, I'm not quite sure. AMD can say whatever they want. Are there results from an independent source?

You are completely off topic. This thread isn't about workstation cards, and they have no relevance to games or in-game physics. You want the best gaming card? The 5970, at $700. You want the best workstation card? The Quadro FX 5800, which is $3,000.
 
bit-tech: IF - given we don't know yet - when Nvidia's Fermi cards finally arrive they are faster than your HD 5970-

RH: -Well if it's not faster and it's 50 per cent bigger, then they've done something wrong-

LoL, so I suppose nVidia is going to be faster by the looks of it, just maybe in a higher price range.
 
You are completely off topic. This thread isn't about workstation cards, and they have no relevance to games or in-game physics. You want the best gaming card? The 5970, at $700. You want the best workstation card? The Quadro FX 5800, which is $3,000.

I am off topic and have spurred an off-topic discussion. Now back on track.

What will Bullet physics do for gaming? This AMD guy seems pretty adamant about it.
 
I think it's pretty much the same as PhysX; it depends on the developer what to add to the game using it. The difference is that it's not proprietary like PhysX: you can run Bullet on nV cards and Intel's too.
 
It sounds like developers are already backing Bullet Physics on other platforms. Bullet is ranked third among physics libraries. Unlike Havok and PhysX, Bullet Physics gives access to the entire source code.

AMD/ATI choosing Bullet because of its open standards can only lead to good things, correct?
 
I think it's pretty much the same as PhysX; it depends on the developer what to add to the game using it. The difference is that it's not proprietary like PhysX: you can run Bullet on nV cards and Intel's too.

If that's true, then it is good news. The fact that it can run on nVidia cards as well, and isn't just limited to ATi cards, is I think enough to propel it to the top :D

Users would be free to choose between both platforms without having to worry about running physics on the GPU, and maybe we will finally see a wide-scale implementation of GPU physics.
 
PhysX has been the most developer-friendly and full-featured physics API so far; it seems like Bullet with OpenCL-accelerated physics may soon take over that role. I expect my game dev company to switch to Bullet if it works as expected.

I just hope AMD finally gets a full OpenCL implementation working. Nvidia is still miles ahead of them on that point, which makes OpenCL still a primarily nVidia-only affair.
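
If you want to see how far a vendor's OpenCL support has actually come on your own machine, a quick sanity check is just enumerating platforms and asking each one for GPU devices. This is a rough sketch using only the core OpenCL 1.0 C API (link against the vendor's OpenCL library):

Code:
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS ||
        num_platforms == 0) {
        std::printf("no OpenCL runtime found\n");
        return 1;
    }
    for (cl_uint p = 0; p < num_platforms; ++p) {
        // The platform name tells you whose runtime answered (AMD, Nvidia, ...).
        char name[256] = {0};
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(name), name, nullptr);

        // Ask specifically for GPU devices; an error here means the runtime
        // is installed but exposes no GPU to OpenCL.
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        cl_int err = clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                                    8, devices, &num_devices);
        std::printf("%s: %u GPU device(s)\n", name,
                    (err == CL_SUCCESS) ? num_devices : 0);
    }
    return 0;
}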
 
I don't think it's going to work out well like that, unless for some reason game developers start supporting both (rather than just one).

Many of them support both DX and OGL. Though given the choice, I'd bet they'd prefer a single standard. I would. Maybe Khronos will take up this challenge.
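
Rough sketch of what "supporting both" could look like in code: hide the physics middleware behind one interface, the same way engines already abstract DX and OGL. The class and function names below are made up for illustration, and neither real SDK is called here - the real backends would wrap Bullet or PhysX calls.

Code:
#include <cstdio>
#include <memory>

// One neutral interface the game code talks to.
struct IPhysicsWorld {
    virtual ~IPhysicsWorld() = default;
    virtual void step(float dt) = 0;
};

// One backend per middleware (stubs here; imagine Bullet/PhysX calls inside).
struct BulletWorld : IPhysicsWorld {
    void step(float dt) override { std::printf("Bullet step %.4f s\n", dt); }
};
struct PhysXWorld : IPhysicsWorld {
    void step(float dt) override { std::printf("PhysX step %.4f s\n", dt); }
};

// Pick a backend at startup based on hardware, config, or user choice.
std::unique_ptr<IPhysicsWorld> makeWorld(bool preferOpenStack) {
    if (preferOpenStack)
        return std::make_unique<BulletWorld>();
    return std::make_unique<PhysXWorld>();
}

int main() {
    auto world = makeWorld(/*preferOpenStack=*/true);
    for (int i = 0; i < 3; ++i)
        world->step(1.0f / 60.0f);
    return 0;
}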
 
I just hope AMD finally gets a full OpenCL implementation working. Nvidia is still miles ahead of them on that point, which makes OpenCL still a primarily nVidia-only affair.

ATi bet on Brook and lost. We have Apple to thank for marrying ATi and NVIDIA with OpenCL. ATi will catch up soon, I bet.
 
AMD has zero credibility. I am still waiting for Havok FX. They put no money into game development. No, any initiative they take is worthless until I see them actually spending money to spur development.
 
AMD has zero credibility. I am still waiting for Havok FX. They put no money into game development. No, any initiative they take is worthless until I see them actually spending money to spur development.

Havok FX is dead until Intel says it isn't. Intel still has nothing in terms of a discrete GPU, so they won't for a long time. Bullet OpenCL is now where it's at for AMD. I wouldn't be surprised if they drop their Havok license before long.
 
Havok FX is dead until Intel says it isn't. Intel still has nothing in terms of a discrete GPU, so they won't for a long time. Bullet OpenCL is now where it's at for AMD. I wouldn't be surprised if they drop their Havok license before long.

This is the same company that couldn't spring for the engineers to get AA working in Batman: Arkham Asylum, so I remain skeptical about their commitment. Worse yet, in their interview they are still talking about Havok FX as though it's a possibility. Worse still, and I find this even more annoying, is their complaining about Nvidia not doing QA on an ATI video card. Are you kidding me? If you care so much about the customer, get the free license and do the QA yourself. AMD makes nice GPU hardware, but they, for lack of a better word, totally suck on the software side.
 
Here Huddy talks more about PhysX:
Advanced Micro Devices said that Nvidia Corp. had specifically altered its PhysX application programming interface (API) so that it could not take advantage of multi-core central processing units (CPUs) while making physics effects computations. According to AMD, the reason for such modifications was to increase importance of graphics processing units (GPUs) that are used to process physics effects in select games that are powered by PhysX.
http://www.xbitlabs.com/news/multim...ling_Multi_Core_CPU_Support_in_PhysX_API.html

Edit: Tom's Hardware figured out the same thing in their review of Batman:

Rather than clearing things up, the results of this testing have only left us more puzzled. We can recall that the PhysX API is supposed to be optimized for multiple threads, yet on the Core i7, only a single thread seems to be stressed when PhysX is cranked up. We're trying to get clarification from the developers at Rocksteady about this phenomenon--it's almost as though the game is artificially capping performance at a set level, and is then using only the CPU resources it needs to reach that level.
http://www.tomshardware.com/reviews/batman-arkham-asylum,2465-10.html
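
For what it's worth, integrating rigid bodies that aren't interacting with each other is embarrassingly parallel, which is why a physics workload that only ever loads one core looks so odd. A toy illustration with plain std::thread is below - this is generic code to show the idea, not how PhysX is implemented internally:

Code:
#include <algorithm>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct Body { float y, vy; };

// Integrate one contiguous slice of bodies under gravity.
void integrate(std::vector<Body>& bodies, size_t begin, size_t end, float dt) {
    for (size_t i = begin; i < end; ++i) {
        bodies[i].vy -= 9.81f * dt;
        bodies[i].y  += bodies[i].vy * dt;
    }
}

int main() {
    std::vector<Body> bodies(100000, Body{10.0f, 0.0f});
    const unsigned threads = std::max(1u, std::thread::hardware_concurrency());
    const float dt = 1.0f / 60.0f;

    // One simulated second at 60 Hz, each step split across all cores.
    for (int step = 0; step < 60; ++step) {
        std::vector<std::thread> pool;
        const size_t chunk = bodies.size() / threads;
        for (unsigned t = 0; t < threads; ++t) {
            size_t begin = t * chunk;
            size_t end = (t + 1 == threads) ? bodies.size() : begin + chunk;
            pool.emplace_back(integrate, std::ref(bodies), begin, end, dt);
        }
        for (auto& th : pool) th.join();
    }
    std::printf("body 0 height after 1 s: %.2f m\n", bodies[0].y);
    return 0;
}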
 
This is the same company that couldn't spring for the engineers to get AA working in Batman: Arkham Asylum, so I remain skeptical about their commitment. Worse yet, in their interview they are still talking about Havok FX as though it's a possibility. Worse still, and I find this even more annoying, is their complaining about Nvidia not doing QA on an ATI video card. Are you kidding me? If you care so much about the customer, get the free license and do the QA yourself. AMD makes nice GPU hardware, but they, for lack of a better word, totally suck on the software side.

You must be talking about this one:

bit-tech said:
bit-tech: Recently Nvidia disabled PhysX in the drivers if you're using an ATI card as the primary graphics adapter.

RH: They don't want to QA it. The PC is an open platform, though - you're meant to take any two parts and put them together. Intel don't say "we're not prepared to QA our CPUs with Nvidia or AMD's graphics parts" when they obviously spend time QAing them because you want to build a system that works.

You must be talking about a company that couldn't spring for engineers to do QA on their drivers on a system utilising an AMD card, so that customers would be happy doing PhysX on systems that use AMD cards.

Of course AMD couldn't do the QA; it's nVidia's driver. You think AMD should do the QA on nVidia's driver? Unless, of course, you're talking about a different part of the interview.
 
It's good to see that AMD has plans for something to compete with Nvidia's PhysX. I don't own an Nvidia card, but hopefully this new technology doesn't cripple high-end Nvidia GPUs the way PhysX does ATi's cards.
 
You must be talking about this one:



You must be talking about a company that couldn't spring for engineers to do QA on their drivers on a system utilising an AMD card, so that customers would be happy doing PhysX on systems that use AMD cards.

Of course AMD couldn't do the QA; it's nVidia's driver. You think AMD should do the QA on nVidia's driver? Unless, of course, you're talking about a different part of the interview.

You do realize that Nvidia offered them the license for free so that they could do their own driver, right?
 
No, I don't know; please elaborate. You're talking about AMD writing a driver for nVidia cards, or... I have no idea.
 
Nvidia offered AMD a free PhysX license so that AMD could make their own driver. AMD's excuse for not accepting was that it wasn't made through "official" channels. So they should just be quiet until they are willing to spend their money to support their products.
 
This just makes me wonder if I should sell my old ATi card when I upgrade... in case they allow physics calculations to be done on the old card.
 
Nvidia offered AMD a free PhysX license so that AMD could make their own driver. AMD's excuse for not accepting was that it wasn't made through "official" channels. So they should just be quiet until they are willing to spend their money to support their products.

I see. I've done some Googling on this and read articles, and the truth is there's no exact reference or details about the offer nVidia made to AMD. In general, yes, AMD is making ingenious excuses.

So I'll interpret this as a business negotiation failure rather than blaming either AMD or nVidia for not wanting to spend money on engineers.
And I will not debate your support for nVidia for this reason; it is perfectly valid.
 
[Nvidia] put PhysX in there, and that's the one I've got a reasonable amount of respect for. Even though I don't think PhysX - a proprietary standard - is the right way to go, despite Nvidia touting it as an "open standard" and how it would be "more than happy to license it to AMD", but [Nvidia] won't. It's just not true! You know the way it is, it's simply something [Nvidia] would not do, and they can publicly say that as often as they like and know that it won't happen, because we've actually had quiet conversations with them and they've made it abundantly clear that we can go whistle.

Ikarinokami, please stick to the facts, as everything you have said is flame bait.

Also, ATI does spur development; they spend R&D money developing GPUs, which enables software developers to do their jobs. It is not up to anyone other than the game or other developers to make their software as good as it can be. Encouraging their laziness is not a step in the right direction, and resources from either company should not be wasted doing so.
 