[SA] Kepler will be fast in PhysX games, slower than Pitcairn in non-PhysX titles.

§urfÅceЗ said:
Gentlemen, please!

Don't you know anything about Charlie?
The man is an entertainer. That's his actual job.

You have to have a tongue in cheek attitude to get what Charlie is doing in his writeups on Kepler.
 
Does anyone know if hardware PhysX is different from software PhysX in any way (besides being hardware-calculated, of course)?
 
My thoughts,

If nvidia cards perform well for their price point all is well, but....

$200-600 are all high-end cards imo. Obviously some cards are faster than others, but $300 is definitely a high-end video card; just not the fastest. These are cards only gamers need. The $500+ video cards are the premium cards where people pay an extra 50% markup for 20% more performance.

Mid-range is the $100-200 range, like the HD6850 or HD6870, GTX550, GTX560 (with the GTX560 Ti borderline between mid and high-end). These cards will handle all games well, but some settings may need to be reduced.

Low-end cards are the sub-$100 cards, like the HD6750 or GT 440 and below, that will only handle light gaming.
 
Does anyone know if hardware PhysX is different from software PhysX in any way (besides being hardware-calculated, of course)?

I think there's some confusion here: software physics is handled by the CPU and cannot be accelerated by the GPU. APIs such as Havok handle things like player collision detection, and none of that can be accelerated by the GPU.

Hardware GPU physics is basically PhysX. If you have an NV GPU you can add additional features such as those you see in Batman: AC, i.e. flying glass shards, newspapers and such. This is different from CPU physics, which is a more complicated beast and cannot be accelerated by the GPU.
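To make that distinction concrete, here's a minimal sketch (my own illustration, not real PhysX code; all names are hypothetical) of why effects physics can be offloaded while gameplay physics can't: gameplay results feed back into game state every frame, while effects are fire-and-forget.

```cpp
// Hypothetical illustration of gameplay vs. effects physics -- not PhysX API.
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Gameplay physics: the result (e.g. the player's position after a collision)
// must come straight back to the game logic each frame, so it stays on the CPU.
Vec3 stepPlayer(Vec3 pos, const Vec3& vel, float dt) {
    pos.x += vel.x * dt;
    pos.y += vel.y * dt;
    pos.z += vel.z * dt;
    return pos;
}

// Effects physics: thousands of debris particles (glass shards, newspapers)
// that game logic never reads back -- ideal to hand off to the GPU in a batch.
void stepDebris(std::vector<Vec3>& pos, std::vector<Vec3>& vel, float dt) {
    const float g = 9.81f;  // gravity, m/s^2
    // On an NV card this loop is the kind of work PhysX moves to the GPU;
    // shown here as the plain CPU fallback.
    for (std::size_t i = 0; i < pos.size(); ++i) {
        vel[i].y -= g * dt;
        pos[i].x += vel[i].x * dt;
        pos[i].y += vel[i].y * dt;
        pos[i].z += vel[i].z * dt;
    }
}
```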
 
Thanks for clearing that up.

I do like the idea of Nvidia trying to differentiate themselves, but going solo on this seems like a bad investment for Nvidia if, in fact, Charlie is right and they "wasted" a lot of time building it into the GPU.
 
Thanks for clearing that up.

I do like the idea of Nvidia trying to differentiate themselves, but going solo on this seems like a bad investment for Nvidia if, in fact, Charlie is right and they "wasted" a lot of time building it into the GPU.

With Kepler having been in development for quite some time, I'm wondering if Nvidia thought physics was going to take off and be the next big thing by now? Kind of like AMD and BD's multiple-cores thing?
 
This is false. It is not possible for the GPU to accelerate anything except GPU physics; the GPU will not accelerate APIs such as Havok.

Although apparently this new chip will accelerate software physics that uses the PhysX API (as opposed to the Havok API) in addition to the traditional hardware PhysX stuff. How significant that'll end up being depends on how many developers Nvidia can strong-arm into using PhysX instead of Havok.
 
Although apparently this new chip will accelerate software physics that uses the PhysX API (as opposed to the Havok API) in addition to the traditional hardware PhysX stuff. How significant that'll end up being depends on how many developers Nvidia can strong-arm into using PhysX instead of Havok.

There are other APIs out there like Bullet, and being open and cross-platform they're generally more attractive. This reminds me of Glide: a few people used it and it was cool, but then more open APIs came along and everyone decided Glide was dumb.
 
fixed it for you :)

Also, it is single-threaded by default on the PC (multi-threaded for consoles), and the reason, according to Nvidia, is that "it is up to the developer" to optimize it.
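For what it's worth, here's a minimal sketch (my own illustration, nothing from the actual PhysX source) of the kind of multi-threading that's being left "up to the developer": split the bodies across hardware threads and step each chunk independently.

```cpp
// Toy multi-threaded physics step -- hypothetical, not PhysX code.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Body { float pos, vel; };

// Integrate one contiguous chunk of bodies.
static void stepRange(std::vector<Body>& bodies, std::size_t begin,
                      std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i)
        bodies[i].pos += bodies[i].vel * dt;
}

// Fan the work out across all available hardware threads, then join.
void stepAll(std::vector<Body>& bodies, float dt) {
    const unsigned n = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (bodies.size() + n - 1) / n;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = std::min(bodies.size(), begin + chunk);
        if (begin < end)
            workers.emplace_back(stepRange, std::ref(bodies), begin, end, dt);
    }
    for (auto& w : workers) w.join();
}
```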

I had it right; it is compiled for x86, since the math coprocessor was integrated into the CPU back in the 486 days and has been included in all chip variants since the Pentium days...
 
I had it right; it is compiled for x86, since the math coprocessor was integrated into the CPU back in the 486 days and has been included in all chip variants since the Pentium days...

Yes, but PhysX is coded in x87 :p :) so we both were right.
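To illustrate the x87-vs-SSE point with a toy example of my own (not PhysX source): the scalar loop below is roughly the shape of code a compiler targeting the old x87 FPU emits, one float operation at a time, while the SSE version handles all four lanes at once.

```cpp
#include <xmmintrin.h>  // SSE intrinsics

// Scalar version: four separate multiply-adds, one at a time -- roughly what
// x87-targeted code boils down to.
void scale_add_scalar(float out[4], const float a[4], const float b[4], float s) {
    for (int i = 0; i < 4; ++i)
        out[i] = a[i] * s + b[i];
}

// SSE version: the same work on all four floats in a couple of instructions.
void scale_add_sse(float out[4], const float a[4], const float b[4], float s) {
    const __m128 va = _mm_loadu_ps(a);
    const __m128 vb = _mm_loadu_ps(b);
    const __m128 vs = _mm_set1_ps(s);
    _mm_storeu_ps(out, _mm_add_ps(_mm_mul_ps(va, vs), vb));
}
```

The RealWorldTech piece linked below made essentially this point: much of the gap can close just by recompiling with flags like GCC's -mfpmath=sse or MSVC's /arch:SSE2.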
 
If this is true, it sounds to me like they will inflate their benchmarks, but only due to games written specifically for a hardware feature built into Nvidia cards, which Nvidia has a history of forcing on developers.

Which means their cards aren't really as fast as they claim. It becomes a proprietary card. It's BS.

Any reviewer that reviews them needs to leave out all benchmarks for games that are specifically written for the card. If they don't, you'll know it's a biased review.
 
If this is true, it sounds to me like they will inflate their benchmarks, but only due to games written specifically for a hardware feature built into Nvidia cards, which Nvidia has a history of forcing on developers.

Which means their cards aren't really as fast as they claim. It becomes a proprietary card. It's BS.

Any reviewer that reviews them needs to leave out all benchmarks for games that are specifically written for the card. If they don't, you'll know it's a biased review.

No, the cards will be as fast as they claim, in the games that are accelerated. If a game is written to take advantage of DX11 code paths (for example) then a DirectX 11 card might be faster than a DirectX 10 card in that game, since it can take advantage of those optimizations. Does that mean the DX11 card isn't as fast as the maker claims, or that it is cheating? I don't like the direction they are going, but performance is still performance.

And I don't see how you can say someone who tests that game is biased - again, you test what you can and note that the reason for the gain is the optimizations. This isn't like the old Quake/Quack days (at least we hope not), where they were cheating on quality to improve performance; this is hardware-accelerating a piece of code and gaining performance advantages from that.

Is it biased to test an encoding program that uses Intel's Quick Sync hardware acceleration and compare that to the performance of an AMD processor?
 
This is false. It is not possible for the GPU to accelerate anything except GPU physics, the GPU will not accelerate APIs such as havok.

I think you misunderstand; no one said anything about Havok. The software PhysX API is used in over 100 games, on both AMD and Nvidia, and none of them use GPU acceleration. It seems, going from Charlie's article, that NV has managed to make Kepler run this API instead of the CPU.
 
The software PhysX API is used in over 100 games, on both AMD and Nvidia, and none of them use GPU acceleration. It seems, going from Charlie's article, that NV has managed to make Kepler run this API instead of the CPU.

That would actually be really awesome if true, IMO. Though hopefully not at the cost of raw performance in general.
 
This seems to be a case of over-hyping what the card will actually do. I foresee many disappointed people when it magically doesn't "trade blows" with a 7970, because all the rumors led them to expect it to.
 
If PhysX is written in x87, that would basically cripple it on the CPU for no good reason.

http://techreport.com/discussions.x/19216 reported looooong ago...
Original article: http://www.realworldtech.com/page.cfm?ArticleID=RWT070510142143

If they put out a new PhysX code update in time for Kepler that intercepts the software calls and runs them on hardware, then that sudden performance boost in those games would make sense. This would, of course, further expose that they can indeed tamper directly with PhysX to optimize it for modern CPUs if they so cared.
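As a hedged sketch of what that interception could look like (names are mine and purely hypothetical, not Nvidia's actual design): the game keeps calling the same entry point, and the runtime swaps the backend underneath it.

```cpp
// Hypothetical backend-swap sketch -- not the real PhysX runtime.
#include <cstdio>
#include <memory>

struct PhysicsBackend {
    virtual ~PhysicsBackend() = default;
    virtual void simulate(float dt) = 0;
};

struct CpuBackend : PhysicsBackend {
    void simulate(float dt) override { std::printf("CPU step, dt=%.4f\n", dt); }
};

struct GpuBackend : PhysicsBackend {
    void simulate(float dt) override { std::printf("GPU step, dt=%.4f\n", dt); }
};

// The runtime picks the backend once at startup; every later simulate() call
// from the game is "intercepted" simply by going through the interface.
std::unique_ptr<PhysicsBackend> makeBackend(bool gpuAvailable) {
    if (gpuAvailable)
        return std::make_unique<GpuBackend>();
    return std::make_unique<CpuBackend>();
}
```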

EDIT TO ADD:
Actually it makes sense: this will be the PhysX version 3.0 that has been "long in development".
http://www.tomshardware.com/news/phyx-ageia-x87-sse-physics,10826.html
 
http://techreport.com/discussions.x/19216 reported looooong ago...

If they put out a new PhysX code update in time for Kepler that intercepts the software calls and runs them on hardware, then that sudden performance boost in those games would make sense. This would, of course, further expose that they can indeed tamper directly with PhysX to optimize it for modern CPUs if they so cared.

It's not an issue of optimizing it. What Nvidia is doing is basically crippling it on CPUs by coding it in x87 :rolleyes: Ridiculous. Nvidia's games as usual.
 
As I stated in another thread, it's why I don't want to use an Nvidia-based graphics card.

If I buy a game, I expect all of its features, including AA and physics, to work regardless of the video card used. I'd rather have the game take advantage of whatever video card is installed in my computer at the maximum settings possible, with full functionality of all in-game settings, whether that card is from Nvidia or AMD.

Unfortunately, it's not like that.

Read the benchmarks for video cards and you can see that many (not all) games that carry the "The Way It's Meant to Be Played" logo run better on Nvidia cards than on AMD. Something is going on within the game's code that makes it seem like it's INTENTIONALLY crippled on a non-Nvidia video card. That's unfair competition.

I don't want to spend a couple hundred dollars on an Nvidia card so I can have a game run with all features available to me. I'd like to buy the video card that is affordable to me and runs the games I play at the best settings possible, and I'd like those settings available to me regardless of the video card I choose.

I'm not rich and I'm not made of money. I'm not going to spend more money just to build another Nvidia-based computer just so that the games run at all the available settings the game has including physics. Hell, f**king no!

Nvidia is working against the freedom of choice for gamers like us. I detest the fact that PhysX-enabled games fall back to the old, slow x87 instruction set on the CPU, or disable physics completely, when an AMD card is in the computer. Or the Batman: AA fiasco: AA disabled because I don't have an Nvidia card installed, Ambient Occlusion disabled, the better AA options gone, and so on...

That's not fair to me and it's not fair to PC gamers either. It's unsportsmanlike, makes the competition lopsided, and tilts the playing field in Nvidia's favor.

I detest cheaters and I detest unsportsmanlike behavior, and I refuse to support a company that does both.
 
Nvidia's logo should be more like "The Only Way It Can Be Played".

This is coming from an Nvidia user, because BF3 runs best on Nvidia. I would prefer a choice, thanks. Not cool with this issue. I would love to get a new card like a 7950 with more VRAM, but I'm afraid to until this scene clears up a bit.
 
As I stated in another thread, it's why I don't want to use an Nvidia-based graphics card.

If I buy a game, I expect all of its features, including AA and physics, to work regardless of the video card used. I'd rather have the game take advantage of whatever video card is installed in my computer at the maximum settings possible, with full functionality of all in-game settings, whether that card is from Nvidia or AMD.

Unfortunately, it's not like that.

Read the benchmarks for video cards and you can see that many (not all) games that carry the "The Way It's Meant to Be Played" logo run better on Nvidia cards than on AMD. Something is going on within the game's code that makes it seem like it's INTENTIONALLY crippled on a non-Nvidia video card. That's unfair competition.

I don't want to spend a couple hundred dollars on an Nvidia card so I can have a game run with all features available to me. I'd like to buy the video card that is affordable to me and runs the games I play at the best settings possible, and I'd like those settings available to me regardless of the video card I choose.

I'm not rich and I'm not made of money. I'm not going to spend more money just to build another Nvidia-based computer just so that the games run at all the available settings the game has including physics. Hell, f**king no!

Nvidia is working against the freedom of choice for gamers like us. I detest the fact that PhysX-enabled games fall back to the old, slow x87 instruction set on the CPU, or disable physics completely, when an AMD card is in the computer. Or the Batman: AA fiasco: AA disabled because I don't have an Nvidia card installed, Ambient Occlusion disabled, the better AA options gone, and so on...

That's not fair to me and it's not fair to PC gamers either. It's unsportsmanlike, makes the competition lopsided, and tilts the playing field in Nvidia's favor.

I detest cheaters and I detest unsportsmanlike behavior, and I refuse to support a company that does both.

Shogun 2, which has an AMD logo pop up when launching the game, runs like shit on my nVidia card compared to an equivalent AMD card.

I'm not disagreeing with you or defending nVidia, but AMD is not innocent either.
 
Shogun 2, which has an AMD logo pop up when launching the game, runs like shit on my nVidia card compared to an equivalent AMD card.

I'm not disagreeing with you or defending nVidia, but AMD is not innocent either.

I think what people should remember is that these things happen with closed platforms: code this way, and the game will run better on X hardware as opposed to Y. It's pretty ridiculous, regardless of who does it. These game studios are usually run by pricks and are only in it for the $$, so taking some extra hardware/cash incentives in order to favor a particular piece of hardware is very common. The game studios deserve as much blame as AMD/NV do.

Nvidia is far dirtier in this respect, but on occasion AMD partakes in the poo-flinging as well.
 
Shogun 2, which has an AMD logo pop up when launching the game, runs like shit on my nVidia card compared to an equivalent AMD card.

I'm not disagreeing with you or defending nVidia, but AMD is not innocent either.

I know that, but there aren't that many games out there with AMD logos on them that I can think of. Many of the games I've played have TWIMTBP logos on them.

It tells me which company throws the most support (aka "money") around to get games running best on its video cards. AMD doesn't have a lot of money, as we can tell from the Intel-AMD CPU competition (if there ever was any...).

I'd just like to see a game have every setting available to me, from AA to physics calculations, and run at the best possible performance off my card, regardless of whether I have an AMD video card installed or not. Yeah, AMD may not be as innocent as they appear to be. However, Nvidia is more like the muscle-bound jock getting all the ladies and pushing weaklings aside, compared to AMD being that freckle-faced prankster hiding in corners, not making much of an impact on anyone else.
 
Nvidia's logo should be more like "The Only Way It Can Be Played".

This is coming from an Nvidia user, because BF3 runs best on Nvidia. I would prefer a choice, thanks. Not cool with this issue. I would love to get a new card like a 7950 with more VRAM, but I'm afraid to until this scene clears up a bit.

BF3 plays far better on the 7970, unless you're referring to something else (and I have compared both directly). Point taken though, we don't need Nvidia's crap middleware in games. What's next, Glide? Thanks but no thanks, NV - let's face the reality that 95%+ of AAA games are multi-platform.
 
I just do not think that the ATI cards are allowed their full potential compared to, say, the equivalent new Nvidia card, whenever the hell that gets released. I want a bit more resolution on the picture, if you know what I mean.

Looks like I will be going to ATI regardless, though. I don't think Nvidia is close to releasing their new cards, and a 580 3GB would be an expensive near-sidegrade for me.
 
I may have to go AMD this time as well; not sure if I can hold out much longer for Kepler, especially when there's nothing but rumors and no concrete info/specs.

Once the 7970 Twin Frozrs come out, I may just have to pick one up.
 
Read the benchmarks for video cards and you can see that many (not all) games that carry the "The Way It's Meant to Be Played" logo run better on Nvidia cards than on AMD. Something is going on within the game's code that makes it seem like it's INTENTIONALLY crippled on a non-Nvidia video card. That's unfair competition.

I'm not saying this for sure isn't the case, but I think TWIMTBP is more like nVidia having worked more closely with the game developers to get the game running well on their hardware. AMD doesn't do that as often (although I have seen a similar AMD logo on games before), and therefore they usually seem to have more issues at a game's launch than nVidia hardware does. Part of that is AMD's inability to keep up with CFX profiles and such in their driver releases.

I honestly don't think there is some shady piece of code that cripples performance for one side or the other if it's not "certified" by them.
 
This world is rotten. Our politicians are bought and paid for and even the people who make video games are bought and paid for to give certain people advantages. Everyone blames politicians but this corrupt bullshit is everywhere, not just in government.
 
I honestly don't think there is some shady piece of code that cripples performance for one side or the other if it's not "certified" by them.

I don't know that they intentionally code it to cripple the competitor, but they definitely do optimize it in ways that benefit their hardware more. So if Nvidia cards are good at tessellation, they'll fix problems by adding more tessellation, which improves performance on Nvidia cards but not AMD - even though there may be other ways to fix the code that are card-neutral (and yes, I realize there probably aren't any problems that tessellation actually fixes, but you get my drift).
 
I don't know that they intentionally code it to cripple the competitor, but they definitely do optimize it in ways that benefit their hardware more. So if Nvidia cards are good at tessellation, they'll fix problems by adding more tessellation, which improves performance on Nvidia cards but not AMD - even though there may be other ways to fix the code that are card-neutral (and yes, I realize there probably aren't any problems that tessellation actually fixes, but you get my drift).

Right...I think there's a distinct difference between optimizing for your hardware versus intentionally limiting performance on the opposite team's hardware. That was exactly the point I was making as well. :cool:
 
I may have to go AMD this time as well; not sure if I can hold out much longer for Kepler, especially when there's nothing but rumors and no concrete info/specs.

Once the 7970 Twin Frozrs come out, I may just have to pick one up.

agreed, and, agreed!
 
If Kepler is fast in some games (because of PhysX, Ageia, whatever) and crappy in others, I'll be picking up AMD this round. I play too many games to get 120fps in one and 25fps in another at equivalent settings.
 
If Kepler is fast in some games (because of PhysX, Ageia, whatever) and crappy in others, I'll be picking up AMD this round. I play too many games to get 120fps in one and 25fps in another at equivalent settings.

I hardly think the disparity will be that extreme.

So far all we've heard is that it (being the high-end Kepler, not so much GK104) can be much faster in PhysX games and "on-par" or slightly faster than Tahiti in non-PhysX games. If this is true, then it's not so much that the nVidia card sucks in non-PhysX games as that it really flies in PhysX games.
 
If Kepler is fast in some games (because of PhysX, Ageia, whatever) and crappy in others, I'll be picking up AMD this round. I play too many games to get 120fps in one and 25fps in another at equivalent settings.

Well, according to the rumors, the AMD card would also be getting 25fps, so your trade-off is 25fps in all games (with AMD), or 25fps in some and 120fps in others (with Nvidia).
 