GeForce 8 cards to get PhysX support shortly

Nvidia may integrate a killer NIC and watercool the whole lot. Don't forget the Peltier cooling and 25,000-way SLI. :p
 
You know what's really sad? If I had infinity moneys... I'd buy all that shit. Buy it as quick as they could sell it to me.

So would I, to be honest. I would buy the most overpriced stuff just because I can, and for the little badges that come with it.
 
I'll consider it fine when it's consistent and scales 100%. Otherwise, it's throwing money at diminishing returns.




To each their own, but SLI can mean the difference between games being playable or not at native resolution on a 30" display. SLI isn't for everyone, but it is hardly as bad as the naysayers make it out to be.
 
You know what's really sad? If I had infinity moneys... I'd buy all that shit. Buy it as quick as they could sell it to me.

I'd be running 3 Ultras tri-SLI'ed, all with vapor chills on them. Of course, I'd have to have a second computer to play games on, since that one would be doing nothing but benchmarking 24/7 :D
 
If I had infinite money I would pay Nvidia to make me a quad Xeon board with tri-SLI ;)
 
If I had infinite money I would pay Nvidia to make me a quad Xeon board with tri-SLI ;)

I wouldn't want NVIDIA making the board. I'd rather have an Intel chipset board that could support it through the use of additional or different NVIDIA MCPs.
 
...but why stop at three? Commission a one-off 6x SLI board. Put those extra Extended-ATX slots to good use. ;)

E-ATX form-factor-compliant boards have no more expansion slots than standard ATX boards do. They are larger, yes, but not in that direction.
 
E-ATX form-factor-compliant boards have no more expansion slots than standard ATX boards do. They are larger, yes, but not in that direction.

My dreams are crushed and I look silly to boot. Guess it's time to start the single-slot phase-change head research.
 
My dreams are crushed and I look silly to boot. Guess it's time to start the single-slot phase-change head research.

Well, you could always go with a Skulltrail-style board and four video cards, each water-cooled with a single-slot solution. You could take it up a notch and use a chilled-water system.
 
If I had infinite money, I would donate it to the third world so that there wouldn't be a single human being starving anymore :eek:
 
If I had infinite money, I would donate it to the third world so that there wouldn't be a single human being starving anymore :eek:

Are you kidding me? Many people around the world wouldn't use it wisely.
 
Just a thought...

Would SLI even necessarily be required, or could you just plug in a cheapo low-end 8400 or something and have it do the PhysX stuff?

If possible, that would be even cheaper than the add-on PhysX cards.
 
If I had infinite cash, I would develop the next-gen Internet incorporating VR.
While that was being developed, I would buy up the best game houses and set them programming new VR worlds and games.
In the meantime I would take over Sony, Intel, Microsoft, etc. to develop a new client platform for the virtual world, plus room-filling 3D projector displays with full sensory body armour and smell reproduction.
The body armour can be made weightless, if you have the space, through the use of powerful electromagnets surrounding the room.
I'd then build a massive warehouse to fly around in a new virtual world :D

Now what could you do with that toy!
 
I can't wait for them to write PhysX profiles for all my games to get a stunning 1 FPS improvement.

PhysX was a worthless technology to begin with... it's amazing to see people get excited about it now that Nvidia is incorporating it.

The PhysX card itself was fast and powerful; the problem was the limited market. Developers aren't going to pour money into making levels and such that take advantage of PhysX when doing so drastically cuts their market, or means basically making the same game twice (once with a ton of physics for PhysX, once for the CPU).

Now that it will be GPU-accelerated, we will hopefully start seeing more advanced physics engines (realistic cloth movement, fluids, etc.).

It's like all the 8-series people think they are going to automatically get something for nothing.

They are...

Ultimately I think GPU-accelerated physics is a stopgap at best, just like PhysX was. Quad-core CPUs are coming down to mainstream prices, and 8-core CPUs are on the horizon. The easy way for a game to take advantage of that is to use a multi-threaded physics engine, which I expect will take back any ground lost to GPU physics.
 
It's almost free. I've no doubt a few stream processors will be required for PhysX, so there might be a small performance loss, but for a good deal more eye candy.
And you don't have to buy a second card to get it :)
 
Ultimately I think GPU-accelerated physics is a stopgap at best, just like PhysX was. Quad-core CPUs are coming down to mainstream prices, and 8-core CPUs are on the horizon. The easy way for a game to take advantage of that is to use a multi-threaded physics engine, which I expect will take back any ground lost to GPU physics.

Hmm, I disagree. Standard CPUs are ill-equipped to do a lot of parallel calculations like that. Even with 8, 16, 32, or 128 cores, a well-designed GPU will chew through it faster.

It's the same reason the additional cores aren't used for rendering right now. Despite running at upwards of 3GHz, they just aren't suited for that sort of stuff.
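
To put a rough picture on it: this kind of physics is mostly the same tiny calculation repeated across thousands of independent objects, which is exactly what 128 stream processors are built for. A toy sketch of the idea (completely made up, nothing to do with how PhysX/CUDA actually implement it):

[code]
// Toy CUDA sketch: one thread per particle, all integrated in parallel.
// Purely illustrative -- names and numbers are invented, not Nvidia's code.
#include <cuda_runtime.h>

__global__ void integrate(float3* pos, float3* vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    vel[i].y -= 9.81f * dt;       // gravity only, to keep it short

    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

// Host side: launch enough blocks to cover every particle. A G80 runs
// hundreds of these threads at once; a CPU core walks the same array
// a few elements at a time, even with SSE.
void step(float3* d_pos, float3* d_vel, int n, float dt)
{
    const int threads = 256;
    const int blocks  = (n + threads - 1) / threads;
    integrate<<<blocks, threads>>>(d_pos, d_vel, n, dt);
}
[/code]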
 
To each their own, but SLI can mean the difference between games being playable or not at native resolution on a 30" display. SLI isn't for everyone, but it is hardly as bad as the naysayers make it out to be.

I said it doesn't scale well, isn't consistent, and is not economical. If that makes me a naysayer, so be it. I'm sorry folks with 30" monitors have no other choice.

If you're buying an 8800-series graphics card, you're already throwing money at diminishing returns. GTS, GTX, Ultra: those must all suck, because you get less FPS/$ going up the scale. Seriously, 100%? That's a joke. You can't combine two of anything and get 100% scaling; some of it has to go towards the cards talking to each other.

IIRC, two 8800 GTSs in SLI will outperform an 8800 Ultra at stock speeds, and the two of them will cost about the same. But it must suck, because it's not twice as fast as a single 8800 GTS.

:confused: :p


 
Hmm, I disagree. Standard CPUs are ill-equipped to do a lot of parallel calculations like that. Even with 8, 16, 32, or 128 cores, a well-designed GPU will chew through it faster.

It's the same reason the additional cores aren't used for rendering right now. Despite running at upwards of 3GHz, they just aren't suited for that sort of stuff.

Yeah, in another thread I showed that it wasn't until recently that a single CPU clocked at ~2 GHz could finally out-render Quake 2 compared to a Voodoo 2... (and that is some pretty simple rendering, comparatively).


Specialized hardware is always > generalized hardware.
 
Nvidia has been running fluid dynamics simulations and the like on GPUs for more than a year now. I imagine all they had to do was write a wrapper of sorts -- PhysX to CUDA?
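
Something vaguely like this, I'd guess. A completely made-up sketch of what a "PhysX to CUDA" shim could look like -- none of these names are from the real SDK, it's just to show the idea of keeping the same API and swapping the backend underneath:

[code]
// Hypothetical shim: the game calls the same physics API either way,
// and the backend decides whether the step runs on the CPU or the GPU.
// All names here are invented for illustration, not the real PhysX SDK.
#include <cuda_runtime.h>

__global__ void gpuStepBodies(float* state, int count, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        state[i] += dt;              // placeholder for per-body solver work
}

struct PhysicsScene
{
    float* d_state;                  // body state kept in GPU memory
    float* h_state;                  // CPU-side copy for the fallback path
    int    bodyCount;
    bool   useGpu;                   // detected at startup: CUDA-capable 8-series present?

    void simulate(float dt)
    {
        if (useGpu) {
            const int threads = 256;
            const int blocks  = (bodyCount + threads - 1) / threads;
            gpuStepBodies<<<blocks, threads>>>(d_state, bodyCount, dt);
        } else {
            for (int i = 0; i < bodyCount; ++i)
                h_state[i] += dt;    // same placeholder work on the CPU
        }
    }
};
[/code]

If the dispatch really does happen below the API like that, existing PhysX titles presumably wouldn't need to change at all -- which would explain why Nvidia bothered.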
 
Nvidia has been running fluid dynamics simulations and the like on GPUs for more than a year now. I imagine all they had to do was write a wrapper of sorts -- PhysX to CUDA?

I would think Nvidia knew for a while that a wrapper would let PhysX run off the GPU... it also knew people would be more apt to buy Nvidia with that built-in option... they also knew that Ageia wouldn't license it, because they would be dead after that... so Nvidia just bought them.

Intel is doing the exact same thing with Havok and Larrabee,

and one can assume ATI has something hatching as well.

I don't think we will see a major PhysX push until DX11, when it's supported by MS.
 
If I'm not mistaken, there was supposed to be some dedicated software/hardware for physics support; I seem to remember that a big hype-machine thing about the GTXs was physics HARDWARE support, parallel to ATI's GPGPU physics... Maybe nVidia did have hardware physics on the 8800 GTX G80 cards, and now we'll get to benefit for being early adopters. It wouldn't be the first time something like this happened.

Take the PS3, for example: it had the PS2's Emotion Engine chip in the top-of-the-line 60GB launch editions, and now the 80GB has onboard software emulation. I hope something like this happens for us early G80 owners, because the G92 owners shouldn't be able to reap what we sow. That's not a fanboy comment, but it's about time us early adopters don't get fucked and instead get something nice, but who knows. If it's all software then I really don't care.
 
Am I wrong or does this make the 9800GX2 a much more attractive purchase?

Especially for those of us without SLI slots...
 
Take the PS3, for example: it had the PS2's Emotion Engine chip in the top-of-the-line 60GB launch editions, and now the 80GB has onboard software emulation. I hope something like this happens for us early G80 owners, because the G92 owners shouldn't be able to reap what we sow. That's not a fanboy comment, but it's about time us early adopters don't get fucked and instead get something nice, but who knows.

You have already gotten something nice: a year of G80 goodness while the rest of us were stuck with the 7-series or earlier, patiently (or not) waiting for the refresh or the next generation. ;) That should be reward enough. You also have tri-SLI capability while G92 owners don't, for what it's worth.

Anyway, the article said it uses CUDA in hardware to do the non-graphics physics processing, and that the whole 8-series is capable of that, from the old G80 to the lowly G84 to the new G92.
 
I'm not sure where it came from, but I installed the newest beta ForceWare driver last week, along with all my other drivers for my new PC, and I have PhysX stuff in my Control Panel. :eek:

EDIT: I've been informed that when I installed the Unreal Tourney 3 demo, it installed that PhysX stuff.

I'm still not convinced that this is the route to go, however.

1) This will be Nvidia-only, so developers are going to avoid integrating gameplay-critical physics; this just means it will stay more of an eye-candy thing.
2) AFAIK there are still issues with latency when trying to run gameplay-critical physics over the PCIe bus, whether from a PPU or a GPU.
3) CPUs don't have the above issues, and they're becoming far more powerful than games require right now. All current games run absolutely fine on dual-core CPUs, yet we have access to quad-core kit at the same speeds, and we have 8+ core CPUs on the roadmap.

It just makes much more sense to put this load onto the CPU. Games like Alan Wake are supposed to be dedicating one or more cores entirely to physics calculations; if that's successful, other developers may follow suit. That lets them reach the biggest audience, make the physics more than just eye candy, and implement them into gameplay more and more.
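
For what "dedicating a core to physics" looks like in practice, something roughly like this (a made-up sketch, obviously not Alan Wake's actual engine): the solver lives on its own thread ticking at a fixed rate, and the render thread just reads the shared state (a real engine would double-buffer it).

[code]
// Made-up sketch of a dedicated physics thread -- not any real engine's code.
// The physics loop gets a core to itself and ticks at a fixed 60 Hz; the
// game/render thread only consumes the state it produces.
#include <atomic>
#include <chrono>
#include <thread>
#include <vector>

struct Body { float x, y, z, vy; };

std::vector<Body>  g_bodies(10000);
std::atomic<bool>  g_running{true};

void physicsThread()
{
    const float dt = 1.0f / 60.0f;
    while (g_running) {
        for (Body& b : g_bodies) {
            b.vy -= 9.81f * dt;      // placeholder "solver" work
            b.y  += b.vy * dt;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}

int main()
{
    std::thread physics(physicsThread);   // a real engine would pin this to a spare core
    // stand-in for the game/render loop, which would read g_bodies each frame
    std::this_thread::sleep_for(std::chrono::seconds(1));
    g_running = false;
    physics.join();
    return 0;
}
[/code]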
 
It just makes much more sense to put this load onto the CPU. Games like Alan Wake are supposed to be dedicating one or more cores entirely to physics calculations; if that's successful, other developers may follow suit. That lets them reach the biggest audience, make the physics more than just eye candy, and implement them into gameplay more and more.

The problem is that probably more people right now have a GeForce 8-series video card than have a quad-core processor, and both are a minority of gamers, I'd wager. In a couple of years, though, that could definitely be the ticket. :) It's cheaper to get a decent quad-core now than it was to get a good dual-core three years ago!
 
You have already gotten something nice: a year of G80 goodness while the rest of us were stuck with the 7-series or earlier, patiently (or not) waiting for the refresh or the next generation. ;) That should be reward enough. You also have tri-SLI capability while G92 owners don't, for what it's worth.

Anyway, the article said it uses CUDA in hardware to do the non-graphics physics processing, and that the whole 8-series is capable of that, from the old G80 to the lowly G84 to the new G92.

We did indeed really luck out. I bought my 8800 GTXs about the first week they were available, and I'm still using the same cards. Pretty good investment if you ask me. I've had countless hours of gaming on them, and they've been powering my Dell 3007WFP for more than a year; I couldn't be happier. Crysis is really the first game that makes me feel like upgrading. I haven't had this kind of longevity in a video card purchase in years.
 
Still... I think it might also be an idea to offload it to one of the cores on a quad-core or better.

Sure, the CPU is nowhere near as efficient as the GPU when calculating these sorts of things, but Intel is already prepping its 8-core and 16-core designs. What else are we going to use them for? A full hardware TCP/IP stack with real-time per-packet filtering on one core would be nice too. Audio could also get a decent speed and quality boost by putting it all onto a dedicated core.

I'm sure you have used 3DMark06 and witnessed the CPU test. Ever notice that even an OC'ed quad-core only puts down about 2-3 FPS? That's a great demo of how inefficient a CPU is at floating-point work such as graphics. Wonder why it takes so long to rip and encode video and audio? Because CPUs aren't optimized for it. Audio/video/physics would have absolutely horrible quality if run on a CPU core, even if it's a dedicated core. All of those are better run on hardware designed and optimized for the job. Plus it saves energy too ;) But I would agree that a hardware TCP/IP stack on a dedicated core would be useful and probably very good.
 
Hmm, I disagree. Standard CPUs are ill-equipped to do a lot of parallel calculations like that. Even with 8, 16, 32, or 128 cores, a well-designed GPU will chew through it faster.

It's the same reason the additional cores aren't used for rendering right now. Despite running at upwards of 3GHz, they just aren't suited for that sort of stuff.

The problem is added cost for the player.
Which is more common right now: a fast CPU with a slow GPU, or the opposite?
Most store-bought computers seem to focus on CPU power over anything else (nice marketing, I guess). Also, since the majority of gamers (kids) can't afford anything that costs a bundle, standard bundled computers predominate.
Combine all of that with the probable arrival of even more cores (which even now go unused in games) and it makes the most sense to use that source of processing power first, rather than the GPU/discrete solution.
The software houses aren't going to make games that require an Nvidia or ATI card or whatever to run, but they already have to require a CPU.

No one is disputing that the discrete solution is the best; the argument is that "best" isn't enough. Economically viable for the software companies is.
 
The problem is added cost for the player.
Which is more common right now: a fast CPU with a slow GPU, or the opposite?
Most store-bought computers seem to focus on CPU power over anything else (nice marketing, I guess). Also, since the majority of gamers (kids) can't afford anything that costs a bundle, standard bundled computers predominate.
Combine all of that with the probable arrival of even more cores (which even now go unused in games) and it makes the most sense to use that source of processing power first, rather than the GPU/discrete solution.
The software houses aren't going to make games that require an Nvidia or ATI card or whatever to run, but they already have to require a CPU.

No one is disputing that the discrete solution is the best; the argument is that "best" isn't enough. Economically viable for the software companies is.

Companies don't care whether one can afford to game or not, since the market for it is already there, and although growth potential would be nice, it's not a market that makes up a significant overall share of players when it comes to physics support. Another thing is that it would cost much more to create software/technology that emulates physics on spare cores to offload the stress from the GPU; the manufacturers would have to pay more in order for us to have the better experience, which in the case of physics is a fairly negligible point. I think I agree with your main point about the cost, but I don't think these companies care enough about whether we can afford higher-end stuff, and as such they wouldn't care about the added cost, because they sure as hell aren't paying for it.
 
I can only think of the many PPU defenders (well, maybe not many...) who feel really dumb for early-adopting that product without a single piece of showpiece software (sorry, but the free and crappy game they released as a download over a year and a half later doesn't count worth a damn).

Not to toot my own horn here, but I can remember many arguments on the PPU forums here, with people like myself firmly believing that PPUs were a doomed growing pain of introducing real-time, complete physics processing into the world of PC gaming, while being told at the same time that "PPUs are the ONLY product that can really pull this off in real time"... yeah, right. Not many people realize that your GPU is the single most powerful number-cruncher/physics processor in your entire computer.

The PPU was doomed from the moment it came out.

But I gotta say, if Nvidia thinks I'm gonna buy a third video card JUST for physics processing? They can kiss both sides of my ass, since there isn't a SINGLE GAME that uses PhysX worth a damn.
 
I can't wait for them to write PhysX profiles for all my games to get a stunning 1 FPS improvement.

PhysX was a worthless technology to begin with... it's amazing to see people get excited about it now that Nvidia is incorporating it.


It's like all the 8-series people think they are going to automatically get something for nothing.


These must be the same people who thought a magic driver and magic patch would suddenly make Crysis run twice as fast.

For all the insulting you're doing, you seem to be pretty clueless yourself.

The point of physics technologies is not to improve framerates... it's not a faster CPU or GPU. The point is to add to the IMMERSION/experience of the game, by adding an element that hasn't really been there up until now, without necessarily DROPPING your framerate.

If you could take your system and add massive physics calculation power without dropping graphics performance, you've only improved your game and opened up tons of new gameplay possibilities.
 
I said it doesn't scale well, isn't consistent, and is not economical. If that makes me a naysayer, so be it. I'm sorry folks with 30" monitors have no other choice.



:confused: :p



Actually, SLI and CrossFire are not inconsistent... developers are inconsistent in how well they design their games to take advantage of more than one GPU. So it really comes down to them, as NVIDIA and ATI cannot recode their games on the fly for them :)
 
So we could buy a $40 PCIe x1 8400 GS and use that just for PhysX while the 8800 GT does the other stuff?
 
Remember nVidia announcing that all future motherboards would include onboard graphics?

Has anyone heard whether the nVidia 8200/9200 northbridge will be supported by CUDA?

I am pretty sure that an nVidia GPU fabbed at 65 nm running CUDA will be able to perform in the same ballpark as a PhysX PPU fabbed at 130 nm.
 
Why does everyone feel that writing games to support PhysX on GeForce 8s won't happen?

Games have support for plenty of optional components or capabilities already.

EAX 2,3,4,5 is a hardware option, and is enabled or disabled in the game. It plays fine without it, but better with it. The game writers have written the game to run with or without it.

IIRC, Doom was written to use software rendering, 3dfx, or OpenGL. That sounds kind of like writing critical game components differently to support optional hardware, no?

I remember reading in the BioShock tweak guide about how different hardware rendering features were enabled when low/med/high was set on various options. Don't you think that also represents coding for hardware support of optional features?

If pretty much everyone with nVidia 8-series hardware has PhysX when the dust settles, that's probably worth writing optional code for in future games, or in upcoming patches of current games. I don't see why we couldn't have a "PhysX support - YES/NO" or "Physics - software/PhysX/??" option.

Hopefully the drivers or the game software will give us some knobs to tweak to control how many resources we want to make available for physics at the expense of other rendering.

On, say, HL2 or COD4, I bet my G92 GTS has some cycles to spare at 1600x1200, so I may allow significant resources for physics instead of cranking AA/AF to the max. On Crysis, however...
 
Why does everyone feel that writing games to support PhysX on GeForce 8s won't happen?

Games have support for plenty of optional components or capabilities already.

EAX 2,3,4,5 is a hardware option, and is enabled or disabled in the game. It plays fine without it, but better with it. The game writers have written the game to run with or without it.

IIRC, Doom was written to use software rendering, 3dfx, or OpenGL. That sounds kind of like writing critical game components differently to support optional hardware, no?

I remember reading in the BioShock tweak guide about how different hardware rendering features were enabled when low/med/high was set on various options. Don't you think that also represents coding for hardware support of optional features?

If pretty much everyone with nVidia 8-series hardware has PhysX when the dust settles, that's probably worth writing optional code for in future games, or in upcoming patches of current games. I don't see why we couldn't have a "PhysX support - YES/NO" or "Physics - software/PhysX/??" option.

Hopefully the drivers or the game software will give us some knobs to tweak to control how many resources we want to make available for physics at the expense of other rendering.

On, say, HL2 or COD4, I bet my G92 GTS has some cycles to spare at 1600x1200, so I may allow significant resources for physics instead of cranking AA/AF to the max. On Crysis, however...


Why anyone would choose higher than 2xAA/8xAF with a big monitor and a high res in an FPS shooter escapes me... YOU WILL NOT SEE THE DIFFERENCE IN REAL TIME. :)

So a few resources for PhysX, as you stated... should be an easy option.
 