NVIDIA Purchased AGEIA Technologies

HardOCP News

From the front page:

NVIDIA, the world leader in visual computing technologies and the inventor of the GPU, today announced that it has signed a definitive agreement to acquire AGEIA Technologies, Inc., the industry leader in gaming physics technology. AGEIA's PhysX software is widely adopted with more than 140 PhysX-based games shipping or in development on Sony Playstation3, Microsoft XBOX 360, Nintendo Wii and Gaming PCs. AGEIA physics software is pervasive with over 10,000 registered and active users of the PhysX SDK.

"The AGEIA team is world class, and is passionate about the same thing we are—creating the most amazing and captivating game experiences," stated Jen-Hsun Huang, president and CEO of NVIDIA. "By combining the teams that created the world's most pervasive GPU and physics engine brands, we can now bring GeForce®-accelerated PhysX to hundreds of millions of gamers around the world."

"NVIDIA is the perfect fit for us. They have the world's best parallel computing technology and are the thought leaders in GPUs and gaming. We are united by a common culture based on a passion for innovating and driving the consumer experience," said Manju Hegde, co-founder and CEO of AGEIA.

Like graphics, physics processing is made up of millions of parallel computations. The NVIDIA® GeForce® 8800GT GPU, with its 128 processors, can process parallel applications up to two orders of magnitude faster than a dual or quad-core CPU.

"The computer industry is moving towards a heterogeneous computing model, combining a flexible CPU and a massively parallel processor like the GPU to perform computationally intensive applications like real-time computer graphics," continued Mr. Huang. "NVIDIA's CUDA™ technology, which is rapidly becoming the most pervasive parallel programming environment in history, broadens the parallel processing world to hundreds of applications desperate for a giant step in computational performance. Applications such as physics, computer vision, and video/image processing are enabled through CUDA and heterogeneous computing."

AGEIA was founded in 2002 and has offices in Santa Clara, CA; St. Louis, MO; Zurich, Switzerland; and Beijing, China.

The acquisition remains subject to customary closing conditions.
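To make the press release's "millions of parallel computations" claim concrete: a minimal, purely illustrative CUDA sketch (not AGEIA's or NVIDIA's actual code) in which each GPU thread advances one particle might look like this:

```
// Illustrative only: one thread integrates one particle, so millions of
// particles can be updated in parallel across the GPU's stream processors.
__global__ void integrateParticles(float3* pos, float3* vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Simple explicit Euler step under gravity; a real engine (PhysX, Havok)
    // would also resolve collisions and constraints here.
    vel[i].y += -9.81f * dt;

    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

// Host-side launch for ~1 million particles, 256 threads per block:
// integrateParticles<<<(n + 255) / 256, 256>>>(d_pos, d_vel, n, dt);
```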
 
I wouldn't buy a separate physics card, but integrate it onto a graphics card and I'll buy it. Good move, and good for us consumers.
 
With the games that have Ageia support today, I'm surprised to see this happen. But considering the cash they spend on "developer relations," I won't be surprised to see Ageia physics in every single game that ships a few generations from now, once they've got this physics in their GPUs. That would make TWIMTBP something more than mere framerate optimizations (various games), improved stability (Hellgate: London on an ATI card vs. an Nvidia card) and so on.

Just hope they use this wisely instead of simply locking out ATI/Intel/etc. graphics. Or did they do this to try to get their graphics hardware into the next generation of consoles, since console devs are happily slurping up Ageia code?
 
Idle musings on my part...

Aside from the obvious GPU physics, and the leverage that will provide in bringing physics acceleration to the industry...

I seriously wonder if a chip on the nForce boards could also be added to pump up the nForce platform. The only thing propping it up right now is SLI, and I think many people here will agree that ultimately this is harming SLI adoption and damaging goodwill towards nVidia.

By leveraging an onboard GPU, I wonder if an nForce mobo could be viewed as a great platform for physics acceleration...thereby giving a more logical advantage to go with nForce.
 
Interesting, pretty cool, and where does this leave Havok????

Hopefully Nvidia will find a way to retrofit the physics to the 8800 series of GPUs.
 
Idle musings on my part...

Aside from the obvious GPU physics, and the leverage that will provide in bringing physics acceleration to the industry...

I seriously wonder if a chip on the nForce boards could also be added to pump up the nForce platform. The only thing propping it up right now is SLI, and I think many people here will agree that ultimately this is harming SLI adoption and damaging goodwill towards nVidia.

By leveraging an onboard GPU, I wonder if an nForce mobo could be viewed as a great platform for physics acceleration...thereby giving a more logical advantage to go with nForce.

It would make much more sense to integrate PhysX into the GPU itself, or at least as a separate chip on a graphics card, than into the motherboard... It wouldn't surprise me if Nvidia has far more cards in 3rd-party chipsets than in nForce...
 
Actually, the onboard idea is rather interesting. ATi is already working on letting you have integrated graphics and standalone graphics both working at once and complementing each other (can't recall, but nVidia might be doing something along these lines as well). The present design of GPUs is such that you have stream processors capable of doing just about anything graphics/physics related. Thus, if you have an integrated GPU with stream processors, it can provide baseline physics support while the standalone card picks up the slack. Potentially, in some of the less physics-demanding games, the onboard could handle physics entirely, giving the standalone free rein over traditional graphics.
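A rough sketch of how that split could be expressed in CUDA terms (assuming, purely hypothetically, that the integrated GPU shows up as a second CUDA device; that device numbering is an assumption, not how any shipping driver exposes hybrid setups):

```
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical sketch: run a physics kernel on a secondary (e.g. integrated)
// GPU while device 0 is left free for rendering.
__global__ void physicsStep(float* state, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) state[i] += dt;  // placeholder for a real physics update
}

int main()
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    // Prefer a second device for physics if one exists; otherwise share device 0.
    int physicsDevice = (deviceCount > 1) ? 1 : 0;
    cudaSetDevice(physicsDevice);

    const int n = 1 << 20;
    float* d_state = nullptr;
    cudaMalloc(&d_state, n * sizeof(float));

    physicsStep<<<(n + 255) / 256, 256>>>(d_state, n, 1.0f / 60.0f);
    cudaDeviceSynchronize();

    printf("physics ran on CUDA device %d of %d\n", physicsDevice, deviceCount);
    cudaFree(d_state);
    return 0;
}
```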
 
I never thought I would see consumers happy to see competition go and the cost of PC gaming go up.

What exactly is there to be happy about here? This is like when Intel bought Havok to stop Havok FX: a bigger company buying out a smaller one to kill any threat of innovation to their profit margins. Now we can go back to the stagnation of waiting for CPUs to progress inch by inch while Nvidia makes future GPU requirements even more ridiculous.

Oh happy days…
 
Hmmmm... AMD next for Nvidia? Actually, I take that back... didn't Nvidia take over some CPU company (Stexar) not that long ago? Why would they feel the need to buy Ageia to compete with Intel if they did not have a CPU in the pipeline?
 
I never thought I would see consumers happy to see competition go and the cost of PC gaming go up.

What exactly is there to be happy about here? This is like when Intel bought Havok to stop Havok FX: a bigger company buying out a smaller one to kill any threat of innovation to their profit margins. Now we can go back to the stagnation of waiting for CPUs to progress inch by inch while Nvidia makes future GPU requirements even more ridiculous.

Oh happy days…

The difference? Alone, Ageia probably didn't stand a chance: why should I get a PPU if only a few games support it and I can get the same thing from a GPU or CPU? However, with Ageia under nVidia's belt, I think Intel/Havok have a real challenger, whereas they did not before. Neither nVidia nor Ageia alone could have really done it, but together...
 
I never thought I would see consumers happy to see competition go and the cost of PC gaming go up.

What exactly is there to be happy about here? This is like when Intel bought Havok to stop Havok FX: a bigger company buying out a smaller one to kill any threat of innovation to their profit margins. Now we can go back to the stagnation of waiting for CPUs to progress inch by inch while Nvidia makes future GPU requirements even more ridiculous.

Oh happy days…

Uh, well, I'd rather some company take Ageia's ideas and do something meaningful with them. The alternative was Ageia floundering because of this conundrum:

No one wants to implement support for a card no one has.
But no one wants to buy a card for something without support.

Working things like this into an existing product that people will be buying anyway is of HUGE benefit, because it almost forces developers into supporting it. Ageia never had the horsepower to give companies incentives to use their stuff, whereas companies like Intel, Nvidia, and AMD have significant leverage in the industry and can get support for their stuff implemented.
 
Smart move for the big "N", bad news for consumers... The way I see it, Nvidia now has SLI, its own chipsets, and the biggest competitor to Havok (which is now part of Intel). The only counterpart pushing Nvidia is ATi, which lacks an in-house physics API and, moreover, is missing a chipset of its own...

I can see 3dfx and Ageia employees in the Nvidia cafeteria trading dreams about how the future of gaming and hardware would have turned out had they continued their quests separately!
 
Smart move for the big "N", bad news for consumers... The way I see it, Nvidia now has SLI, its own chipsets, and the biggest competitor to Havok (which is now part of Intel). The only counterpart pushing Nvidia is ATi, which lacks an in-house physics API and, moreover, is missing a chipset of its own...

I can see 3dfx and Ageia employees in the Nvidia cafeteria trading dreams about how the future of gaming and hardware would have turned out had they continued their quests separately!

I'd wait and see what Intel does with Larrabee when it hits in '09. Wouldn't surprise me if they do graphics and physics on that thing, or graphics on the Larrabee and physics on the CPU. Intel's not stupid (like AMD has been as of late).
 
It would make much more sense to integrate PhysX into the GPU itself, or at least as a separate chip on a graphics card, than into the motherboard... It wouldn't surprise me if Nvidia has far more cards in 3rd-party chipsets than in nForce...

nVidia is already planning on including an integrated GPU for power savings and hybrid SLI. What I'm imagining is that physics capability is a perfect use (relatively lightweight) for what would otherwise be a dormant integrated GPU in 3d apps.

As it stands, I imagine they wouldn't even need to do much to enable such a feature...if they get physics working on their GPUs then it's almost an obvious step to use the integrated GPU for it.

I don't imagine them developing this to the exclusion of GPU physics, but rather as an additional feature of the nForce based platform. This would allow them to leave one or both GPUs up for graphics rendering while offloading some or all of the PPU work to the integrated video.
 
I understand the rationale behind it, but I also understand what effect this is going to have on the future of PC gaming.

The portion of my upgrade budget for the GPU is going to have to get a buttload bigger; god knows what effect this will have on the mainstream range. The majority of the physics are going to have to be effects until the mainstream performance range catches up, so we're back where we started, only it's GPUs this time.

[edit]

Not forgetting that a GPU physics solution will do jack for the gameplay of a console-oriented game, which makes up the majority these days. Console GPUs won't have all the CUDA and GPGPU tweaks modern ones have, so there will be incompatibility issues.
 
I always wondered when one of the video companies was going to make a bid for Ageia.

A separate chip on the card or built into the GPU would be nice. It's like the old co-processors. I'm not too sure which would work better, but I bet it's just a matter of time before we see something. At least I hope this was done to make a better performance situation and not just to cockblock someone.
 
WooHoo, my physics card is no longer a useless piece of shit, it's now a useless piece of Nvidia shit. You see, the Nvidia moniker makes all the difference :D

Seriously, I hope that Nvidia turns it around and actually does something that will let me use it for more than 1 or 2 games. I am glad that Nvidia bought Ageia, as it means much, much more support for future physics products, but I doubt Nvidia will support the P1 PhysX cards. Hopefully they will, but I will just have to wait and see.
 
It seems to me that the GPU already has all the workload it can handle, and I'm not keen on paying more money for added physics hardware. Meanwhile, Intel's Havok could run on all those extra CPU cores games aren't using yet.
 
It would be cool to see a whole new line of video cards from Nvidia: 8800GTX, 8800GTX/PPU, 8800GT, 8800GT/PPU, etc. Basically we'd get the same choices we have now, except each one of those offers a PPU stuck on it as well. That'd be kinda cool... IMHO
 
Wow, this is first-class news; physics is finally getting a direction to go in.
Not so good news for AMD unless they get some physics tech onboard that competes.
A bit of a rough time for them, as they don't really want to spread the techies they have too thin.
 
Why do people want to put this on the video card? I suppose next you will want your sound acceleration on your video card too... heck why not usb, LAN, etc. (yes, I'm being sarcastic here).

This is better off as an add-on card (cough...PCIE...cough). What would be really nice is if they would raise the bar on the card. Make it so powerful and so full featured nobody would want to be without it (like 3DFX did back when accelerated graphics was in the infant stages).

The video card is for rendering acceleration... let's keep it that way. I personally don't want to see multiple components bundled together. It's the same reason gamers don't buy AIO video-editing gaming cards anymore: every time you upgrade your video card, you have to upgrade your capture component too. If your capture component is separate, you can upgrade 'that' component when you want instead of buying the same capture component over and over again. Also, bundling it together would hurt the consumer IMO, because it could force the user to stay with brand-X video cards, regardless of actual rendering power, because the user can't afford to lose the PPU acceleration.
 
Next step for Nvidia?

Possible buyout of Via and Centaur in the next 2 years since they now have a halfway decent chip? Possibly even buy AMD (and sell off ATI)... if they don't get their act together.

speculation...

I'm definitely happy about this buyout. PhysX was such a great idea on paper... and with TWIMTBP support, it might actually come to fruition.
 
Damn, wish this could have happened sooner. Something in me makes me think that this technology wouldn't become standard until it was on a GPU within a console. I'm a die hard PC gamer but it just feels like games lately have been driven by what they can do on a console.
 
Why do people want to put this on the video card? I suppose next you will want your sound acceleration on your video card too... heck why not usb, LAN, etc. (yes, I'm being sarcastic here).

This is better off as an add-on card (cough...PCIE...cough). What would be really nice is if they would raise the bar on the card. Make it so powerful and so full featured nobody would want to be without it (like 3DFX did back when accelerated graphics was in the infant stages).

The video card is for rendering acceleration... let's keep it that way. I personally don't want to see multiple components bundled together. It's the same reason gamers don't buy AIO video-editing gaming cards anymore: every time you upgrade your video card, you have to upgrade your capture component too. If your capture component is separate, you can upgrade 'that' component when you want instead of buying the same capture component over and over again. Also, bundling it together would hurt the consumer IMO, because it could force the user to stay with brand-X video cards, regardless of actual rendering power, because the user can't afford to lose the PPU acceleration.

There are 2 basic types of physics that need generating in a PC, graphical and physical; both need to be produced to give the overall effect.
A graphics card is best suited for doing the graphics for physics, and a CPU is best suited for the physical calculations describing the motion etc.
Therefore, when NVidia produces a physics solution, it will likely have drivers that use spare cores on a CPU and extra silicon on graphics cards.

Buying AGEIA not only gives NVidia advanced in-house expertise on how to do physical and graphical physics simulations, but it also gives them a license to use the techniques without paying royalties.
And they will be ahead of the game!
A triple win for NVidia, as they were no doubt going to expand the physics side anyway.
 
It was only a matter of time. Stick a PPU on my GPU, and I'll embrace the technology.

If you have an 8+ series nVidia GPU, you already have a PPU on your GPU- that PPU is called a stream processor. The whole idea behind the unified architecture and stream processors was that you can saddle a single stream processor with a single task from a rather wide selection (including physics). Thus, if you have 128 SP's on a card, it can split those 128 however it wants/needs to. Why would you want a separate card that can only do physics? If nVidia allows for SLI between two different cards (ex- 9800GX2 and 9600GT), you could still get your big, expensive, beefy graphics card and then a cheaper card to cover most/all of the physics and even put a bit more towards graphical power when its physics power isn't needed. Having a separate card for physics is like going back a generation (pre-SP's).
 
There are 2 basic types of physics that need generating in a PC, graphical and physical...

???Graphical physics??? ...You're kidding, right?

Graphic effects should not be confused with physics-modified effects.

Graphic effects are just that... graphic effects!!! They have some programming attached to them to animate. The video card is not calculating the "effect", it is rendering the "effect". Do not confuse lighting effects with physics effects either, as that is a rendering process. For example, when a car blows up in a game, all the video card is doing is handling the rendering; the explosion itself is dictated by the programming.

Physics effects are generally those that may change depending on collision or environment variables (the physics calculations may or may not be accelerated).

...and a CPU is best suited for the physical calculations describing the motion etc.

Still kidding, right?

The CPU is not the best for physics calculations. Physics calculations are parallel in nature; CPUs are generally serial.

Buying AGEIA not only gives NVidia advanced in-house expertise on how to do physical and graphical physics simulations, but it also gives them a license to use the techniques without paying royalties.

I'm reading this 2 different ways... (so I'll agree and disagree at the same time.)

Ageia's PhysX was basically a free API, so royalties were not being paid. However, now that a new company owns it, that might change... Not a plus in my book.

It will, however, give them the right to improve upon the existing design without worry of patent infringement... I see this as a plus.
 
If you have an 8+ series nVidia GPU, you already have a PPU on your GPU- that PPU is called a stream processor. The whole idea behind the unified architecture and stream processors was that you can saddle a single stream processor with a single task from a rather wide selection (including physics). Thus, if you have 128 SP's on a card, it can split those 128 however it wants/needs to. Why would you want a separate card that can only do physics? If nVidia allows for SLI between two different cards (ex- 9800GX2 and 9600GT), you could still get your big, expensive, beefy graphics card and then a cheaper card to cover most/all of the physics and even put a bit more towards graphical power when its physics power isn't needed. Having a separate card for physics is like going back a generation (pre-SP's).

Although it's technically correct that stream processors could be programmed for PPU functions, how efficient is that going to be vs. a dedicated PPU card? With your logic, why get an add-on video card when a video card built into the motherboard can render an image? The answer is simple: dedicated cards usually focus on specific tasks better than AIO solutions. Plus there's the whole upgrade thing I spoke about in a previous post, which I won't get into here. I personally want my video card's SPs handling rendering and not being bogged down by full-scale physics.
 
Although it's technically correct that stream processors could be programmed for PPU functions, how efficient is that going to be vs. a dedicated PPU card? With your logic, why get an add-on video card when a video card built into the motherboard can render an image? The answer is simple: dedicated cards usually focus on specific tasks better than AIO solutions. Plus there's the whole upgrade thing I spoke about in a previous post, which I won't get into here. I personally want my video card's SPs handling rendering and not being bogged down by full-scale physics.

It's not his logic; that's part of its design.
Seriously, read up on this.
 
Man, Ageia's crap is horrid. I don't know if it's the game developers or the hardware, but after this long I can't ever justify touching Ageia's PhysX or 99% of the games that use it.
 
It's not his logic; that's part of its design.
Seriously, read up on this.

If I hadn't read up on any of this stuff, I wouldn't have commented.

I'm talking about the effectiveness of running a single video card as a GPU and a PPU at the same time. We've all probably seen tech demos of both brands' cards being used as a PPU; however, that was not going to see the light of day until the next major revision of DX... well, at least until now, maybe.

For the near future, the PPU really needs to be separate from the main video card. Not saying you can't make a GPU work as a PPU; the "mode switching" of a video card between GPU and PPU duty has been detailed in past press releases and would be welcome. However, you're still sitting with 2+ video cards then, so the whole argument about having everything handled by one single card is not really valid.
 
:)
I'll give you an example.

When an explosion occurs, with the basic physics we have been used to, for example, we would get a few rocks and a cloud of dust.
With additional physics processing, you can have many different sizes and types of particles, ranging from mini dust swarms to larger rocks and from trees to splintered wood. These can interact with each other and the environment in more complex ways.

Each object can be destructible when acted upon by a strong enough force from another object, causing it to break into smaller particles which then require drawing.
Wooden splinters can slice into objects, the distance penetrated and the damage done being determined by the pre-programmed physical properties of the object being struck, the splinter of wood, and the velocities at impact.
A CPU will use spare cores to process this information, and the GPU will draw the extra graphical results.

These extra objects will therefore add an extra load on the graphics processor at some point, so extra silicon for the physics-graphics side will be needed. Current-generation graphics cards have a massive number of stream processors, some of which can be used for this purpose.
The graphics processor may also need to play a role in any collision detection.

It makes sense to incorporate these parts of the physics processing on the graphics card, to reduce latency and increase bandwidth for physics data transfers.
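As a rough, purely illustrative sketch of the per-fragment work described above (the struct fields and the breakup rule are invented for the example, not taken from PhysX or any real engine), each debris piece could map to one GPU thread:

```
// Illustrative debris update: each thread owns one fragment. Field names
// and the shatter rule are made up for the example.
struct Fragment {
    float3 pos;
    float3 vel;
    float  mass;
    float  breakThreshold;  // impact speed above which the piece shatters
    int    alive;
};

__global__ void updateDebris(Fragment* frags, int n, float dt, float groundY)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n || !frags[i].alive) return;

    Fragment f = frags[i];

    // Integrate under gravity.
    f.vel.y -= 9.81f * dt;
    f.pos.x += f.vel.x * dt;
    f.pos.y += f.vel.y * dt;
    f.pos.z += f.vel.z * dt;

    // Crude ground collision: a hard enough impact flags the fragment as
    // shattered (spawning smaller pieces is omitted); otherwise it bounces
    // with some energy loss.
    if (f.pos.y < groundY) {
        float impactSpeed = -f.vel.y;
        if (impactSpeed > f.breakThreshold) {
            f.alive = 0;               // would spawn child fragments here
        } else {
            f.pos.y = groundY;
            f.vel.y = -f.vel.y * 0.4f; // inelastic bounce
        }
    }

    frags[i] = f;
    // Positions of live fragments are then handed to the renderer, which is
    // the "graphical" half the post describes.
}
```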
 
Good news; hopefully this will accelerate the release of a die-shrunk PPU on a PCIe interface.

OT - with nVidia adding onboard unified-shader graphics to every motherboard, it would be interesting to see this offered as a low-end hardware-accelerated physics option for those also using discrete video cards.

Hybrid SLI is useless for anyone with a G92-class GPU from a graphics-performance POV, but the onboard graphics could be a very creditable low-end physics accelerator.
 
:)
I'll give you an example.

When an explosion occurs, with the basic physics we have been used to, for example, we would get a few rocks and a cloud of dust.
With additional physics processing, you can have many different sizes and types of particles, ranging from mini dust swarms to larger rocks and from trees to splintered wood. These can interact with each other and the environment in more complex ways.

Each object can be destructible when acted upon by a strong enough force from another object, causing it to break into smaller particles which then require drawing.
Wooden splinters can slice into objects, the distance penetrated and the damage done being determined by the pre-programmed physical properties of the object being struck, the splinter of wood, and the velocities at impact.
A CPU will use spare cores to process this information, and the GPU will draw the extra graphical results.

These extra objects will therefore add an extra load on the graphics processor at some point, so extra silicon for the physics-graphics side will be needed. Current-generation graphics cards have a massive number of stream processors, some of which can be used for this purpose.
The graphics processor may also need to play a role in any collision detection.

It makes sense to incorporate these parts of the physics processing on the graphics card, to reduce latency and increase bandwidth for physics data transfers.

That's dead wrong, and ridiculous. There is no "graphics physics". The graphics card outputs colors, precisely one color for each pixel. Nothing more. It doesn't calculate any physics; it calculates what color goes to what pixel when handed information about the scene by the application through the DirectX runtime.

You're making this up as you go along. There is graphics, and there is physics. The GPU can potentially do physics calculations, and has in the past (see Havok FX), but there is no such thing as "graphics physics." Physics is physics, whatever happens on the screen is part of the scene and is sent to the GPU to render.

There is no extra load on the GPU, as it's simply getting objects in the scene to render and producing the appropriate colors for them, like it always has. It just so happens that these objects can be altered in the scene via physics. That is all.
 