AIPU

Brent_Justice

Moderator
I'm not even sure where to post this, heh, it is certainly unique and new. So I'll post it here in this thread since it has good discussion on dedicated processors.

I give you, the AIPU

http://www.aiseek.com/

AIseek's flagship product, the Intia™ processor, is the first dedicated processor for artificial intelligence (AI). By accelerating and optimizing behavioral computations, the Intia processor empowers developers to build entirely new game worlds, populated by intelligent life.

Demos

http://www.aiseek.com/Demos.html
 
Once I see benchmarks, then I'll know it's not just vaporware or another fancy-looking card that does nothing... eventually we won't have high-power CPUs, just a bunch of add-in cards for a low-power CPU to coordinate...
 
The concept is great...and I'd love for someone to get a working version for a game like FF XIII or something :D
 
Interesting. I love single-player games, so this could really make for some interesting stuff. Unlike the PPU (which I have decided is unnecessary, but that's another thread), this will be more than a processor and some RAM slapped on a PCB. This processor will have to actually have an AI decision-making engine and such. I can only foresee two problems:

Unlike the PPU, processed information will have to get back to the CPU for this to work. Is the PCI(e) bus fast enough for that?
The other problem I can see is that every game's AI players will think exactly the same. FPS games will be stereotyped even more, since you will know how to counter them all.

All in all, this is an interesting concept, and I'm very intrigued to see how it pans out. :D
 
I personally think that this has more potential for adding uniqueness and believability than the PPU will...

I mean, sure with the PPU you might be able to shoot through a hole you've made in a brick wall (grenade launcher anyone?) but if the enemies you're aiming at are running around like a bunch of scripted, headless chickens... what's the point!

I wonder if this'll have anything to do with NPC interactions as well... Like, even in newer RPGs NPCs have a list of like 3-4 things to actually say, and you can cycle between each set of phrases. Pretty boring!

A PCIe 1.x x1 slot has about 250 MB/s of bandwidth each way (500 MB/s total), right? A x4 slot should be able to do around 2 GB/s combined... More than enough IMO.
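Back-of-the-envelope, assuming PCIe 1.x signalling with 8b/10b encoding (the lane counts below are just examples):

#include <cstdio>
#include <initializer_list>

int main() {
    // PCIe 1.x: 2.5 GT/s per lane, 8b/10b encoding -> 2.0 Gbit/s usable per direction.
    const double gbit_per_lane = 2.5 * 8.0 / 10.0;              // 2.0 Gbit/s
    const double mb_per_lane   = gbit_per_lane * 1000.0 / 8.0;  // ~250 MB/s per direction

    for (int lanes : {1, 4, 16}) {
        std::printf("x%-2d: %5.0f MB/s per direction, %5.0f MB/s both ways\n",
                    lanes, lanes * mb_per_lane, 2 * lanes * mb_per_lane);
    }
    return 0;
}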
 
I hope all these little add-in cards eventually give Intel/AMD enough incentive to develop an all-in-one solution on the CPU.

/me wonders what's next ;o)
 
InorganicMatter said:
The other problem I can see is that every game's players will think exactly the same. FPS games will be stereotyped even more, since you will know how to avoid them all.

Actually, if their FAQ tells the whole story, AI could be just as diverse with an AIPU. If the card simply accelerates common low-level algorithms, then there could still be high-level routines that create more complex behaviors; they simply wouldn't have to wait so long on the calculation of the low-level pathfinding, line of sight, etc.
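To make that concrete, here's a rough sketch of the split I mean. The aipu_* calls are made-up names stubbed with trivial CPU fallbacks so it compiles; nothing here reflects AIseek's actual SDK. The card would answer the cheap low-level questions, while the interesting, game-specific decisions stay in ordinary CPU code:

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Hypothetical low-level queries the accelerator would answer.
// (Names/signatures are assumptions; stubbed here so the sketch compiles.)
static bool aipu_line_of_sight(const Vec3& a, const Vec3& b) {
    return std::fabs(a.y - b.y) < 2.0f;                  // stand-in for a real raycast
}
static float aipu_path_cost(const Vec3& a, const Vec3& b) {
    return std::fabs(a.x - b.x) + std::fabs(a.z - b.z);  // stand-in for a real A* search
}

// High-level behaviour: stays on the CPU and differs from game to game.
enum class Action { Attack, Flank, Retreat };

static Action decide(const Vec3& self, const Vec3& enemy, float health) {
    if (health < 0.25f)                  return Action::Retreat;
    if (aipu_line_of_sight(self, enemy)) return Action::Attack;
    return aipu_path_cost(self, enemy) < 50.0f ? Action::Flank : Action::Retreat;
}

int main() {
    Action a = decide({0, 0, 0}, {10, 1, 5}, 0.8f);
    std::printf("chosen action = %d\n", static_cast<int>(a));
    return 0;
}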
 
PPUs, Blu-ray/HD DVD, and now this...

until I can see real benefits, and those benefits in games that I play, it is of no import to me.

The possibilities are interesting, and hold promise. I will take a look as this develops, just like I do with the PPU and the next-gen format debacle.

as long as Infinity Labs is not involved :p
 
Too bad it's not both an AIPU and a PPU - it would be much more appealing. As it is, people are going to run out of motherboard PCIe/PCI slots at this rate. You've got the option of popping in a sound card (X-Fi), a PPU, a Killer NIC, and now an AIPU; good luck trying to fit all of that AND SLI/Crossfire in one case to build the ultimate ultimate gaming rig.

EDIT: Although, I must say that I like the idea. Current AI is too "dumb" for my tastes, and ramping up the difficulty doesn't actually make it smarter. It just buffs the AI while nerfing you (RPGs) or gives it inhuman traits (FPSes, RTSes). I don't want to play against something that wins only because it cheats; I want to play against something that beats me because of its skill. At this point, the only place to find that is by playing against other people, but that isn't always an option (SP-only games, those lonely hours when you have no web access, etc.).
 
Ozymandias said:
Too bad it's not both an AIPU and a PPU - it would be much more appealing. As it is, people are going to run out of motherboard PCIe/PCI slots at this rate. You've got the option of popping in a sound card (X-Fi), a PPU, a Killer NIC, and now an AIPU; good luck trying to fit all of that AND SLI/Crossfire in one case to build the ultimate ultimate gaming rig.

This is a good point. Even by combining PCIe x1 and PCI cards, there still may not be enough room/slots for all of these combinations. It is a growing problem that needs to be addressed somehow, IMO.
 
I thought that we already had one of these. What was it called? Oh yeah, a CPU.



---alternately---



Wait, guys! I have a novel idea: We make one processor that's really good at processing things like AI routines, and another one that's good at rendering video and physics. We'll call one a "Central Processing Unit" (CPU), and the other one a "Video Card". I'm sure that with this scheme, consumers will be able to customize their personal computers without a fear of over-complication....etc.
 
The "available expansion slots" problem comes to mind... IMHO just like the ppu, this is another gadget that needs to be built into an existing addon card, like say a video card or even a sound card... Many of us just don't have room for all of it.. EATX here we come..
 
Quad SLI, PPUs and now this. They must think we don't have enough things to spend our money on already :rolleyes:
 
I can definitely see the possibilities of this card, but it doesn't really make sense for it to be a completely separate card.

Civilization 4 is brutal once the map starts filling up. Late-game turns can take several minutes if there are 7 or 8 AI players left. This card could definitely speed that up.

However, with quad-core processors due out at the end of the year, wouldn't it make more sense to offload the line-of-sight and pathfinding work to one of the other cores? That's essentially all this card is doing (although I don't know whether it has a limited but specific instruction set). Multiple cores would also help with physics processing. Most games today don't take full advantage of multiple cores, if they use them at all.
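To make the multi-core argument concrete, here's a minimal sketch (generic modern C++, not from any particular engine) of kicking a pathfinding job onto a spare core with std::async while the main thread keeps running the game loop:

#include <cstdio>
#include <future>
#include <vector>

struct Path { std::vector<int> nodes; };

// Placeholder for the game's real pathfinding routine (e.g. A* over a nav grid).
static Path find_path(int start, int goal) {
    Path p;
    for (int n = start; n != goal; n += (n < goal ? 1 : -1)) p.nodes.push_back(n);
    p.nodes.push_back(goal);
    return p;
}

int main() {
    // Kick the expensive search off to another core...
    auto pending = std::async(std::launch::async, find_path, 0, 10000);

    // ...while the main thread keeps simulating/rendering frames here.

    Path p = pending.get();  // collect the result when the unit actually needs to move
    std::printf("path length: %zu nodes\n", p.nodes.size());
    return 0;
}

Whether that beats a dedicated chip is exactly the open question, but a second core is effectively free once you already own it.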

My money is on multiple cores, not separate cards.
 
Sovereign said:
Once I see benchmarks, then I'll know it's not just vaporware or another fancy-looking card that does nothing... eventually we won't have high-power CPUs, just a bunch of add-in cards for a low-power CPU to coordinate...

..or the other way around. A lot of the add-in cards will be dropped in favor of a multicore CPU environment.

As it is, the PPU and the AIPU could just as well be processed on another core of the CPU, which would have the added benefit of saving slots for the cards that really are necessary, like dedicated sound and GPUs - two tasks that (at the moment) really require specialized hardware, as opposed to the AIPU (or the PPU, for that matter).

I'd rather we begin to exploit the raw power of dual (and quad+) core CPUs than fill our towers with useless, expensive, high-tech junk.
 
Again, offloading these tasks to multiple CPU cores is still a lot slower and gives a lot less processing power than a dedicated chip.

For 3D and physics it is a tremendous difference, the difference between having real-time gameplay or not. That's why we have dedicated 3D accelerators now: CPUs cannot render current 3D graphics in real time.

I don't know how a dedicated AI chip would compare to CPU performance, but I'm guessing there is similar revolutionary potential in a dedicated chip vs. the CPU.
 
Why would that be slower?

The core is right there. Not all the way down at the other end of the PCI bus.
 
Great, more shit to add another few hundred £ to the cost of a pc. :mad:
 
I heard before about a company in Israel planning to make an AI chip ("AIS1") for AI. Is this the same company? How far along are they, and how much dev support do they have? Or are they at an early stage, only just unveiling their business plans?

Yeah, it will cost more, but what will it add to games? What will it do for next-gen games?

I like the idea.
For destructible environments, which drastically impact pathfinding, a fixed, precomputed pathing solution won't do; that low-level AI has to be computed in real time.
Without this kind of support, a game like JTF would have to drop gameplay physics in favour of effects-only physics, because of the pathfinding limitations on a dynamic map with debris blocking routes.

I guess they use a terrain representation stored in the AI card's local memory, together with the AI objects, so that AI load can be computed with the high bandwidth of that local memory.
The per-frame setup and result return need little bandwidth compared to the computational bandwidth.
That computational bandwidth is more of a bottleneck for the CPU than the PCI bus is for a dedicated add-on card.
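As a back-of-the-envelope illustration (every number below is a made-up assumption, not an AIseek figure), the per-frame traffic over the bus can stay small even while the work done against the card's local memory is huge:

#include <cstdio>

int main() {
    // Assumed workload: 500 agents repathing every frame at 60 fps.
    const double agents = 500.0, fps = 60.0;

    // Over the bus per agent per frame: a query (start/goal, ~32 bytes)
    // plus a returned path (~1 KB).
    const double bus_bytes_per_sec = agents * (32.0 + 1024.0) * fps;

    // Against local memory per agent per frame: ~10,000 node expansions,
    // each touching ~32 bytes of terrain/grid data.
    const double local_bytes_per_sec = agents * 10000.0 * 32.0 * fps;

    std::printf("bus traffic   : %.1f MB/s\n", bus_bytes_per_sec / 1e6);   // ~31.7 MB/s
    std::printf("local traffic : %.1f GB/s\n", local_bytes_per_sec / 1e9); // ~9.6 GB/s
    return 0;
}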

If the future brings more add-ons: add-ons are optional at first, and mobo makers have to jump on it.
Most mobos have six slots. So a mobo should have the most basic things on board, and enough slots for dedicated hardware.
That would be
1 or 2 graphics-card slots with enough room
and 3 slots for dedicated hardware, also with enough room.

The problem is that mobos have a lot of slots, 6 to 8, but crammed together, so that only a 10 W card with a passive heatsink can use them.

A top high-end graphics card takes up three slots' worth of space: one for the PCB, one for the heatsink/fan, and one more for airflow.

Yes, E-ATX sounds interesting if you want it all:

1 graphics card
1 physics card
1 AI card
1 sound card
1 optional GPGPU card

Not a problem, unless you use triple or quad GPUs.

If some games put that extra AI and physics power to good use, I would put an AIseek card and a PPU before a dedicated sound card.

A mobo with a lot on board, even with dedicated sound like the VIA Envy24 or better, would be interesting, even more so with a breakout-box addition.

So with all that add-on mayhem from
Ageia
AIseek
ATI GPGPU HavokFX

the choice of mobo becomes important.
 
Brent_Justice said:
Again, offloading these tasks to multiple CPU cores is still a lot slower and gives a lot less processing power than a dedicated chip.

For 3D and physics it is a tremendous difference, the difference between having real-time gameplay or not. That's why we have dedicated 3D accelerators now: CPUs cannot render current 3D graphics in real time.

I don't know how a dedicated AI chip would compare to CPU performance, but I'm guessing there is similar revolutionary potential in a dedicated chip vs. the CPU.

As long as the PPU is a niche product, and PPU developers have a hard time convincing hardcore gamers of its necessity in games, support from game developers will be spotty. And support is the keyword for these new niche products.

The AIPU may suffer the same issues as the PPU has, and the CPU market as a whole is moving towards multicore solutions, meaning the CPU has the 'availability' advantage for both the consumer and the developer.
The benefits of a dedicated add-in card would have to be extreme for it to earn a place in an upgrade cycle that is already quite expensive. Or the cost of the AIPU card itself would have to be significantly lower than the current PPU card, especially if much of the same effect could be achieved with multicore optimizations.

I could be wrong, and maybe this AIPU card is the best thing to happen to the gaming community.

But to me it seems like the wrong path to pursue, considering the options.
 
Many believe that desktop microcomputer architecture will inevitably move in an asymmetric parallel processing direction, meaning there will be many less-powerful processors (CPU, GPU, PPU, etc.) that each handle specific tasks more effectively and more efficiently than the system as a whole could without them. Of course, there will always be a place for a very powerful CPU (ALU+FPU) to perform typical desktop application functions and to provide backup for the system's overall number-crunching and data-collation needs, but many think we will see a trend of removing load from the CPU and moving it toward specialized processing units.

Personally, I am always excited to see engineers advancing the art and science of gaming and game development. Whether or not the AGEIA PhysX or AISeek Intia processors are successful, they have introduced a concept that has merit: providing dedicated processing power for special tasks.

Our need for processing power is still growing, and is growing out of proportion with the power of our CPUs. Personally, I think the best way to overcome this problem is asymmetric parallel processing, as described above. It all hinges on cost and usability, of course. The best tech in the world isn't worth a damn without a reasonable price, just as it isn't worth anything if nobody can use it.

-Mark.
 
I think this is a good idea. I'm not going to complain about more cards and more heat. I'm going to be glad that new technology is being developed that has potential to improve our gaming experience.

IMO, the impact of this chip on gaming and the eventual impact of a PPU will be about the same.

As the above poster said, I think that we are seeing a trend towards using separate chips for specific tasks rather than one chip for all tasks. We've had just a GPU and CPU for a while now. But recently, sound, physics, networking, and now AI have all gotten a separate processor. Graphics was done on one processor, but now two- and four-GPU setups have become options. Even the CPU is no longer one unit; it is a collection of several separate units that each work on a single task at a time. No doubt about it, parallel processing is here to stay. As much as we may want to see it, we will probably never have everything combined onto one chip until we get quantum processors. This, however, is probably a good thing. Separate, dedicated chips all working in tandem will usually work better than one chip trying to do everything.

Of course, there is the problem of not having enough room for everything. This will be solved in one of two ways:

The ideal solution would be for a motherboard to have a socket for a CPU, GPU(s), PPU, AIPU, DSP, and NIC, and memory banks for each. Accomplishing this would take major cooperation between hardware companies, however. After getting past the initial hurdle of the massive infrastructure change required to implement the multi-socket system, companies would have to coordinate their socket and memory-type changes, to avoid forcing you to buy a new mobo every six months. It is unlikely we will see anything like this in the near-future.

The other solution is a move to E-ATX, or even another whole form factor. This is what I predict we will see within a few years. DFI, ASUS, or Gigabyte will release a massive E-ATX board with seven or eight slots. Eventually, others will follow suit, and the industry will switch to E-ATX. We will continue with our present add-in board upgrade cycle. This option has the disadvantage of more expensive upgrades, as you have to replace the entire add-in board every time, memory and all. But many cases already support E-ATX, and it has other advantages, such as providing more airflow to the CPU and other chips.

Think about it. Windows is going modular after Vista. The CPU is being broken up into many separate cores. Two GPUs are more popular now, and four GPUs are being introduced. We have a separate processor for physics, sound, and networking. Soon, we will have a separate AI processor. The move towards specialized processors is unstoppable at this point. All that remains to be seen is how heat and space problems will be taken care of. And either of the two solutions I posted above would work fine.

What else could we use a dedicated processor for? Video encoding? Security?

Perhaps the ultimate solution would be motherboards available with a certain number of sockets. One for the CPU, and others for dedicated tasks. Also on the mobos would be integrated sound and video chips with very limited power. If you only wanted to surf the internet, you could buy a mobo with only one socket for a CPU. Or, if you wanted to do more things with your computer, you could buy a mobo with more sockets. Each socket would be universal and would have its own memory bank, which would also be universal. When you wanted to add advanced AI capabilities to your computer, for example, you would buy an AI chip and some memory for it. You would then put the chip into one of your board's empty sockets, and put the memory in the memory bank for that socket. And, if you wanted to play games, you would buy one, two, or four GPUs and put them each in an empty socket.

Unfortunately, the chances of us seeing something like that are virtually zero.
 
I'm not convinced you'll see AI get any smarter with hardware acceleration.

It's a different situation to graphics or physics. It's not a problem you can solve by throwing more processing power at it. If you want more intelligent AI you need more sophisticated algorithms, and as far as I can tell, even the best in use today aren't putting any strain on modern CPUs.

An AIPU might serve a purpose in large-scale situations (like the Total War series, Supreme Commander, or the Elder Scrolls games), but I can't see a dedicated processor making AI any smarter.
 
LuminaryJanitor said:
I'm not convinced you'll see AI get any smarter with hardware acceleration.

It's a different situation to graphics or physics. It's not a problem you can solve by throwing more processing power at it. If you want more intelligent AI you need more sophisticated algorithms, and as far as I can tell, even the best in use today aren't putting any strain on modern CPUs.

My thoughts exactly. The challenge in making good computer AI isn't having available processing power... it is actually writing algorithms that create a believable impression that the computer opponents are "thinking." Writing AI for things like chess comes down (more or less) to sheer processing power, but in typical video games (like an FPS) that is not the case.

Now, perhaps having a dedicated chip would make that process easier by allowing programmers to take more "brute force" approaches to making believable AI. Maybe. Just trying to think of any sort of justification.
 
Brent_Justice said:
Again, offloading these tasks to multiple CPU cores is still a lot slower and gives a lot less processing power than a dedicated chip.

See here's my problem, this seems to me to be another product creating a need where none exists.

Yeah, it's faster, but does that really matter?

When playing, say, Half-Life 2, I never found myself thinking: "Man I wish the AI and Physics were more and faster!", and that was on an old-fashioned single-core A64 3800+ CPU running a 32-bit process.

So, Civ4 or whatever chugs a bit towards the end game; would it still chug if the same code could fully use that second 64-bit core I already have sitting effectively idle?

Now these guys are coming along and saying that as well as an E6800 or whatever I need this to make my gaming experience better?

I don't think so.

From a purely logistical/practical standpoint I think these discrete add-in cards are doomed (in my opinion, I'm not an expert). Games are getting more and more expensive to make, and finding and training staff to use these toys, then coding them in is going to add time and expense to the development process for sweet FA benefit, because only a tiny % of even hard-core gamers are going to buy these things.

Why would developers throw away money like that?

Show me the games, screw benchmarks. In current games the GPU, not the CPU, is the bottleneck on how pretty everything looks. Offloading from the CPU won't make any difference unless the game is coded to be especially AI/physics-heavy to begin with, which is kind of stupid unless you already know that the vast majority of your target market: A) has the accelerator; B) wants to play your game; and C) there are enough of them in A+B to make a profit.

So maybe they parallel develop two different modes?

Nope, they won't do that, a fight against 10 dudes in an FPS is a whole other game than a fight against 1000 dudes, they'd have to effectively design a whole other game to make use of your physics/AI acceleration, they ain't gonna do that.

So in summary, nice Idea, but it ain't gonna fly.
 
HOCP4ME said:
..Think about it. Windows is going modular after Vista. The CPU is being broken up into many separate cores. Two GPUs are more popular now, and four GPUs are being introduced. We have a separate processor for physics, sound, and networking. Soon, we will have a separate AI processor. The move towards specialized processors is unstoppable at this point. All that remains to be seen is how heat and space problems will be taken care of. And either of the two solutions I posted above would work fine...

You still need effective software to support this new dedicated hardware. That is trickier than it sounds: more dedicated hardware requires more programming, so the cost will increase.
And as MartinX mentioned, our dual-core machines' second core is effectively idle in most games; there is a lot of processing power left in the modern CPU as it is.

And I agree with the point that you can't just throw hardware at the AI; you need 'smarter' coding for that (pun intended).
 
Klintor said:
I thought that we already had one of these. What was it called? Oh yeah, a CPU.



---alternately---



Wait, guys! I have a novel idea: We make one processor that's really good at processing things like AI routines, and another one that's good at rendering video and physics. We'll call one a "Central Processing Unit" (CPU), and the other one a "Video Card". I'm sure that with this scheme, consumers will be able to customize their personal computers without a fear of over-complication....etc.

The CPU is a general purpose processor, which means that it does everything. The AIPU is dedicated to processing AI, which makes it several times more efficient and faster than a CPU at AI.
 
Am I the only one who has visions of Judgment Day with this? It's basically Skynet on a card.
 
It is completely smoke and mirrors. That thing will never fly. The application of AI is very different from game to game. AI is part black magic and part art. AI is just a common word we use for all the dirty tricks and clever programming done behind the scenes so that a computer game can field somewhat challenging actors without just giving them super weapons or running them on rails. It would only make sense if there were a library of common AI routines, but there isn't one. The reason? There is almost no situation where you can reuse the AI from another game.
 
AIPU, PPU, NPU... all these have yet to find their 'killer app'.

Look back at the 3dfx days: the Voodoo card was only as successful as it was because of Quake at the time of its release.

Until we see something like HL2:Ep2 taking advantage of these cards, with a substantial difference in gameplay - not the AGEIA PPU's 'few more boxes' in GRAW - people will not warrant a $150-250 expenditure.

I've no idea what the APIs are like for the AIPU cards, but I know the GPU-physics and PPU cards already use established engines (Havok). It's unlikely that many software developers will go out on a limb to program against a proprietary API/engine.
 
MartinX said:
See here's my problem, this seems to me to be another product creating a need where none exists.

There is always whining about bad AI. And forget FPS; there are other genres that are more AI-intensive. New features like largely destructible environments or geo-deformable terrain have a big impact on pathfinding, for FPS games too. This low-level AI sounds simple, but if a lot of units have to be processed in a life-like way, simulating real behaviour, it's AI to the max.
AI as it is now is commonly accepted, and the level in FPS games is often good enough, but this AIPU also brings new features.

MartinX said:
Yeah, it's faster, but does that really matter?

Yes, it matters. Where graphics have evolved a long way, AI and physics have stayed behind.
If one day there is a third, far more powerful generation of geometry shaders, and the PPU and GPU are computing destructible environments, the AI must handle that dynamic pathing and take tactical height maps, cover, choke points and high ground into account.
CPUs get more powerful, and multiple cores make them more parallel, but all those offloaded tasks still have to be set up by the CPU, with an ever-growing load.

Vista being more efficient means games will use that efficiency to crank up the graphics load, which the CPU must set up. Games will evolve to a higher level faster; game devs will use whatever power is, or will soon be, available in the market.
Besides that, most people are budget-bound. That is the biggest reason against any dedicated hardware: a large part will be against it purely because of their budget. They can't, or don't want to, buy it because there are no games at the very beginning, so price is the only point and they don't look forward. Others want to but have no budget. Still others are more interested in the big step games will make with all this dedicated hardware, and what it costs is a secondary point.
If even a small part of the world's gaming audience goes for it, that could be millions of people.
These things are optional and will stay optional for a very long time.
If you're against it, play games without it: you get the low-detail option that fits a rig with just a CPU and a GPU and no dedicated hardware, while other gamers play the game in a way a plain CPU could only handle three years from now, and have paid for it. A choice everyone has to make.

MartinX said:
When playing, say, Half-Life 2, I never found myself thinking: "Man I wish the AI and Physics were more and faster!", and that was on an old-fashioned single-core A64 3800+ CPU running a 32-bit process.

Well, we were used to nothing more; that was the top at the time. Compared to games with stupid AI, it was the best there was then.

MartinX said:
So, Civ4 or whatever chugs a bit towards the end game; would it still chug if the same code could fully use that second 64-bit core I already have sitting effectively idle?

Uh, yes? 64-bit doesn't bring much, maybe -5 to +15%; the spread within a CPU range is bigger than that. It is mainly useful for applications that need more than the 2 GB application limit. That's what it's good for.
The lack of initiative, the stupid pathing and the illogical behaviour kill gameplay.
It's like babysitting your units:
a harvester picking the deposit in enemy territory;
one unit getting attacked and shooting back while the ones right next to it do nothing;
massive movements collapsing into mayhem.
These are little things, but they are destructive to gameplay and they kill the fun.

MartinX said:
Now these guys are coming along and saying that as well as an E6800 or whatever I need this to make my gaming experience better?

I don't think so.

There is a good chance it does.
All this dedicated hardware is each capable of a much larger load, so the CPU gets a much larger job managing all of that dedicated hardware.
Not so much for the first incarnations, which were built on an older production process, but once they have settled into the newly created market they can use a more competitive production process and get a lot more powerful in their next generation of chips.
The CPU gets much larger tasks to hand off to dedicated hardware. That means games get richer, more immersive and more realistic in many ways, which would be very noticeable in gameplay and fun as well.
But the biggest problem is that it will cost something. A choice to be made.

MartinX said:
From a purely logistical/practical standpoint I think these discrete add-in cards are doomed (in my opinion, I'm not an expert).

No. Such a company needs a healthy sales volume to exist, but that doesn't mean 90% of the total gaming audience on this planet must buy it.
Most people are budget-limited. They only come into play when:
1) The introduction phase is over: the period when it costs the most ($200-300), with the least dev support and only a few games.
2) Game support is higher and it is becoming something of an optimum (not minimum) requirement. Phase 2: acceptance starting to break through.
3) Phase three: sales volumes are high enough and modern next-gen chips are on the market in many price ranges, from $100 to $400; the chips become more of a recommended requirement.
4) Phase four: it is a requirement for most games. That is far away, and this phase-0 discussion will long be forgotten by then.

The point is there will always be many against dedicated hardware because of price, fed by their budget restrictions.
But a small audience will grow:
early adopters, 0.5%
hardware enthusiasts, 1%
hardcore gamers, 3%
high-budget gamers, 10%
medium-budget gamers, 20%

while the people against it, or with no interest, shrink from 99% to 50%.

Even with 70% still against it due to budget limits, 10% sales in phase 2 or 3 is a success, because worldwide that is a lot of cards.
And it keeps growing until phase 4, accepted by a large part of the market, while dev support grows each phase and games become available that show the merit of the AI and PPU stuff. Acceptance grows.

MartinX said:
Games are getting more and more expensive to make, and finding and training staff to use these toys, then coding them in is going to add time and expense to the development process for sweet FA benefit, because only a tiny % of even hard-core gamers are going to buy these things.
Why would developers throw away money like that?

Well, your point is a good one for clone-makers, small publishers and dev houses in it for quick money.
But there are also class-AAA titles that have the time, the money and the expertise to crank out a game that pushes things a tech level further. They're in competition, and those devs can't stay behind.

MartinX said:
Show me the games, screw benchmarks. In current games the GPU, not the CPU, is the bottleneck on how pretty everything looks. Offloading from the CPU won't make any difference unless the game is coded to be especially AI/physics-heavy to begin with, which is kind of stupid unless you already know that the vast majority of your target market: A) has the accelerator; B) wants to play your game; and C) there are enough of them in A+B to make a profit.

The introduction phase is the difficult one, and so is dev support. CPU and GPU dependence varies per game but also per genre. FPS has the most extremely GPU-intensive titles; those are the exceptions.
Flight sims and RTS are more CPU-bound.
Space-trading games too.

MartinX said:
So maybe they parallel develop two different modes?

The Battlefield series, at least the first one, had an AI slider, because the CPU power available per gamer varies a lot. The need for an adjustable AI load is already there, but most games don't have an option for it.
The biggest AI load for FPS will be large-scale multiplayer maps with only a few real players but a lot of bots.
I tried BF2 with 128 bots: it became a slideshow.
The new AA 2.7 edition introduces a map where a small US team must play cooperatively against an overmatch of foes: AI troops, but behaving more or less like players.
That would be a somewhat more CPU-intensive map.

MartinX said:
Nope, they won't do that, a fight against 10 dudes in an FPS is a whole other game than a fight against 1000 dudes, they'd have to effectively design a whole other game to make use of your physics/AI acceleration, they ain't gonna do that.

You're too focused on FPS. The number of units matters, but so does the number of AI features per unit. It could still be 1000 vs. 1000, but with the most primitive, simplified AI on one side and complex AI/AL features on the other, so you get less stress from stupid behaviour.
Or 500 vs. 2000, with a small difference in AI features.

For FPS, NPCs get more tactical, behave more like real players, become adaptive, and more.

MartinX said:
So in summary, nice Idea, but it ain't gonna fly.

If enough gamers support it and are willing to pay for it, it has a chance, although a large part will be against it.

Usually it takes a killer game, one that puts AI or physics to really good use, to turn even the budget-minded around.
 
Space, to me, is going to be a major issue. Some people have mentioned E-ATX as a solution, but it is not. E-ATX does not make the board "taller" for more slot room, but "wider", so that the motherboard has room for things like a second CPU or additional RAM slots (i.e. a server board). So a totally new format would have to be created. Besides, how big a board are we talking about? An extra three slots? (Maybe more, if the HSF solutions for these devices are double-height.) This is going to make the machines huge. With dual- and quad-core CPUs here or on the way, I would like to see efforts made to use them for this stuff. They might not be as fast as a dedicated AI core, but they will be far faster than current AI on a single CPU core, and they are basically free when it comes to space and cost. (As they become mainstream, they replace the old single-core CPUs in the same price bracket.)
 
SuperGee said:

That's really way too long and involved for me to read.

But enjoy your new add-in card.

I hope the game is worth it.
 
Well, I don't have an Intia processor. I don't even know if they're out yet.
I also only recently heard about the Killer NIC. Sounds interesting too.

But I do have two decent game rigs, both with a PPU. Played CF:CT and GRAW over LAN; then you know the potential of the PPU. :)
Now, with more good PhysX games, it gets more worth the €.
JTF is out soon; it has gone gold.
CF: Revolutions in the winter, not so far away either.
UT2007 is interesting too.

I don't know of any games supporting this AIPU yet.
It would be nice for RTS, RPG and space games. FPS too, in a way.

Well, I'm going to do a large upgrade for my third game rig at the beginning of next year.

First I guess I need a new case, an E-ATX big tower with a mobo tray.
Then look for a Conroe mobo with lots of slots, CF/SLI,
and PCIe x1 to x8 and PCI 2.2 slots.

E-ATX means
the CPU, memory and northbridge can be laid out more in line,
so more slot room is available.

A large BTX board, what could that mean for more slots?

I guess to keep the options open, the choice of mobo gets important.

E-ATX actually gives more space for an extra socket or memory slots, and more onboard stuff.
Not so much more slot space, because E-ATX is aimed more at workstation mobos, making room for a second socket and a lot of memory slots.

But mobo makers will also jump on this wave of extra add-on cards, and may use E-ATX differently.
 
Okay, I agree that AI is something that multi-core CPUs can/should be used for. This doesn't work with physics because physics requires massively parallel processing. But for AI, the extra cores should work just fine.

The add-in card for AI will be useful when developers are ready to implement smarter AI algorithms. Genetic programs, for example, would almost certainly need a very powerful AI chip to run.
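To give a feel for why genetic approaches are compute-hungry, here's a toy sketch of the evaluate/select/mutate loop; it's purely illustrative and not tied to any game or to AIseek's hardware. The real cost in a game would be in the fitness evaluation, which means simulating the bot over and over:

#include <algorithm>
#include <cstdlib>
#include <vector>

// A "genome" is just a vector of behaviour parameters (aggression, caution, ...).
using Genome = std::vector<float>;

// Placeholder fitness function. In a real game this would mean simulating the
// bot against the player or other bots, which is where the heavy compute goes.
static float fitness(const Genome& g) {
    float f = 0.0f;
    for (float x : g) f += x * (1.0f - x);
    return f;
}

int main() {
    const int pop_size = 64, genes = 8, generations = 100;
    std::vector<Genome> pop(pop_size, Genome(genes, 0.0f));
    for (auto& g : pop)
        for (auto& x : g) x = std::rand() / float(RAND_MAX);

    for (int gen = 0; gen < generations; ++gen) {
        // Evaluate and rank: pop_size fitness evaluations, every generation.
        std::sort(pop.begin(), pop.end(),
                  [](const Genome& a, const Genome& b) { return fitness(a) > fitness(b); });
        // Keep the best half; refill the rest with mutated copies of the winners.
        for (int i = pop_size / 2; i < pop_size; ++i) {
            pop[i] = pop[i - pop_size / 2];
            pop[i][std::rand() % genes] = std::rand() / float(RAND_MAX);
        }
    }
    return 0;
}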

But I agree that mobo makers will soon begin to offer more slots on E-ATX. Or we might see a new form factor altogether. ATX is pretty old, you know.
 
HOCP4ME said:
Okay, I agree that AI is something that multi-core CPUs can/should be used for. This doesn't work with physics because physics requires massively parallel processing. But for AI, the extra cores should work just fine.

Multi-core is certainly something that will be used for more AI. That is the normal pace as CPUs get more powerful over time, so there will be more AI in games just from the CPU.
But AI can scale even further.
More towards AL, taking more factors into account, taking AI to a much higher level.
It depends on the genre, but also on new game features that have an indirect impact on the AI problem, like the already-mentioned pathing in dynamically changing environments and fully destructible terrain and maps.

So I think there is some heavy use for AI, but games must adapt too.
The complaint about stupid AI will fade away as better AI resources become available.

Maybe the complaint will become that the AI owns the gamer. :)

For that, the AI can adapt to the player's tactics and skill.
 
SuperGee ...

SuperGee ...

How can something so long...be so horribly written?

I will re-iterate what I said on the previous page: I see (somewhat) of a need for a PPU: physics are really intensive calculations, and could benefit from an actual card (though, I do hope to see it integrated on the video card some time in the future). For an AIPU, I don't think we really need one. As people have said before, it sounds like a solution for a need that doesn't exist.
 
Well, for physics you can imagine some merit, but for AI you can't? That's because AI is so transparent compared to graphics, which is what feeds a game's marketing.
More intelligent AI, or even AL, could mean a great deal for games. It's just underestimated.
Look at it like this:
I would miss effects physics,
and gameplay physics too.
I would also miss the eye candy.
But I'd get stress instead of gameplay fun due to bad pathing.
AI is deeply interwoven with gameplay, even more than physics; it can make or break a game.
Now, with dual core, a decent minimum level of AI can be implemented to achieve decent gameplay. But of course this can be levelled up a notch or three, with higher AI detail and more features.
I also think AI is taken for granted.

PS: my English is bad, just like my native-language skills, but I just have a lot to say.
 
This is very old news... There was a news article about the AIPU a year ago over at ExtremeTech. AIseek did a proper site launch in March and also published the demos and the whitepaper.

I've made a couple of posts about AIseek in the past but, because of the PPU mania, no one seemed as excited to talk about it back then as we've now seen on this thread.
 