Nvidia may integrate a killer NIC and watercool the whole lot. Don't forget the Peltier cooling and 25,000-way SLI.
You know what's really sad? If I had infinity moneys... I'd buy all that shit. Buy it as quick as they could sell it to me.
If I had infinite money I would pay Nvidia to make me a quad-Xeon board with tri-SLI.
...but why stop at three? Commission a one-off 6x SLI board. Put those extra Extended ATX slots to good use.
E-ATX-compliant boards have no more expansion slots than standard ATX boards do. They are larger, yes, but not in that direction.
My dreams are crushed and I look silly to boot. Guess it's time to start the single-slot phase-change head research.
lol, that's settled a few heated arguments. GeForce 8 cards to get PhysX support shortly
If I had infinite money, I would donate it to the Third World so that not a single human being would be starving anymore.
I can't wait for them to write PhysX profiles for all my games to get a stunning 1 fps improvement.
PhysX was a worthless technology to begin with... it's amazing to see people get excited about it now that Nvidia is incorporating it.
It's like all the 8-series people think they are going to automatically get something for nothing.
Ultimately I think GPU-accelerated physics is a stopgap at best, just like PhysX was. Quad-core CPUs are coming down to mainstream prices, and 8-core CPUs are on the horizon. The easy way for a game to take advantage of that is to use a multi-threaded physics engine, which I expect will take back any ground lost to GPU physics.
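For what it's worth, a minimal sketch of what a multi-threaded physics engine could look like on the CPU side: each worker thread integrates its own slice of the rigid bodies every frame. The names here (Body, integrate_range, step_physics) are made up for illustration, not taken from any real engine.

[CODE]
// Hypothetical sketch: one frame of physics integration split across CPU cores.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Body { float px, py, pz, vx, vy, vz; };

// Integrate one contiguous slice of the bodies for this frame.
static void integrate_range(std::vector<Body>& bodies,
                            std::size_t begin, std::size_t end, float dt)
{
    for (std::size_t i = begin; i < end; ++i) {
        Body& b = bodies[i];
        b.vy -= 9.81f * dt;     // gravity
        b.px += b.vx * dt;
        b.py += b.vy * dt;
        b.pz += b.vz * dt;
    }
}

// Split the bodies evenly across 'workers' threads and wait for them all.
void step_physics(std::vector<Body>& bodies, float dt, unsigned workers)
{
    if (workers == 0) workers = 1;
    std::vector<std::thread> pool;
    const std::size_t chunk = (bodies.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end   = std::min(bodies.size(), begin + chunk);
        if (begin >= end) break;
        pool.emplace_back(integrate_range, std::ref(bodies), begin, end, dt);
    }
    for (std::thread& t : pool) t.join();
}
[/CODE]

On a quad-core you would call step_physics(bodies, dt, 4) once per frame; the point is simply that this kind of integration is embarrassingly parallel, so extra cores really can translate into extra physics.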
To each their own, but SLI can mean the difference between a game being playable or not at native resolution on a 30" display. SLI isn't for everyone, but it is hardly as bad as the naysayers make it out to be.
If you're buying an 8800-series graphics card, you're already throwing money at diminishing returns. GTS, GTX, Ultra: those must all suck, because you get less FPS per dollar going up the scale. Seriously, expecting 100% is a joke. You can't combine two of anything and get 100% scaling; some of it has to go towards the cards talking to each other.
IIRC, two 8800 GTSs in SLI will outperform an 8800 Ultra at stock speeds, and the two of them will cost about the same. But it must suck, because it's not twice as fast as a single 8800 GTS.
Hmm, I disagree. Standard CPUs are ill-equipped to do a lot of parallel calculations like that. Even with 8, 16, 32, or 128 cores, a well-designed GPU will chew through it faster.
It's the same reason the additional cores aren't used for rendering right now. Despite running at upwards of 3 GHz, they just aren't suited for that sort of work.
Nvidia has been running fluid-dynamics simulations and the like on GPUs for more than a year now. I imagine all they had to do was write a wrapper of sorts -- PhysX to CUDA?
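For the curious, this is roughly the kind of data-parallel job such a wrapper could hand to CUDA: one GPU thread per particle. The kernel below is a made-up illustration, not the actual PhysX-on-CUDA interface.

[CODE]
// Hypothetical sketch: the sort of kernel a PhysX-to-CUDA backend might launch.
#include <cuda_runtime.h>

struct Particle { float3 pos; float3 vel; };

// One GPU thread integrates one particle; thousands run at once.
__global__ void integrate(Particle* p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    p[i].vel.y -= 9.81f * dt;        // gravity
    p[i].pos.x += p[i].vel.x * dt;
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;
}

// Host-side launch: cover n particles with 256-thread blocks.
void step_on_gpu(Particle* d_particles, int n, float dt)
{
    const int block = 256;
    const int grid  = (n + block - 1) / block;
    integrate<<<grid, block>>>(d_particles, n, dt);
    cudaDeviceSynchronize();          // wait for the step to finish
}
[/CODE]

Every element is independent of every other, which is exactly the shape of work the 8-series stream processors are built for.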
Am I wrong, or does this make the 9800 GX2 a much more attractive purchase?
Especially for those of us without SLI slots...
Take the PS3, for example: the top-of-the-line 60 GB launch edition had the PS2's Emotion Engine chip on board, while the 80 GB model now relies on software emulation. I hope something like this happens for us early G80 owners, because the G92 owners shouldn't be able to reap what we sow. That's not a fanboy comment, but it's about time us early adopters don't get fucked and instead get something nice. But who knows.
I'm not sure where it came from, but I installed the newest beta ForceWare driver last week, along with all my other drivers for my new PC, and I have PhysX stuff in my Control Panel.
EDIT: I've been informed that when I installed the Unreal Tourney 3 demo, it installed that PhysX stuff.
It just makes much more sense to put this load onto the CPU. Games like Alan Wake are supposed to be dedicating one or more cores entirely to physics calculations; if that's successful, other developers may follow suit. That would let them reach the biggest audience, make the physics more than just eye candy, and work it into gameplay more and more.
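As a rough sketch of that "dedicate a core to physics" idea (step_world and render_frame are placeholder stubs; nothing here is taken from Alan Wake): the physics thread runs its own fixed-timestep loop while the render loop carries on independently.

[CODE]
// Hypothetical host-side sketch of a dedicated physics thread.
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> running{true};

void step_world(float dt) { (void)dt; /* advance rigid bodies, cloth, debris... */ }
void render_frame()       { /* draw the most recent simulation state */ }

// Fixed 60 Hz physics loop, intended to own one core for the whole game.
void physics_thread()
{
    const float dt = 1.0f / 60.0f;
    auto next = std::chrono::steady_clock::now();
    while (running.load()) {
        step_world(dt);
        next += std::chrono::milliseconds(16);
        std::this_thread::sleep_until(next);   // keep the simulation on schedule
    }
}

int main()
{
    std::thread physics(physics_thread);       // physics gets its own thread/core
    for (int frame = 0; frame < 600; ++frame)
        render_frame();                        // rendering continues independently
    running = false;
    physics.join();
    return 0;
}
[/CODE]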
You have already gotten something nice: a year of G80 goodness while the rest of us were stuck with the 7-series or earlier, patiently (or not) waiting for the refresh or the next generation. That should be reward enough. You also have tri-SLI capability while G92 owners don't, for what it's worth.
Anyway, the article said it uses CUDA on the hardware to do the non-graphics physics processing, and that the entire 8-series is capable of that, from the old G80 to the lowly G84 to the new G92.
Still... I think it might also be an idea to offload it to one of the cores on a quad-core or better.
Sure, the CPU is nowhere near as efficient as the GPU at calculating these sorts of things, but Intel is already prepping its 8-core and 16-core designs. What else are we going to use them for? A full TCP/IP stack with real-time per-packet filtering on one core would be nice too. Audio could also get a decent speed and quality boost by putting it all onto a core of its own.
The problem with the GPU/discrete approach is the added cost for the player.
Which is more common right now: a fast CPU and a slow GPU, or the opposite?
Most store-bought computers seem to focus on CPU power over anything else (nice marketing, I guess). Add the fact that the majority of gamers (kids) can't afford anything that costs a bundle, and standard pre-built computers predominate.
Combine all of that with the probable arrival of even more cores (which even now go unused in games), and it makes the most sense to tap that source of processing power first, rather than the GPU/discrete solution.
The software houses aren't going to make games that require an Nvidia, an ATI, or whatever card to run, but they already have to require a CPU.
No one is disputing that the discrete solution is the best; the argument is that "best" isn't enough. Economically viable for the software companies is.
The people expecting free PhysX performance on their 8-series cards must be the same ones who thought a magic driver and a magic patch would suddenly make Crysis run twice as fast.
Why does everyone feel that writing games to support PhysX on GeForce 8s won't happen?
Games have support for plenty of optional components or capabilities already.
EAX 2/3/4/5 is a hardware option that is enabled or disabled in the game. The game plays fine without it, but better with it; the developers wrote it to run either way.
IIRC, Doom was written to use software rendering, 3dfx, or OpenGL. That sounds kind of like writing critical game components differently to support optional hardware, no?
I remember reading in the BioShock tweak guide about how different hardware rendering features were enabled depending on whether various options were set to low/medium/high. Don't you think that also represents coding for hardware support of optional features?
If pretty much everyone with Nvidia 8-series hardware has PhysX when the dust settles, that's probably worth writing optional code for in future games, or in upcoming patches of current games. I don't see why we wouldn't have a "PhysX support - YES/NO" or "Physics - software/PhysX/??" option.
Hopefully the drivers or the game software will give us some knobs to tweak to control how many resources we want to make available for physics at the expense of other rendering.
In, say, HL2 or COD4, I bet my G92 GTS has some cycles to spare at 1600x1200, so I might allow significant resources for physics instead of cranking AA/AF to the max. In Crysis, however...
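That kind of optional support plus a knob could be as plain as the sketch below: probe for an accelerator at startup, fall back to software, and clamp the detail level accordingly. has_gpu_physics() is a made-up stand-in for whatever query the driver or SDK actually exposes.

[CODE]
// Hypothetical sketch of the EAX-style "optional hardware" pattern for physics.
enum class PhysicsBackend { Software, GpuAccelerated };

struct PhysicsSettings {
    PhysicsBackend backend = PhysicsBackend::Software;
    int detail = 1;                   // 0 = gameplay-only, 2 = full eye candy
};

// Made-up capability probe; a real game would ask the driver/SDK instead.
bool has_gpu_physics()
{
    return false;                     // pretend no accelerator was found
}

// Pick a backend and clamp the user's requested detail to what it can handle,
// the same way games already gate EAX levels or shader-model features.
PhysicsSettings configure_physics(int requested_detail)
{
    PhysicsSettings s;
    s.backend = has_gpu_physics() ? PhysicsBackend::GpuAccelerated
                                  : PhysicsBackend::Software;
    s.detail  = (s.backend == PhysicsBackend::GpuAccelerated)
                    ? requested_detail
                    : (requested_detail > 1 ? 1 : requested_detail);
    return s;
}
[/CODE]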