192GB of RAM?

can anyone think of a reason to need that much ram aside from ram drives?

I can't think of anything that wouldn't be CPU bottlenecked.
 
At the moment, unless you run some kind of super-computing workstation or server, there isn't. In 10 years, though, there may be a need for that much, depending on how applications and OSes evolve to take full advantage of widespread 64-bit adoption as well as memory density. I don't foresee desktop boards having more than 8 memory slots, so you'd need 16GB modules to start running into current OS limitations. Probably the most likely uses will be HD video and audio editing at home, and of course games. 10 years ago 512 megs of RAM was lots for a desktop machine; a few were 1GB+ but most were 256 megs or less. That's around 4-8x growth from then till now; if that continues, we'll be looking at 16-32 gigs being common. Then again, 15 years ago 4 megs of RAM seemed like plenty, and things like gigabytes seemed about as realistic as lightsabers.
 
During my days as a lab rat in a rather awesome computer lab, I was able to use several systems with 256GB-512GB of RAM. We used them to test VMs, and with an HP DL785 with the maxed-out 512GB of RAM we had something like 500 VMs running on the one system. THAT is what that kind of RAM is used for right now.
 

What kind of CPU setup can handle 500 VMs?
 
I have deployed up to 128GB of RAM per node on some large data warehouse / analytics platforms that experience fairly heavy load. As the data set size increases into the tens of terabytes, it becomes much less expensive to climb the performance curve through RAM than through HD spindles (and at these capacities, SSD is out of the question). But these are very powerful single- or two-node (i.e. non-clustered or small-cluster) environments that sustain a sizeable user load.

I don't know if I'll see myself building systems with significantly more RAM per node than that down the road. RAM is relatively cheap these days, but I'm already seeing SSD storage play a role loosely analogous to, say, L2 cache in the processor, only for database systems, and of course SSD is much cheaper per GB than RAM.
 
Well, technically a 32-bit OS should be able to see up to 64GB of RAM with PAE, addressing it in 4GB "chunks". I'm pretty sure some of the Windows server OSes can do it. Linux and FreeBSD AFAIK can both do it on 32-bit.

I think the problem in XP/Vista on 32-bit is a Microsoftism.

Memory address extension in PAE is operational in Server versions of Windows, but Microsoft disabled it for consumer versions because a lot of consumer hardware drivers never expected to see 36-bit addresses and would cause problems when confronted with them. If you're going to break compatibility, might as well go all the way to 64-bit.
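For reference, the address-space arithmetic behind the PAE discussion above works out like this (a quick sketch; the 36-bit figure is the standard PAE physical-address width):

```python
# PAE widens the physical addresses in page-table entries to 36 bits, so the
# OS can see 2**36 bytes of physical RAM, while each 32-bit process still
# gets only a 4GB (2**32 byte) virtual address space.

GB = 2**30

pae_physical_limit = 2**36     # 36-bit physical addressing under PAE
process_virtual_limit = 2**32  # unchanged 32-bit virtual address space

print(pae_physical_limit // GB)     # 64 GB of physical RAM visible to the OS
print(process_virtual_limit // GB)  # 4 GB visible to any single process
```

This is why PAE helps the OS host more total RAM but does nothing for a single memory-hungry application, and why drivers that assumed 32-bit physical addresses broke when confronted with it.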
 
I already have a program I made that could max out the 192GB of RAM limitation.
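A program like that doesn't need to be exotic. Here's a minimal, hypothetical sketch (not the poster's actual program) of the allocate-and-touch loop such a tool would use, with a deliberately tiny target so it's safe to run as-is:

```python
# Hypothetical sketch of a RAM-filling program. TARGET_BYTES is kept tiny
# here; set it near your installed RAM to actually stress the machine
# (the OS will start swapping or refusing allocations).

CHUNK = 1 << 20           # allocate 1 MiB at a time
TARGET_BYTES = 8 * CHUNK  # small demo target; raise this to stress real RAM

chunks = []
while sum(len(c) for c in chunks) < TARGET_BYTES:
    buf = bytearray(CHUNK)                 # request a chunk from the allocator
    buf[::4096] = b"x" * len(buf[::4096])  # touch one byte per page so the
    chunks.append(buf)                     # memory is actually committed

print(sum(len(c) for c in chunks) // CHUNK, "MiB allocated")
```

Touching a byte in every page matters: most OSes hand out address space lazily, so an untouched allocation doesn't consume physical RAM.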
 
Windows 7 to support that much RAM? I hope this is true. Now I can get all that RAM for email and Internet surfing. :)
 
A terminal server with many thin clients all running Firefox and OpenOffice would be the purpose of such a quantity of RAM.
 

The OP's question was directed towards desktop workstations. Of course a server has a use for all of that RAM.
 
Silly me, missing the original post. I would have to say that support for that much memory is geared towards the server market. Try as I might, I cannot see a practical or economical purpose for that much memory on a desktop.
 
Assuming that RAM size doubles every 3 years and that the current normal size is 4GB, 192GB will be the standard RAM size around the year 2025.
 
RAM size doubles closer to every 2 years. At the beginning of '95, 16MB was common, I believe. Doubling every 2 years would put us around 2GB today, which is pretty close but still too slow. If it were doubling every 3 years, we'd only be at ~512MB right now.
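For what it's worth, the arithmetic in both posts roughly checks out. A quick sketch, assuming early '95 as the starting point and 2009 as "today":

```python
import math

# Project 16MB (early '95) forward ~14 years at each proposed doubling rate.
start_mb, years = 16, 2009 - 1995

every_2y = start_mb * 2 ** (years / 2)  # 7 doublings
every_3y = start_mb * 2 ** (years / 3)  # ~4.7 doublings

print(round(every_2y))  # 2048 -> ~2GB, roughly what desktops ship with today
print(round(every_3y))  # ~406 -> in the ~512MB ballpark the post mentions

# The earlier projection: 4GB doubling every 3 years reaches 192GB after
# log2(192/4) doublings, i.e. about 17 more years.
years_to_192 = math.log2(192 / 4) * 3
print(round(2009 + years_to_192))  # ~2026, close to the year-2025 estimate
```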
 
That's crazy. Talk about super overkill. No end user should ever need that much anytime in the near future.

HAH, I told someone when we all had 4MB of RAM and he upgraded to 16MB!!! :D


and yes, that read MEGABYTE!!! :eek:
 
Wake me up when we hit a TB of RAM.

192 GB. Finally... running two windows of Crysis!

Lossless movies, anyone?
 