How far has virtualization technology gone?

javisaman (Limp Gawd, joined Apr 3, 2007, 500 messages)
Hi,

Right now, I run dual boot with Windows XP Pro and Gentoo Linux. I'm building a new rig and I was considering going with just Windows Vista Business and running all of the Linux apps through some virtualization software. I'm no IT tech or anything like that, just more of a hobbyist who does some biomedical research on the side with Linux. Apparently all of these new Intel and AMD chips have some well-advertised virtualization features, but I'm not exactly sure how well they are supported. Hopefully this isn't asking too much, but what is the current state of virtualization technology?


Regards
 
3D support isn't really there yet in the most popular virtualization software packages, but overall performance in them is pretty good. Make sure you upgrade to a processor with some kind of VT support (which most new ones include), and you'll be fine.
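If you're running Linux already, you can check for those VT flags before you buy anything. A minimal sketch (Linux-only, since it reads /proc/cpuinfo; "vmx" is the Intel VT-x flag and "svm" is AMD-V):

```python
def vt_flags(cpuinfo_text):
    """Return whichever hardware-virtualization CPU flags appear in the text:
    "vmx" (Intel VT-x) or "svm" (AMD-V)."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return flags & {"vmx", "svm"}

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            found = vt_flags(f.read())
        print("VT support:", ", ".join(sorted(found)) or "none detected")
    except OSError:
        print("/proc/cpuinfo not readable (non-Linux system?)")
```

Note that the flag can be present but still disabled in the BIOS, so check there too.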
 
The problem you may run into is that of time; specifically, the VM's timekeeping. Most of the VM software I have seen has issues keeping accurate time; the clocks either run really fast or really slow.

There are ways to keep the time in sync with the host, but if you are doing something that requires a high degree of time sensitivity, VMs are probably not the best option.
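One way to see how far a guest clock has wandered is to compare it against an outside reference. A rough sketch of a minimal SNTP query (the server name and the single-query approach are illustrative assumptions; real time sync belongs to an NTP daemon or the VM tools, and this needs network access to work):

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds from NTP epoch (1900) to Unix epoch (1970)

def parse_sntp_transmit(data):
    """Extract the transmit timestamp (Unix seconds) from a 48-byte SNTP reply."""
    words = struct.unpack("!12I", data)   # 12 big-endian 32-bit words
    return words[10] - NTP_EPOCH_OFFSET   # word 10 = transmit time, seconds part

def clock_offset(server="pool.ntp.org", timeout=5):
    """Rough (server - local) clock offset in seconds; a drifting guest
    will show this number growing between calls."""
    packet = b"\x1b" + 47 * b"\x00"       # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    return parse_sntp_transmit(data) - time.time()
```

If the offset keeps growing, the guest clock really is running fast or slow rather than just being set wrong once.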
 
To be honest, I haven't seen what VT in the CPU has "bought me". I run VMWare Server VMs in a P4 3.2GHz machine at work, and a VM in my e6600 at home. I don't see a whole lot of difference in VM "quality of running". Everything seems to be appropriately responsive, but it's not like the 6600 just stomps the P4 with VM activity.

I think a VM will work great as long as your app isn't trying to interface directly with your hardware. I have some custom Windows serial port software that didn't behave well when run inside a VM on the 6600 (Linux host OS with a Windows VM inside). It sorta worked, but sorta doesn't cut it. ;)
 
To be honest, I haven't seen what VT in the CPU has "bought me". I run VMWare Server VMs in a P4 3.2GHz machine at work, and a VM in my e6600 at home. I don't see a whole lot of difference in VM "quality of running". Everything seems to be appropriately responsive, but it's not like the 6600 just stomps the P4 with VM activity.
It bought you the ability to run a 64-bit client on a 32-bit host, when using an Intel CPU. Also, it allows you to run kernel-virtual-machines (KVMs) in Linux. I do not know whether there are any performance improvements. I have read some stuff about future processors adding more virtualization features that may improve performance.
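On the KVM point: the kernel only exposes the /dev/kvm device when the CPU's VT support is present (and enabled in the BIOS) and the kvm module is loaded, so a quick existence check is a reasonable sketch of "can I use KVM here?":

```python
import os

def kvm_available(dev_path="/dev/kvm"):
    """True if the KVM device node exists and is readable/writable; the
    kernel only creates it when VT-x/AMD-V is usable and kvm is loaded."""
    return os.path.exists(dev_path) and os.access(dev_path, os.R_OK | os.W_OK)

if __name__ == "__main__":
    print("KVM ready" if kvm_available() else "KVM not available")
```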
 
It's not going to make much of a difference until I/O virtualization becomes possible.

When I can run 3D apps in VM, then I'll be happy, until then it has very little to offer...
 
I'm sure IBM has things tuned for S390 (Linux), but I haven't seen anything from Xen or VMWare that takes advantage of the hardware virtualization, at least in a way that I can discern on Linux amd64 or x86 distributions.

I use VMWare for sandbox work, and I don't see any difference between an Intel 352 (Celeron, no VT) and an E6420 (c2d, w/VT, set to use one core). Even when I'm doing something that absolutely hammers the virtual machine, like hundreds of Servlet or .NET requests per second, I see little difference when you equalize for the difference in MIPS.

Other bottlenecks pop up (Disk, Network, Memory I/O) at that point, probably.

As for the biomedical end, I've done some work with DICOM analysis and BLAST servers in the past. The underlying libraries (MPICH-g2, for example) are barely aware of threads, much less an operating system that won't grant every I/O request immediately. As a result, I can't see what problems a lightweight (something you could put together for less than 10K) setup using VT would solve, aside from ease of administration in a large installation, security, or testing. My guess is that you'd get zilch for gains in hardware utilization by adding a bunch of VMs working on the same problem.
 
So from what I'm reading, it seems like although virtualization is pretty mature, it is limited in its performance.

I guess I should have been more specific. I usually program in Java and Perl (both of which can be done in Windows). The main thing is having an environment for the applications that my colleagues give me (they use Mac OS X and Unix). None of them use 3D graphics or anything like that, though, and they are not performance/time critical. I think I'll still stick with the dual boot setup just for the cool factor (Beryl is awesome!) :D

Thanks for all the input.
 
Bit of a necro post, sorry, but when is virtualization with 3D support coming? Is it already supported in Conroe and just up to software engineers to implement? How about K10?

I long for the day when I can run 98, XP, Vista64, Gentoo, and Ubuntu all at the same time with each at their native speed...
 
Make sure you upgrade to a processor with some kind of VT support (which most new ones include), and you'll be fine.


VT doesn't do anything for performance.

Here is a post:
The increase in speed you experience with VMWare is not due to VT.

Read the following from VMWare engineers: http://www.vmware.com/vmtn/resources/528

and

Read the following on the vmware forums: http://www.vmware.com/community/message.jspa?messageID=376400

VT will allow you to run a 64-bit guest OS in VMWare. It does not offer any performance improvements for 32-bit guests. In fact, it will slow them down. VMWare has publicly stated they will not modify their current releases to enable VT w/ 32-bit guests because it is SLOWER than their software solution.

Dual cores, along with the improved C2D architecture are the reasons you experience improved performance with VM guests.

Post #11 In this thread: http://www.hardforum.com/showthread.php?t=1153951&highlight=
 
From the same thread you posted:
I've had discussions on this topic with vmware reps on this board and it seems they basically have no interest in fixing their VT support for 32 bit OS.

I've now stopped pursuing this issue and have moved on to a different product that actually has working VT support (and is faster than vmware in every bench as a result).

Right now there are two different (non-VMware) products with working VT support and there's probably more coming. Maybe one day vmware will see the light and actually add support for it.
 
From the thread from the [H] that you posted:

Just to clarify-

The hardware virtualization technology that was alluded to earlier in this thread is not in play here, as has been generally accepted in the last few posts. VMWare is fully virtualizing the OS, meaning that it is translating every command.

Hardware virtualization applies to another type of virtualization called paravirtualization, in which VMWare is battling Xen for supremacy. In paravirtualization (and this grossly simplifies things...) the system runs a thin hypervisor underneath the OS that translates some commands but lets others through directly to the hardware, creating much faster access. The speed loss in paravirtualizing an OS is around 5%, compared to 25% or so with the fully virtualized solutions. However, the guest OS kernel (the virtualized one) needs to be modified to support paravirtualization, meaning it has to be rebuilt, unless the CPU supports hardware virtualization. In that case the CPU lets the hypervisor run unmodified guest operating systems.

Does this mean that the VT on Core2Duo can support this, but we don't have working software to take advantage of it yet? Or will this only be capable with another, yet-to-be-implemented set of VT instructions?
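For a rough side-by-side sense of those overhead figures (using the 5% and 25% numbers quoted above, which are ballpark estimates, not benchmarks):

```python
def effective_speed(overhead):
    """Fraction of native performance left after a given virtualization overhead."""
    return 1.0 - overhead

full_virt = effective_speed(0.25)   # fully virtualized: 0.75x native
para_virt = effective_speed(0.05)   # paravirtualized:   0.95x native
print(f"paravirt is {para_virt / full_virt:.2f}x the fully virtualized speed")
```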
 