CPU selection for Virtualization

Baredor

I'm trying to develop a plan for a very simple, non-critical virtual server box. I'm planning to run 4 (±1) virtual OSes handling things like antivirus, WSUS, BES, etc. in a ~150-user environment. I've been doing some reading, trying to figure out which of the following two setups would be better:

1. 1x Xeon® 3070, 2.66GHz, 4MB Cache, 1066MHz FSB (yes, the Conroe Xeon)

2. 2x Xeon® 5110, 1.60GHz, 4MB Cache, 1066MHz FSB

I would like to bump option 2 up to 5130s, but that would be another ~$800, and this is going to be approved as "an experiment" anyway, so I'm trying to keep the cost as low as I reasonably can without gimping the project.

I have been reading up on Intel VT and the pros/cons of multiple CPUs and assigning affinity, but am still coming up with mixed answers. I would think the more the merrier, but I've seen some articles that contradict this when it comes to virtualization. If anyone could educate me on the general principles that should govern this sort of thing, I would appreciate it.
 
I would definitely go for more cores if you're running a lot of intensive stuff on the server. Also, with that many virtual OSes running, I'd recommend SCSI or a separate hard drive for each OS.

I've found that most OSes are able to handle affinity on their own much better and faster than you can, and that you very rarely need to mess with it.
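
If you ever do want to see or set affinity by hand, here's a minimal sketch using Python's psutil library (my choice for illustration; any affinity API looks about the same):

Code:
import psutil

proc = psutil.Process()        # current process; pass a PID for another one
print(proc.cpu_affinity())     # e.g. [0, 1, 2, 3] -- all cores by default

# Pin the process to cores 0 and 1 only. Note the downside: the other
# cores are now off-limits to it even when they're sitting idle, which
# is exactly why hand-tuning affinity usually backfires.
proc.cpu_affinity([0, 1])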
 
I'm not too familiar with virtualization on that scale, but how would clock speed factor into it? It seems to me that if the OP is thinking 3-4 virtual OSes, he'd be better off with four slower cores than two faster cores. The Opteron 2xx price cuts should have hit today, or will soon, so that might be a good option.
 
We have a server running in our building that has anywhere from 20 to 50 virtual machines running at a given time, and it has never had a problem. I do not know the exact specs, but it is a year or two old. We do not, however, have ~150 users connected at once. It is a single-processor system, possibly dual-core.
 
00ber_m00 said:
20 to 50 virtual machines running at a given time

Blessed holy light. On a single processor. I wouldn't have guessed that was possible. As for the users, 150 is the total, but remember this is very light-load stuff (although some of it has usage spikes, like AV and WSUS).


Slartibartfast said:
how would clock speed factor into it? It seems to me that if the OP is thinking 3-4 virtual OSes, he'd be better off with four slower cores than two faster cores.

I'm still leaning towards the same conclusion, but I'm interested in more replies. :D
 
What are you going to be running? VMWare and Virtual Server 2005 are very different setups; which one do you intend to use?

Generally speaking:

MS Virtual Server - you dedicate a core, memory, and a hard drive to each virtual box

VMWare ESX - give it whatever you want, one partition; it handles everything for you on the back end, including processor, memory, and hard disk space, so you don't need separate partitions for each box. You do everything from inside ESX.
 
I forgot to mention we use VMWare. Also, the big thing to remember is that running all the virtual machines at the same time is not that CPU-intensive, since most of them aren't being heavily utilized at any given time. The big thing is to have enough RAM for them all to perform at the level you want.
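
As a rough sketch of the sizing arithmetic (every number below is a made-up placeholder, not something we measured):

Code:
# Back-of-the-envelope RAM budget for a small VM host.
guests_mb = {"wsus": 512, "antivirus": 384, "bes": 512, "spare": 384}
host_os_mb = 512              # memory reserved for the host OS itself
per_vm_overhead_mb = 32       # rough per-VM virtualization overhead

total_mb = (host_os_mb
            + sum(guests_mb.values())
            + per_vm_overhead_mb * len(guests_mb))
print(f"Budget: {total_mb} MB -> 2GB is tight, 4GB is comfortable")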
 
00ber_m00 said:
I forgot to mention we use VMWare. Also, the big thing to remember is that running all the virtual machines at the same time is not that CPU-intensive, since most of them aren't being heavily utilized at any given time. The big thing is to have enough RAM for them all to perform at the level you want.

Yeah, I was planning on running a minimum of 2 gigs, but most likely 4. I am surprised, though, that with that sheer volume of machines it's still not intensive to have them idling. Any idea what percent utilization it hovers at? Just curious. :)

ND40oz said:
What are you going to be running? VMWare and Virtual Server 2005 are very different setups; which one do you intend to use?

VS 2005 R2 is the current plan (I can hear the groans from here :p ), but I've played around with it and like it so far. Granted, I'm still new to this. VMware I haven't tried in a few releases, and certainly not since it went free. I DLed it the other day and am going to play around with it before making a firm decision. I know absolutely nothing about Xen.

As far as dedicated cores go, I have read about this, but does it really NEED it? Is the difference that substantial over letting it fend for itself? The disadvantage I kept seeing come up is that if you assign a core to a machine, that core is inaccessible to the other machines even when it is idling and the others are slammed. :confused:
 
With Virtual Server 2005 R2 you can't limit a virtual machine to a particular core. You can cap it at a percentage of processor availability, but that's it. You do take a significant performance hit with Virtual Server 2005 compared with VMWare, as all the virtualization processes run at Ring 3, but it works fine.

I have a test server with two 2.6GHz Xeons (a Dell 1750) and 4 gigs of RAM that is now running Virtual Server 2005 R2 (it was originally set up before you could run Virtual Server 2005 or the single-server VMWare versions for free; nobody blinked an eye at paying $199 for Virtual Server 2005). It handles two AD test domains (2 DCs each) and whatever other couple of servers I need at times. Disk throughput matters more than CPU/RAM if you have multiple active servers (this is true for VMWare as well). When the client OS servers are idle, CPU usage is about 8-10% of one processor for the management overhead. I've tested 2 Citrix, 1 WSUS, and other servers with no issues while running my normal test AD controllers at the same time.

When the next service pack for Virtual Server 2005 R2 comes out, it will be able to take advantage of whatever virtualization optimizations are present in the processors you use, but it doesn't use them now.
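
If you want to check whether a given chip actually advertises that hardware support, here's a quick sketch (it assumes a Linux host exposing /proc/cpuinfo, so treat it as illustration only; "vmx" is Intel VT-x and "svm" is AMD-V):

Code:
# Look for hardware virtualization flags in /proc/cpuinfo (Linux only).
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

if "vmx" in flags or "svm" in flags:
    print("Hardware virtualization supported (VT-x or AMD-V)")
else:
    print("No vmx/svm flag found (it may also be disabled in the BIOS)")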

Disk I/O contention is still the big performance killer anyway. And we are still a few chipset revisions away from getting virtualization assistance in hardware on the disk I/O side.

At home, I have a Dell SC430 with 2 gigs of RAM and a Pentium D 820 processor that I picked up cheap. The host OS is 2003 x64, acting as my AD, WSUS 3.0 Beta, file/print, and virtual host server. I have an XP client OS my wife uses for validating what web sites look like on the PC side, and a Windows 2003 x86 client instance running DHCP, IIS 6.0, and Exchange 2003 for my home email. Works great.

I used to run the same home setup (except the host OS was x86 and it ran WSUS 2.0) on a dual AMD 1600+ box with 1.5 gigs of RAM. It took a little over twice as long for the client OSes to boot compared with the Pentium D 820, but it worked fine.
 
My mantra when it comes to virtual servers is that you will need more memory than you think and more disk space than you anticipate.

Make sure you leave room to grow on both those fronts if you don't max them out to start with.

I run 4-6 virtual machines on a dual Xeon 2.0GHz hyperthreaded system. It is a mix of development and infrastructure systems for my home network.

I worked for a company that moved their entire development lab to MS virtual server with no problems.

You need decent CPUs, but not necessarily the fastest. Remember that it takes CPU cycles to emulate the "hardware" for your virtual machines. More cores/CPUs will help with responsiveness. If you can get CPUs with hardware virtualization support, you will be better prepared for the future (MS VS support is in beta). Hardware virtualization is apparently more important when you run OSes that do not have VM additions available for them.

I would not recommend messing with processor affinity. Just let VS handle the processors. As far as setting a VM to use 100% of a single CPU, you can do that and VS will assign it to whichever processor is available, but I would recommend against it if you can avoid it. If you feel the need to have some CPU reserved for your VMs, then set the percentage lower and let VS handle the rest.
 
Baredor said:
Yeah, I was planning on running a minimum of 2 gigs, but most likely 4. I am surprised, though, that with that sheer volume of machines it's still not intensive to have them idling. Any idea what percent utilization it hovers at? Just curious.

When they're all idle? I think it was 5% or less. I don't remember the exact number but it was very low.
 
Interesting, thanks for the responses. I've got a better handle on the CPU aspect now, but I have a follow-up question on disks: what is the better setup for relieving I/O pressure - a 250-300GB mirror, or a comparable RAID 5? I would think a RAID 10 would be best, but that will likely be cost-prohibitive in my case.
 
VMWare, IMO (and with some proof too, I guess), loves having separate hardware for each of the VMs for ideal performance. Makes sense, as they are virtual computers. Simply moving my WinXP VM to a separate HDD in my *nix box and upgrading to 2GB of RAM made it run silky smooth.

Honestly though, I believe you'd benefit more from multiple slower cores than fewer faster cores, along with copious amounts of RAM and a powah-ful I/O system. (RAID 5 kills write speed, I hear.)
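
The RAID 5 write penalty is easy to put numbers on: each small random write costs a read-modify-write cycle (read old data, read old parity, write new data, write new parity), i.e. four disk I/Os, versus two for a mirror or RAID 10. A quick sketch of the standard formula (the per-disk IOPS figure is just a rough guess):

Code:
disk_iops = 120      # rough figure for a single 7200rpm drive
disks = 4

raid10_write_iops = disks * disk_iops / 2   # each write hits 2 disks
raid5_write_iops = disks * disk_iops / 4    # 4-I/O write penalty

print(f"RAID 10: ~{raid10_write_iops:.0f} random write IOPS")
print(f"RAID 5:  ~{raid5_write_iops:.0f} random write IOPS")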
 
Sorry for the old bump - I need a bigger box!

One thing I've learned: virtual whatever loves more of everything. I vote more, more cores.

2k3 x64
Dual Xeon HT box, 2.8GHz I think

All machines are basically @ idle 23.5/7, all XP Pro

[screenshots: Virtual PC idle CPU usage and VM summary]
 
marty9876 said:
Sorry for the old bump - I need a bigger box!

One thing I've learned: virtual whatever loves more of everything. I vote more, more cores.

2k3 x64
Dual Xeon HT box, 2.8GHz I think

All machines are basically @ idle 23.5/7, all XP Pro

What are you doing out of the DC forum... :p
 