Building a Server for VMware. Advice?

Bar

Hi guys,

I am new to virtualisation, but I am going to build a server and use VMware so I can test sending down group policies and so on when I start my MCSE. My host OS will be Win2k3, and I plan on running a couple of XP VMs, a Linux VM, and probably another Win2k3 VM.

I just want some advice on the following:

HDD: How much disk space will I require? I have been told that I should RAID 10 four drives. Can I RAID 10 4x160GB drives, and is that enough space?

RAM: I'm guessing 4GB minimum, preferably 8GB?

Motherboard: I have no idea what would be required in a motherboard. One NIC is fine, isn't it?

CPU: Dual core? Quad core? Should I be looking at desktop CPUs, or something more like Opterons and Xeons?

GFX: I'm guessing this doesn't really matter too much.

Power: I have a Corsair HX-620, which will hopefully be able to do the job?

I also want to have my file server in the same machine, so I have 4x500GB hard drives that will act as my file-serving drives, which may end up in RAID 5. Will that affect anything?

Thanks, everyone, for your help!
 
Seems like these questions can best be answered by yourself.

Disk space?

Take the disk space that you'll need for each of your guest VMs and add it up. Pretty easy. If you're testing domain stuff, you won't need much space on top of the base OS.

Memory?

Take the memory that you want to allocate to each of the guest VMs you'll be running concurrently and add it up. Add a little for overhead, plus whatever your host OS is using.

How many CPU cores? How many VMs do you want to run at once, and how much processing power do you want each of them to have?
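To put rough numbers on that, here's a quick back-of-the-envelope calculator (a minimal Python sketch; the VM list and per-VM figures are illustrative assumptions, not recommendations):

```python
# Back-of-the-envelope sizing for a VMware Server host.
# Every number here is an illustrative guess -- plug in your own.

vms = {
    # name: (disk_gb, ram_mb)
    "xp1":    (20, 512),
    "xp2":    (20, 512),
    "linux":  (10, 256),
    "win2k3": (20, 768),
}

host_os_ram_mb = 1024    # the Win2k3 host itself
overhead_mb    = 64      # rough per-VM virtualisation overhead

disk_gb = sum(disk for disk, _ in vms.values())
ram_mb  = host_os_ram_mb + sum(ram + overhead_mb for _, ram in vms.values())

# RAID 10 mirrors half the spindles, so usable space is half the raw total.
raid10_usable_gb = 4 * 160 // 2

print(f"Guest disk: {disk_gb} GB of {raid10_usable_gb} GB usable")
print(f"RAM:        {ram_mb} MB ({ram_mb / 1024:.1f} GB)")
```

With those guesses the guest disks (70GB) fit easily inside the ~320GB that a RAID 10 of 4x160GB drives leaves usable, and RAM lands around 3.3GB, so 4GB in the host is a sensible floor.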
 
Well, that certainly makes sense.

Does a VM on a dual core use both cores, or just one core?

For example, if I had 4 VMs on a quad-core machine, would they all run on their own core?

20GB for each VM should be enough, shouldn't it? Can you change the size after it has been allocated? Same with memory.
 
You can assign 1 or 2 vCPUs to a VM. vCPUs have to be scheduled on different physical cores. One vCPU per VM is recommended to avoid scheduling overhead, unless your VM really needs more power than one physical CPU can provide. If you have 4 VMs running on a quad-core host, chances are good they'd each be running on a different core, though that can change dynamically. I can't remember if VMware Server lets you set processor affinity like you can in ESX.
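For reference, on ESX those knobs end up as entries in the VM's .vmx file. A minimal sketch, assuming ESX 3.x option names (verify against your version, and note VMware Server may not honour the affinity line at all):

```
numvcpus = "1"              # one vCPU keeps co-scheduling overhead down
memsize = "512"             # guest RAM in MB
sched.cpu.affinity = "0,1"  # ESX-only: restrict this VM to physical CPUs 0 and 1
```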
 
I'm running a VMware ESX server at home with the following specs:

AMD 3800+ X2 AM2
4GB DDR2 PC5300
3x73GB 10k SCSI drives in RAID 5
Intel Gb NIC

I've had a dozen VMs running on this config without any issues.

I'm assuming you'll be running VMware Server, not ESX. Server isn't nearly as efficient as ESX, but it still gets the job done nicely.

I'd advise getting a dual-core CPU. VMware will be able to utilize both cores. For RAM, I'd go with 4GB, depending on how many VMs you'll be running. One thing to remember is that if you run multiple VMs with the same OS (at least, this is true with ESX), VMware will share common memory pages between those VMs to conserve memory.

For example, every copy of Windows XP needs explorer.exe loaded in RAM, so VMware will keep one instance of explorer.exe physically stored in RAM, and every VM running Windows XP will use that same bit of RAM for explorer.exe, rather than each VM having its own separate copy in RAM.
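That deduplication is what VMware calls transparent page sharing. Purely as an illustration of the idea (this is a toy sketch, not VMware's implementation, which hashes pages inside the hypervisor):

```python
import hashlib

# Toy model of content-based page sharing -- NOT VMware's code.
# Identical guest pages collapse to one physical copy, keyed by a
# hash of their contents.

physical_pages = {}                 # content hash -> the one shared copy

def store_guest_page(content: bytes) -> str:
    """Return a handle to a physical page holding `content`."""
    key = hashlib.sha1(content).hexdigest()
    if key not in physical_pages:
        physical_pages[key] = content   # first guest pays for the RAM
    return key                          # later guests just share it

# Three XP guests each load the same 4KB page of explorer.exe:
page = b"explorer.exe code".ljust(4096, b"\x00")
handles = [store_guest_page(page) for _ in range(3)]

assert len(physical_pages) == 1     # one physical copy backs all three
```

In the real thing, a guest write to a shared page triggers a copy-on-write split, so the sharing stays invisible to the guests.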

Honestly, if you go with a nice RAID 5 or RAID 10 setup along with a dual-core CPU and 4GB of RAM, you should be pretty set, unless you're planning on running 10 or more VMs. I'd go with SCSI if possible; otherwise, the fastest SATA drives you can afford.
 
Memory sharing, balloon driver, etc. are unique to ESX.
 
ESX isn't free though, is it?

This is very informative, thanks. I didn't realise you also need fast hard drives. So SCSI or Raptors it is, then.

Just a question for you, CoW: how did you run a dozen VMs with only 146GB of usable hard drive space?
 
Most of my Linux VMs only need about 8-10GB of hard drive space, since they're command-line only, and half of my VMs are Linux.
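The arithmetic works out: 3x73GB in RAID 5 leaves two drives' worth usable. With a hypothetical split (the post above only pins down the Linux figure), it fits:

```python
usable_gb = (3 - 1) * 73        # RAID 5 loses one drive to parity -> 146GB

# Hypothetical sizes: only the Linux figure comes from the post above.
linux  = 6 * 9                  # ~8-10GB each, command-line only
others = 6 * 12                 # assumed average for the remaining VMs

print(usable_gb, linux + others)   # 146 vs 126 -> it fits
```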
 
Well, I managed to get a server from work that was just lying around. I will post the specs and pics early next week; I'll be away the next 3 days for work. We'll see if anyone can help me spec this baby out.
 
If you have any questions, drop me a line. I'm a VMware TSE.
 
I understand ESX is picky about the hardware (at least SATA controllers). Can anyone share their working configurations for ESX (version 3.5)?
 
You just need SCSI (or a SATA RAID controller that it sees as SCSI, which you can throw a driver in for). What are you planning on running it on?
 
You need more than that, and there are no drivers you can throw in ;) You have to understand: there is no console or any place to add drivers to the system (the console is actually a virtual machine). You have to be supported out of the box.

Check the HCL grid.

http://www.vmware.com/support/pubs/vi_pages/vi_pubs_35.html The first set of links is to supported controllers and cards.

ESX is EXTREMELY picky. Namely, we don't support SATA (except on SAS RAID controllers, RAID 1 minimum), and we don't support IDE (except as an install location: no VMFS).

The EASIEST way to get up and going is with this motherboard:
http://www.newegg.com/product/product.asp?item=N82E16813131154
Use 2 SATA drives in RAID 1 and have at it. :) Feed it lots of RAM, a nice quad-core CPU, and pet it from time to time. The LSI Logic controller is built in and fully supported out of the box.
 
You need more than that, and there are no drivers you can throw in ;)

I did it a while ago with a SATA RAID controller; can't remember if it was 2.5.3 or VI3. In the ESX 2.x days there were several times when I had to force-load RHEL drivers from the service console to get a NIC or disk controller working.
 
There is no such thing as a service console in VI3. In fact, while it may have worked in ESX2.5, it's not supported, and there's NO way it'll work now.

What you think is the service console is the Console OS (COS), and it's a virtual machine. :)

It's either supported now, or not, out of the box. No drivers can be installed. In fact, in 3i, there's no COS anymore at all. 3.5 still has one, but it's pretty minimal.
 
I totally understand the requirements from VMware's side to make sure that devices on the HCL are server products, so they can guarantee reliability. But I think it's pretty common to want a simpler machine in the test lab (less noise, lower price, etc.).

Thanks for the tip on the motherboard - the only downside is that it "only" supports 8GB of RAM. I was thinking of trying the Abit Fatal1ty FP-IN9 SLI motherboard (one of the few that supports 16GB and quad-core?).

But then I guess I have to get a NIC (the Intel Pro 1000 MT seems popular) and some cheap storage controller. Anyone got a suggestion for a SATA controller that they know works and isn't too expensive for the lab?

I've done hours of browsing the VMware forums but can't seem to figure out the best alternative for the storage.
 
Test lab machines generally run under VMware Workstation or the like, and honestly, that's not a priority for us - most places build a machine identical to their others for testing. The chipset on that mobo may not be supported, btw. What's it running?

There are no SATA controllers that are supported. You'll have to go SCSI, or use a SAS controller and feed it SATA drives (in RAID).

Most of the Adaptec SCSI adapters are supported.
 
It's the ATI Radeon Xpress 1250 chipset and a Realtek NIC (which I know will not be supported). But this card together with Adaptec cards would work.

What chipsets should I look for that are supported (that being said, it doesn't mean the NIC or storage controller will be supported)?
 
Read the hardware HCL.

We support a few of the nForce boards (odd, yes), and then the Intel 3000, for instance. Just hit up the HCL: vmware.com/support/pubs/
 
There is no such thing as a service console in VI3. In fact, while it may have worked in ESX2.5, it's not supported, and there's NO way it'll work now.

What you think is the service console is the Console OS (COS), and it's a virtual machine. :)

Of course force-loading other distros' drivers from the service console isn't supported. I never said it was. And yes, there certainly is a service console in ESX 3, and no, it's not a VM. Just because its resources are allocated from the VMkernel doesn't make it a virtual machine.

And, you're right; for the six people on the planet who are using 3i today, there is no service console.
 
Read the hardware HCL.

We support a few of the nForce boards (odd, yes), and then the Intel 3000, for instance. Just hit up the HCL: vmware.com/support/pubs/

I did a search for nForce in the HCLs but only found 3 supported NICs, and nothing about chipsets. Can you point me in the right direction?

For example, the Abit Fatal1ty FP-IN9 SLI uses an Nvidia nForce 650i SLI chipset rather than ATI.
 
Of course force-loading other distros' drivers from the service console isn't supported. I never said it was. And yes, there certainly is a service console in ESX 3, and no, it's not a VM. Just because its resources are allocated from the VMkernel doesn't make it a virtual machine.

And, you're right; for the six people on the planet who are using 3i today, there is no service console.

Yes, that service console is a VM. Trust me here: I work for VMware as a technical engineer supporting this product; I know what I'm talking about. It looks like a console and it acts like a console, but it is NOT a console. At VMkernel init, the base RHEL kernel init is stopped, the VMkernel loads, and then ~continues~ the RHEL init inside a virtual environment.

It's a virtual machine, created by the VMkernel, and if you have access to the dev tools, you can modify and edit it fully.

If you try to load something like PowerPath, for instance, you'll corrupt parts of the VM, because it doesn't actually have full hardware access - it's going through the normal abstraction layer of any other virtual machine.

The console is just a troubleshooting VM with the esxcfg commands.

I did a search for nForce in the HCLs but only found 3 supported NICs, and nothing about chipsets. Can you point me in the right direction?

For example, the Abit Fatal1ty FP-IN9 SLI uses an Nvidia nForce 650i SLI chipset rather than ATI.

I'll ask at work. I don't think there's an issue, but we don't normally list the chipsets themselves.
 
The EASIEST way to get up and going is with this motherboard:
http://www.newegg.com/product/product.asp?item=N82E16813131154
Use 2 SATA drives in RAID 1 and have at it. :) Feed it lots of RAM, a nice quad-core CPU, and pet it from time to time. The LSI Logic controller is built in and fully supported out of the box.

Are the onboard SATA and Gb NICs natively supported by ESX 3.5?

EDIT: Nevermind, answered my own question via Google-fu.

That is a nice board. :)
 
SATA is a no, but the SAS controller is! As long as you feed it SATA drives in RAID 1, it's happy :) And yes, the NICs are good.
 
Good stuff.

SAS kicks complete and total ass.

My ESX server at home is running on just a simple ASRock AM2 board, an AMD 3800 X2, an old SCSI PCI card, and an Intel Gb NIC.

With this board, however, one could make a killer ESX server for cheap.

Mobo: $350
Q6600: $250
4x2GB RAM: $140
2x40GB SATA RAID 1 for ESX: $60
3x320GB SATA RAID 1E for VMs: $200
Case: $150
Quality PSU: $100

Total: $1,250 :D

I might have to recommend this to my boss as a testing/development box. We're currently running on HP BL460c blades.
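A quick sanity check on that parts list (prices exactly as quoted above):

```python
# Sum the parts list from the post above.
parts = {
    "Mobo": 350,
    "Q6600": 250,
    "4x2GB RAM": 140,
    "2x40GB SATA RAID 1 (ESX)": 60,
    "3x320GB SATA RAID 1E (VMs)": 200,
    "Case": 150,
    "PSU": 100,
}
print(f"Total: ${sum(parts.values()):,}")   # Total: $1,250
```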
 
Yep - I'm building much the same, actually :D
 