Your home ESX server lab hardware specs?

I mostly use Hyper-V at work so I decided to make an ESX box to play with random stuff.

I started by ripping apart a 1U Supermicro server based on an X7DBU board. I got it on eBay for a pretty good price (well under $200 with shipping included).

The CPUs are only 2x L5320 1.86GHz, but it has 32GB of RAM. :cool: With RAM prices through the roof right now, I figure this is a great bang-for-the-buck setup to play with.

The case I used was an NZXT 210 I had lying around. The X7DBU did NOT want to fit, so I had to do some modding.

Right now I only have 3 VMs (Win2k12, Win7 and Ubuntu), but I plan on building a typical Windows infrastructure (2012 DCs, Exchange 2013, SharePoint, something with a medium-size SQL database, and so on) to get more familiar with a 2012+ environment.
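(If I ever get tired of clicking through the client, stamping those lab VMs out can be scripted. This is just a rough pyVmomi sketch, not something I'm actually running: the hostname, credentials, datastore name and VM names below are all placeholders, and it only creates bare VM shells with no disks or NICs.)

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder host/credentials for a standalone ESXi box - adjust to taste.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.lab.local", user="root", pwd="password", sslContext=ctx)
content = si.RetrieveContent()

datacenter = content.rootFolder.childEntity[0]   # "ha-datacenter" on a standalone host
vm_folder = datacenter.vmFolder
resource_pool = datacenter.hostFolder.childEntity[0].resourcePool

def create_vm(name, mem_mb, cpus):
    # Bare VM shell on a placeholder datastore; disks and NICs get added later.
    files = vim.vm.FileInfo(vmPathName="[datastore1]")
    spec = vim.vm.ConfigSpec(name=name, memoryMB=mem_mb, numCPUs=cpus,
                             files=files, guestId="windows8Server64Guest",
                             version="vmx-09")
    return vm_folder.CreateVM_Task(config=spec, pool=resource_pool)

for name in ("dc01", "dc02", "exch01", "sp01", "sql01"):
    create_vm(name, 4096, 2)

Disconnect(si)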

Pics:

Case + Board
v41c79.jpg


Hacking the case up to make the board fit. I didn't have tin snips with me. It was going fine with the nibblers until I hit a rivet that was in the way, so I just bent everything caveman style.
2mi3rio.jpg


Cooling - I have a bunch of spare AP-15 Gentle Typhoons I ordered a while back so I finally put some more of them to use:
xdutjr.jpg


Zip-tied nice and snug. Kinda reminds me of one of those quad RC helis, heh.
e5jwaq.jpg


All put together, some random drives in (no RAID yet, but maybe soon):
a302sp.jpg


Loaded up on test bed:
2mw9jrc.jpg


Console:
2vkeo38.jpg
 
Added a second microserver and created a cluster; DRS is doing a very nice job of keeping it balanced.


Ok, dumb question... but is that from the web interface? I've never used it but it looks interesting...
Is there a "web interface" that is different from the "web client"? When I go to my ESX box in a browser I get a generic page with a link to the vSphere client, which is what I use, but I see no link to a directly hosted web interface. The client that I use looks different from the pic above.
 
Really kind of saddened by the fact that AMD is not going to be making Steamroller (and later) FX chips.

By the time I am ready to upgrade my FX-8120 based ESXi server, this fact is probably going to force me onto more expensive real server hardware :(
 
Is there a "web interface" that is different than the "web client"? When I goto my ESX box in a browser I get a generic page with a link to a vSphere client which I use, but I see no link to a directly hosted web interface? The client that I use looks different than the pic above.

It's the vCenter web interface, not the host's web interface.
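Since that means there's a vCenter in the mix, you can also pull the cluster's DRS settings straight from the API instead of clicking through the web client. A minimal pyVmomi sketch, assuming a placeholder vCenter hostname and credentials (adjust for your own lab):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter hostname/credentials.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Walk every cluster and print its DRS configuration.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    drs = cluster.configurationEx.drsConfig
    print(cluster.name, "DRS enabled:", drs.enabled,
          "| automation:", drs.defaultVmBehavior,
          "| migration threshold:", drs.vmotionRate)

Disconnect(si)

Handy for quickly confirming the automation level and migration threshold DRS is using to keep the cluster balanced.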
 
Dell T320 stuffed full of RAM, 1TB WD Black drives, and extra gig ports. Works quite well but wasn't cheap.
 
Are Dell PowerEdge C1100 2x Xeon L5520 2.26GHz servers good enough to run ESXi deployments these days? I know they are 3-4 years old, but would they suffice for a home ESXi server cluster (2 of them, obviously)? I'm looking to do some testing, yes, but also to run a media server VM (Plex & Subsonic) that serves 10+ clients (multiple transcodes at once), as well as a few different Windows Server VMs.

The Dell C1100s are the absolute best bang for the buck and have been for the past year or so.
I mean, the 72GB of RAM alone is worth the entire price you pay for them on eBay.
You can even upgrade the CPUs to the 56xx series for a 12-core system.
Maybe I lucked out, but mine is very quiet (or maybe it is because I stuffed it with SSDs only, which helps with cooling). I had plans to mod it into a tower in order to silence it, but gave up on that when I realized it was quiet already for my use.
So, all the parts for the mod are now sitting unused.
 
I have been eyeing those C1100s with 72GB of RAM for a while now. With the right storage, you could virtualize for days.

The mod I did on my X7DBU system was pretty much all about the noise; that system sounded like a 747 taking off. The new system is all but silent.

If I find an even bigger virtualization project I just might have to get one of those C1100s. :drool:
 
The Dell C1100s are the absolute best bang for the buck and have been for the past year or so.

I'll add that the HP DL160 G6s are up there for bang for the buck. I picked up one with two 5639s and 72GB of RAM for $529. I also picked up a dual quad-core version with 72GB of RAM for under that.

For ~$1000-1500 you can get the Dell C6100 with 8 quad-core CPUs and a ton of RAM (4 dual-CPU nodes).
 
That's a really cool setup. I like the push pull zip tie setup. Very cool.
 
Thanks :)
I haven't really done any monitoring, but I have run Prime95 on a handful of VMs and everything stays fairly cool to the touch. 4x pushing would probably work about the same.

I have the motherboard on a "3-pin workstation" setting, which keeps the fans at about 1200 RPM.

It doesn't overheat and it's near silent, so I'm happy.
 
I don't want to be "rude", just curious, are you guys buying Enterprise licenses for home labs? :eek::eek:
 
OK, for test purposes that's acceptable... but there is a lot of fantastic hardware in this thread; is no one using it for production?
I mean, if you just want to "play" you can do it with muuuch less stuff, no?
 
Yes, but then this would be oftForum. :p
 
Lol! I test enterprise-level items sometimes. I also bought the level of hardware I did because I don't plan on upgrading in the next 3 years, since my wife and I are looking at starting our family. All "silly" purchases go out the window when the kids come along. (And rightfully so.)
 
OK, for test purposes that's acceptable... but there is a lot of fantastic hardware in this thread; is no one using it for production?
I mean, if you just want to "play" you can do it with muuuch less stuff, no?

For actual production, most people seem to go with mainstream branded servers (Dell/HP/etc.) due to support concerns. A lot of the stuff here is white-box/lab equipment that people shy away from in production.

I have had 2 Supermicro E3-1230v2/32GB 1Us in production for about a year now and they have been flawless. I did pick up a couple of HP DL160s off eBay to use instead, because we were hitting the limits of our 64GB of RAM in production and you can't beat the cost/performance ratio on them. 72GB of RAM and 24 cores for $550? Yes please! No warranty, but at that price we can buy a spare and still come in cheaper than a new build.
 
That's the beauty of hypervisors: the hardware can be a commodity. There are advantages and disadvantages to the hardware platform you select, but the VM doesn't care if it's running on a whitebox or a $500k blade center. It just works.

My lab has five whiteboxes (three running VMware, two running Hyper-V) and works great. No need for enterprise-class hardware. I've probably spent about $4,500 on the hosts, networking, and SAN, which gives me:

3x VMware hosts
AMD Phenom II X6 1045T CPU (2.7GHz, turbo up to 3.2GHz)
32GB RAM
Intel VT quad port NIC, onboard NIC
Corsair 430W 80+ efficiency PSU

2x Hyper-V hosts
AMD Phenom II X4 945 CPU (3.0GHz)
16GB RAM
Intel PT dual port NIC, Intel CT NIC, onboard NIC

2x HP v1910-24g switches, stacked

Windows 2012 R2 SAN
Intel i3-4130T 35W dual core CPU
32GB RAM
2x320GB 2.5" 7.2k RPM OS drives, mirrored
Auto-tiering pool for VM's:
-4x512GB Toshiba SSDs
-8x600GB Velociraptors
Intel VT quad port NIC
IBM M1015 SAS/SATA card in IT mode
Starwind v8 Beta with VAAI support
Primocache 0.9.2 beta using 27GB of RAM as read cache for all VM datastores/SMB shares

It all works great and makes for a stellar lab environment. VMs perform beautifully.
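If anyone wants a quick sanity check of the storage path from a client before trusting numbers inside the VMs, a dumb sequential write/read timer will spot a misconfigured link. A rough Python sketch; the share path and sizes below are placeholders:

import os
import time

TEST_FILE = r"\\san\vmshare\throughput.tmp"   # placeholder share/LUN path
SIZE_MB = 2048                                # big enough not to fit entirely in cache
CHUNK = 4 * 1024 * 1024                       # 4MB sequential I/O

buf = os.urandom(CHUNK)

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(SIZE_MB * 1024 * 1024 // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())                      # make sure the writes actually hit storage
write_secs = time.time() - start

start = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(CHUNK):
        pass
read_secs = time.time() - start

print("write: %.0f MB/s   read: %.0f MB/s" % (SIZE_MB / write_secs, SIZE_MB / read_secs))
os.remove(TEST_FILE)

With PrimoCache holding ~27GB of RAM in front of the pool, expect the read number to mostly reflect the cache rather than the disks, which is kind of the point.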
 
Primocache 0.9.2 beta

How is PrimoCache working out for you?

I stumbled across this product when I was building my own SAN solution using an M1015, but it was in beta so I passed (I ended up going with an HP SmartArray P410 with 512MB BBWC). Plus, the 90-day license key kinda bugged me. How are you getting around it (or are you)?

Also, what are you using Starwind for? I thought 2012 R2 Storage Spaces handled iSCSI target/replication/snapshots/tiering, thereby eliminating the need for something like Starwind.
 
PrimoCache is working great. I've been using it since it was FancyCache v0.8. The 90-day license thing is annoying because you can get more 90-day licenses, but you have to reboot the whole server for it to apply; I typically piggyback it onto my patch reboots.

Starwind not only performs much better than Microsoft's iSCSI server, it also supports VAAI, which is a big bonus. Microsoft's NFS server is also pretty slow, or I'd use that too.
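If you want to confirm the hosts actually see the Starwind LUNs as VAAI-capable, the per-device hardware acceleration status is exposed through the vSphere API as well. A rough pyVmomi sketch, with a placeholder host and credentials, and the property name as I remember it:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder: connect straight to one ESXi host.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# On a standalone host the inventory is rootFolder -> ha-datacenter -> hostFolder -> host.
host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

# Each SCSI disk reports its VAAI ("vStorage") support status.
for lun in host.configManager.storageSystem.storageDeviceInfo.scsiLun:
    if isinstance(lun, vim.host.ScsiDisk):
        print(lun.displayName, "->", lun.vStorageSupport)

Disconnect(si)

The same status should also show up in "esxcli storage core device list" on the host itself, if I remember right.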
 
The biggest issue with PrimoCache is that it's not cluster compatible: it does not mirror the cache between multiple nodes for fault tolerance.

It's just not safe to have a non-battery-backed cache gigabytes in size, because on a power loss you're going to lose all of the recent transactions.

So if you want a read/write RAM and flash block cache that is safe to use, look at the FlashSoft products and PernixData, but Primo is out. :(

Clustered Storage Spaces on Windows works with SAS disks only, so you need expensive-per-gigabyte SAS drives, multiple SAS controllers and an external SAS JBOD (better yet two, for fault tolerance), while Starwind can build the same config with a fraction of the space, straight off your boot SATA drives and with no other hardware. Clustered Storage Spaces needs maybe $4,000 invested in hardware just to work.

Starwind is very close to what VMware is doing with VSAN: building a high-performance, fault-tolerant virtual SAN from inexpensive SATA and Ethernet (flash is optionally used for cache).

The Windows built-in iSCSI target is a joke, as it's not fault tolerant on its own and needs a Windows cluster, and thus external shared storage (sick!).

For a single node you can use an SMB share and not spawn an iSCSI target at all, as the SMB share will be faster than the MS target.

 
I'm only using PrimoCache for read caching. Write caching happens in Windows Storage Spaces on SSD drives, so it's persistent.
 
Set up a home lab a few months back.

Hardware

Dell R610
2x Xeon X5570
128GB ECC registered
6x 500GB Seagate 10k SAS in RAID 5 + 1 hot spare

Software
ESXi 5.1 update 1

Lab setup
2x Windows domain controllers
2x IIS hosts
1x SQL 2012 host
SCCM 2012 (4 hosts)
SCOM 2012 (4 hosts)
Orchestrator (future project - another 3 hosts)
2x CentOS 6.x
1x pfSense to test firewall deployments
1x VMWare VCMA appliance

The R610 was set to minimum performance mode via the BIOS, which downclocks the CPUs to 1.6GHz and the memory to 866MHz. The server runs around 50% utilization (CPU/RAM) and pulls ~275 watts. Other than slow boot-up times, once the VMs are running there's really no noticeable impact on performance. I would like to grab an actual SAN/NAS to offload VMDK storage at some point.
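If you want to keep an eye on that utilization without opening the client, the host quick stats are easy to pull over the API. A minimal pyVmomi sketch, assuming a placeholder hostname and credentials (point it at vCenter or at the host directly):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="r610.lab.local", user="root", pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Grab every host in the inventory and print CPU/RAM usage from its quick stats.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    stats = host.summary.quickStats      # overallCpuUsage is MHz, overallMemoryUsage is MB
    hw = host.summary.hardware
    cpu_pct = 100.0 * stats.overallCpuUsage / (hw.cpuMhz * hw.numCpuCores)
    mem_pct = 100.0 * stats.overallMemoryUsage / (hw.memorySize / (1024 * 1024))
    print("%s: CPU %.0f%%  RAM %.0f%%" % (host.name, cpu_pct, mem_pct))

Disconnect(si)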

This is a very scaled-down version of my work labs (HP c7000 blades, EMC VNX5700 storage) to play around with different configs and settings.
 
Why don't you use a combination of CSV cache (read-only) and flash cache (read/write)? No need to install any tools...

 
I'm not using any CSVs, and while I am experimenting with Virtual Flash cache in VMware, I prefer using my SSDs in the Windows SAN since they can be used as read and write cache there, rather than read-only with vFRC in VMware. Pernix would be a great option if I could get an NFR license. :D
 
Starwind not only performs much better than Microsoft's iSCSI server, it also supports VAAI, which is a big bonus.

I did some reading on Starwind and it seems it offers quite a bit more performance than MS's iSCSI. I will try Starwind's free version and see how that goes.
 
Give the Beta a try if you're feeling adventurous. I took the plunge because it supported VAAI. I'm using normal thick iSCSI file LUNs on top of Windows Storage Spaces and it works great.
 
I might as well, since I just do this in my spare time for fun.
I ordered a 256GB SSD for my main machine so I can use the 128GB one for my iSCSI, but it's been held up for some reason. :(
I hope it's as easy to set up as the Windows one.
 
OK, final update on my lab environment and then I need to stop spending money (except to increase Hyper-V host RAM to 32GB when prices come down).

3x VMware ESXi 5.5 hosts
AMD FX-6300 6 core CPU
32GB RAM
Intel PRO1000 VT quad port Gb NIC
Onboard Gb NIC

2x Hyper-V 2012 R2 hosts
AMD Phenom II X6 1045T 6 core CPU
16GB RAM
Intel PRO1000 PT dual port Gb NIC
Intel PRO1000 CT single port Gb NIC
Onboard Gb NIC

Windows 2012 R2 Storage server
Intel i3-4130T 35W dual core CPU
32GB RAM
2x250GB 2.5" 7,200RPM OS drives (mirrored)
Auto-tiering VM Pool
- 4x512GB Toshiba SSD drives
- 8x600GB WD Velociraptor 10,000RPM drives
Shares Pool
-5x3TB Seagate 7,200RPM drives
Intel PRO1000 VT quad port Gb NIC
Onboard Gb NIC
PrimoCache Beta 0.9.2 using 29GB of system RAM as read cache for VM Pool
Starwind v8 Beta 2 presenting iSCSI LUNs to VMware hosts over 2x1Gb links
SMB 3 shares presented to Hyper-V hosts over 2x1Gb links
 
It's easy to tell what you're using the last system for; what about the rest? :)
 
Good morning,

I'm looking for two hosts for Hyper-V (or ESX) for lab purposes.
In the past I had two DL160 G6 servers; however, they were noisy and I really don't need two sockets per host.

Could you recommend some cheap configs in ATX format or something similar?

Thanks
 
Define cheap. We also need to know what your workloads are, or the specs of your current HP servers.
 