Ideal ESXi Install

ToddW2
2[H]4U | Joined: Nov 8, 2004 | Messages: 4,017
Just got my 6 CPU licenses and am working on my plan:

3 Hosts:
2x - SuperMicro 24 Bay HotSwap w/Redundant Platinum (or Gold?) PSU
1x - Rosewill 12 Bay HotSwap w/SeaSonic Gold PSU
1x - (If needed, 16 bay rackable JBOD chassis)

2 of the systems have on-board LSI RAID controllers w/cache & 10Gbit NICs.


I plan to use RAID10 SSD arrays for the VMs on each machine, and run ESXi itself from flash or SuperDOM modules. I have a mix of SSDs for this, as I didn't want one brand to be the single point of failure and I like to test various drives (4x 840 Pro 128GB, 4x 256GB M550, 4x 200GB S3700). I may swap out the Samsung 840s for Intel S3500 drives -- I have enough of those for a RAID1 setup.

My questions are:

1. Has anyone run ESXi + logging on a SuperMicro SuperDOM? (Sure, it's a bit more $, but 17TB of write endurance means I don't waste ANY space elsewhere for ESXi, right?) All 3 hosts use SuperMicro motherboards, so I can do this on all 3.
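Quick back-of-the-envelope math on that endurance number (the GB/day figure is purely my guess -- ESXi is mostly read-only after boot, with logs/scratch doing the writing):

```python
# Rough endurance math for a 17TB-write SuperDOM used for ESXi boot + logging.
# The GB/day figure is an assumption, not a measurement.
endurance_tb_written = 17            # rated write endurance (TB written)
assumed_gb_per_day = 2               # guess at daily log/scratch write volume

days = endurance_tb_written * 1024 / assumed_gb_per_day
print(f"~{days:.0f} days (~{days / 365:.1f} years) at {assumed_gb_per_day} GB/day")
# -> ~8704 days (~23.8 years) at 2 GB/day
```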


2. What to do / how to arrange storage?
3 of the machines (or fewer, if combined into 1-2 chassis) will have persistent-storage (spinner) arrays to start. I'm not sure how to best utilize/run the persistent storage.

I will have a VM for NAS duty and want access to the arrays from that VM, and others. We're only using ~500GB on our current NAS (Synology) and should hit around 1TB by the summer migrations, so we have plenty of space in all arrays. I believe I have around 24 of the 300GB drives.

- RAID1 (2x 1TB) for backup of important data on the NAS. This will likely be expanded to 3TB, 4TB, or 6TB drives, and I'm thinking RAID6 or ???
- RAID6 (5x 2TB) for NAS/general storage duties
- 2x RAID10 (8x 300GB 15k RPM SAS) + hot spares (I could do 3 arrays, but probably no room for hot spares then, not 100% sure) -- rough usable-capacity math below
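Quick sanity check on usable space for the arrays above, plus the S3700 VM set (nominal math only, ignoring VMFS/filesystem overhead):

```python
# Nominal usable capacity per planned array (ignores VMFS/filesystem overhead).
def raid1(n, size_tb):  return size_tb              # mirror: one drive's capacity
def raid6(n, size_tb):  return (n - 2) * size_tb    # two drives lost to parity
def raid10(n, size_tb): return (n // 2) * size_tb   # striped mirrors: half the drives

arrays = {
    "RAID1  2x 1TB (backup)":           raid1(2, 1.0),
    "RAID6  5x 2TB (NAS/general)":      raid6(5, 2.0),
    "RAID10 8x 300GB 15k SAS (each)":   raid10(8, 0.3),
    "RAID10 4x 200GB S3700 (VM store)": raid10(4, 0.2),
}
for name, tb in arrays.items():
    print(f"{name}: ~{tb:.1f} TB usable")
```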

I don't want to HAVE to run all 3 hosts all the time due to power/heat output, but I'm a bit on edge about running everything from 1 host.

*Note* 2 hosts have on-board LSI RAID w/cache.
*Note* All RAID = hardware RAID, either an LSI card or the onboard LSI controller.

My thought was to use the Rosewill (the most efficient) 4U setup with onboard LSI RAID to run:
- NAS VM (OMV? Plex, iSCSI, the usual NAS stuff)
- RAID10 (4x 200GB Intel S3700) in hot-swap bays for VMs
- RAID6 (5x 2TB) in hot-swap bays for NAS storage
That gives me 5 more hot-swap bays, plus normal SATA connectors, additional LSI RAID card(s), and SAS2 expanders. I was thinking of running 5x 300GB SAS in RAID0 or RAID6 -- I'm also debating using an external chassis for these so I can power them on/off as needed to save power. This could be for my VM images/backups and a general VM datastore, which I could back up to the RAID6 (5x 2TB) if needed.

How do ESXi & the VMs handle an array going offline and coming back online -- is that even possible?
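My rough understanding is that after powering the external shelf back on, I'd have to rescan storage on the host so the datastore shows back up. Something like this pyVmomi sketch is what I have in mind (host address/credentials are placeholders, and I haven't tested it):

```python
# Untested sketch: rescan a host's HBAs and VMFS after an external drive shelf
# is powered back on, so its datastore reappears. Address/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only: skip certificate checks
si = SmartConnect(host="esxi-rosewill.local", user="root", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        storage = host.configManager.storageSystem
        storage.RescanAllHba()   # re-detect devices behind the controllers
        storage.RescanVmfs()     # re-detect VMFS volumes on those devices
    view.Destroy()
finally:
    Disconnect(si)
```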


This leaves the 1 machine handling 24/7 duty while keeping power in check and allowing a number of storage options.

Since I'll have more drives in the other systems, but they won't be on 24/7, is vSAN even something to consider?

3. If I don't run it on a SuperDOM module, how can one easily have a RAID1 of the ESXi USB or CF boot media? Do they make adapters that make this possible? Or is it best to just keep a copy of the drive: once ESXi boots, yank and copy?
 
If you're running all local storage, how are you planning on handling vMotion/HA -- or aren't you? (link)

I typically work with pretty large enterprise configurations (Cisco UCS, etc.), so we usually have shared SAN storage across larger clusters, plus boot-from-SAN or Auto Deploy configurations for stateless machines (service profiles can run on any blade across the environment).

One suggestion on the RAID10 SSDs. We had a similar setup in a monitoring environment that had to be isolated and external to all datacenter dependencies. It was loaded with consumer-class SSDs because of customer cost concerns, and we had a LOT of drive failures. Depending on your hardware RAID controller, it might not correctly report the drive issues, so you could have multiple failed drives and not know it until your RAID fails. So I'd recommend checking the RAID controller's ability to properly inform you of SSD issues, and definitely set up monitoring/alerting for the array if you have the capability.
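As a rough example of the kind of check I mean -- this assumes LSI's storcli is installed and the controller is /c0, and the exact state strings vary by tool/firmware, so treat it as a sketch rather than a drop-in:

```python
# Sketch of a cron-able check that alerts when any virtual drive isn't Optimal.
# Assumes storcli64 is available and the controller is /c0; verify the state
# strings ("Optl", etc.) against your own output before trusting it.
import smtplib
import subprocess
from email.message import EmailMessage

out = subprocess.run(["storcli64", "/c0/vall", "show"],
                     capture_output=True, text=True, check=True).stdout

# Crude filter: any virtual-drive line that doesn't report the Optimal state.
bad = [line for line in out.splitlines()
       if "RAID" in line and "Optl" not in line]

if bad:
    msg = EmailMessage()
    msg["Subject"] = "RAID alert"               # placeholder addresses/relay below
    msg["From"] = "raid-check@example.lan"
    msg["To"] = "you@example.lan"
    msg.set_content("\n".join(bad))
    with smtplib.SMTP("mail.example.lan") as smtp:
        smtp.send_message(msg)
```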

In terms of arrays "coming and going offline", I'm assuming you mean one of your servers being evacuated of all VMs and then turned off? The answer really depends on how you have your cluster set up (or whether you have one at all), and what your settings are for HA, etc. There are also built-in capabilities in vCenter (DPM) for power management based on running VMs, with the ability to turn hosts off depending on load. (link)
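If you want to poke at that power management (DPM) programmatically, enabling it on a cluster looks roughly like this in pyVmomi -- vCenter address, credentials, and the cluster name are placeholders, and DPM also needs DRS enabled on the cluster:

```python
# Sketch: enable DPM (Distributed Power Management) on a cluster via pyVmomi.
# vCenter address, credentials, and cluster name are placeholders; DPM requires
# DRS to be enabled on the cluster as well.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "HomeLab")

    spec = vim.cluster.ConfigSpecEx()
    spec.dpmConfig = vim.cluster.DpmConfigInfo(enabled=True,
                                               defaultDpmBehavior="automated")
    cluster.ReconfigureComputeResource_Task(spec, modify=True)
finally:
    Disconnect(si)
```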

Overall, I'd really recommend putting these in a cluster if that is possible, with some sort of shared storage. I believe that would serve you best and provide the best uptime and the least stress. I'd have to defer the vSAN question to others who use it in production, but I understand the license isn't cheap, and I'd recommend you consider other options, which could vary depending on how many VMs you're running.

On your last question about the RAID-1 boot drive (USB or CF), most environments I've been in have just run off of 1 isolated boot flash device (it depends on the hardware's ability to support that). I never saw a boot USB flash drive fail, even in some Dell R900s/R910s that had been online for 4+ years. In Cisco UCS we have the capability with the M4 blades to have RAID-1 boot flash (SD cards), but we never use it since we run stateless, and no local disks are tied to any of the blades or profiles.
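On the "keep a copy" approach: rather than imaging the whole USB stick, you can just pull a host-configuration backup and restore it onto a fresh install. From the ESXi shell that's "vim-cmd hostsvc/firmware/backup_config"; a rough pyVmomi equivalent (host address/credentials are placeholders) would be:

```python
# Sketch: download an ESXi host-configuration backup bundle instead of
# mirroring the boot USB/CF. Host address and credentials are placeholders.
import ssl
import urllib.request
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.local", user="root", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    url = host.configManager.firmwareSystem.BackupFirmwareConfiguration()
    url = url.replace("*", "esxi01.local")   # API returns '*' as a host placeholder
    with urllib.request.urlopen(url, context=ctx) as resp, \
         open("configBundle.tgz", "wb") as f:
        f.write(resp.read())
finally:
    Disconnect(si)
```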

Check out VMware Auto Deploy and its options for booting servers. If you have a PXE environment on site already, you might find that solution familiar and might consider running your config that way. If you use VMware host profiles, you may find that the mean time to recovery from a bare-metal install to a functional host is really low, but again, that depends on the totality of your environment and what capabilities you have.
 
This is in a home-lab / test-dev environment as well as for home usage, so most likely no dedicated storage system.

I have some USB drives coming.
 
I have a 5-bay Synology w/4-port NIC, but I want to run the media stuff too, and I don't think it can handle it all; the QNAPs have such nicer processors that I should have gone that route. I also want to fold it into the ESXi box to recoup some of the $$ invested in the new 5TB drives too ;) I'm just not 100% sure which NAS OS/VM I'll go with yet.

I downloaded a handful to play with and see which I like.


Luckily we don't house 1000s of videos, so my transfer to a new NAS right now is simple, as all data fits on 1 HD :)
 