Just got my 6 CPU licenses and am working on my plan:
3 Hosts:
2x - SuperMicro 24-bay hot-swap w/redundant Platinum (or Gold?) PSU
1x - Rosewill 12-bay hot-swap w/SeaSonic Gold PSU
1x - 16-bay rackable JBOD chassis (if needed)
2 of the systems have an on-board LSI RAID controller w/cache & 10Gb networking.
I plan to use RAID10 SSD arrays for the VMs on each machine, and run ESXi from flash or SuperDOM modules. I have a mix of SSDs for this since I didn't want a single brand as a failure point, and I like testing various drives (4x 128GB Samsung 840 Pro, 4x 256GB Crucial M550, 4x 200GB Intel S3700 -- I may swap the 840 Pros for Intel S3500 drives; I have enough for a RAID1 setup).
My questions are:
1. Has anyone run ESXi + logging on a SuperMicro SuperDOM? (Sure, it's a bit more money, but 17TB of write endurance means I don't waste ANY space elsewhere for ESXi, right?) All 3 hosts use SuperMicro motherboards, so I can do this on all of them.
2. What to do / how to arrange storage?
All 3 machines (or their drives combined into 1-2 chassis) will have persistent storage (spinner) arrays to start. I'm not sure how best to utilize/run that persistent storage.
I will have a VM for NAS duty and want access to the arrays in this VM and others. We're only using ~500GB on our current NAS (a Synology) and should hit around 1TB by the summer migrations, so we have plenty of space in all the arrays. I believe I have around 24 of the 300GB drives.
- RAID1 (2x 1TB) for backup of important NAS data. This will likely be expanded to 3TB, 4TB, or 6TB drives, and I'm thinking RAID6 or ???
- RAID6 (5x 2TB) for NAS/general storage duties
- 2x RAID10 (8x 300GB 15K RPM SAS) + hot spares (I could do 3 arrays, but then probably no room for hot spares -- not 100% sure)
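For my own sanity, the usable capacity those layouts work out to (quick shell arithmetic; raw drive sizes, no filesystem overhead counted):

```shell
# Usable capacity per planned array (raw, before filesystem overhead)
echo "RAID1  (2x 1TB):   $(( 2/2 * 1 ))TB"     # mirror: half the drives
echo "RAID6  (5x 2TB):   $(( (5-2) * 2 ))TB"   # two drives' worth go to parity
echo "RAID10 (8x 300GB): $(( 8/2 * 300 ))GB"   # striped mirrors: half the drives
```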
I don't want to HAVE to run all 3 hosts all the time due to power/heat output, but I'm a bit on edge about running everything from 1 host.
*Note* 2 hosts have on-board LSI RAID w/cache.
*Note* All RAID = hardware RAID, via either an LSI card or the onboard LSI controller.
My thought was to use the Rosewill (the most efficient) 4U setup with onboard LSI RAID to run:
- NAS VM (OMV? Plex, iSCSI, the usual NAS stuff)
- RAID10 (4x 200GB Intel S3700) in hot-swap bays for VMs
- RAID6 (5x 2TB) in hot-swap bays for NAS
That gives me 5 more hot-swap bays, and I have spare SATA connectors, additional LSI RAID card(s), and SAS2 expanders. I was thinking of running 5x 300GB SAS in RAID0 or RAID6 -- also debating an external chassis for these so I can power them on/off as needed to save power. This could hold my VM images/backups and a general VM datastore, which I could back up to the RAID6 (5x 2TB) if needed.
How do ESXi and the VMs handle an array going offline and coming back online -- is that even possible?
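From what I've read, this should work as long as the shelf is its own datastore: unmount it cleanly before powering it down, then rescan and remount when it's back. Roughly this (the volume label is just a placeholder for whatever I name the datastore, and I assume any VMs with disks on it have to be shut down first):

```
# list mounted VMFS volumes and their labels
esxcli storage filesystem list

# before powering the shelf down: unmount the datastore cleanly
esxcli storage filesystem unmount --volume-label=sas-archive

# after powering it back up: rescan the HBAs, then remount
esxcli storage core adapter rescan --all
esxcli storage filesystem mount --volume-label=sas-archive
```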
This leaves 1 machine handling 24/7 duty while keeping power in check and allowing a number of storage options.
Since I'll have more drives in the other systems, but they won't be on 24/7, is VSAN even something to consider?
3. If I don't run it on a SuperDOM module, how can one easily have RAID1 of the ESXi USB or CF boot media? Do they make adapters that make this possible? Or is it best to just keep a copy of the drive -- once ESXi boots, yank and copy?
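If I go the yank-and-copy route: since ESXi runs from RAM once booted, I'm assuming I can just image the stick from any Linux box with dd, something like this (device names are placeholders for whatever the stick enumerates as):

```shell
# clone_disk SRC DST -- raw-copy SRC to DST, then verify byte-for-byte.
# Works file->file, device->file (backup), or file->device (restore);
# cmp flags a size mismatch, so restore to a same-size stick.
clone_disk() {
    src="$1"; dst="$2"
    dd if="$src" of="$dst" bs=4M conv=fsync status=none &&
    cmp "$src" "$dst"
}

# Back up the boot stick (e.g. /dev/sdX on a Linux workstation):
#   clone_disk /dev/sdX esxi-boot.img
# Restore onto a replacement stick:
#   clone_disk esxi-boot.img /dev/sdY
```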