Updated 4/26:
20 days later, and many hours of reading later, I have changed just about everything in the first post. For anyone interested in a similar build, here is where I stand. Hardware will be ordered this week, since I can't wait any longer and have not been able to get much feedback on my build plan.
Usage Requirements:
- NFS share for VMs running on 2 other servers. Estimated 20-30 Windows and Linux VMs running at any time. *Primary purpose*
- Samba/NFS share for desktops and VMs to map to for central shared storage
- Possibly run ESXi with hardware passthrough to make best use of hardware resources. If not, possibly run MySQL on server too.
- Possibly host remote backups for people. Will look into options such as ssh/rsync or windows equivalent.
- Possibly run SABnzbd for easy Usenet access.
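For the rsync-over-ssh backup option, the usual pattern looks something like this (hostnames and paths below are invented for illustration; Windows clients would need an rsync port such as cwRsync or DeltaCopy):

```shell
# Pull a client's files into a dated snapshot directory, hard-linking
# unchanged files against the previous run so each backup only costs
# the space of what changed.
rsync -az --delete \
    --link-dest=/tank/backups/client1/latest \
    -e ssh client1.example.com:/home/ \
    /tank/backups/client1/2011-04-26/

# Repoint "latest" at the new snapshot for the next run.
ln -sfn /tank/backups/client1/2011-04-26 /tank/backups/client1/latest
```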
Budget: I have blown my original budget, so all bets are off. I now am shooting for the best choices as much as possible.
- Case: SuperMicro SC846TQ-R900B (I would much prefer the SC847A-R1400LPB 32-bay chassis, but people say to avoid using an expander with ZFS.)
- Motherboard: Supermicro X8DTH-6F
- CPU: 2 * E5645 Hex-core
- RAM: 48GB of Crucial DDR3 PC3-10600 Unbuffered, ECC
- SAS: 2 * AOC-USAS2-L8I + onboard LSI 2008, all passed through if using ESXi
- HDD: 18+ * 5k3000. Will probably order more before I am done
- HDD L2ARC/OS: ??? Should have ordered the 64GB V100, but the rebate has now expired.
- Power Supply: Included in chassis
- UPS: Existing 10KVA 110/208/240 UPS.
I will get deeper into the details once hardware arrives, but here is what I think I need based on my research:
Planned layout:
- ESXi, controllers passed through to OpenIndiana VM for ZFS. May test other options when I get that far.
- 6 drives in RAID10 for use by MySQL VM and other bandwidth-hungry apps
- 12 drives in some sort of RAIDz. Last night I was worrying about what would happen with a controller failure, so I might do four 3-drive RAIDz vdevs, or two 5-disk RAIDz vdevs with a spare or two if I decide I am not worried about that.
- Enable DeDupe on a specific dataset for hosting Windows vmdks. This will maximize dedupe efficiency without requiring massive amounts of RAM.
- Either a 15k SAS drive or an SSD for the OI VM boot disk, and possibly L2ARC if it is an SSD. I don't think I need RAID for that right now, because I can easily back up the VMDK and copy it to a new drive in a hurry.
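As a sketch of what that layout might look like at the command line (pool names and cXtYdZ device names are placeholders, not my real hardware):

```shell
# 6 drives as a RAID10-style pool of three mirrored pairs for MySQL
# and other bandwidth-hungry apps:
zpool create fastpool \
    mirror c0t0d0 c0t1d0 \
    mirror c0t2d0 c0t3d0 \
    mirror c0t4d0 c0t5d0

# 12 drives as four 3-drive RAIDz vdevs, spread so that no vdev has
# more than one drive per controller:
zpool create bigpool \
    raidz c1t0d0 c1t1d0 c1t2d0 \
    raidz c1t3d0 c1t4d0 c1t5d0 \
    raidz c2t0d0 c2t1d0 c2t2d0 \
    raidz c2t3d0 c2t4d0 c2t5d0

# Dedupe is a per-dataset property, so it can be enabled only where
# the Windows vmdks live, keeping the dedupe table small:
zfs create bigpool/winvms
zfs set dedup=on bigpool/winvms
```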
*Original outdated post*
Hey everyone, I just recently discovered how much good info is on this board when researching a NAS solution. I have pieced together lots of good info from many sources, and thought I would post here to solicit some peer reviews before I buy all the gear. Once I come up with my final config I plan to buy and build ASAP.
Background: (long, so feel free to skip this)
My home "Lab" currently has 2 servers.
- Dell 2950 with six 1TB disks. I originally built a 3-disk RAID 5, then discovered it couldn't be expanded without Windows, so I made the last three a mirror plus a hot spare. ESXi running on internal USB.
- Dell R710 with 2 15k 143GB disks. ESXi also on internal USB. 48GB ram, so this is my primary server.
Currently my only shared storage is an old PC with a single 1TB disk running NFS so I can move VMs between servers and because 143GB is way too small for my 710.
I know I need some sort of shared storage to make this work much better, so I started researching. Here was my progression so far:
- Rackable Systems 16-bay server. I was looking at buying one of these off eBay and just loading it with drives. It seemed like a quick and dirty solution. However, I thought the supply had dried up, so I looked elsewhere.
- NAS appliances. Next I looked at things like the QNAP 7-bay NAS as an option. It seems pretty straightforward, but it doesn't get the best reviews and I was hoping for more drives. I also would prefer redundant power if I can manage.
- UnRaid (Lime Technology) appliances. I found UnRaid and saw the pre-built chassis they make for it. 15 drives in a mid-tower looked very tempting, and I thought I would order one, or build something similar and put something with ZFS on it. This was when I found out about ZFS DeDupe, which I think will help a lot with my VMs.
- Custom build. I found threads on UnRaid forums about custom builds, and found the Norco servers. While researching them I found this forum, and all the threads about different build options.
Usage: My "Lab" is pretty well used by home standards. At any given time I have 20-30 VMs running, mostly XP or Ubuntu with a handful of others mixed in. Each could be idle or very active depending on the day. I feel today I have some significant performance hits from the low disk count, especially on the 710.
Requirements: I plan to host most if not all of my VMs on my new shared storage. I plan for something like 10 2TB disks right now, leaving capacity to add another 10 bigger disks when I need them and prices adjust. I have a higher-end Cisco gigabit switch, so I plan to do LACP or similar to try to maximize bandwidth.
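If the storage OS ends up being OpenIndiana/illumos, the LACP side of that would look something like the following (interface names and address are assumptions), with the Cisco ports put in a matching channel-group in active mode. One caveat I've read: ESXi standard vSwitches of this era only do static link aggregation with IP-hash, not LACP, so the ESXi-facing ports may need a static EtherChannel instead.

```shell
# Bond two NICs into an LACP (802.3ad) aggregation on the storage host:
dladm create-aggr -L active -l net0 -l net1 aggr0

# Put an address on the aggregation:
ipadm create-if aggr0
ipadm create-addr -T static -a 192.168.1.10/24 aggr0/v4
```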
Assumptions:
- I think based on all my reading I am best off with ZFS with DeDupe. I am not sure yet which OS, but OpenIndiana sounds tempting.
- I think NFS is my best bet for VMware, though I am open to iSCSI if there is a good reason for it. I like the flexibility of NFS, though, because I can have one volume for everything and hopefully share it out via CIFS and other ways as needed.
- Unless I hit a major snag, I hope to virtualize the ZFS host so I can have some low-use VMs on the same server. Maybe put the two 15k SAS disks in and run vCenter as a VM. If I understand the concept right, I can run ESXi, pass through my SAS controllers, and create a ZFS appliance to share out all my disks via NFS to other (and the same) ESXi servers. Did I get that right?
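From what I've read, that loop looks roughly like this (dataset name, subnet, and datastore label are made up):

```shell
# On the virtualized storage appliance: create a dataset and publish it
# over NFS, granting root access so ESXi can write as root.
zfs create tank/vmstore
zfs set sharenfs=rw=@192.168.1.0/24,root=@192.168.1.0/24 tank/vmstore

# On each ESXi 4.x host (including the one running the storage VM),
# mount it as an NFS datastore:
esxcfg-nas -a -o 192.168.1.10 -s /tank/vmstore zfs-vmstore
```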
Build: Here is my build as of right now. I have stolen bits from other threads, and factored in feedback from some of the pros with most of this.
Budget: I would like to be cost-effective where possible, but I can spend $3,000-$4,000 on this if I need to before I start reconsidering options.
- Case: Norco RPC-4224
- Motherboard: Supermicro X8ST3-F
- RAM: Minimum of 8GB, more likely 16GB or more
- SAS: 3 * M1015?
- HDD: 10 * 5k2000 or 5k3000. Price will help determine.
- HDD L2ARC: As I dig deeper into ZFS configs, I think I will get some SSD for L2ARC. Too early to know what I need so that will get filled in later. Might use same for ZIL if it makes sense.
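If I understand it right, cache and log devices can be bolted onto an existing pool after the fact, so this decision can safely wait. Something like (pool and device names hypothetical):

```shell
# Attach an SSD as L2ARC (read cache) to an existing pool:
zpool add tank cache c3t0d0

# If a separate ZIL turns out to be worthwhile for sync-heavy NFS
# traffic, add a mirrored log device:
zpool add tank log mirror c3t1d0 c3t2d0
```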
- Power Supply: TBD. Haven't researched yet.
- UPS: Existing 10KVA 110/208/240 UPS.
Questions:
- Is the M1015 the "best" choice if I end up doing ESXi passthrough? I have seen people saying it works, but are there any drawbacks to consider? I like the price, going as low as $75 on eBay.
- The X8ST3-F has lots of slots and IPMI, both big pluses. But it is not cheap. Is that the best bang for my buck if I don't foresee ever growing beyond 24 drives in this build?
- I'm a little confused by the specs on the motherboard. It lists a SAS and a SATA controller. Does that depend on what type of disk I add? Because the SAS is an LSI which would work for ESXi passthrough, but apparently the SATA wouldn't. I don't intend to spend the extra for SAS.
- Is the built-in expander a reason to avoid the SuperMicro cases? I like that they have redundant PSUs, which would be harder to do in the 4224. But I have seen lots of comments that lead me to think I should avoid expanders.
Out in left field options:
- I haven't completely ruled out the Rackable Server. For $450 shipped plus disks, it would be an easy solution even if it would be slower than building my own. I could also build in the future and migrate to a new system.
- I haven't ruled out the QNAP either. But at $2000 I think I can do better on my own.
Closing thoughts: Any and all feedback and criticism is greatly appreciated, especially anything specific to a NAS supporting lots of VMs. I will try to keep this thread updated as I firm up my plans, and plan to post pics as soon as I build it. I would love to get parts ordered within a week.