Drive Pooling vs ZFS for Media Storage

frankhuzzah

I'm in the later stages of my media server upgrade and am planning out my storage system. I'm doing an all-in-one solution, based on Gea's guide.

I currently have a Xeon 1230v2 on a Supermicro X9SCM-iiF with 8 gigs of ram and the following hard drives:

2 - 750GB 7200rpm Seagates
1 - 1TB WD Green
1 - 1.5TB WD Green
1 - 1.5TB Seagate F4
5 - 2TB Hitachi 5k3000 (new)

Regardless of the decision, I plan on getting an M1015 to connect all the drives.

The primary purpose of this storage is going to be media storage and backups of the other household computers. Currently that consists of about 5 machines, but there will only be 2 or maybe 3 devices streaming at any given time.

So here is my question: would I be better served by a ZFS or SnapRAID-type solution? I'd initially thought about setting up the 5k3000s as a raidz array and using that as an ESXi datastore, but after more research this seems less than ideal. So now I'm debating between what I assume would be improved performance with ZFS versus the better flexibility of a drive pooler. I like the idea of being able to add drives of any size to the pool, but ZFS just sounds interesting and fun!

Anyone have any opinions on the matter?
 
I'd be curious about this as well. The array on my media server went down last night and I will be looking into a new solution in the future.
 
As I see it, the differences are:

Pro SnapRAID:
You can expand a pool disk by disk
Energy efficient: only the disk holding the data needs to spin; the others can sleep
In case of a disaster, only the data on the failed disks is affected
Redundancy is not realtime but on demand

Target use:
Media server with moderate demands for data security


Pro ZFS:
Striping, which means performance increases with the number of disks/vdevs (all disks are busy)
Realtime checksums with self-healing, realtime RAID
Filecheck/scrubbing without unmounting, to repair silent data errors (see the example below)
Copy on write, which allows virtually unlimited snapshots
Higher RAID levels (up to raidz3, which means three disks may fail without data loss)
No practically reachable limits on capacity, files, file size, directories, etc.
To expand a redundant pool, you must add at least a mirror (two disks)

Target use:
Datacenter-ready solution with the best available data security at the filesystem level
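
For example, a rough sketch of what that looks like on OpenIndiana/Solaris (pool and disk names are only placeholders):

  zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0   # double-parity pool across five disks
  zpool scrub tank                                              # verify every checksum and repair silent errors while the pool stays online
  zpool status tank                                             # show scrub progress and any errors found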
 
You can combine them too - you'd lose ZFS's self-healing and multi-drive performance scaling, but you'd still have data integrity checking, snapshots, etc. SnapRAID would take care of the healing part, though it'd be a manual process (see the sketch below).

Essentially it's a tradeoff in the end - you just need to figure out what your priorities are feature-wise - there's no one-size-fits-all, unfortunately!
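
As a rough sketch of the SnapRAID side (paths and disk names here are hypothetical; each data disk could be its own single-disk ZFS pool mounted at those paths):

  # minimal snapraid.conf, something like:
  #   parity  /mnt/parity/snapraid.parity
  #   content /mnt/disk1/snapraid.content
  #   disk d1 /mnt/disk1
  #   disk d2 /mnt/disk2
  snapraid sync    # compute/update the parity on demand
  snapraid check   # verify the data against the parity
  snapraid fix     # restore files after a disk failure - the manual "healing" step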
 
Regardless of your final decision on the OS and filesystem, if you have the extra $$$ I would sell the 750s and the 1TB and 1.5TB drives (you should be able to get some $$$ for them) and replace them with three additional 2TB or two additional 3TB drives. It will make your system a little quieter, make your case/mounting decisions a little easier, and also let you use 7 or 8 of the ports on the M1015 without needing an expander.
 
You could also use the motherboard's onboard SATA ports as well as the M1015.

SnapRAID would have no issue with the mixed drive sizes. For ZFS, you could split each of the two 1.5TB drives into 2x750GB partitions and then create two 3x750GB raidz pools.
Use the 1TB as the OS drive!
(You could also split the 1TB drive into 250GB and 750GB partitions, use the 250GB partition for the OS, and then make a 3x750GB raidz and a 4x750GB raidz!)
Whole drives are easier with ZFS, but partitions work fine too!

Or use the 1TB for the OS and create two mirror pools, one with the two 750GB drives and the other with the two 1.5TB drives (see the example below)!

Just saying, you have options....;)
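
For instance, the two-mirror option might look roughly like this (pool and device names are made up):

  zpool create vmpool mirror c3t0d0 c3t1d0   # the two 750GB drives as one mirrored pool
  zpool create bulk mirror c3t2d0 c3t3d0     # the two 1.5TB drives as a second mirrored pool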
 

How does ZFS like you partitioning disks like that? I hadn't considered it, as I thought it only wanted full disks. Any downsides?
 
It's an option, but I don't much like the idea of feeding ZFS anything but raw physical disks.
 
As far as I know, ZFS will only rebuild one vdev at a time. If you were to split the 1.5TB disks and stick the halves in different vdevs, you may run into problems with rebuilding if you ever have to replace one of them.
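
On the rebuild point, replacing a disk is a single command and the resilver starts on its own; roughly (device names are placeholders):

  zpool replace tank c2t3d0 c2t5d0   # swap the failed disk for a new one; resilvering begins automatically
  zpool status tank                  # watch resilver progress and check for errors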

The reason ZFS likes to have the whole disk instead of just a partition is that it can then exert direct control over the drive's write cache.

As for the OS options:
I have tried most of the options listed in the thread above, but in the end I am now running Oracle Solaris 11 rather than any of the derivatives. I had to be careful with the hardware selection, but if you do that, you really get a rock-stable system. I like S11 a lot; it supports InfiniBand cards out of the box, with RDMA and iSER and whatnot. Try that with a BSD or Linux, and you will be messing with compiling the OFED stack for weeks... You may not need all of that however.
 
You may not need all of that however.

This is the crux of my problem. Clearly, having spent an inordinate amount of time on this forum, my tendency is to go with the most feature-filled option (regardless of the complexity). But keeping the WAF high is equally important.

The idea of partitioning the disks to use in a ZFS system is appealing, but at that point I could just as well do a hybrid approach of having the 2TB drives in a raidz and creating another VM to use the misc disks with SnapRAID. Unless there would be a performance benefit to the partitioning route?
 
Or use the 1TB for OS and create two mirror pools, one with the two 750GB, the other with the two 1.5TB drives!

Granted, this would be WAY more storage than I'd need, but if I took this route and put the two mirrors into a single pool, would performance be decent if it's handed back to ESXi as a datastore running 10 or so low-I/O VMs?

Then I could have those drives as the datastore for all the VMs and a separate pool made up of the 2TB drives for my media storage.
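
From what I've read, handing a pool back to ESXi as an NFS datastore is roughly something like this (pool and filesystem names are placeholders):

  zfs create vmpool/datastore            # filesystem to hold the VMs
  zfs set sharenfs=on vmpool/datastore   # export it over NFS so ESXi can mount it as a datastore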
 
You really don't need more than a few GB for the local datastore (use a 750GB drive for that). Install ESXi on a USB stick, use the 750GB drive as the local datastore hosting the SAN VM. Put the five 2TB drives on the M1015, pass that through to OpenIndiana (or whatever), and set up a raidz. Yeah, I know you can't easily expand it, but honestly, is 8TB really going to get too small for you? Don't even bother with the other drives...
 

This was my initial plan. My question was more about getting the best performance. My initial setup is basically as you described: USB stick with ESXi, 750GBs as datastores. And honestly, no, I don't currently need more than 8TB, but if I could set up those other disks in mirrors and maximize IOPS, then I might. Then again, with the falling prices of SSDs, I could just replace the datastores with a couple of those and easily surpass the performance of the mirrors.
 
Well, for good random-read IOPS, get a sixth 2TB drive and set up a 3x2 mirror (three mirrored pairs). Much better random read than the raidz. You would start with 6TB of usable storage and could add 2TB at a time.
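
Roughly something like this (device names made up):

  zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0 mirror c2t4d0 c2t5d0   # three 2TB mirrored pairs, ~6TB usable
  zpool add tank mirror c2t6d0 c2t7d0                                                # later, grow the pool one 2TB pair at a time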
 