Is there a better option at the same price point when it comes to a RAID controller? I'm not looking for a pure software RAID option as that will require custom kernels.
That article uses CentOS, as do most articles about Linux RAID, since the CentOS kernel supports RAID by default.
That is unfortunately not the case with the mainline Ubuntu kernel that I use on my system.
Code:
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-51-generic x86_64)

mashie@IONE:~$ cat /proc/mdstat
Personalities :
unused devices: <none>
mashie@IONE:~$
Code:
mashie@IONE:~$ sudo modprobe raid456
mashie@IONE:~$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>
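If loading the module manually works, the only missing piece is making it stick across reboots. A minimal sketch, assuming Ubuntu 18.04's standard systemd-modules-load mechanism (the file name here is my own choice):

```shell
# Make the raid456 personality load at every boot by listing it in
# /etc/modules-load.d/ (read by systemd-modules-load at startup).
# The name "raid456.conf" is arbitrary; only the .conf suffix matters.
echo raid456 | sudo tee /etc/modules-load.d/raid456.conf

# Verify the personalities are registered (after modprobe or the next boot):
cat /proc/mdstat
```

So no custom kernel needed; the RAID 4/5/6 support is already there as a module, it just isn't loaded until something asks for it.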
I'd look into ZFS.

Until the RAIDZ2 arrays can be expanded one drive at a time, I will not go down that route.
Is there something wrong with mdadm?
Until the RAIDZ2 arrays can be expanded one drive at a time I will not go down that route.
BTW, RAID-Z expansion is in the alpha stage in the GitHub sources.
Matthew Ahrens has a pull request here:
https://github.com/zfsonlinux/zfs/pull/8853
Hopefully that means we will have working expansion next year.
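For what it's worth, the interface proposed in that pull request is a plain `zpool attach` against the raidz vdev. A hedged sketch only, since the feature was unmerged at the time and the syntax could still change (pool, vdev, and device names below are made up):

```shell
# Proposed RAID-Z expansion (per the linked PR; NOT in any ZFS release at
# the time of writing). Attach one new disk to an existing raidz2 vdev:
zpool attach tank raidz2-0 /dev/sdj

# The reshape would then run in the background; progress would show in:
zpool status tank
```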
From a CPU-load perspective I'm impressed: only 30% of a single core is used while 2 x 200MB/s is being copied to the array.
I like the Buffalo systems. They are pretty solid backup units.

Grab a QNAP or Synology and be done with it.

That's what I ended up doing, because I got tired of dicking around with OSes and storage crap. Then attach another USB drive externally and do nightly backups of important files.
Ya, I am no fun, just tired of things breaking and spending hours fixing them when I do that all day at work. I got a Shield and it works great!
Also, WD Blues were not meant to be RAIDed because of their error-recovery timeouts. And yes, RAID 5 is dead for spinning rust; go RAID 10, or RAID 6 for more resiliency.
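The timeout issue is checkable, for what it's worth: desktop drives often lack (or ship with disabled) SCT Error Recovery Control, so a bad sector can stall the drive for minutes and get it kicked from the array. A sketch with smartmontools (the device name is an example):

```shell
# Query whether the drive supports SCT Error Recovery Control:
sudo smartctl -l scterc /dev/sda

# If supported, cap read/write error recovery at 7 seconds (the value is
# in tenths of a second), so the drive gives up quickly and lets the RAID
# layer repair the bad sector from redundancy instead:
sudo smartctl -l scterc,70,70 /dev/sda
```

Note the setting is typically lost on power cycle, so drives that support it usually need it reapplied at boot.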
I like the Buffalo systems. They are pretty solid backup units.
I had a Buffalo and hated it! Just the cheap-feeling plastic everything, I was always having to reboot it, and performance was "meh". But that was about six years ago that I had one.

I can see that. I was not a fan of their home and small-business line, but their enterprise units were and still are beasts. Not cheap though; I wish they offered models with SFP+, as I don't have anything that has a 10G copper port.
RAID 6 isn't much recommended either, although it's still better than 5, which is much better than 0... Most are using a hybrid now, like RAID 10 (1+0), which gives redundancy and is more performant than RAID 6, but with more than 4 drives RAID 6 nets you more storage. How many drives, how big, and how often you back up will all shape the suggestions. I personally run RAID 0 and back up the files I don't want to lose.

After nearly losing 3.6TB worth of data last night I'm looking to move from MHDDFS to a RAID 6 solution for my storage.
Thankfully I got the drive that took a dump working in read-only mode long enough to copy everything off it, but it was quite a wake-up call. Many years ago I was running RAID 5 in a previous system, but that appears to have gone out of fashion now with larger arrays.
So here I am, now in need of a RAID 6 storage solution that can run on Ubuntu 18.04 while still allowing growth of the array one drive at a time as and when needed. The storage is mainly UHD rips that are streamed to Nvidia Shields around the house.
The current MHDDFS array consisted of 2 x 10TB IronWolfs and 5 x 4TB WD Blues; it was one of the WDs that died, so they will all be retired. Instead I plan on getting another 4 x 10TB IronWolfs. The end result is the same ~40TB of usable storage but far more robust in case of drive issues.
I have been eyeing up the Highpoint RocketRaid 2840A which looks decent but there are hardly any reviews to be found.
Is there a better option at the same price point when it comes to a RAID controller? I'm not looking for a pure software RAID option as that will require custom kernels.
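As a sanity check on the capacity planning above, the usable space works out as follows (illustrative shell arithmetic only; RAID 6 loses two drives to parity, RAID 10 loses half):

```shell
n=6        # planned drive count (2 existing + 4 new 10TB IronWolfs)
size_tb=10 # per-drive capacity in TB

# RAID 6 usable capacity: (n - 2) * size
echo "RAID 6:  $(( (n - 2) * size_tb )) TB"   # -> 40 TB

# RAID 10 usable capacity on the same drives: (n / 2) * size
echo "RAID 10: $(( n / 2 * size_tb )) TB"     # -> 30 TB
```

Which matches the earlier point in the thread: above four drives, RAID 6 nets more storage than RAID 10 for the same disks.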
Raid 10 can actually be more dangerous than raid6 depending on what drives fail.
I ended up using software RAID6 (mdadm) with 7x 10TB drives on the SATA ports of the motherboard. There are enough spare ports for a 14-disk array if needed in the future.

Sounds good. How is the software RAID 6 working? It has to generate parity twice for everything; how does that affect your speeds? It's much safer than no RAID.
RAID6 is working fine. In case of a rebuild I still have one-drive resiliency for the ~18 hours it will take, as I will simply leave it alone to rebuild. This is, after all, a home server and not production.
Important stuff is backed up to the cloud. The data on the array I just want to survive a dead disk or two which it now will.
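For anyone following along, the mdadm side of this setup is short. A sketch with made-up device names (the grow step is what makes the one-drive-at-a-time expansion work):

```shell
# Create a 7-disk RAID 6 array from whole disks (device names are examples;
# adapt to your system):
sudo mdadm --create /dev/md0 --level=6 --raid-devices=7 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh

# Later, expand by one drive: add it as a spare, then reshape onto it.
sudo mdadm --add /dev/md0 /dev/sdi
sudo mdadm --grow /dev/md0 --raid-devices=8

# Watch the initial sync or the reshape progress:
cat /proc/mdstat
```

The reshape runs with the array online, though it takes many hours on 10TB drives, and the filesystem still needs its own resize afterwards.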
Just because it can doesn't mean it isn't working to do so. If you're doing processing of some sort on said data and you're using your cycles for RAID, it is going to be slower to some extent. I have a hardware RAID card that supports RAID 5 (and has a backup battery for power losses), but I'm running it in RAID 0... so no redundancy, but OK for power loss.

Dual-parity calculation is nothing on a modern x86 CPU. I have software RAID arrays (dual or even triple parity) that can read or write large files at over 1GB/s with hard drives.
Just because it can doesn't mean it isn't working to do so. If you're doing processing of sorts on said data and you're using your cycles for RAID, it is going to be slower to some extent.
That's what I was asking. I assumed it wouldn't be too difficult nowadays, but I haven't had a chance or reason to test it, which is why I was curious. If it's <5% of a single core, then it's nothing. If it's 50%, well, that's not nothing. Sounds like it's closer to nothing on newer systems. The last time I ran RAID without hardware was on my Duron 800MHz... so it wasn't nothing.

I get what you're saying, but we're beyond this point now with spinning disks. Parity calculation isn't going to add up to more than a shrug for modern desktop CPUs. Pointedly, most NAS devices that provide single and dual parity, and can house more storage than most consumers and small businesses would actually use, run off of mid-range tablet SoCs. At best they have some form of x86 CPU in the form of an Atom or Jaguar, and those devices are usually spec'd as such for the purposes of running other server processes, media transcoding to weaker devices or streaming to the web, and high-performance disk encryption.
A potato can do parity calculations these days.
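To make the "potato" point concrete: single parity is just a bytewise XOR across the stripe (RAID 6 adds a second, Reed-Solomon syndrome on top, which is still cheap per byte). A toy sketch with three one-byte "blocks":

```shell
# Three data "blocks" (single byte values, purely for illustration):
a=170; b=51; c=204

# RAID 5-style parity is the XOR of the data blocks:
p=$(( a ^ b ^ c ))
echo "parity: $p"                       # -> 85

# If block b is lost, XOR the survivors with the parity to rebuild it:
echo "recovered b: $(( a ^ c ^ p ))"    # -> 51
```

Real implementations do this over 64KB+ chunks with SSE/AVX, which is why the cost barely registers next to spinning-disk throughput.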