Previously, we discussed what RAID is. Now we can move onto finding out how to configure a RAID system in Linux and comparing the various choices we have.
Let me start by stating that this post will never be "complete". I have only been using Linux since 2012, but in that time I have reconfigured my personal setup more than three times as storage needs have increased, and I have learnt about new issues and techniques. There are many articles written about the merits of each individual software RAID system, but this one will aim to compare them together in one place, as well as provide installation instructions.
This post will not be focusing on performance benchmarks. I don't have the time or money to spend on comparing all the different RAID configurations against the various filesystem types. Even when you do have that information, there are other factors that are more important, such as stability and the advantages that a copy-on-write filesystem gives you. If you are into that sort of thing, I suggest you follow Phoronix, as they have many great articles on the subject.
RAID Types (Not Levels)
I will be covering the following tools, which I am aware can be used to create a RAID array in Linux. I have implemented all but the last of them.
Some of you may think it odd that I have included BTRFS and ZFS, because they implement RAID at the filesystem level. However, they do meet all the requirements of RAID, and I actually prefer this type of RAID due to its flexibility. It also means that your filesystem is chosen for you, which I like due to the paradox of choice.
mdadm
mdadm is the most common way to create a RAID array, and thus the most "supported", with documentation and forum posts easy to find all over the internet.
- RAID 10 requires even pairs of drives after the initial 4.
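As a minimal sketch of an mdadm setup (the device names `/dev/sd[b-e]`, the mount point, and the choice of ext4 are my assumptions, not a recommendation from this post):

```shell
# Create a RAID 10 array from four drives (replace /dev/sd[b-e] with your devices)
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch the initial sync progress
cat /proc/mdstat

# Persist the array configuration so it assembles on boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u   # Debian/Ubuntu

# Put a filesystem on the array and mount it
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/storage
```

Note that mdadm only provides the block-level array; the filesystem on top is a separate choice, which is exactly the split that BTRFS and ZFS below do away with.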
What is copy-on-write?
Before I cover BTRFS and ZFS, it is worth explaining that these are both copy-on-write (COW) filesystems. A COW filesystem is one in which any changes to data are written as new blocks on the drives, rather than updating the original blocks. This may appear to be a wasteful way to use your storage capacity, but it comes with many benefits. The main one is the ability to take instant snapshots of your data. This allows you to take consistent backups of your filesystem or database with no downtime, and aids with live migration, as discussed in this talk about flocker. The nature of only writing new blocks rather than replacing previous ones is also a natural fit for SSDs, which require wear levelling.
BTRFS
I first heard about BTRFS from the video below. It filled me with so much excitement that I converted my mdadm RAID array that day (by copying all the data off over the network and then moving it back again).
The main advantages of BTRFS are:
- The ability to expand the array easily by adding more drives.
- Ability to change RAID levels with no downtime.
- Snapshot and rollback capabilities.
- Scrubbing to remove bitrot.
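As a hedged sketch (the mount point `/mnt/storage`, device name, and snapshot name are assumptions for illustration), the advantages above map onto BTRFS commands like these:

```shell
# Add a new drive to an existing array, then rebalance data across it
sudo btrfs device add /dev/sdd /mnt/storage
sudo btrfs balance start /mnt/storage

# Convert data and metadata to RAID 1 with no downtime
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/storage

# Take an instant, read-only snapshot (e.g. before a risky change)
sudo btrfs subvolume snapshot -r /mnt/storage /mnt/storage/@snap-pre-change

# Scrub the array: checksums detect bitrot, redundant copies repair it
sudo btrfs scrub start /mnt/storage
sudo btrfs scrub status /mnt/storage
```

The balance-based conversion is the notable one: the array stays mounted and usable while BTRFS rewrites data into the new RAID profile.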
However, BTRFS has two major drawbacks that the video either does not cover or glosses over.
- BTRFS is not officially stable even though it has been around for a while now.
- Running KVM guests on BTRFS causes appalling disk performance and can lead to failures or data loss.
If you want an "easy" solution with the benefits of a copy-on-write filesystem, are not running KVM, and are not worried by the lack of an official "stable" label, then BTRFS is a good fit. Otherwise, you would probably be better off spending the money on a dedicated ZFS setup.
ZFS
ZFS is very similar to BTRFS in that it is another copy-on-write filesystem that offers RAID management. It has one killer advantage over BTRFS:
- It's been around longer and is production ready.
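As a hedged sketch (the pool name `tank`, device names, and snapshot label are my assumptions), a ZFS pool is created and maintained like this:

```shell
# Create a RAID-Z1 pool (single parity, comparable to RAID 5) from four drives
sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Check pool health and layout
sudo zpool status tank

# Take an instant snapshot of the pool's root dataset
sudo zfs snapshot tank@before-upgrade

# Scrub the pool to detect and repair silent corruption
sudo zpool scrub tank
```

Unlike mdadm, there is no separate mkfs step: `zpool create` gives you a mounted filesystem at `/tank` straight away.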
The need for ECC
Unfortunately, you really need to use ECC memory in a ZFS setup, and you are unlikely to find it in any old computer hardware you have lying around. For ECC, you need to make sure that both your motherboard and your CPU support it. With Intel chips, this tends to mean their more expensive Xeon processors; it won't appear in any of the "consumer" i3, i5, or i7 chips. AMD, on the other hand, surprised me with ECC support in their consumer-grade FX series processors, which are dirt cheap whilst still packing 8 cores and virtualization support for KVM/Xen. However, they do tend to have far higher TDPs than their Intel counterparts.
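As a quick, hedged check (the exact output wording varies by vendor and BIOS), you can ask the DMI tables whether the installed memory actually reports error correction:

```shell
# Show the error-correction type reported for the installed memory
sudo dmidecode --type memory | grep -i 'error correction'
# "Multi-bit ECC" means ECC is active; "None" means it is not
```

This is worth running even on a board that advertises ECC support, since ECC can silently fall back to non-ECC operation if the modules or BIOS settings don't match.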
For the physical hard drives, I personally use WD Red drives in 3 and 4 TB sizes. 4TB drives are actually more expensive per GB, and are harder to buy iteratively since you are buying more gigabytes at a time. However, you have to consider how many SATA ports your motherboard has. You can buy a motherboard with a lot more SATA ports, but those tend to be a lot more expensive and will still have an upper limit.
If you are buying a new setup, rather than turning your old computer into a fileserver, I recommend the ASRock C2750D4I or the slightly cheaper ASRock C2550D4I. Both are server-grade boards with 12 SATA ports and ECC memory support for a ZFS setup. The first is only slightly more expensive in percentage terms, but has 8 cores instead of 4, which is useful if the computer is also going to act as a KVM host. They also feature IPMI support, which is useful for headless management. The board was also featured on both Linus Tech Tips and Tek Syndicate. I have no idea why it was forced into an ITX form factor; how many ITX cases can fit 12 drives!? If you find one, please put it in the comments.
If I were to set up again and had the money, I would go with a ZFS system. Building an ECC system does not have to be expensive; I just don't have those parts lying around at home.