Create RAID with LVM
Many Linux users have created RAID arrays with mdadm and do not realize that you can also create RAID arrays through LVM.
Steps
Installing LVM
You may need to install the LVM packages in order to build these arrays.
sudo apt-get install lvm2
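You can confirm the tools installed correctly by printing the version:
sudo lvm version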
Partition Drives
If you are setting up the array on physical hard drives, create one or more partitions on each drive first.
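For example, a minimal sketch using parted, assuming /dev/sdb is one of the target drives (check yours with lsblk first):
sudo parted --script /dev/sdb mklabel gpt
sudo parted --script /dev/sdb mkpart primary 0% 100%
Repeat this for each drive that will be part of the array. Beware that mklabel destroys any existing partition table on the drive.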
Creating RAID 0
sudo vgcreate [vg name] /dev/sd[x]1 /dev/sd[y]1 ...
sudo lvcreate -i [num drives] -I [stripe size] -l 100%FREE -n [lv name] [vg name]
sudo mkfs.[ext4/xfs] /dev/[vg name]/[lv name]
- The stripe size is specified in kilobytes and needs to be a power of 2, starting at 4 (e.g. 4, 8, 16, 32, 64). If your data is mostly small text files, use 4. If you are mostly dealing with media files, you may want something larger.
- If you want to use the xfs filesystem, you may need to install xfsprogs with
sudo apt-get install xfsprogs -y
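As a worked example of the above, here is a hypothetical two-drive stripe with a 64 KB stripe size (the drive names /dev/sdb and /dev/sdc, and the names vg0 and lvm_raid0, are made up for illustration):
sudo vgcreate vg0 /dev/sdb1 /dev/sdc1
sudo lvcreate -i 2 -I 64 -l 100%FREE -n lvm_raid0 vg0
sudo mkfs.ext4 /dev/vg0/lvm_raid0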
Creating RAID 1 (Mirror)
VG_NAME="vg1"
LV_NAME="lvm_raid1"
sudo vgcreate $VG_NAME /dev/sd[x]1 /dev/sd[y]1
sudo lvcreate \
--mirrors 1 \
--type raid1 \
-l 100%FREE \
--nosync \
-n $LV_NAME $VG_NAME
sudo mkfs.[ext4/xfs] /dev/$VG_NAME/$LV_NAME
Tip: consider using a stable device path such as
/dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-XXXXXXXXXXXX-part1
instead of /dev/sd[x]1, as the /dev/sd[x] names can change between boots.
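You can list these stable identifiers, and see which /dev/sd[x] device each one currently points at, with:
ls -l /dev/disk/by-id/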
Creating RAID 5 (Parity)
VG_NAME="vg1"
LV_NAME="lvm_raid5"
sudo vgcreate $VG_NAME /dev/sd[x]1 /dev/sd[y]1 /dev/sd[z]1
sudo lvcreate \
--type raid5 \
-l 100%FREE \
--nosync \
-n $LV_NAME $VG_NAME
sudo mkfs.[ext4/xfs] /dev/$VG_NAME/$LV_NAME
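Once the filesystem exists, the volume mounts like any other logical volume. A minimal sketch, assuming a hypothetical mount point of /mnt/raid5:
sudo mkdir -p /mnt/raid5
sudo mount /dev/$VG_NAME/$LV_NAME /mnt/raid5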
Scrubbing
If you have any type of RAID other than RAID 0, you can scrub the data periodically to guard against bit rot.
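A minimal sketch of a scrub, reusing the RAID 1 names from above ("check" only reports inconsistencies, whereas "repair" also fixes them):
VG_NAME="vg1"
LV_NAME="lvm_raid1"
sudo lvchange --syncaction check $VG_NAME/$LV_NAME
You can then watch the scrub's progress and the mismatch count with:
sudo lvs -o+raid_sync_action,raid_mismatch_count $VG_NAME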
Check RAID Status
You can check the RAID status with the following command (changing the value of VG_NAME to your volume group name):
VG_NAME=vg1
sudo lvs -a -o name,copy_percent,devices $VG_NAME
That will output something similar to:
LV Cpy%Sync Devices
lvm_raid1 100.00 lvm_raid1_rimage_0(0),lvm_raid1_rimage_1(0)
[lvm_raid1_rimage_0] /dev/sdb1(1)
[lvm_raid1_rimage_1] /dev/sda1(1)
[lvm_raid1_rmeta_0] /dev/sdb1(0)
[lvm_raid1_rmeta_1] /dev/sda1(0)
Check that the Cpy%Sync value is 100.00.
Extra Info
LVM Is Using md Under the Hood
As Felipe Franciosi points out in the comments, configuring as above will still use "md" behind the scenes. It just saves you the trouble of using "mdadm".
You can confirm this by inspecting the device-mapper table that LVM set up:
dm=$(basename "$(readlink /dev/${VG_NAME}/${LV_NAME})")
sudo dmsetup table /dev/${dm}
It will show you that the driver "raid" is being used. Then, from dmsetup(8), you'll see:
"raid Offers an interface to the kernel's software raid driver, md"
For more information on LVM and MD RAID, please refer to this Unix & Linux post.
Physical Migration
Even though Red Hat has documentation on how to physically migrate a volume group of disks from one server to another,
I found that I could just physically move a RAID 1 LVM pair of disks from one computer to another and they showed up without issue.
This was from an Xubuntu 20.04 desktop to an Xubuntu 22.04 desktop, which may have made my life easier. I made sure to update both machines' /etc/fstab
accordingly after the fact.
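For reference, a hypothetical /etc/fstab entry for the RAID 1 volume created earlier (the mount point is made up, and the nofail option keeps the machine bootable if the array is missing):
/dev/vg1/lvm_raid1  /mnt/raid1  ext4  defaults,nofail  0  2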
References
- Red Hat Docs - RAID Logical Volumes
- LVM Manual
- Gentoo Linux - LVM
- Ask Ubuntu - LVM2 builtin raid - how to check raid status
First published: 16th August 2018