Programster's Blog

Tutorials focusing on Linux, programming, and open-source

Create RAID with LVM

Many Linux users have created RAID arrays using mdadm commands and do not realize that you can also create a RAID through LVM.

Steps

Installing LVM

You may need to install the LVM packages in order to build these arrays.

sudo apt-get install lvm2  
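
Before creating a volume group, each partition is typically initialised as an LVM physical volume. `vgcreate` can do this implicitly, but running `pvcreate` explicitly makes the step visible. A minimal sketch; the device names `/dev/sdb1` and `/dev/sdc1` are examples, so substitute your own partitions:

```shell
# Initialise the partitions as LVM physical volumes
# (/dev/sdb1 and /dev/sdc1 are example devices - substitute your own).
sudo pvcreate /dev/sdb1 /dev/sdc1

# Verify the physical volumes were registered.
sudo pvs
```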

Creating RAID 0

sudo vgcreate [vg name] /dev/sd[x]1 /dev/sd[y]1 ...
sudo lvcreate -i[num drives] -I[stripe size] -l100%FREE -n[lv name] [vg name]
sudo mkfs.[ext4/xfs] /dev/[vg name]/[lv name]
  • The stripe size (-I) must be a power of 2, starting at 4: e.g. 4, 8, 16, 32, 64. If your data is mostly small text files, use 4. If you are mostly dealing with media, you may want something larger.
  • If you want to use the xfs filesystem, you may need to install xfsprogs with sudo apt-get install xfsprogs -y

Creating this RAID 0 array removes the ability to remove a drive from the volume group later.
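
To make the placeholders concrete, here is a sketch of a two-drive striped volume. The volume group name `vg0`, logical volume name `lv_stripe`, and device names are all hypothetical:

```shell
# Example: a 2-drive striped (RAID 0) volume named "lv_stripe" in "vg0".
# Device names are hypothetical - substitute your own partitions.
sudo vgcreate vg0 /dev/sdb1 /dev/sdc1

# -i 2: stripe across both drives; -I 64: 64 KiB stripe size (a power of 2).
sudo lvcreate -i 2 -I 64 -l 100%FREE -n lv_stripe vg0

# Format and mount the new logical volume.
sudo mkfs.ext4 /dev/vg0/lv_stripe
sudo mount /dev/vg0/lv_stripe /mnt
```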

Creating RAID 1 (Mirror)

VG_NAME="vg1"
LV_NAME="lvm_raid1"

sudo vgcreate $VG_NAME /dev/sd[x]1 /dev/sd[y]1

sudo lvcreate \
  --mirrors 1 \
  --type raid1 \
  -l 100%FREE \
  --nosync \
  -n $LV_NAME $VG_NAME

sudo mkfs.[ext4/xfs] /dev/$VG_NAME/$LV_NAME

Creating RAID 5 (Parity)

VG_NAME="vg1"
LV_NAME="lvm_raid5"

sudo vgcreate $VG_NAME /dev/sd[x]1 /dev/sd[y]1 /dev/sd[z]1

sudo lvcreate \
  --type raid5 \
  -l 100%FREE \
  --nosync \
  -n $LV_NAME $VG_NAME

sudo mkfs.[ext4/xfs] /dev/$VG_NAME/$LV_NAME

Scrubbing

If you have any type of RAID other than RAID 0, then you can scrub the data every now and then to help prevent bitrot.
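
A sketch of the scrub itself, using lvchange --syncaction and assuming the RAID 5 volume names from above:

```shell
# Kick off a scrub: a read-only consistency check of the RAID LV.
sudo lvchange --syncaction check vg1/lvm_raid5

# To repair inconsistencies instead of just reporting them:
# sudo lvchange --syncaction repair vg1/lvm_raid5

# Monitor progress and any mismatches found.
sudo lvs -o +raid_sync_action,raid_mismatch_count vg1/lvm_raid5
```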

Check LVM RAID Status

sudo lvs -a -o name,copy_percent,devices $VG_NAME

This will output something like:

  LV             Cpy%Sync Devices                                        
  lv1            0.70     lv1_rimage_0(0),lv1_rimage_1(0),lv1_rimage_2(0)
  [lv1_rimage_0]          /dev/sda(1)                                    
  [lv1_rimage_1]          /dev/sdb(1)                                    
  [lv1_rimage_2]          /dev/sdc(1)                                    
  [lv1_rmeta_0]           /dev/sda(0)                                    
  [lv1_rmeta_1]           /dev/sdb(0)                                    
  [lv1_rmeta_2]           /dev/sdc(0)

Extra Info - LVM Is Using md Under the Hood

As Felipe Franciosi points out in the comments, configuring as above will still use "md" behind the scenes. It just saves you the trouble of using "mdadm".

You can confirm this by identifying the device-mapper setup created by LVM:

dm=$(basename $(readlink /dev/${VG_NAME}/${LV_NAME}))
sudo dmsetup table /dev/${dm}

It will show you that the driver "raid" is being used. Then, from dmsetup(8), you'll see:

"raid Offers an interface to the kernel's software raid driver, md"

For more information on LVM and MD RAID, please refer to this Unix & Linux post.

Last updated: 22nd November 2019
First published: 16th August 2018