Below is my cheatsheet for using ZFS.
- ZFS - Create Disk Pools - use this if you just want to set up your RAID array.
- Ubuntu 16.04 - Using Files To Test ZFS - learn ZFS by creating local files. No need to invest in buying disks first.
- Sharing ZFS Datasets Via NFS
List Pools
sudo zpool list
Create a ZFS volume/pool on a single disk:
zpool create vol0 /dev/sd[x]
Your pool will automatically be mounted at /[pool name] (e.g. /vol0 for the example above).
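If you don't have spare disks to experiment with, you can build a throwaway pool out of ordinary files, as mentioned in the file-based testing post above. This is a sketch; the file paths and the pool name testpool are just examples, and file vdevs must be given as absolute paths:

```shell
# Create two 1 GiB sparse files to act as fake disks (example paths).
truncate -s 1G /tmp/zfs-disk1.img /tmp/zfs-disk2.img

# Build a mirrored test pool out of the files (requires root and ZFS installed).
sudo zpool create testpool mirror /tmp/zfs-disk1.img /tmp/zfs-disk2.img

# Tear everything down when finished experimenting.
sudo zpool destroy testpool
rm /tmp/zfs-disk1.img /tmp/zfs-disk2.img
```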
Delete All Datasets In A Pool
sudo zfs destroy -r [pool name]
Delete a Pool
sudo zpool destroy [pool name]
Check Disk Statuses
If you're running a redundant RAID, you may want to check every so often whether any drives have failed. This is done by simply checking the status of the pools:
sudo zpool status
Check Pool Balance
If you add disks to a pool that already contains data, the pool will initially be "unbalanced" and will remain so until more data is written. This is because ZFS does not redistribute existing data to make use of the new disks. As you keep adding data, the pool will eventually even out in terms of space utilized across the disks, but your existing data will still live only on the original disks unless it is rewritten to the pool.
To check the balance of your pool, execute:
zpool list -v
Below is example output from that command for my RAID10 pool, to which I recently added 2 x 8 TB drives. As you can see, the array is heavily unbalanced, and I will need to re-balance it if I want much better performance.
NAME      SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
zpool1    13.6T  4.21T  9.39T  -         15%   30%  1.00x  ONLINE  -
  mirror  3.62T  2.20T  1.43T  -         31%   60%
    sda   -      -      -      -         -     -
    sdb   -      -      -      -         -     -
  mirror  2.72T  1.65T  1.07T  -         32%   60%
    sdc   -      -      -      -         -     -
    sdd   -      -      -      -         -     -
  mirror  7.25T  368G   6.89T  -         2%    4%
    sde   -      -      -      -         -     -
    sdf   -      -      -      -         -     -
The easiest way to rebalance an array is probably to create a temporary dataset, move all the existing data into it, and then move it back. After the first move, space usage across the disks should be fairly even, but individual files won't be; after the second pass, the files themselves will also be fairly balanced.
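As a sketch of that double-move approach (the pool and dataset names below are hypothetical, and you should make sure no snapshots are pinning the old blocks, otherwise the moves won't free or rewrite anything):

```shell
# Create a temporary dataset on the same pool (names are examples).
sudo zfs create zpool1/rebalance-tmp

# First pass: moving across datasets copies then deletes, so the data
# gets rewritten across all vdevs, including the newly added ones.
sudo mv /zpool1/data/* /zpool1/rebalance-tmp/

# Second pass: move it back so files end up balanced in their original place.
sudo mv /zpool1/rebalance-tmp/* /zpool1/data/

# Clean up the temporary dataset.
sudo zfs destroy zpool1/rebalance-tmp
```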
Scrub a pool
sudo zpool scrub [pool name]
sudo zpool status
Create a Dataset
sudo zfs create [pool name]/[dataset name]
You can create a "descendent" dataset/filesystem like so:
sudo zfs create [pool name]/[dataset name]/[descendent filesystem]
List Datasets and Pools
sudo zfs list
Delete a Dataset
sudo zfs destroy [pool name]/[dataset name]
Set Dataset Record Size
Read here for more information about what the record size actually does.
sudo zfs set recordsize=[size] pool/dataset/name
Get Dataset Record Size
sudo zfs get recordsize pool/dataset/name
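For example, datasets doing lots of small random I/O (such as databases) are often given a small record size to match the application's page size, while datasets holding large sequential files benefit from a bigger one. The dataset names below are hypothetical:

```shell
# Smaller records suit random I/O workloads such as databases.
sudo zfs set recordsize=16K tank/postgres

# Larger records suit big sequential files such as video.
sudo zfs set recordsize=1M tank/media

# Confirm the settings took effect.
sudo zfs get recordsize tank/postgres tank/media
```

Note that changing recordsize only affects newly written blocks; existing files keep their old record size until rewritten.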
Share Dataset Over NFS
Refer to my post on Sharing ZFS Datasets Via NFS.
Create a Snapshot
zfs snapshot [pool]/[dataset name]@[snapshot name]
List Snapshots
sudo zfs list -t snapshot
Rename a Snapshot
zfs rename [pool]/[dataset]@[old name] [new name]
Roll Back To a Snapshot
If you wish to roll back to the most recent snapshot, you can do so with:
zfs rollback [pool]/[dataset]@[snapshot name]
The -f option forces the file system to be unmounted, if necessary.
If you wish to roll back to a snapshot earlier than the most recent one, you must specify the -r option, which will recursively destroy any snapshots more recent than the specified one:
zfs rollback -r [pool]/[dataset]@[snapshot name]
To be safe, if you are considering this step and don't wish to lose the subsequent snapshots, consider instead creating a promoted clone of the intermediary snapshot. E.g.
zfs clone [pool]/[dataset]@2023-10-12-1300 [pool]/[name-for-clone]
zfs promote [pool]/[name-for-clone]
You will probably also want to swap the filesystem names back around:
zfs clone [pool]/[dataset]@[desired-snapshot-restore-point] [pool]/[name-for-clone]
zfs rename [pool]/[dataset] [pool]/[new-name-for-legacy-dataset]
zfs rename [pool]/[name-for-clone] [pool]/[dataset]
Delete a Snapshot
zfs destroy tank/home/cindys@snap1
Clones
- A clone is a great way to create another "filesystem" (e.g. another place to mount and write to).
- Must be created from a snapshot.
- Clones are dependent on the snapshot they were created from. ZFS will refuse to destroy a snapshot that has dependent clones unless you force it (e.g. via destroy -R or rollback -R), in which case the clones are destroyed along with it.
- Clones start out taking no additional space (because dependent on origin/parent snapshot).
- Clones can be "promoted" so that they are no longer dependent on their "origin" snapshot. This makes it possible to destroy the origin snapshot/filesystem. Promotion reverses the parent-child dependency, so that the origin file system becomes a clone of the specified file system. The snapshots prior to the one that was used to create the clone become owned by the promoted clone.
- The promoted clone must not have any conflicting snapshot names of its own. If it does, you can use the rename subcommand to resolve them.
A clone is a writable volume or file system whose initial contents are the same as another dataset. As with snapshots, creating a clone is nearly instantaneous, and initially consumes no additional space.
Clones can only be created from a snapshot. When a snapshot is cloned, it creates an implicit dependency between the parent and child. Even though the clone is created somewhere else in the dataset hierarchy, the original snapshot cannot be destroyed as long as a clone exists. The origin property exposes this dependency, and the destroy command lists any such dependencies, if they exist.
The clone parent-child dependency relationship can be reversed by using the promote subcommand. This causes the "origin" file system to become a clone of the specified file system, which makes it possible to destroy the file system that the clone was created from.
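Putting that together, here is a sketch of the full clone-and-promote sequence (the pool, dataset, and snapshot names are examples):

```shell
# Create a clone from an existing snapshot.
sudo zfs clone tank/data@monday tank/data-clone

# The origin property shows the snapshot this clone depends on.
sudo zfs get origin tank/data-clone

# Reverse the dependency: tank/data becomes a clone of tank/data-clone.
sudo zfs promote tank/data-clone

# Now the original file system (and its dependents) can be destroyed if desired.
sudo zfs destroy -R tank/data
```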
Creating RAID Arrays
Refer to my post on creating ZFS Pools.
Mount All Datasets
sudo zfs mount -a
Set Mountpoint
To see the current mountpoints:
zfs get all | grep mountpoint
To change a dataset's mountpoint:
sudo zfs set mountpoint=/path/to/mount zpool-name/dataset-name
Mount A Specific Pool
sudo zfs mount $POOL_NAME
Enable/Disable Deduplication
sudo zfs set dedup=on zpool-name
sudo zfs set dedup=off zpool-name
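Deduplication is memory-hungry, so it's worth checking whether it is actually paying off. A quick way to do that (pool name is an example):

```shell
# Check whether dedup is enabled on the pool's root dataset.
sudo zfs get dedup zpool-name

# The DEDUP column shows the achieved ratio; 1.00x means nothing is
# being deduplicated and you are paying the RAM cost for no benefit.
sudo zpool list zpool-name
```

Note that turning dedup off only affects newly written data; previously deduplicated blocks stay in the dedup table until rewritten or deleted.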
References
- Oracle Docs - Managing ZFS File Systems (Overview)
- Oracle - Sharing and Unsharing ZFS File Systems
- Ask Ubuntu - How do I mount a ZFS pool?
- Reddit - Anybody know how to check balance of vdevs in a pool?
First published: 16th August 2018