ZFS Cheatsheet
Below is my cheatsheet for using ZFS.
Related Posts
- Ubuntu 16.04 - Using Files To Test ZFS - learn ZFS by creating local files. No need to invest in buying disks first.
Pools
Check Free Space
sudo zfs list
This will output something like the below, which shows all of your pools and datasets, as well as how much space is used and available in each:
NAME                      USED  AVAIL  REFER  MOUNTPOINT
zpool1                   7.16T  3.62T  7.02T  /zpool1
zpool1/data-folder-sync   139G  3.62T   139G  /zpool1/data-folder-sync
zpool2                   5.01T  5.78T  4.92T  /zpool2
zpool2/data-folder-sync  89.6G  5.78T  89.6G  /zpool2/data-folder-sync
df and pydf will just confuse you when working with ZFS, so ignore them, especially once you start making full use of snapshots and datasets.
List Pools
sudo zpool list
Create Pools
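As a rough example, assuming two spare disks at /dev/sdb and /dev/sdc (the pool name, layout, and device paths are placeholders rather than recommendations):

sudo zpool create [pool name] mirror /dev/sdb /dev/sdc

For a striped pool with no redundancy, just list the disks without a layout keyword:

sudo zpool create [pool name] /dev/sdb /dev/sdc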
Delete All Datasets In A Pool
zfs destroy -r [pool name]
Delete a Pool
sudo zpool destroy [pool name]
Check Disk Statuses
If you're running a redundant RAID, you may want to check every once in a while whether any drives have failed. This is done by simply checking the pool status.
sudo zpool status
Check Pool Balance
If you add disks to a pool that already contains data, the pool will initially be "unbalanced" and will remain so until more data is written to it. This is because ZFS does not bother spreading the existing data around to make use of the new disks. If you keep adding data to the pool, it will eventually become balanced in terms of space utilized across the disks, but your existing data will still reside only on the original disks unless you rewrite it to the pool.
To check the balance of your pool, execute:
zpool list -v
Below is some example output of that command for my RAID10 pool, to which I recently added 2 x 8 TB drives. As you can see, the array is heavily unbalanced, and I will need to re-balance it if I want to get much better performance.
NAME        SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
zpool1     13.6T  4.21T  9.39T         -   15%  30%  1.00x  ONLINE  -
  mirror   3.62T  2.20T  1.43T         -   31%  60%
    sda        -      -      -         -     -    -
    sdb        -      -      -         -     -    -
  mirror   2.72T  1.65T  1.07T         -   32%  60%
    sdc        -      -      -         -     -    -
    sdd        -      -      -         -     -    -
  mirror   7.25T   368G  6.89T         -    2%   4%
    sde        -      -      -         -     -    -
    sdf        -      -      -         -     -    -
The easiest way to rebalance an array is probably to create a new temporary dataset, move all the existing data into it, and then move it back again. By the end of the first move, the disks should be fairly evenly utilized, but the individual files won't be; after the second pass, the files will also be fairly balanced.
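A rough sketch of that double move, assuming the data lives in a dataset called zpool1/data and that the pool has enough free space to hold an extra copy during the first pass (both names are just placeholders):

sudo zfs create zpool1/rebalance-tmp
sudo mv /zpool1/data/* /zpool1/rebalance-tmp/   # first pass: rewrites the blocks across all vdevs
sudo mv /zpool1/rebalance-tmp/* /zpool1/data/   # second pass: the individual files end up balanced too
sudo zfs destroy zpool1/rebalance-tmp

Moving between datasets is a copy-then-delete, which is exactly what forces the data to be rewritten. Note that the * glob skips hidden files at the top level, so check for those before destroying the temporary dataset.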
Scrubbing
Scrub a pool
sudo zpool scrub [pool name]
You can then check on the progress of the scrub with:
sudo zpool status
Datasets
Create a Dataset
sudo zfs create [pool name]/[dataset name]
You can create a "descendent" dataset/filesystem like so:
sudo zfs create [pool name]/[dataset name]/[descendent filesystem]
List Datasets and Pools
sudo zfs list
Delete A Dataset
sudo zfs destroy [pool name]/[dataset name]
Set Dataset Record Size
Read here for more information about what the record size actually does.
sudo zfs set recordsize=[size] [pool name]/[dataset name]
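For example, to use 1 MiB records on a dataset that mostly stores large files such as videos (the pool and dataset names here are only an example):

sudo zfs set recordsize=1M zpool1/videos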
Get Dataset Record Size
sudo zfs get recordsize [pool name]/[dataset name]
Share Dataset Over NFS
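Assuming an NFS server is installed (e.g. nfs-kernel-server on Ubuntu), the simplest approach is to toggle the dataset's sharenfs property:

sudo zfs set sharenfs=on [pool name]/[dataset name]

You can check the current setting with:

sudo zfs get sharenfs [pool name]/[dataset name]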
Snapshots
Snapshot A Dataset
zfs snapshot [pool]/[dataset name]@[snapshot name]
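For example, using a date-stamped snapshot name against a hypothetical zpool1/data dataset:

zfs snapshot zpool1/data@2018-08-16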
List Snapshots
sudo zfs list -t snapshot
Rename A Snapshot
zfs rename [pool]/[dataset]@[old name] [new name]
Restore A Snapshot
Restore The Most Recent Snapshot
If you wish to roll back to the most recent snapshot, then you can do so with:
zfs rollback [pool]/[dataset]@[snapshot name]
The -f option forces the file system to be unmounted, if necessary.
Restore Older Snapshot
If you wish to roll back to a snapshot earlier than the most recent one, then you have to specify the -r option, which will destroy any snapshots more recent than the specified one.
zfs rollback -r [pool]/[dataset]@[snapshot name]
To be safe, if you are considering this step but don't wish to lose the subsequent snapshots, you could instead create a promoted clone of the intermediary snapshot. E.g.
zfs clone [pool]/[dataset]@2023-10-12-1300 [pool]/[name-for-clone]
zfs promote [pool]/[name-for-clone]
You will also probably want to swap the dataset names back around:
zfs clone [pool]/[dataset]@[desired-snapshot-restore-point] [pool]/[name-for-clone]
zfs rename [pool]/[dataset] [pool]/[new-name-for-legacy-dataset]
zfs rename [pool]/[name-for-clone] [pool]/[dataset]
Delete a Snapshot
zfs destroy [pool]/[dataset]@[snapshot name]
Clones
- A clone is a great way to create another "filesystem" (e.g. another place to mount and write to).
- Must be created from a snapshot.
- Clones are dependent on the snapshot they are created from, which means the origin snapshot cannot be destroyed (e.g. through a rollback) while the clone exists, unless the clone is destroyed along with it.
- Clones start out taking no additional space (because they are dependent on their origin/parent snapshot).
- Clones can be "promoted" so that they are no longer dependent on their "origin" snapshot, which makes it possible to destroy the origin snapshot/filesystem. Promotion reverses the parent-child dependency, so the origin file system becomes a clone of the specified file system. The snapshots previous to the one that was used to create the clone then become owned by the promoted clone.
- The promoted clone must not have any conflicting snapshot names of its own. If there are any, you can make use of the rename subcommand to resolve them.
A clone is a writable volume or file system whose initial contents are the same as another dataset. As with snapshots, creating a clone is nearly instantaneous, and initially consumes no additional space.
Clones can only be created from a snapshot. When a snapshot is cloned, it creates an implicit dependency between the parent and child. Even though the clone is created somewhere else in the dataset hierarchy, the original snapshot cannot be destroyed as long as a clone exists. The origin property exposes this dependency, and the destroy command lists any such dependencies, if they exist.
The clone parent-child dependency relationship can be reversed by using the promote subcommand. This causes the "origin" file system to become a clone of the specified file system, which makes it possible to destroy the file system that the clone was created from.
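For reference, the clone-related commands in the same placeholder style as the rest of this cheatsheet:

zfs clone [pool]/[dataset]@[snapshot name] [pool]/[clone name]
zfs promote [pool]/[clone name]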
Mounting
Mount Everything
zfs mount -a
Get Mountpoints
zfs get all | grep mountpoint
Set Mountpoint
sudo zfs set mountpoint=/path/to/mount zpool-name/dataset-name
Mount A Specific Pool
sudo zfs mount [pool name]
Deduplication
Enable Deduplication
sudo zfs set dedup=on zpool-name
Disable Deduplication
sudo zfs set dedup=off zpool-name
References
- Oracle Docs - Managing ZFS File Systems (Overview)
- Oracle - Sharing and Unsharing ZFS File Systems
- Ask Ubuntu - How do I mount a ZFS pool?
- Reddit - Anybody know how to check balance of vdevs in a pool?
First published: 16th August 2018