Programster's Blog

Tutorials focusing on Linux, programming, and open-source

ZFS Cheatsheet

Below is my cheatsheet for using ZFS.

Check Free Space

sudo zfs list

This will output something like the below, which shows all of your pools and datasets, as well as how much space is used and available in each:

NAME                      USED  AVAIL     REFER  MOUNTPOINT
zpool1                   7.16T  3.62T     7.02T  /zpool1
zpool1/data-folder-sync   139G  3.62T      139G  /zpool1/data-folder-sync
zpool2                   5.01T  5.78T     4.92T  /zpool2
zpool2/data-folder-sync  89.6G  5.78T     89.6G  /zpool2/data-folder-sync

Tools like df and pydf will only confuse you when working with ZFS, so ignore them, especially once you start making full use of snapshots and datasets.

List Pools

sudo zpool list  

Create Pools

Refer here

Delete All Datasets In A Pool

zfs destroy -r [pool name] 

Delete a Pool

sudo zpool destroy [pool name]  

Check Disk Statuses

If you're running a redundant RAID, you may want to check every once in a while whether any drives have failed. You can do this by checking the status of your pools:

sudo zpool status  
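If you just want a quick health check, the -x flag limits the output to pools that have problems:

```shell
# Show only pools with errors or other issues; prints
# "all pools are healthy" when there is nothing to report.
sudo zpool status -x
```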

Check Pool Balance

If you add disks to a pool that already contains data, your pool will initially be "unbalanced", and it will remain unbalanced until more data is written, because ZFS does not redistribute existing data to make use of the new disks. If you keep adding data, the pool will eventually become balanced in terms of space utilized across the disks, but your existing data will still sit only on the original disks unless you rewrite it to the pool.

To check the balance of your pool, execute:

zpool list -v

Below is some example output of that command for my RAID10 pool, to which I recently added 2 x 8 TB drives. As you can see, the array is heavily unbalanced, and I will need to re-balance it if I want much better performance.

NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zpool1  13.6T  4.21T  9.39T         -    15%    30%  1.00x  ONLINE  -
  mirror  3.62T  2.20T  1.43T         -    31%    60%
    sda      -      -      -         -      -      -
    sdb      -      -      -         -      -      -
  mirror  2.72T  1.65T  1.07T         -    32%    60%
    sdc      -      -      -         -      -      -
    sdd      -      -      -         -      -      -
  mirror  7.25T   368G  6.89T         -     2%     4%
    sde      -      -      -         -      -      -
    sdf      -      -      -         -      -      -

The easiest way to rebalance an array is probably to create a new temporary dataset, move all of the existing data into it, and then move it back again. By the end of the first move, space usage across the disks should be fairly even, but the individual files won't be; after the second pass, the files will be fairly balanced too.

Beware of just using the mv command: between datasets it copies all of the data before deleting the original, so you could easily run out of space. It is better to use something like rsync, as shown here, to move the files one at a time.


Scrub a pool

sudo zpool scrub [pool name]

To see the progress of a scrub, use sudo zpool status.


Create a Dataset

sudo zfs create [pool name]/[dataset name]  

ZFS will automatically mount the dataset at /path/to/pool/[dataset name].

You can create a "descendent" dataset/filesystem like so:

sudo zfs create [pool name]/[dataset name]/[descendent filesystem]

List Datasets and Pools

sudo zfs list  

Delete A Dataset

sudo zfs destroy [pool name]/[dataset name]  

A dataset cannot be destroyed if snapshots or clones of it exist. The dataset may also still be mounted, in which case you will need to unmount it first.
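For example (hypothetical pool/dataset names), you can check what is blocking the destroy, unmount, and then destroy:

```shell
# List any snapshots of the dataset that would block the destroy.
sudo zfs list -t snapshot -r zpool1/example-dataset

# Unmount the dataset, then destroy it.
sudo zfs unmount zpool1/example-dataset
sudo zfs destroy zpool1/example-dataset

# Alternatively, "zfs destroy -r" destroys the dataset together
# with its snapshots.
```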

Set Dataset Record Size

Read here for more information about what the record size actually does.

sudo zfs set recordsize=[size] pool/dataset/name

The size should be a power of two, such as 16K, 128K, or 1M.

Get Dataset Record Size

sudo zfs get recordsize pool/dataset/name

Share Dataset Over NFS

Refer here.


Snapshot A Dataset

zfs snapshot [pool]/[dataset name]@[snapshot name]  

List Snapshots

sudo zfs list -t snapshot  

Rename A Snapshot

zfs rename [pool]/[dataset]@[old name] [new name]  

Restore A Snapshot

Restore The Most Recent Snapshot

If you wish to rollback to the most recent snapshot, then you can do so with:

zfs rollback [pool]/[dataset]@[snapshot name]  

If the file system that you want to roll back is currently mounted, it is unmounted and remounted as part of the rollback. If it cannot be unmounted, the rollback fails. The -f option forces the file system to be unmounted if necessary.

Restore Older Snapshot

If you wish to roll back to a snapshot earlier than the most recent one, you have to specify the -r option, which will recursively destroy any snapshots more recent than the one specified.

zfs rollback -r [pool]/[dataset]@[snapshot name]  

This will delete all snapshots that were taken after [snapshot name] was taken!

To be safe, if you are considering this step but don't wish to lose the subsequent snapshots, you could instead create a promoted clone of the intermediary snapshot. E.g.

zfs clone [pool]/[dataset]@2023-10-12-1300 [pool]/[name-for-clone]
zfs promote [pool]/[name-for-clone]

You will also probably want to rename the filesystems so that the clone takes over the original dataset's name:

zfs clone [pool]/[dataset]@[desired-snapshot-restore-point] [pool]/[name-for-clone]
zfs rename [pool]/[dataset] [pool]/[new-name-for-legacy-dataset]
zfs rename [pool]/[name-for-clone] [pool]/[dataset]

Refer here for what these instructions were based on.

Delete a Snapshot

zfs destroy [pool]/[dataset]@[snapshot name]


Clones

  • A clone is a great way to create another "filesystem" (e.g. another place to mount and write to).
  • A clone must be created from a snapshot.
  • Clones are dependent on the snapshot they are created from, which means that ZFS will refuse to destroy the origin snapshot (e.g. through a rollback) while the clone exists.
  • Clones start out taking no additional space (because they share their data with the origin/parent snapshot).
  • Clones can be "promoted" so that they are no longer dependent on their "origin" snapshot. This makes it possible to destroy the origin snapshot/filesystem. Promotion reverses the parent-child dependency, so that the origin file system becomes a clone of the specified file system. Snapshots taken prior to the one the clone was created from become owned by the promoted clone.
    • The promoted clone must not have any conflicting snapshot names of its own. If there are any, you can make use of the rename subcommand to resolve them.

A clone is a writable volume or file system whose initial contents are the same as another dataset. As with snapshots, creating a clone is nearly instantaneous, and initially consumes no additional space.

Clones can only be created from a snapshot. When a snapshot is cloned, it creates an implicit dependency between the parent and child. Even though the clone is created somewhere else in the dataset hierarchy, the original snapshot cannot be destroyed as long as a clone exists. The origin property exposes this dependency, and the destroy command lists any such dependencies, if they exist.

The clone parent-child dependency relationship can be reversed by using the promote subcommand. This causes the "origin" file system to become a clone of the specified file system, which makes it possible to destroy the file system that the clone was created from.

The information above was taken directly from here.



Mount Everything

zfs mount -a

This will also let you know if a pool or dataset won't mount for some reason, such as the target directory not being empty.
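If a dataset refuses to mount, you can inspect its mount state and mountpoint like so (hypothetical dataset name):

```shell
# Check whether the dataset is mounted and where it should mount.
zfs get mounted,mountpoint zpool1/example-dataset

# The target directory must be empty for the mount to succeed.
ls -A /zpool1/example-dataset
```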

Get Mountpoints

zfs get all | grep mountpoint

Set Mountpoint

sudo zfs set mountpoint=/path/to/mount zpool-name/dataset-name

Mount A Specific Pool

sudo zfs mount $POOL_NAME


Enable Deduplication

sudo zfs set dedup=on zpool-name

Deduplication had a massive negative effect on performance for me on spinning disks.

Disable Deduplication

sudo zfs set dedup=off zpool-name


Last updated: 8th February 2024
First published: 16th August 2018