Programster's Blog

Tutorials focusing on Linux, programming, and open-source

Ceph - Add Disk To Cluster

Basic Example

In this example, I am adding a drive to one of my OSD nodes; the commands below refer to it as <hostname>, which you should replace with your node's actual hostname. In this case the drive is /dev/sdb, but your drive is likely to have a different letter. It is up to you to figure out which letter the new drive has been assigned.
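One quick way to spot the new drive is with lsblk: a freshly added, unused drive normally appears as a "disk" row with no partitions hanging off it. The helper below is illustrative (the name bare_disks is mine, and it assumes sdX-style naming where partitions are the disk name plus a number, e.g. sdb -> sdb1); it filters the raw output of `lsblk -rno NAME,TYPE` down to disks that have no partitions.

```shell
# Print disks that have no partitions - likely candidates for the
# newly added drive. Reads `lsblk -rno NAME,TYPE` on stdin.
bare_disks() {
    awk '$2 == "disk" { disk[$1] = 1 }
         $2 == "part" { sub(/[0-9]+$/, "", $1); delete disk[$1] }
         END { for (d in disk) print "/dev/" d }'
}

# Usage on the OSD node:
#   lsblk -rno NAME,TYPE | bare_disks
```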

Run the following commands from the admin node, inside the cluster folder.

ceph-deploy disk zap <hostname>:/dev/sdb
ceph-deploy osd prepare <hostname>:/dev/sdb
ceph-deploy osd activate <hostname>:/dev/sdb1
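The three steps above can be wrapped in a small function if you add drives often. This is a sketch (the function name and arguments are mine, not part of ceph-deploy); run it from the cluster folder on the admin node. Setting DRY_RUN=1 prints the commands instead of executing them, so you can review them first.

```shell
# Sketch: zap, prepare, and activate one disk on one OSD node.
# Example: add_osd osd-node1 /dev/sdb
add_osd() {
    local node="$1" disk="$2" run=""
    [ "${DRY_RUN:-0}" = "1" ] && run="echo"
    $run ceph-deploy disk zap "${node}:${disk}"
    $run ceph-deploy osd prepare "${node}:${disk}"
    # activate takes the data partition that prepare created
    $run ceph-deploy osd activate "${node}:${disk}1"
}
```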

If you run the command below, you should get the details of the drives on your node.

ceph-deploy disk list <hostname>

My output was as follows:

[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Listing disks on <hostname>
[<hostname>][DEBUG ] find the location of an executable
[<hostname>][INFO  ] Running command: sudo /usr/sbin/ceph-disk list
[<hostname>][DEBUG ] /dev/sda :
[<hostname>][DEBUG ]  /dev/sda1 other, ext4, mounted on /
[<hostname>][DEBUG ]  /dev/sda2 other, 0x5
[<hostname>][DEBUG ]  /dev/sda5 swap, swap
[<hostname>][DEBUG ] /dev/sdb :
[<hostname>][DEBUG ]  /dev/sdb1 ceph data, active, cluster ceph, osd.2, journal /dev/sdb2
[<hostname>][DEBUG ]  /dev/sdb2 ceph journal, for /dev/sdb1
[<hostname>][DEBUG ] /dev/sr0 other, unknown

Advanced Example

It is better if your OSD nodes write their journals to a separate disk, preferably an SSD. Multiple OSDs can share the same SSD, but not the same partition. Unfortunately, I have not yet figured out how to deploy a shared journal drive, so in the meantime here is how to deploy with a dedicated journal drive for each data drive. In this example /dev/sdb is the journal drive and /dev/sdc is the data drive.

ceph-deploy disk zap <hostname>:/dev/sdb
ceph-deploy disk zap <hostname>:/dev/sdc
ceph-deploy osd prepare <hostname>:/dev/sdc:/dev/sdb
ceph-deploy osd activate <hostname>:/dev/sdc1:/dev/sdb1
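As with the basic example, this sequence can be sketched as a helper function (again, the name and arguments are illustrative, and DRY_RUN=1 prints rather than runs). Note that prepare takes data:journal, while activate takes the first partition of each, which ceph-deploy creates during prepare.

```shell
# Sketch: deploy an OSD with its journal on a separate drive.
# Example: add_osd_with_journal osd-node1 /dev/sdc /dev/sdb
add_osd_with_journal() {
    local node="$1" data="$2" journal="$3" run=""
    [ "${DRY_RUN:-0}" = "1" ] && run="echo"
    $run ceph-deploy disk zap "${node}:${journal}"
    $run ceph-deploy disk zap "${node}:${data}"
    # prepare wants data:journal; activate wants their partitions
    $run ceph-deploy osd prepare "${node}:${data}:${journal}"
    $run ceph-deploy osd activate "${node}:${data}1:${journal}1"
}
```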

If you get a warning/error message after zapping a drive, it has probably still removed the partitions, but you will need to reboot the OSD node in order for the changes to take effect. (Running sudo partprobe on the OSD node may get the kernel to re-read the partition table without a full reboot, but rebooting is the safe option.)


Last updated: 11th August 2022
First published: 16th August 2018