
Ceph - Add Disk To Cluster

Basic Example

In this example, I am adding a drive to the node with the hostname ceph-osd1.programster.org. In this case the drive is /dev/sdb, but your drive may well have a different letter; it is up to you to work out which letter your new drive has been assigned.
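If you are not sure which device is the new drive, it can help to list the block devices on the OSD node (not the admin node) before zapping anything, and look for the disk that has no partitions. A minimal sketch, assuming the new drive turns out to be /dev/sdb:

# on the OSD node, list block devices with their partitions and mount points
lsblk
# optionally confirm you have the right disk by checking its model/serial
ls -l /dev/disk/by-id/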

Run the following commands from the admin node, inside the cluster folder.

ceph-deploy disk zap ceph-osd1.programster.org:/dev/sdb
ceph-deploy osd prepare ceph-osd1.programster.org:/dev/sdb
ceph-deploy osd activate ceph-osd1.programster.org:/dev/sdb1
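Once the activate step completes, you can optionally confirm from the admin node that the new OSD has joined the cluster and is marked up and in (this assumes the admin node has the client.admin keyring deployed; the OSD number assigned will depend on your cluster):

ceph osd tree
ceph -s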

If you run the command below, you should get the details of the drives on your node.

ceph-deploy disk list ceph-osd1.programster.org

My output was as follows:

...
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-osd1.programster.org...
[ceph-osd1.programster.org][DEBUG ] find the location of an executable
[ceph-osd1.programster.org][INFO  ] Running command: sudo /usr/sbin/ceph-disk list
[ceph-osd1.programster.org][DEBUG ] /dev/sda :
[ceph-osd1.programster.org][DEBUG ]  /dev/sda1 other, ext4, mounted on /
[ceph-osd1.programster.org][DEBUG ]  /dev/sda2 other, 0x5
[ceph-osd1.programster.org][DEBUG ]  /dev/sda5 swap, swap
[ceph-osd1.programster.org][DEBUG ] /dev/sdb :
[ceph-osd1.programster.org][DEBUG ]  /dev/sdb1 ceph data, active, cluster ceph, osd.2, journal /dev/sdb2
[ceph-osd1.programster.org][DEBUG ]  /dev/sdb2 ceph journal, for /dev/sdb1
[ceph-osd1.programster.org][DEBUG ] /dev/sr0 other, unknown

Advanced Example

It is better if your OSD nodes write their journals to a separate drive, preferably an SSD. Multiple OSDs can share the same SSD, but not the same partition. Unfortunately, I have not yet figured out how to deploy a shared journal drive, so in the meantime here is how to deploy with a dedicated journal drive for each data drive. In this example /dev/sdb is the journal drive and /dev/sdc is the data drive.

ceph-deploy disk zap ceph-osd1.programster.org:/dev/sdb
ceph-deploy disk zap ceph-osd1.programster.org:/dev/sdc
ceph-deploy osd prepare ceph-osd1.programster.org:/dev/sdc:/dev/sdb
ceph-deploy osd activate ceph-osd1.programster.org:/dev/sdc1:/dev/sdb1
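To double-check that the journal really ended up on the SSD, you can re-run the disk list command shown earlier; the /dev/sdc1 entry should report its journal as /dev/sdb1 rather than a partition on the data drive itself.

ceph-deploy disk list ceph-osd1.programster.org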

If you get a warning/error message after zapping a drive, the zap has most likely still removed the partitions, but you will need to reboot the OSD node for the changes to take effect.
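For example, on the OSD node you might do something along these lines before re-running the prepare step:

sudo reboot
# after the node comes back up, the drive should show no partitions
lsblk /dev/sdb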

Last updated: 14th January 2025
First published: 16th August 2018
