Ubuntu 14.04 - Deploy a Ceph Cluster (Part 2)

This tutorial will carry on from part 1 and deploy a Ceph Storage Cluster using ceph-deploy.

Steps

On the deployment/admin node, create a directory from which we will operate. Many of the commands we run will generate configuration files in the current directory, hence the need to create a directory and move into it.

mkdir my-cluster
cd my-cluster

Important: From now on, do not call ceph-deploy with sudo or run it as root if you are logged in as a different user, because it will not issue the sudo commands needed on the remote hosts.

Run the command below for the monitor node(s).

ceph-deploy new [monitor node hostname or ip]

e.g.

ceph-deploy new ceph-mon1.programster.org
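
If you list the contents of the working directory now, you should see that ceph-deploy has generated the cluster configuration, a monitor keyring, and a log file (exact filenames can vary slightly between ceph-deploy versions):

ls
# expect something like: ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring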

We need to change the number of replicas in the generated ceph.conf from the default of 3 to just 2 because we only have 2 data storage nodes in this tutorial. For a production system, stick with at least 3!

editor ceph.conf

Add the following line under [global]

osd pool default size = 2
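
After saving, the [global] section should look roughly like the example below. The fsid, monitor name, and address are generated for your own cluster, so treat those values as placeholders; the last line is the one we added (your file may also contain a few extra settings depending on the Ceph release):

[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon_initial_members = ceph-mon1
mon_host = 192.168.1.10
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2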

Install Ceph

Now it is time to actually install Ceph on the nodes. Edit the command below to match your configuration, listing every node's hostname, including the deployment/admin node itself:

ceph-deploy install \
ceph-deployer.programster.org \
ceph-mon1.programster.org \
ceph-osd1.programster.org \
ceph-osd2.programster.org

This can take a long time. You may want to go away and make a cup of tea.
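
Once it has finished, you can sanity-check the installation by asking each node for its Ceph version, for example:

ssh ceph-osd1.programster.org ceph --version
# should print the installed Ceph version on that node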

Execute the command below to add the initial monitor(s) and gather the keys:

ceph-deploy mon create-initial
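
When this completes, the working directory should also contain the keyrings that were gathered (the exact set depends on your ceph-deploy version), along the lines of:

ls *.keyring
# ceph.client.admin.keyring  ceph.bootstrap-mds.keyring
# ceph.bootstrap-osd.keyring  ceph.mon.keyring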

Setting up the OSDs

Add two OSDs. For a fast setup, this quick start uses a directory rather than an entire disk per Ceph OSD Daemon. See the ceph-deploy osd documentation for details on using separate disks/partitions for OSDs and journals. Log in to each OSD node and create a directory for its Ceph OSD Daemon.

ssh [osd 1 hostname or ip]
sudo mkdir /var/local/osd0
exit

ssh [osd 2 hostname or ip]
sudo mkdir /var/local/osd1
exit

Now prepare each of them:

ceph-deploy osd prepare \
[ceph osd1 hostname or ip]:/var/local/osd0 \
[ceph osd2 hostname or ip]:/var/local/osd1
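
For example, with the OSD hostnames used in this tutorial:

ceph-deploy osd prepare \
ceph-osd1.programster.org:/var/local/osd0 \
ceph-osd2.programster.org:/var/local/osd1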

Now activate them:

ceph-deploy osd activate \
[ceph osd1 hostname or ip]:/var/local/osd0 \
[ceph osd2 hostname or ip]:/var/local/osd1
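
Again, with the hostnames used in this tutorial this would be:

ceph-deploy osd activate \
ceph-osd1.programster.org:/var/local/osd0 \
ceph-osd2.programster.org:/var/local/osd1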

Copying Configs

Use ceph-deploy to copy the configuration file and admin key to the Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.

ceph-deploy admin \
ceph-deployer.programster.org \
ceph-mon1.programster.org \
ceph-osd1.programster.org \
ceph-osd2.programster.org

Run the following command on every node to ensure that the ceph.client.admin.keyring has the correct read permissions.

sudo chmod +r /etc/ceph/ceph.client.admin.keyring
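
With the keyring readable, you should be able to query the cluster from any of the nodes. Once both OSDs are up and in, the cluster should settle into a healthy state:

ceph health
# expect HEALTH_OK (it may report HEALTH_WARN while placement groups are still creating/peering)

ceph -s
# shows the monitor(s), the OSD count (2 up, 2 in) and placement group status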

Conclusion

You now have a running Ceph cluster. Moving forward, you may want to deploy a block device, deploy a metadata server (necessary for CephFS), or use my Ceph cheatsheet.

