Programster's Blog

Tutorials focusing on Linux, programming, and open-source

Ubuntu 14.04 - Add A Ceph Metadata Server

To run a Ceph Filesystem (CephFS), you must have a running storage cluster with at least one metadata server. The metadata server stores the metadata about files, such as their permissions, who owns them, and when they were last modified.

This tutorial series currently targets the Jewel release, which supports only one active metadata server. Also, running more than one filesystem is considered "experimental".



For this tutorial, I have spun up a new VirtualBox machine to act as the metadata server. All my services are running on different virtual machines, but different services can share the same box. It is important, however, that you never run two instances of the same service on one box.

Only deploy a metadata server after you have deployed your monitors and OSDs.

First, perform the steps that need to be performed on all Ceph nodes in your cluster, no matter what the service is.

sudo apt-get install ntp openssh-server -y

# Add the ceph user ($USERNAME is the deployment username you chose, e.g. "ceph")
sudo useradd -d /home/$USERNAME -m $USERNAME
sudo passwd $USERNAME

# Add the user to the sudoers file
sudo visudo -f /etc/sudoers
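For reference, the sudoers entry that the official Ceph documentation suggests for the deployment user looks like the line below, with $USERNAME standing in for the user you just created:

$USERNAME ALL = (root) NOPASSWD:ALL

After saving and exiting, you can verify the rule took effect with sudo -l -U $USERNAME.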

Next, from your administration/deployment node as the ceph user, execute the following command to grant passwordless ssh access:

ssh-copy-id [metadata server hostname]
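If your local user on the admin node has a different name to the ceph user on the remote box, ceph-deploy honours ~/.ssh/config, so an entry along these lines (the hostname ceph-mds1 is just an example) saves you passing a username on every connection:

Host ceph-mds1
    Hostname ceph-mds1
    User ceph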

Then use ceph-deploy to install Ceph and deploy the metadata service on the metadata server. You must run these commands from within your cluster directory; in the previous tutorials we called this my-cluster and it was within $HOME of the ceph user.

ceph-deploy install [host-name]
ceph-deploy mds create [host-name]



If you have multiple metadata servers that you wish to deploy, then you would run the following:

ceph-deploy install \
[hostname 1] \
[hostname 2]

ceph-deploy mds create \
[hostname 1] \
[hostname 2]

The official documentation states that you can optionally specify the name of the daemon so that you can run multiple metadata daemons on the same box, but I don't recommend this.
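For completeness, the syntax accepts an optional host:name pair; the daemon name "a" below is just an illustrative example:

ceph-deploy mds create [host-name]:a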

Use ceph-deploy to copy the configuration file and admin key to the metadata servers so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.

ceph-deploy admin \
[metadata server 1] \
[metadata server 2]

Run the following command on every metadata server to ensure that it has the correct permissions for the ceph.client.admin.keyring.

sudo chmod +r /etc/ceph/ceph.client.admin.keyring

Now you can run commands from the ceph cheatsheet on the metadata servers. For example, check the health of the cluster from one of the new metadata servers by executing:

ceph status 

If all went well, that should have returned something like:

    cluster 040cd577-db06-4fc8-b28e-000a80e7a9a0
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-mon1=}, election epoch 2, quorum 0 ceph-mon1
     mdsmap e4: 1/1/1 up {}
     osdmap e6: 2 osds: 2 up, 2 in
      pgmap v14: 192 pgs, 3 pools, 1884 bytes data, 20 objects
            15430 MB used, 21114 MB / 38547 MB avail
                 192 active+clean
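If you want to query just the metadata servers rather than the whole cluster, you can run the following, which prints only the mdsmap line:

ceph mds stat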


Last updated: 20th June 2021
First published: 16th August 2018