Ubuntu 14.04 - Add A Ceph Metadata Server
To run a Ceph Filesystem (CephFS), you must have a running storage cluster with at least one metadata server. The metadata server stores information about the files in the filesystem, such as their permissions, who owns them, and when they were last modified.
Prerequisites
For this tutorial, I have spun up a new VirtualBox machine at the hostname ceph-mds1.programster.org. All of my services run on separate virtual machines, but different services can share the same box. However, it is important that you do not run two instances of the same service on the same box.
Steps
First, perform the steps that need to be performed on all Ceph nodes in your cluster, no matter which service they run.
sudo apt-get install ntp openssh-server -y

# Add the ceph user
USERNAME="[username]"
sudo useradd -d /home/$USERNAME -m $USERNAME
sudo passwd $USERNAME

# Grant the user passwordless sudo by adding the line below to the
# sudoers file that the visudo command opens:
# [USERNAME] ALL=(ALL) NOPASSWD:ALL
sudo visudo -f /etc/sudoers
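If you would rather not edit the sudoers file interactively, a non-interactive alternative is to drop a snippet into /etc/sudoers.d instead. This is only a sketch based on the approach shown in the Ceph preflight documentation, re-using the $USERNAME variable from above:

# Grant the new user passwordless sudo via a /etc/sudoers.d snippet
echo "$USERNAME ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/$USERNAME
sudo chmod 0440 /etc/sudoers.d/$USERNAME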
Next, from your administration/deployment node, as the ceph user, execute the following command to grant yourself passwordless SSH access to the metadata server:
ssh-copy-id [metadata server hostname]
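If the username on the metadata server differs from the user you run ceph-deploy as, it can also help to add an entry to ~/.ssh/config on the administration node so that ssh (and therefore ceph-deploy) logs in as the right user automatically. A minimal sketch, assuming the remote deployment user is called ceph and re-using the hostname from this tutorial:

# ~/.ssh/config on the administration/deployment node
Host ceph-mds1.programster.org
    Hostname ceph-mds1.programster.org
    User ceph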
Then use ceph-deploy to install Ceph and deploy the metadata service on the metadata server. You must do this from within your cluster directory. In the previous tutorials we called this my-cluster, and it was located within $HOME of the ceph user.
ceph-deploy install [host-name]
ceph-deploy mds create [host-name]
For example:
ceph-deploy install ceph-mds1.programster.org
ceph-deploy mds create ceph-mds1.programster.org
If you have multiple metadata servers that you wish to deploy, then you would run the following:
ceph-deploy install \
    [hostname 1] \
    [hostname 2] \
    ...

ceph-deploy mds create \
    [hostname 1] \
    [hostname 2] \
    ...
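For example, if you had a second metadata server at the hypothetical hostname ceph-mds2.programster.org, that might look like:

ceph-deploy install \
    ceph-mds1.programster.org \
    ceph-mds2.programster.org

ceph-deploy mds create \
    ceph-mds1.programster.org \
    ceph-mds2.programster.org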
Use ceph-deploy to copy the configuration file and admin key to the metadata servers so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.
ceph-deploy admin \
    [metadata server 1] \
    [metadata server 2] \
    ...
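Continuing the single-server example from earlier, that would simply be:

ceph-deploy admin ceph-mds1.programster.org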
Run the following command on every metadata server to ensure that the ceph.client.admin.keyring has the correct read permissions.
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
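You can verify the result with a quick listing; the keyring should now be readable by non-root users (something along the lines of -rw-r--r--, depending on your umask):

ls -l /etc/ceph/ceph.client.admin.keyring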
Now you can run commands from the ceph cheatsheet on the metadata servers. For example, check the health of the cluster from one of the new metadata servers by executing:
ceph status
If all went well, that should have returned something like:
cluster 040cd577-db06-4fc8-b28e-000a80e7a9a0
health HEALTH_OK
monmap e1: 1 mons at {ceph-mon1=10.1.0.66:6789/0}, election epoch 2, quorum 0 ceph-mon1
mdsmap e4: 1/1/1 up {0=ceph-mds1.programster.org=up:active}
osdmap e6: 2 osds: 2 up, 2 in
pgmap v14: 192 pgs, 3 pools, 1884 bytes data, 20 objects
15430 MB used, 21114 MB / 38547 MB avail
192 active+clean
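If you only want to confirm the metadata server itself, a quicker check is the mds stat subcommand (the exact output format varies between Ceph releases):

ceph mds stat

This should report the metadata server as up:active, mirroring the mdsmap line in the output above.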
References
- Ceph Docs - Ceph Filesystem
- Ceph Docs - Add/Remove Metadata Server
- Sébastien Han - Deploy a Ceph MDS server
- ceph.narkive.com - Help:mount error
First published: 16th August 2018