CentOS 7 - Deploy a Ceph Cluster (Part 1)
Introduction
This tutorial will show you how to deploy a Ceph cluster. A Ceph cluster can currently provide object or block storage over a network. In layman's terms, this is equivalent to deploying your own Amazon Simple Storage Service (S3) or Elastic Block Store (EBS). This is particularly useful if you want to:
- host your own data for performance, security, or cost reasons.
- increase the services available in your cloud offering. (I'm looking at you, D.O.)
- provide a value-added service to your dedicated server business.
Layout
We're going to deploy a Ceph cluster across 3 virtual machines, with the help of 4 VirtualBox instances. The cluster will consist of 2 storage (OSD) nodes and a single monitor node. The "extra" virtual machine that is not part of the cluster is just the node we use to deploy the cluster in the first place.
Note: In a production environment you would want at least 3 OSD nodes and at least 3 monitor nodes. Because the monitor service is so lightweight, it can be run on the same host as the OSDs, but it is better not to. I will show you how to add these extra nodes to your cluster later.
Steps
First, install CentOS 7 on a single virtual machine and ensure it is fully up-to-date, before cloning it 3 times so that you have 4 VirtualBox instances. We will refer to these new clones as ceph admin, ceph mon, ceph osd1, and ceph osd2.
We need to configure each of the nodes with a static IP and update our DNS server to point to each of these nodes. If you do not have a DNS server, then you will have to rely on their IPs (or hosts-file entries, as sketched below), but I recommend deploying a simple DNS server in 6 easy steps with the help of Docker, which is what I do.
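If you do go the hosts-file route for now, the entries look something like the following in /etc/hosts on every node. The hostnames and addresses here are just examples for this VirtualBox lab; substitute your own.
# /etc/hosts on every node (example hostnames and addresses)
192.168.56.10   ceph-admin
192.168.56.11   ceph-mon
192.168.56.12   ceph-osd1
192.168.56.13   ceph-osd2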
Configure Ceph Deployer
Install the ceph-deploy tool on the deployment/admin VM. The ceph-deploy tool will help us turn the other virtual machines into a Ceph cluster.
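As a rough sketch of what that looks like on the admin node, following the upstream quick-install guide: the release name below ("luminous") is just an assumption, so substitute whichever Ceph release you are deploying.
sudo yum install -y yum-plugin-priorities epel-release
cat << 'EOF' | sudo tee /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF
sudo yum update -y && sudo yum install -y ceph-deploy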
All Nodes - NTP & OpenSSH Server
All nodes need NTP and an OpenSSH server so that they can be connected to by the deployment tool, and so that the Ceph cluster doesn't get 'confused' by time differences between the nodes in the cluster.
sudo yum install -y ntp ntpdate ntp-doc openssh-server
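The sshd service is normally already present and running on CentOS 7, but it does no harm to make sure both it and ntpd are enabled and started on every node; a minimal sketch:
sudo systemctl enable ntpd sshd
sudo systemctl start ntpd sshd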
Configuring The Ceph User
The admin node must have password-less SSH access to all the other Ceph nodes, with sudo privileges. This is because it needs to be able to install software and configuration files without prompting for passwords. We need to create a Ceph user on ALL Ceph nodes in the cluster. A uniform user name across the cluster improves ease of use, but we don't want to use an obvious user name, to protect against brute-force attacks.
Run the following on every node, substituting {username} for the user name you define, to create a user with passwordless sudo. I recommend picking something like ceph_random$uffix rather than an obvious name.
USERNAME="{username}" sudo useradd -d /home/$USERNAME -m $USERNAME sudo passwd $USERNAME
Now that we have created the user on every node, we need to allow that user to execute sudo commands without being prompted for a password. We do this by running sudo visudo -f /etc/sudoers and adding a line like the following:
{USERNAME} ALL=(ALL) NOPASSWD:ALL
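Alternatively, rather than editing the main sudoers file, you can drop a file into /etc/sudoers.d/, which mirrors the approach in the Ceph installation docs. A sketch, again substituting {username}:
echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
sudo chmod 0440 /etc/sudoers.d/{username}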
Once you have created the ceph user on every node, log in as the ceph user on the deployment/admin node and generate an SSH key, making sure not to set a passphrase.
ssh-keygen
Now, from the admin node, copy the public key to each of the other nodes:
ssh-copy-id {ceph user}@{hostname}
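You can also add the other nodes to the ceph user's ~/.ssh/config on the admin node, so that ceph-deploy can reach them as that user without you passing --username each time. The hostnames below are placeholders for the lab layout above; substitute your own, along with the user name you created:
Host ceph-mon
    Hostname ceph-mon
    User {username}
Host ceph-osd1
    Hostname ceph-osd1
    User {username}
Host ceph-osd2
    Hostname ceph-osd2
    User {username}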
Disable SELinux
SELinux is set to Enforcing by default. I recommend disabling SELinux on all nodes during installation and ensuring that the installation and cluster are working properly before hardening your configuration. To disable it, run the following:
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
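To double-check the change, getenforce should now report Permissive (and Disabled after the next reboot), and the config file should show SELINUX=disabled:
getenforce
grep ^SELINUX= /etc/selinux/config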
Conclusion
Congratulations, you've finished the first stage of deploying your cluster. You will now be able to continue to part 2 to finish deploying the cluster.
References
- Ceph Documentation - Installation (Quick)
- Ask Ubuntu - How to run sudo command with no password?
- Ceph Storage on Proxmox
First published: 16th August 2018