
A Really Basic GitLab Pipeline To Build And Deploy A Docker Image

Introduction

Setting up continuous integration and continuous deployment (CI/CD) is one of those things that everybody wants, and rightly so because the benefits are massive, especially when you work as part of a team. It allows the developers to just focus on the code, and the "operations guy" won't have to "continuously" roll out updates as devs make last-minute changes just before a client meeting.

This tutorial will show you how you can get started very quickly and easily, using your own GitLab server with a really simple pipeline that will:

  1. Build your Docker image using your project's Dockerfile.
  2. Push the built image to a Docker registry that requires authentication, possibly [your own private Docker registry](https://blog.programster.org/run-your-own-private-docker-registry).
  3. Deploy the image to your existing Linux Server.

You can then "build" on top of this starting point to improve various aspects however you want, such as by integrating Terraform into your pipeline to automatically deploy an EC2 instance, or even an ECS cluster, to act as the Docker host(s). More on that in later tutorials.


Suggestion

For the entire setup, I made use of lots of virtual machines. This would probably be expensive if I was doing everything "in the cloud", but you can either just set up VirtualBox on your computer, or use a KVM server such as Proxmox. The servers are:

  1. Private GitLab Server
  2. 1 or more GitLab runners.
  3. Private Docker registry (though you may instead be paying for Docker Hub or using AWS ECR).
  4. Linux server that will be deployed to.

Little Rant (Feel Free To Skip)

Unfortunately, it seems that I'm late to the party and there are a lot of companies out there trying to make money by "helping" you out with setting up a CI/CD pipeline using their technology. These software solutions are probably really useful tools, but when I started googling "pipelines" and "CI/CD" terms, I just got buried and confused. Also, you will find that "sponsored stories" and CI/CD adverts will follow you around forever. It appears there is a lot of money sloshing around in this field.

It definitely feels similar to how OpenStack got "hijacked" in my mind, and went from a brilliant concept to something that became impossible to understand and deploy because so many companies wanted to be your paid solution. In the end I now just run really simple KVM hosts, using Proxmox whenever a GUI is required. From there you can "bolt-on" things like object storage through Ceph or MinIO.

Steps

The first thing you need to do, if you haven't already, is deploy your own GitLab server and deploy/register some GitLab runners. Don't worry, this is much easier than it sounds.
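For reference, registering a runner that uses the Docker executor looks roughly like the sketch below. The GitLab URL and registration token are placeholders (grab the real values from your GitLab admin area), and the gitlab-runner package needs to be installed from GitLab's repositories first.

# Register a Docker-executor runner against your GitLab server.
# The URL, token, and description below are placeholder values.
sudo gitlab-runner register \
    --non-interactive \
    --url "https://gitlab.example.com/" \
    --registration-token "REPLACE_WITH_YOUR_TOKEN" \
    --executor "docker" \
    --docker-image "docker:19.03.12" \
    --description "docker-build-runner"

Note that for the build stage's docker commands to work, the runner either needs to allow docker-in-docker (e.g. a privileged Docker executor), or be a shell executor running on a host that has Docker installed.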

Create a repository on your GitLab server for your project, and add the Dockerfile and any source code for the project. You can always clone the tutum-hello-world repo and put it into your GitLab repository as a great starting point if you don't already have a project/Dockerfile in mind and just want to try out setting up CI/CD.
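If you go that route, pulling the example project down and pushing it to your own GitLab server looks something like this (the repository URL, GitLab hostname, and project path are placeholders to adjust):

# Grab the example project and push it to your own GitLab server.
git clone https://github.com/tutumcloud/hello-world.git my-project
cd my-project
git remote set-url origin git@gitlab.example.com:mygroup/my-project.git
git push -u origin master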

Add a .gitlab-ci.yml file to the top level of the repository with the following contents. Don't swap anything out; the variables will get filled in by the GitLab server later (see the Variables section below).

image: docker:19.03.12

variables:
    ENVIRONMENT: $CI_COMMIT_BRANCH


before_script:
    - cd ${CI_PROJECT_DIR}


stages:
    - build
    - deploy


build:
    image: docker:19.03.12
    stage: build
    script:
        - cd ${CI_PROJECT_DIR}
        # Log in to the private registry, then build and push the image,
        # tagged with the name of the branch that was committed to.
        - docker login -u ${DOCKER_REGISTRY_USER} -p ${DOCKER_REGISTRY_PASSWORD} ${DOCKER_REGISTRY_HOST}:5000
        - docker build -f docker/app/Dockerfile -t ${DOCKER_REGISTRY_HOST}:5000/${DOCKER_IMAGE_NAME}:${CI_COMMIT_BRANCH} .
        - docker push ${DOCKER_REGISTRY_HOST}:5000/${DOCKER_IMAGE_NAME}:${CI_COMMIT_BRANCH}


deploy:
    image: ubuntu:20.04
    stage: deploy
    only:
        - staging
        - production
    script:
        - apt-get update -qq
        # Setup SSH deploy keys
        - 'which ssh-agent || ( apt-get install -qq openssh-client )'
        - eval $(ssh-agent -s)
        - chmod 700 ${PRIVATE_SSH_KEY}
        - ssh-add ${PRIVATE_SSH_KEY}
        - mkdir -p ~/.ssh
        - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
        # Run the update script on the docker host to pull and redeploy the image.
        - ssh ${SSH_USER}@${DOCKER_HOST} "/bin/bash update.sh"
    when: manual

Pipeline Notes

  • The when: manual line is responsible for having the pipeline pause and require you to manually press a deploy/play button if you want to proceed with deployment. This way nothing automatically goes out, and you can see who triggered a deployment and when.
  • I have an image declaration at the top, and within each of the individual jobs (build and deploy). In most pipelines you will usually just see one image declared at the top, and that is the Docker image used in every phase of the pipeline. In this example, I am showing that you can override this and use a different image on a per-step basis, which is really useful. E.g. one step may require Terraform for spinning up infrastructure, whilst the build step needs docker-in-docker in order to perform a docker build.
  • The only statements are really useful. They allow you to "only" perform the step on certain branches (in this case the branches named staging or production, which doesn't entirely make sense for a single deploy job, but is here for demonstration). You may wish to have two "deploy" steps in your pipeline: one that is "only" for production and responsible for deploying to production, and one only for deploying to staging, which would use a different set of variables.

Deploy A Linux Server

If you haven't already, deploy a Linux server somewhere that we will be deploying to. You will need SSH access to it. This could be a simple DigitalOcean VPS, an AWS EC2 instance, or just a local VirtualBox machine. You will need to install Docker on this server.
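If you just want Docker installed quickly on a test box, the upstream convenience script is usually the fastest route (for production servers you may prefer your distribution's packages):

# Install Docker via the official convenience script.
curl -fsSL https://get.docker.com | sudo sh

# Allow the deploy user to run docker without sudo (log out and back in afterwards).
sudo usermod -aG docker $USER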

Create An SSH Key

Unfortunately this pipeline requires you to create an SSH key without a passphrase and allow that to be used to connect to the server that will host your Docker container. Thus I suggest you create a key on a per-project basis, add it to GitLab in the next step (variables), and then delete it locally (i.e. so it only exists in a GitLab variable).

To generate an SSH key, run ssh-keygen, and then add the public key to the authorized keys of the server you just deployed that will host your Docker container.
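As a rough sketch, assuming a throwaway key file and an example user/host that you would adjust:

# Generate a passphrase-less key just for this project.
ssh-keygen -t ed25519 -N "" -f ./deploy_key

# Install the public half on the server that will host the container.
ssh-copy-id -i ./deploy_key.pub deployuser@your-docker-host

# Once the private key is stored in a GitLab variable, delete it locally.
rm ./deploy_key ./deploy_key.pub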

I am keen to replace this step with something that feels more secure in future, but this is the simplest/quickest way for now. Be sure to keep your GitLab server and this project variable safe.

Variables

Now we need to add pipeline variables to GitLab. They are:

  • SSH_USER - the user to connect as on the server that will run the Docker container.
  • DOCKER_HOST - the IP or hostname of the server that will run the Docker container.
  • DOCKER_IMAGE_NAME - the name to give the docker image. E.g. "programsters-blog"
  • DOCKER_REGISTRY_HOST - the server IP or hostname of your docker registry. E.g. "registry.mydomain.com"
  • DOCKER_REGISTRY_USER - the user to authenticate against the docker registry with. E.g. "programster"
  • DOCKER_REGISTRY_PASSWORD - the password to authenticate against the docker registry with.

Also, there is PRIVATE_SSH_KEY, which needs to hold the contents of the private SSH key and must be of type "file". With a "file" type variable, GitLab writes the value to a temporary file on the runner and sets the variable to that file's path, which is why the pipeline can chmod and ssh-add it directly.
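If you would rather script this than click through the UI, the same variables can be created through GitLab's project variables API, roughly like so (the hostname, project ID, and access token below are placeholders):

# Create a normal variable and a "file" type variable via the GitLab API.
curl --request POST \
     --header "PRIVATE-TOKEN: YOUR_ACCESS_TOKEN" \
     --form "key=DOCKER_IMAGE_NAME" \
     --form "value=programsters-blog" \
     "https://gitlab.example.com/api/v4/projects/42/variables"

curl --request POST \
     --header "PRIVATE-TOKEN: YOUR_ACCESS_TOKEN" \
     --form "key=PRIVATE_SSH_KEY" \
     --form "value=<./deploy_key" \
     --form "variable_type=file" \
     "https://gitlab.example.com/api/v4/projects/42/variables"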

Finally, you may have seen CI_COMMIT_BRANCH and CI_PROJECT_DIR which are special variables that GitLab will automatically set for you. The first is the name of the branch that was committed to, and the second is the directory path on the Gitlab runner that acts as the top level of your repository.

Update Script

To keep things generic and easily extendable, I had the pipeline simply call an update.sh script on the remote server. This way if things change, you can just update that script rather than having to dig into your pipeline.

For now, just add the script with the following contents to your server (manually swapping out the variables).

#!/bin/bash

# Prune old images to prevent server filling up.
docker image prune -f

# Pull the latest image
docker pull $DOCKER_REGISTRY_HOST:5000/$DOCKER_IMAGE_NAME:staging

# Gracefully stop existing containers, then remove them.
# (This assumes the host only runs containers for this one project.)
docker stop `docker ps -aq`
docker rm `docker ps -aq`

# Re-deploy using latest image.
docker run -d $DOCKER_REGISTRY_HOST:5000/$DOCKER_IMAGE_NAME:staging

You may wish to add environment variables, volumes, and even specify a name in the docker run command, or swap it out for docker-compose, as sketched below. It is impossible for me to know what your specific project might need; this is just a generic starting point.
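For instance, a slightly fleshed-out version of that last docker run might look like the following (the container name, port mapping, environment variable, and volume are all made-up examples):

# Re-deploy with a name, restart policy, published port, env var, and volume.
docker run -d \
    --name my-app \
    --restart unless-stopped \
    -p 80:80 \
    -e "APP_ENV=staging" \
    -v /opt/my-app/data:/data \
    $DOCKER_REGISTRY_HOST:5000/$DOCKER_IMAGE_NAME:staging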

Conclusion

That's it! You should now have a working starting point that is generic enough for all Docker-based projects, which you can now refine/improve upon for your specific project/setup.


Last updated: 24th April 2024
First published: 12th February 2021