Programster's Blog

Tutorials focusing on Linux, programming, and open-source

Run Your Own Private Docker Registry

Introduction

Running your own private registry is great if you wish to:

  • Host private images without having to pay for a private registry on DockerHub or something like AWS ECR.
  • Speed things up by deploying one as a pull-through proxy

This tutorial will show you how to do both, and how to configure your registry to use either local, or S3 storage backends.

Private Registry Deployment

For each of the examples below, simply copy the script onto your Docker server, update the settings at the top of the script, and execute it with bash. It will run the registry on port 5000.

You will need to provide SSL/TLS certificates. The scripts expect them in a certs folder next to the script, named domain.crt and domain.key.
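If you just want to test things out before buying a CA-issued certificate, you could generate a self-signed one. Below is a minimal sketch (the domain name is a placeholder); note that Docker clients will refuse a self-signed certificate unless you configure them to trust it, so this is only for experimentation:

```shell
#!/bin/bash
# Generate a self-signed certificate for testing only.
# Substitute your registry's fully qualified domain name for the CN below.
mkdir -p certs

openssl req \
  -x509 \
  -newkey rsa:4096 \
  -nodes \
  -days 365 \
  -subj "/CN=docker-registry.mydomain.com" \
  -keyout certs/domain.key \
  -out certs/domain.crt
```

For production use, a certificate from a real CA (e.g. via Let's Encrypt) avoids having to distribute trust for the self-signed certificate to every Docker host.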

Local Storage Setup

#!/bin/bash

###### Fill in the settings below ##################

# Specify the absolute path to where you want to store files
LOCAL_STORAGE_DIR="/mnt/registry"

# Give your container a name to recognize it by
CONTAINER_NAME="registry"

# Give a secret to your registry. Be sure to change this
REGISTRY_SECRET="mySecretHere"

########## Don't change below this Line ###########

# Stop and remove the existing container if it exists
docker kill "$CONTAINER_NAME" 2>/dev/null
docker rm "$CONTAINER_NAME" 2>/dev/null

docker run \
  -d \
  -v "$LOCAL_STORAGE_DIR":/var/lib/registry \
  -e SETTINGS_FLAVOR=local \
  -e REGISTRY_HTTP_SECRET="$REGISTRY_SECRET" \
  -e SEARCH_BACKEND=sqlalchemy \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  -v "$(pwd)/certs":/certs \
  -p 5000:5000 \
  --restart=always \
  --name "$CONTAINER_NAME" \
  registry:2

You can generate a random secret by using the following command (after substituting $NUM_CHARACTERS for the desired length):

head /dev/urandom | tr -dc A-Za-z0-9 | head -c $NUM_CHARACTERS
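For example, to generate a 32-character alphanumeric secret:

```shell
#!/bin/bash
# Generate a 32-character alphanumeric secret from /dev/urandom.
# The length (32) is just an example - use whatever you prefer.
head /dev/urandom | tr -dc A-Za-z0-9 | head -c 32
echo
```

You can then paste the output straight into the REGISTRY_SECRET variable at the top of the script.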

Docker Compose Version

Alternatively, use this docker-compose version:

version: "3"

services:
  registry:
    container_name: registry
    image: registry:2
    restart: always
    ports:
      - "5000:5000"
    volumes:
      - ./certs:/certs
      - ./auth:/auth
      - registry-images:/var/lib/registry
    environment:
      - SETTINGS_FLAVOR=local
      - SEARCH_BACKEND=sqlalchemy
      - REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt
      - REGISTRY_HTTP_TLS_KEY=/certs/domain.key
      - REGISTRY_AUTH=htpasswd
      - REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm
      - REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd
      - REGISTRY_HTTP_SECRET=mySuperSecretSecret

volumes:
  registry-images:
    driver: local

AWS S3 Storage Backend

Create a folder called certs within the folder where you store the script below. That folder should contain the certificate (domain.crt) and key (domain.key) provided by a CA.

#!/bin/bash

CONTAINER_NAME="registry"
IMAGE_NAME="registry:2"

docker pull $IMAGE_NAME

docker kill $CONTAINER_NAME
docker rm $CONTAINER_NAME

docker run \
  -e SETTINGS_FLAVOR=s3 \
  -e AWS_BUCKET=xxxxxxxxxxxxxxxxxx \
  -e STORAGE_PATH=/registry \
  -e AWS_KEY="xxxxxxxxxxxxxxxxxx" \
  -e AWS_SECRET="xxxxxxxxxxxxxxxxxx" \
  -e SEARCH_BACKEND=sqlalchemy \
  -e REGISTRY_HTTP_SECRET=mySuperSecretSecret \
  -v "$(pwd)/certs":/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  -p 5000:5000 \
  --restart=always \
  --name "$CONTAINER_NAME" \
  -d $IMAGE_NAME

I got an error message that my certificates were signed by an unrecognised authority when using my StartSSL certificates. I resolved this by appending the sub.class1.server.ca.pem to the end of the .crt file's contents.

Security

Authentication

The easiest way to secure your registry is to add some basic login authentication. This can be done by adding a folder called auth which contains an htpasswd file of credentials.

Then generate the credentials file with (after substituting the variables):

USERNAME="programster"
PASSWORD="mySuperAwesomePassword"

docker run \
  --entrypoint htpasswd \
  registry:2 -Bbn $USERNAME $PASSWORD >> auth/htpasswd

If that doesn't work for whatever reason (e.g. newer registry images may not include the htpasswd binary), you can manually run the htpasswd command outside the Docker container: htpasswd -Bbn $USERNAME $PASSWORD >> auth/htpasswd

Then add these parameters to your docker run command before re-deploying.

...
-v `pwd`/auth:/auth \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
...

Htpasswd Line Generator

I want to set up CI/CD pipelines with each project using its own username/password, so I created the following script for taking a username, and generating a random password and htpasswd line:

#!/bin/bash
echo -n "Enter a username: "
read USERNAME

PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
HTPASSWD_LINE=$(htpasswd -Bbn "$USERNAME" "$PASSWORD")

echo "username: $USERNAME"
echo "password: $PASSWORD"
echo "htpasswd Line: $HTPASSWD_LINE"

If you find that the htpasswd line is empty, be sure to install the apache2-utils package. The random password is made to be alphanumeric so that you can mask it in GitLab CI/CD.

Test It - Login

You can immediately test that your authentication configuration works by trying to log in. This has the added bonus of ensuring you don't have to log in again later when you wish to push images to the registry.

docker login my.registry.com:5000

You will be prompted for your username and password.

Firewall

If you need your registry to be accessible without login but still be available over the internet, you need to use a firewall. The easiest solution is to use a cloud provider that offers a firewall service capable of allowing only certain IPs; AWS does this with its security groups.

If you want a solution that works using Linux iptables, you should probably set up a port-forwarding server that acts as a firewall in front of the registry, with the registry itself behind a NAT. For example, the registry server could sit behind a NAT so that it is inaccessible to anybody, but be connected to the proxy server via a VPN. The reason we can't just use local iptables rules on the registry host itself is that Docker containers completely circumvent those rules unless you take care to work around the issue.

Using The API

Listing Images

If you wish to find out what images are in your registry, you can go to the following URL in your browser:

https://docker-registry.mydomain.com:5000/v2/_catalog

You will receive a JSON response with the following structure:

{
  "repositories": [
    "image1",
    "image2",
    "image3"
  ]
}

We are actually just simulating an API GET request by using our browser. You could instead send an API GET request with the HTTP basic auth credentials to get the same result.
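If you want to consume this endpoint from a script, you could fetch it with curl and basic auth, then split out the repository names. A minimal sketch (the registry URL and credentials are placeholders, and the parsing relies on the simple response structure shown above, so it avoids needing jq):

```shell
#!/bin/bash
# To fetch the catalog for real, you would run something like
# (placeholder URL and credentials):
#   RESPONSE=$(curl -s -u programster:mySuperAwesomePassword \
#     https://docker-registry.mydomain.com:5000/v2/_catalog)

# For illustration, use the example response structure from above:
RESPONSE='{"repositories":["image1","image2","image3"]}'

# Turn the JSON array into one repository name per line:
echo "$RESPONSE" \
  | sed 's/.*\[//; s/\].*//' \
  | tr ',' '\n' \
  | tr -d '"'
```

If you have jq installed, `jq -r '.repositories[]'` would be a more robust way to do the same thing.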

Listing Image Tags

After you have your list of images (above), you can list the tags available for that image:

https://docker-registry.mydomain.com:5000/v2/{IMAGE_NAME}/tags/list

You will get a JSON response similar to:

{
  "name": "my-image-name",
  "tags": [
    "latest"
  ]
}

Deploying A Pull Through Cache / Proxy

The content above was all about showing you how to set up the registry as an area for storing private images. This section will be about setting it up as a pull-through cache that will store images you try to pull from the Docker Hub. E.g. if you have a CI/CD pipeline that is constantly pulling an Alpine or Debian image, it is probably better if it pulls from a cache at gigabit speeds on your home/office network rather than trying to download from the internet. This also dramatically reduces the chances that you will be affected by the Docker Hub rate limits.

I originally tried setting up my private Docker registry as a pull-through cache as well, but it would not work with HTTP basic auth in place. Thus I would recommend setting up the pull-through cache as a separate server/service. Alternatively, you could leave HTTP basic authentication off and rely on firewall-based security.

services:
  registry:
    container_name: registry
    image: registry:2
    restart: always
    ports:
      - "5000:5000"
    volumes:
      - ./certs:/certs
      - registry-images:/var/lib/registry
    environment:
      - SETTINGS_FLAVOR=local
      - SEARCH_BACKEND=sqlalchemy
      - REGISTRY_HTTP_TLS_CERTIFICATE=/certs/site.crt
      - REGISTRY_HTTP_TLS_KEY=/certs/private.pem

      # Settings for setting up as a proxy/cache. Note: do not wrap the
      # values in quotes here, or the quotes become part of the value.
      - REGISTRY_HTTP_HOST=https://docker-registry.mydomain.com:5000
      - REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io

volumes:
  registry-images:
    driver: local

Be sure to change the value of REGISTRY_HTTP_HOST to the fully qualified domain name of your registry server, which needs to be the same as what appears on the TLS certificates.

Configure Docker To Use The Proxy

After having deployed your registry to act as a pull-through cache/proxy, you need to configure your Docker hosts to use it for pulling images. Luckily this is as easy as creating/editing the /etc/docker/daemon.json configuration file:

sudo editor /etc/docker/daemon.json

... and specifying the mirrors like so:

{
  "registry-mirrors": ["https://docker-registry.mydomain.com:5000"]
}

The file will probably not exist already, so you will need to create it.
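A typo in daemon.json will prevent the Docker daemon from starting, so it is worth sanity-checking the file before restarting. A quick sketch (writing to a local file here for illustration; the real file lives at /etc/docker/daemon.json and needs sudo to edit):

```shell
#!/bin/bash
# Write the mirror configuration (illustrative local path; the real
# file is /etc/docker/daemon.json).
cat > daemon.json << 'EOF'
{
  "registry-mirrors": ["https://docker-registry.mydomain.com:5000"]
}
EOF

# Validate that the file is well-formed JSON before restarting Docker.
# A non-zero exit code here means the daemon would fail to start.
python3 -m json.tool daemon.json
```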

Then restart the daemon for the changes to take effect:

sudo service docker restart

Optional - Authenticated Docker Hub User

If you wish for this registry proxy to use a registered user for accessing the Docker Hub, then provide the environment variables for that user's username and password as shown below:

    - REGISTRY_PROXY_USERNAME=my.email@gmail.com
    - REGISTRY_PROXY_PASSWORD=mySuperSecurePassword

This means that the rate limits will be higher, with 200 pulls per 6 hour period instead of 100. However, be warned that this will also mean that this pull-through cache has access to any private images you put on the Docker Hub using that account. This means that if there is no firewall/NAT protecting the registry (which doesn't have authentication), then anyone on the internet will be able to pull your private images through this proxy. Thus, I would recommend only using a registered user if you have a firewall in place or if you are setting this up with a user account that does not have any private images, and never will.

This only specifies the user that the registry will access the Docker Hub with, and has nothing to do with authenticating with this registry from elsewhere using the docker login command.

Garbage Collection

It appears that the registry does not automatically prune old images, so you will need to do this yourself. You can send API requests to the registry to delete certain images, but you probably just want to remove all unreferenced blobs like so:

docker exec registry \
  bin/registry garbage-collect \
  /etc/docker/registry/config.yml
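Since this pruning is manual, you may wish to schedule it. For example, a crontab entry like the following (hypothetical schedule of 03:00 every Sunday; add it with `sudo crontab -e`) would run the command above periodically:

```shell
# Run registry garbage collection at 03:00 every Sunday.
0 3 * * 0 docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml
```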


Last updated: 8th August 2024
First published: 16th August 2018