Run Your Own Private Docker Registry
Docker Hub is great if you're working on content that you're happy to share, but I want to keep 99% of my work private. Thus, I run my own private Docker registry that only I have access to.
For each of the examples below, simply copy the script onto your docker-host server, update the settings at the top of the script, and execute it with bash. It will run the registry on port 5000. Be aware that you now have to use SSL/TLS certificates.
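If you don't yet have a certificate from a CA, you can sketch things out with a self-signed one. The snippet below is a minimal sketch; my.registry.com is a placeholder for your registry's actual hostname, and note that every Docker client will need to be configured to trust a self-signed certificate.

```shell
# Create the certs folder the scripts below expect, then generate a
# self-signed certificate and key named domain.crt / domain.key.
mkdir -p certs

openssl req \
  -newkey rsa:4096 -nodes -sha256 \
  -keyout certs/domain.key \
  -x509 -days 365 \
  -subj "/CN=my.registry.com" \
  -out certs/domain.crt
```

A CA-issued certificate is much less hassle in practice, since clients will trust it out of the box.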
Local Storage Setup
#!/bin/bash

###### Fill in the settings below ##################

# Specify the absolute path to where you want to store files
LOCAL_STORAGE_DIR="/mnt/registry"

# Give your container a name to recognize it by
CONTAINER_NAME="registry"

########## Don't change below this line ###########

# Stop and remove the existing container if it exists.
docker kill $CONTAINER_NAME
docker rm $CONTAINER_NAME

# registry:2 stores data in /var/lib/registry by default, so mounting
# our storage directory there is all the configuration we need.
docker run \
  -d \
  -v "$LOCAL_STORAGE_DIR":/var/lib/registry \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  -v "$(pwd)/certs":/certs \
  -p 5000:5000 \
  --restart=always \
  --name "$CONTAINER_NAME" \
  registry:2
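Once the registry is up, you can test it by tagging and pushing an image to it. A minimal sketch, assuming the registry is reachable at my.registry.com:5000 (the hostname and image name are placeholders):

```shell
# Pull a small public image, re-tag it against the private registry,
# then push it and pull it back.
docker pull alpine:latest
docker tag alpine:latest my.registry.com:5000/alpine:latest
docker push my.registry.com:5000/alpine:latest
docker pull my.registry.com:5000/alpine:latest
```

If you are using a self-signed certificate, the Docker daemon on each client must be told to trust it, e.g. by placing the certificate at /etc/docker/certs.d/my.registry.com:5000/ca.crt.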
AWS S3 Storage Backend
Create a folder called certs within the folder where you store the script below. Within that folder should be the certificate and key provided by a CA, named domain.crt and domain.key respectively.
#!/bin/bash

CONTAINER_NAME="registry"
IMAGE_NAME="registry:2.1"

docker pull $IMAGE_NAME
docker kill $CONTAINER_NAME
docker rm $CONTAINER_NAME

# registry 2.x is configured through REGISTRY_* environment variables.
# Fill in your bucket name, region, and credentials below.
docker run \
  -e REGISTRY_STORAGE=s3 \
  -e REGISTRY_STORAGE_S3_REGION="xxxxxxxxxxxxxxxxxx" \
  -e REGISTRY_STORAGE_S3_BUCKET="xxxxxxxxxxxxxxxxxx" \
  -e REGISTRY_STORAGE_S3_ROOTDIRECTORY=/registry \
  -e REGISTRY_STORAGE_S3_ACCESSKEY="xxxxxxxxxxxxxxxxxx" \
  -e REGISTRY_STORAGE_S3_SECRETKEY="xxxxxxxxxxxxxxxxxx" \
  -v "$(pwd)/certs":/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  -p 5000:5000 \
  --restart=always \
  --name "$CONTAINER_NAME" \
  -d $IMAGE_NAME
Security - Authentication
The easiest way to secure your registry is to add some basic login authentication.
This can be done by adding a folder called auth, which contains an htpasswd file of credentials.
Then generate the credentials file with the following (after substituting the variables):
USERNAME="programster"
PASSWORD="mySuperAwesomePassword"

docker run \
  --entrypoint htpasswd \
  registry:2 -Bbn $USERNAME $PASSWORD > auth/htpasswd
Alternatively, if you have the htpasswd tool installed locally (from the apache2-utils package on Debian/Ubuntu), you can generate the file directly:

htpasswd -Bbn $USERNAME $PASSWORD > auth/htpasswd
Then add these parameters to your docker run command before re-deploying.
...
  -v "$(pwd)/auth":/auth \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
...
Test It - Login
You can immediately test that your authentication configuration works by trying to log in. This has the added bonus that you won't have to log in again later when you wish to push images to the registry.
docker login my.registry.com:5000
You will be prompted for your username and password.
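You can also poke the registry's HTTP API directly with curl to confirm that authentication is being enforced. A quick sketch, where the hostname and credentials are placeholders for your own:

```shell
# Without credentials, the /v2/ endpoint should return 401 Unauthorized.
curl -i https://my.registry.com:5000/v2/

# With valid credentials, list the repositories the registry holds.
curl -u programster:mySuperAwesomePassword \
  https://my.registry.com:5000/v2/_catalog
```

An empty registry responds to the second request with {"repositories":[]}.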
Security - Firewall
If you need your registry to be accessible without login but still be available over the internet, you need to use a firewall. The easiest solution is a cloud provider with a firewall service that can allow only certain IPs; AWS offers this with its security groups.
If you want a solution that works using Linux iptables, you should probably set up a port-forwarding server that acts as a firewall in front of the registry, with the registry server itself behind a NAT so it is inaccessible to anybody, connected to the proxy server via a VPN. The reason you can't simply rely on local iptables rules on the registry host is that traffic to published Docker ports bypasses the host's normal INPUT-chain rules unless you take care to work around this.
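If you do want to filter on the registry host itself, the supported workaround is Docker's DOCKER-USER iptables chain, which is evaluated before Docker's own forwarding rules. A sketch, where 203.0.113.10 is a placeholder for the one client IP you want to allow:

```shell
# Allow a single trusted IP to reach the registry port, then drop
# everything else. Rules in DOCKER-USER apply to traffic destined for
# published container ports, so they cannot be bypassed.
iptables -I DOCKER-USER -p tcp --dport 5000 -s 203.0.113.10 -j ACCEPT
iptables -I DOCKER-USER 2 -p tcp --dport 5000 -j DROP
```

Note that DOCKER-USER requires a reasonably modern Docker (17.06 or later), and these rules do not persist across reboots unless you save them.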
First published: 16th August 2018