Programster's Blog

Tutorials focusing on Linux, programming, and open-source

Ubuntu - Reverse Proxy Dockerized Websites


You have converted your websites to use Docker, but want to run them all on a single VPS, either to save money or to get better performance by using a single large instance rather than lots of small ones. Using a single instance restricts you to just one public IP, yet all your containers need access to ports 80/443.

If one of your websites gets "hacked", then Google may flag your IP, causing users to see an alarming warning message whenever they try to visit any of your sites in the Chrome/Chromium browsers. Other browsers may behave similarly (I've seen this in Firefox, and I don't support IE).

Side Bonus

The nice thing about this solution is the fact that it allows you to easily "migrate" your more demanding websites to another VM later if you desire, without any downtime or waiting for DNS propagation. This is because you can just change the IP your reverse proxy points to and the effect will be immediate!


Install LXC

Docker no longer requires LXC, but this tutorial does.

sudo apt-get install lxc -y
echo 'DOCKER_OPTS="-e lxc"' | sudo tee /etc/default/docker
sudo service docker restart

Use the LXC Bridge

LXC will create a bridge on the 10.0.3.x network, where x represents the host part of the address. This allows you to create containers with IPs between 10.0.3.2 and 10.0.3.254 (10.0.3.1 is taken by the bridge itself, which acts as the gateway). For this tutorial you will have to keep track of which IPs you have assigned, and record them in the scripts below.
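Since you have to allocate the 10.0.3.x addresses yourself, a small helper makes the bookkeeping less error-prone. The sketch below is my own invention (the tracking-file approach and function name are not part of LXC or Docker); it assumes the default lxcbr0 subnet of 10.0.3.0/24 with 10.0.3.1 as the gateway.

```shell
#!/bin/bash
# Hypothetical helper: keeps a record of assigned container IPs in a
# plain-text file and prints the next free address on the lxcbr0
# subnet (10.0.3.0/24, gateway 10.0.3.1, so usable IPs are .2-.254).

next_container_ip() {
    local tracking_file="$1"
    touch "$tracking_file"
    local last_octet candidate
    for last_octet in $(seq 2 254); do
        candidate="10.0.3.$last_octet"
        # -x matches whole lines, so 10.0.3.2 doesn't match 10.0.3.25
        if ! grep -qx "$candidate" "$tracking_file"; then
            echo "$candidate" >> "$tracking_file"
            echo "$candidate"
            return 0
        fi
    done
    echo "No free IPs left in 10.0.3.0/24" >&2
    return 1
}

# Example usage with a throwaway tracking file:
TRACKING_FILE=$(mktemp)
FIRST_IP=$(next_container_ip "$TRACKING_FILE")
SECOND_IP=$(next_container_ip "$TRACKING_FILE")
echo "$FIRST_IP $SECOND_IP"
```

In practice you would point the tracking file at somewhere persistent, such as a file in /var/lib, so the record survives reboots.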

Create Docker Start Script

Normally when you deploy a container, you run something like the following, which can be easily typed into the terminal:

docker run -d -p 80:80 [my-image]

However, you will need to use the following configuration, so I suggest you create a script that you can call later.

docker run \
  --net="none" \
  --lxc-conf="lxc.network.type = veth" \
  --lxc-conf="lxc.network.ipv4 = [container IP between 10.0.3.2 - 10.0.3.254]/24" \
  --lxc-conf="lxc.network.ipv4.gateway = 10.0.3.1" \
  --lxc-conf="lxc.network.link = lxcbr0" \
  --lxc-conf="lxc.network.name = eth0" \
  --lxc-conf="lxc.network.flags = up" \
  -d [Docker Image ID]

See how you no longer need to specify the ports!

My example:

docker run \
  --net="none" \
  --lxc-conf="lxc.network.type = veth" \
  --lxc-conf="lxc.network.ipv4 = 10.0.3.50/24" \
  --lxc-conf="lxc.network.ipv4.gateway = 10.0.3.1" \
  --lxc-conf="lxc.network.link = lxcbr0" \
  --lxc-conf="lxc.network.name = eth0" \
  --lxc-conf="lxc.network.flags = up" \
  -d `docker images -q | sed -n 1p`

The above will create a container that can be accessed from the host at the IP you assigned it, e.g. http://10.0.3.50/
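To avoid retyping all those --lxc-conf flags for every deployment, you could wrap them in a small launcher. The function below is my own sketch, not part of Docker; it only assembles and prints the docker run command for a given container IP and image, so you can inspect it before piping it to a shell.

```shell
#!/bin/bash
# Hypothetical wrapper that assembles the docker run command used in
# this tutorial for a given container IP and image ID. It prints the
# command rather than executing it; pipe the output to sh to run it.

build_run_command() {
    local container_ip="$1"
    local image_id="$2"
    echo "docker run" \
      "--net=none" \
      "--lxc-conf='lxc.network.type = veth'" \
      "--lxc-conf='lxc.network.ipv4 = ${container_ip}/24'" \
      "--lxc-conf='lxc.network.ipv4.gateway = 10.0.3.1'" \
      "--lxc-conf='lxc.network.link = lxcbr0'" \
      "--lxc-conf='lxc.network.name = eth0'" \
      "--lxc-conf='lxc.network.flags = up'" \
      "-d ${image_id}"
}

# Example: show the command for a container at 10.0.3.50
CMD=$(build_run_command 10.0.3.50 my-image)
echo "$CMD"
```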

Virtualbox Debugging Note

If you are testing this using a host within Virtualbox and you find that your containers do not have internet access, please make sure that the VM's network adapter has its Promiscuous Mode setting set to "Allow All".

Configure Nginx

At this point, your website can be accessed from the host, but not from outside the host. We are going to set up the host so that when it receives a request for one of your domains, it will serve up content from the container IP we specify.

sudo apt-get install nginx -y

Write the shared configuration content to a file. This will be used by each site config.

echo 'proxy_redirect          off;
proxy_set_header        Host            $host;
proxy_set_header        X-Real-IP       $remote_addr;
proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size    10m;
client_body_buffer_size 128k;
proxy_connect_timeout   90;
proxy_send_timeout      90;
proxy_read_timeout      90;
proxy_buffers           32 4k;' | sudo tee /etc/nginx/proxy.conf

Create The Site Config

For each website/domain, you need to create a config file in the /etc/nginx/sites-enabled directory. I name all my config files after the domain to save confusion. E.g. for a website at example.com whose container is at 10.0.3.50, I would set the variables below before running the command that follows (update the values accordingly).

DOMAIN="example.com"
CONTAINER_IP="10.0.3.50"

echo "
server {
    listen 80;

    server_name $DOMAIN;

    access_log  /var/log/nginx/access.log;

    location / {
        proxy_pass      http://$CONTAINER_IP/;
        include         /etc/nginx/proxy.conf;
    }
}

server {
    listen 443 ssl;

    server_name $DOMAIN;

    access_log  /var/log/nginx/access.log;

    ssl_certificate      ssl/$DOMAIN.crt;
    ssl_certificate_key  ssl/$DOMAIN.key;

    ssl_protocols        TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers          HIGH:!aNULL:!MD5:!RC4;
    ssl_prefer_server_ciphers on;
    keepalive_timeout    60;
    ssl_session_cache    shared:SSL:10m;
    ssl_session_timeout  10m;

    location / {
        proxy_pass      https://$CONTAINER_IP/;
        include         /etc/nginx/proxy.conf;
    }
}" | sudo tee /etc/nginx/sites-enabled/$DOMAIN

If you do not need https, then you can remove the second server block. We have two server blocks instead of combining them so that http requests are passed through as http and https requests are passed through as https. Thus the backend site knows what type of connection is being used. Otherwise you could suffer from infinite redirects when you force https, or a security issue where the connection is not encrypted until it hits the reverse proxy, in which case you might as well not be using encryption at all!
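If you did want a single server block for both ports, the conventional alternative (not used in this tutorial) is to terminate everything as http towards the backend and forward the original scheme in a header, which the backend must be configured to trust. A sketch, with example.com and 10.0.3.50 as placeholders:

```nginx
# Combined alternative: one server block for both ports. The backend
# can no longer tell http and https apart from the connection itself,
# so the original scheme is forwarded in the X-Forwarded-Proto header.
server {
    listen 80;
    listen 443 ssl;

    server_name example.com;

    ssl_certificate      ssl/example.com.crt;
    ssl_certificate_key  ssl/example.com.key;

    location / {
        proxy_pass       http://10.0.3.50/;
        proxy_set_header X-Forwarded-Proto $scheme;
        include          /etc/nginx/proxy.conf;
    }
}
```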

SSL Users - Add the Keys

Add the relevant key to /etc/nginx/ssl/[domain].key and certificate to /etc/nginx/ssl/[domain].crt, matching the file names referenced in the site config above.
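If you just want to test the https path before obtaining a real certificate, you can generate a self-signed pair with openssl. The domain below is a placeholder; browsers will warn about the certificate, so this is for testing only.

```shell
# Generate a self-signed key/certificate pair for testing the https
# proxying. "example.com" is a placeholder; substitute your own domain.
DOMAIN="example.com"
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -subj "/CN=$DOMAIN" \
    -keyout "$DOMAIN.key" \
    -out "$DOMAIN.crt"

# Then install the pair where the site config expects it:
# sudo mkdir -p /etc/nginx/ssl
# sudo mv "$DOMAIN.key" "$DOMAIN.crt" /etc/nginx/ssl/
```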

Restart Nginx

For the changes to take effect, you need to reload nginx with the following command:

sudo invoke-rc.d nginx reload

If nginx fails to reload, check the log at /var/log/nginx/error.log. If the last line states could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32, then simply uncomment the server_names_hash_bucket_size line in /etc/nginx/nginx.conf (around line 23 in the default config).
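The uncommenting can be done by hand, or with a one-line sed. The pattern below assumes the directive appears as a "# server_names_hash_bucket_size ..." comment, as in the stock Ubuntu nginx.conf; check your own file first. The sketch operates on a throwaway copy so you can see the effect; point it at /etc/nginx/nginx.conf (with sudo) to apply for real.

```shell
# Demonstrate uncommenting server_names_hash_bucket_size on a demo
# file. For real use: run the sed against /etc/nginx/nginx.conf.
CONF=$(mktemp)
printf '\t# server_names_hash_bucket_size 64;\n' > "$CONF"

# Strip the leading "# " while preserving the indentation.
sed -i 's/^\(\s*\)# *\(server_names_hash_bucket_size\)/\1\2/' "$CONF"

cat "$CONF"
```

Remember to reload nginx afterwards for the change to take effect.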


Make sure that your DNS records point to the public IP of the host, not the container's IP (i.e. not 10.0.3.x)!


Last updated: 25th October 2018
First published: 16th August 2018