
Using Nginx to proxy to Docker containers

We run our DeskDonkie webapp in a Docker container, and every deployment starts a new container listening on a port that Docker allocates for us. To switch incoming requests over to a newly deployed container, we use a proxy. There are many proxy options to choose from (an interesting one is Hipache), but we chose nginx because it is a technology already used in our stack and very familiar to us. Our proxy is an nginx running within its own Docker container, listening on the public web ports (80 and 443) and configured to proxy_pass traffic to the correct container via its individual port:
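The original snippet is not shown here, but a minimal sketch of such a server block might look like this (the server name and the upstream port, 49153, are hypothetical — the port is whatever Docker allocated for the current webapp container):

```nginx
server {
    listen 80;
    server_name deskdonkie.example.com;

    location / {
        # Forward requests to the currently deployed webapp container.
        # 49153 is the host port Docker allocated for that container.
        proxy_pass http://127.0.0.1:49153;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```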

How is the proxy_pass updated?

The interesting part is: how do we update the proxy_pass port inside the running proxy container every time a new webapp is deployed?

Idea 1 – We could install SSH within the proxy container. REJECTED. This would complicate our container (e.g. it would then also need to run supervisord and have SSH keys installed). But mostly, it would violate the Docker best practice of having one service per container.

Idea 2 – When we start the proxy container, we could bind-mount the nginx config folder from the host into the container, so we always have access to it and can edit it. REJECTED. If a container relies on mounted configuration, it isn't fully containerised. You have to decide where to put the config on the host and how to add those files for new instances – i.e. you have the same problems as if you ran nginx directly on the host, and you haven't won much by using Docker.

Idea 3 – In the nginx Docker container, expose the nginx config folder using the VOLUME directive (documentation is at http://docs.docker.com/userguide/dockervolumes/). See the last line of our nginx proxy Dockerfile:
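The Dockerfile itself is not reproduced here, but a sketch of what it might look like (the base image and file paths are assumptions — the point is the VOLUME directive on the last line):

```dockerfile
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf

# Expose the nginx config directory as a volume, so other containers
# can reach it with --volumes-from
VOLUME /etc/nginx
```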

Now, when you want to make a change, you can use Docker's --volumes-from feature. For example, you can use sed to update a value in the nginx config of the running proxy container. Amazingly, it is a one-liner to run sed in a new container that has access to the nginx.conf file we want to modify:
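The one-liner is missing from this copy of the post; a sketch of what it plausibly looked like (the container name "proxy", the config path and the old/new ports 49153/49154 are assumptions):

```shell
# Start a throwaway container that mounts the proxy's volumes, and use
# sed to swap the old upstream port for the newly allocated one.
# --rm removes the helper container again once sed has finished.
docker run --rm --volumes-from proxy busybox \
    sed -i 's/127\.0\.0\.1:49153/127.0.0.1:49154/' /etc/nginx/nginx.conf
```

Because the edit happens on the shared volume, the change is visible to the running proxy container as soon as sed exits.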

The above sed command makes a persistent change, but leaves nothing else behind that we need to clean up.

This works well: you have write access to the config files when you need them, but they are not sprayed all over the host when you don't.

Once the proxy config is changed, how do we apply it?

Unlike with a VM, the processes that run under Docker are actually visible to the host. We can use this to send the pid of the nginx proxy a "hang up" signal with kill -HUP (which is the same as kill -1). Nginx nicely reloads its configuration, gracefully hangs up the previous worker processes and brings up new workers with the new configuration. Warning: other daemons behave differently when they receive a SIGHUP. This is an example of finding the proxy pid and asking it to reload:
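The snippet is not preserved here; a sketch of what it plausibly looked like (the "[s]tderr" grep pattern is explained in the next paragraph, and the bracketed first letter keeps grep from matching its own process):

```shell
# Find the pid of the nginx proxy on the host and ask it to reload.
PID=$(ps aux | grep '[s]tderr' | awk '{print $2}')
sudo kill -HUP "$PID"
```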

What we did here is grep the output of ps aux for something that identifies the nginx proxy pid – in this case the string "stderr", because we used it as an argument when we started the proxy nginx, as you can see here:
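The original launch command is not shown in this copy; a hypothetical sketch (image name, container name and the exact -g directives are assumptions — the relevant bit is that "stderr" appears in the process arguments, via nginx's error_log directive, which is what the grep keys on):

```shell
# Start the proxy container; "error_log stderr;" puts the string
# "stderr" into the process arguments visible in ps aux.
docker run -d --name proxy -p 80:80 -p 443:443 \
    our/nginx-proxy nginx -g 'daemon off; error_log stderr;'
```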

Sidenote – Ambassador Pattern

Whilst building the proxy I realised it is actually an example of the ambassador pattern.
Our proxy container encourages service portability rather than hardcoding network links between a service consumer and provider. At the same time it allows us to do several devopsy things which are vital but which the developers of the webapp don't need to concern themselves with. For example, we add functionality to our nginx proxy to redirect non-SSL traffic to the https version of the site, we direct bots to a robots.txt file that Disallows them if we know the host isn't the production site, we add filtering, etc. Ambassador containers are most useful when they have some logic of their own.

Other request routing ideas

The SIGHUP aftermath for nginx won't result in downtime, but for a very short time it might result in 'mixed' environments (containers) being served at the same time.
This doesn't matter much for our app, but it is worth mentioning. One way to attack this problem might be to use an alternative approach to the proxy container idea.

One example is that you could use iptables on the host to route the traffic from a TCP source port to the entrypoint port of the new containerised app, e.g. by adding/replacing a rule for TCP connections in the NEW state. Old connections in the ESTABLISHED or FIN_WAIT states would keep their existing route, and any new connection would be routed instantly to the new container.
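A sketch of what such a rule might look like (the rule number, port and container IP are assumptions, not something the post specifies):

```shell
# Replace NAT rule 1 so that NEW connections to port 80 are DNAT-ed
# to the new container's address; conntrack keeps ESTABLISHED
# connections flowing to the old container until they finish.
sudo iptables -t nat -R PREROUTING 1 \
    -p tcp --dport 80 -m conntrack --ctstate NEW \
    -j DNAT --to-destination 172.17.0.3:8080
```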

Another example would be to work one level upstream. If you use a load balancer (e.g. HAProxy or ELB), you could alter its configuration whenever you have a new containerised app you want to target.

Happy Dockerising.


Update: This is a much better way to get an nginx container to reload its config:
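The updated snippet is missing from this copy; a plausible candidate, given the "simple and readable" description, is sending the signal through Docker itself rather than hunting for the pid on the host (the container name "proxy" is an assumption):

```shell
# Ask Docker to deliver SIGHUP to the container's main process (nginx),
# which makes nginx reload its configuration.
docker kill -s HUP proxy

# An alternative with the same effect, if your Docker has exec:
# docker exec proxy nginx -s reload
```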

I'm not sure in which version of Docker this became available, but I'm glad the solution is simple and readable!
