
Working around the lack of volumes-from in Kubernetes

We have been running containerised apps since early 2014, so in theory, moving to Kubernetes should be easy. The problem is that we currently make use of the --volumes-from Docker flag to share volumes between containers of the same app. For example, a common pattern we use is to put our webapp source code in a data container and then run both nginx and an app server like php-fpm or Node.js with the --volumes-from flag referencing the data container, thereby ensuring all processes have a common view of the source code. This is very useful for dynamically generated asset files such as CSS: if the PHP container creates a CSS file, the nginx container can serve it because they share the same volume. Kubernetes doesn't have --volumes-from. If you want a single Pod to run both nginx and php-fpm containers, how do you share data between them in a way where the data is visible at container launch AND can be modified by any container such that the others see the changes immediately?

One approach is to copy the data you want to share (e.g. source code) to a volume in an initContainer, so that when the app containers start they can mount the volume and share it. We use an emptyDir volume, and this is what it looks like:
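A sketch of such a Pod spec, assuming the source code lives at /app inside a hypothetical app image (image names and paths here are illustrative, not our actual config):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  volumes:
    - name: webroot
      emptyDir: {}          # shared scratch volume, lives as long as the Pod
  initContainers:
    - name: copy-source
      image: registry.example.com/webapp:latest   # hypothetical image holding the source code
      command: ["sh", "-c", "cp -a /app/. /webroot/"]
      volumeMounts:
        - name: webroot
          mountPath: /webroot
  containers:
    - name: php-fpm
      image: php:7.2-fpm
      volumeMounts:
        - name: webroot
          mountPath: /var/www/html
    - name: nginx
      image: nginx:1.15
      volumeMounts:
        - name: webroot
          mountPath: /var/www/html
```

Because both app containers mount the same emptyDir, a CSS file written by php-fpm is immediately visible to nginx, just as with --volumes-from.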

This mimics the old --volumes-from quite well, but it makes the startup time of the Pod a little slow: if your app is 300M, then you're copying 300M to disk in the init phase for each Pod. This is not a problem for a few Pods, but if you run dozens of Pods, or CronJobs that run every minute, the disk I/O is noticeable. We found we were running out of AWS EBS disk I/O credits due to all this moving of bits about.

One alternative idea we came up with is to use "Docker-outside-of-Docker" (DooD) to bake the app together with its data as a local Docker image. This way, the app can start very quickly on subsequent startups without needing to copy anything to disk. DooD is the name given to the technique of mounting the Docker socket into a container so you can run docker commands and get the same results as if you ran them directly on the host. In our case, it allows us to run docker build from within a container running in Kubernetes. This is what it looks like in a Kubernetes CronJob:
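A sketch of such a CronJob, using the batch/v1beta1 API current at the time; the image names, tag, and command are illustrative stand-ins for our real ones:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: app-cron
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          volumes:
            - name: docker-socket
              hostPath:
                path: /var/run/docker.sock   # DooD: the host's Docker socket
          initContainers:
            - name: builder
              image: docker:18.05
              command:
                - sh
                - -c
                # Pipe an inline Dockerfile to docker build (no Dockerfile on disk).
                # The result is an image in the node's local Docker cache.
                - |
                  printf 'FROM registry.example.com/webapp-data:latest\n' \
                    | docker build -t webapp-baked:latest -
              volumeMounts:
                - name: docker-socket
                  mountPath: /var/run/docker.sock
          containers:
            - name: cron-task
              image: webapp-baked:latest
              imagePullPolicy: Never     # use only the locally built image
              command: ["php", "/app/cron.php"]
```

Once the image exists in the node's local cache, subsequent runs of the builder are fast and the main container starts without any copying.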

The initContainer runs a docker build using an inline, self-created Dockerfile. This creates a local Docker image that can be used the next time the cron runs. Note that mounting the Docker socket into a Pod is not best practice: it is akin to running in "privileged mode", so it may not be the best approach for you.

Notes:

  • In our use case, we didn't want to push the image to a Docker registry; we wanted to keep it only on the local node to avoid network delays. This is why the container uses imagePullPolicy: Never.
  • The builder container can use an official Docker image which has the docker binary in it, e.g. docker:18.05
  • We pass a string straight to docker build, avoiding the need for a Dockerfile on disk
  • If you need to pull from a private Docker registry as part of your docker build, the normal Kubernetes imagePullSecrets doesn't work. We found a workaround where we could mount a Docker authentication secret into our own initContainer. This is what our secret volume looked like, the trick being to use a key and path:
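A sketch of such a secret volume, assuming a Secret named docker-auth holding registry credentials under the key .dockerconfigjson (the secret and volume names are illustrative):

```yaml
volumes:
  - name: docker-config
    secret:
      secretName: docker-auth
      items:
        - key: .dockerconfigjson   # key inside the Secret
          path: config.json        # filename the docker CLI looks for
```

Mounting this volume at /root/.docker in the builder initContainer places the credentials at /root/.docker/config.json, where the docker binary picks them up when pulling the private base image during the build.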

I’m interested to know if folks out there have any other methods to work around the lack of volumes-from in Kubernetes?

