Tag Archives: docker

Backup & Restore Docker Named Volumes

I finally started implementing a backup & restore feature for Puffin. The first issue I encountered was how to back up named volumes.

The official Docker documentation mentions only data volume containers and the --volumes-from option. There’s also the docker cp command, but it requires knowing the path where the volumes are mounted inside the container that uses them.

It turns out it’s pretty easy to do using volume mounts and tar.

To backup some_volume to /tmp/some_archive.tar.bz2 simply run:
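For example (a sketch; the /volume and /backup mount points are arbitrary names chosen for this example):

    # archive the contents of the named volume into a host directory
    docker run --rm \
      -v some_volume:/volume \
      -v /tmp:/backup \
      alpine \
      tar -cjf /backup/some_archive.tar.bz2 -C /volume ./

The named volume and a host directory are both mounted into a throwaway container, and tar runs entirely inside it.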

And to restore run:
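Again a sketch with the same illustrative mount points (the simple rm here clears previous contents but skips hidden files):

    # wipe the volume and unpack the archive back into it
    docker run --rm \
      -v some_volume:/volume \
      -v /tmp:/backup \
      alpine \
      sh -c "rm -rf /volume/* ; tar -xjf /backup/some_archive.tar.bz2 -C /volume"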

I have chosen the alpine image since it’s lightweight and contains everything that’s needed. One potential issue is preserving file ownership, since different users and groups exist in different containers. The classic solution to this problem is to run the tar command using the same image as the one normally using the volume instead of alpine, but what if there’s no tar there? Using numeric owners generally preserves ownership correctly, unless you also use user namespaces. Also, remember to stop all containers using the volume being backed up or restored; otherwise the data on it might get damaged.

Ultimately I wrote my own little volume-backup utility that simplifies the process even further and offers some improvements. Example usage (see the README for more details):
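Roughly like this – the image name and argument order follow my reading of the README, so treat this as a sketch and defer to the repository:

    # back up some_volume to /tmp/some_archive.tar.bz2
    docker run --rm -v some_volume:/volume -v /tmp:/backup loomchild/volume-backup backup some_archive

    # restore it back into the volume
    docker run --rm -v some_volume:/volume -v /tmp:/backup loomchild/volume-backup restore some_archive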

Feel free to check it out and let me know what you think.

Docker Can Create Only 31 Networks per Machine

I have just learned that in Docker there is a limit of 31 networks for the default network driver on a single machine.
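The failure is easy to reproduce with a quick loop (a hypothetical demonstration; the network names are arbitrary and the exact count depends on networks that already exist):

    # keep creating bridge networks until Docker runs out of predefined address pools
    for i in $(seq 1 35); do
      docker network create "test_net_$i" || break
    done

    # clean up afterwards
    docker network ls --filter name=test_net -q | xargs docker network rm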


This is because it uses a hardcoded list of broad network ranges – 172.17-31.x.x/16 and 192.168.x.x/20 – for the bridge network driver. Look into ipamutils and allocator for more details. With the overlay network driver, 64K networks can be created.

There seems to be no way to circumvent this limitation apart from manually specifying subnet ranges for each created network – see the Docker network create subnet option and the Docker Compose network configuration reference. In Puffin, which needs to create a separate network for each application, I implemented a simple address allocator.
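Manually specifying a subnet looks like this (the 10.0.x.x ranges below are arbitrary examples, not values Puffin uses):

    # pick the address ranges yourself instead of relying on the predefined pools
    docker network create --subnet 10.0.1.0/24 app1_network
    docker network create --subnet 10.0.2.0/24 app2_network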

Puffin

I am happy to announce Puffin, a project I have been working on since December.

The idea behind it is to create a lightweight web application catalog based on Docker containers, offering users a smooth experience à la mobile app store.

The reason I think it’s interesting is that containers allow packing hundreds of relatively well isolated applications onto a single server, which could bring the price of hosting them down to almost zero. In addition, easy-to-use orchestration technology lets developers describe complex applications in terms of microservices.

The whole thing is free / open source and I am hoping to build a small community around it – see README and CONTRIBUTING for details.

Start Only Dependencies via Docker Compose

Docker Compose is great, among other things, for demoing your web applications. It gives you a consistent runtime environment, downloads dependencies without polluting the host system, and automatically starts external services like databases, search engines, mail servers, message queues, caches, etc. Many projects put a docker-compose.yml configuration file in the source repository root, to be able to start the app by just typing:
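That is, from the repository root:

    docker-compose up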

However, during development, when you are constantly changing and debugging the code, it’s sometimes useful to keep the app running natively outside of Docker while still running the dependencies via Docker. This can be achieved in the same docker-compose.yml file by adding a special deps service (or whatever you want to call it) that will just start the dependencies and exit. It can be invoked as follows:
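A minimal sketch of the idea (the service and dependency names here are made up; Puffin's actual configuration differs):

    # docker-compose.yml (fragment):
    #
    #   deps:
    #     image: busybox
    #     command: "true"
    #     depends_on:
    #       - db
    #       - queue
    #
    # Starting only that service brings up db and queue, after which
    # the no-op deps container exits immediately:
    docker-compose up deps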

I am using this technique in my Puffin project – see my docker-compose.yml for an example.

Edit: I have changed how it’s done in Puffin, since I needed more configuration options. The link above points to the original file.

Your Own Docker Machine

Initially I wanted to write a detailed tutorial based on what I did a couple of months ago, but it turns out it’s no longer necessary. Docker Machine lets you configure your own server using a single command:
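The command uses the generic driver and looks roughly like this (a sketch reconstructed from the placeholders explained below):

    docker-machine create --driver generic \
      --generic-ip-address [ip] \
      --generic-ssh-user [user] \
      --generic-ssh-key [key] \
      [name]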

Where:
– [name]: what you’d like to call your server in Docker
– [ip]: public IP address of your server
– [user]: user login on your server
– [key]: user private key, for example id_rsa or a key file generated by your hosting service

You may also need to update your server’s firewall configuration by opening port 2376 for the Docker daemon, along with any other ports the application is using (e.g. HTTP 80).
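For example, with ufw (assuming that’s the firewall in use; adjust to your setup):

    sudo ufw allow 2376/tcp   # Docker daemon (TLS)
    sudo ufw allow 80/tcp     # the application's HTTP port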

Now you can activate your server in Docker:
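That is, by loading the machine’s environment into your current shell:

    eval $(docker-machine env [name])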

and you can run any docker image on it, for example nginx (adjust host port number as necessary):
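For instance (the first 80 is the host port):

    docker run -d -p 80:80 nginx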

Now visit your server in a web browser and you should see the familiar “Welcome to nginx!” page.