I am happy to announce Puffin, a project I have been working on since December.

The idea behind it is to create a lightweight web application catalog based on Docker containers, offering users a smooth experience à la mobile app store.

The reason I think it’s interesting is that containers allow packing hundreds of relatively well isolated applications onto a single server, which could bring the price of hosting them down to almost zero. In addition, easy-to-use orchestration technology lets developers describe complex applications in terms of microservices.

The whole thing is free / open source and I am hoping to build a small community around it – see README and CONTRIBUTING for details.

Start Only Dependencies via Docker Compose

Docker Compose is great, among other things, for demoing your web applications. It allows you to have a consistent runtime environment, download dependencies without polluting the host system, and automatically start external services like databases, search engines, mail servers, message queues, caches, etc. Many projects put a docker-compose.yml configuration file in the source repository root, to be able to start the app by just typing:
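Assuming such a docker-compose.yml sits in the repository root, the invocation is simply:

```shell
# Build (if needed) and start all services defined in docker-compose.yml
docker-compose up
```

(This requires a running Docker daemon, so it is shown here for reference rather than as a runnable snippet.)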

However, during development, when you are constantly changing and debugging the code, it’s sometimes useful to keep the app running natively outside of Docker while still running its dependencies via Docker. This can be achieved in the same docker-compose.yml file by adding a special deps service (or whatever you want to call it) that just starts the dependencies and exits. It can be invoked as follows:
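A minimal sketch of the idea (the service names and images here are illustrative, not Puffin’s actual configuration): the deps service links to the real dependencies and runs a no-op command, so bringing it up starts everything it links to and then exits.

```yaml
db:
  image: postgres
mail:
  image: mailhog/mailhog

# Starting this service pulls in its linked dependencies, then exits.
deps:
  image: busybox
  command: "true"
  links:
    - db
    - mail
```

With that in place, starting only the dependencies is:

```shell
docker-compose up deps
```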

I am using this technique in my Puffin project – see my docker-compose.yml for an example.

Edit: I have changed how it’s done in Puffin, since I needed more configuration options. The link above points to the original file.

Curriculum Vitae with HTML, CSS and JavaScript

Some time ago I needed to update my CV / resume because I was searching for a new job. When I looked at my current one, created many years ago in Microsoft Word and later adapted to Open / Libre Office, I realized that it looks pretty outdated. Also, its maintenance is time-consuming: it is based on a big fat table, page breaks appear in unexpected places, and there is no option to conditionally show or hide parts of the document, so I have to maintain multiple versions. While I guess solutions to these problems exist in LibreOffice, I wanted to solve them in a non-WYSIWYG manner to gain greater control over the PDF generation process.

After a bit of searching, I decided to stick with familiar, ubiquitous, simple yet flexible technology – HTML and CSS with a bit of JavaScript. I found out that this combination has quite powerful typesetting capabilities: you can define manual page breaks, set margins, avoid widows and orphans, and do all sorts of things normally found in text editors. Minor rendering deficiencies, like incorrect hyphenation or a page break right after a header, can be fixed with a bit of JavaScript.
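For illustration, the relevant CSS print properties look something like this (the class names are made up for the example):

```css
/* Force a new page before each main section */
.section { page-break-before: always; }

/* Keep a heading on the same page as the text that follows it */
h2 { page-break-after: avoid; }

/* Don't split a single CV entry across two pages */
.entry { page-break-inside: avoid; }

/* Require at least two lines of a paragraph on each side of a break */
p { widows: 2; orphans: 2; }

/* Page margins for the printed / PDF output */
@page { margin: 2cm; }
```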

The resulting project is called CV, the source code can be found on GitHub, and here’s an example document generated by it.

I styled the output after this website, which is powered by the Twenty Thirteen WordPress theme. I don’t think it’s particularly pretty, but the concept is interesting. For consistent, scriptable rendering I used wkhtmltopdf, which uses WebKit as its rendering engine.

Your Own Docker Machine

Initially I wanted to write a detailed tutorial based on what I did a couple of months ago, but it turns out it’s no longer necessary. Docker Machine lets you configure your own server using a single command:
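With Docker Machine’s generic driver, the command looks roughly like this (substitute the placeholders described below; a sketch, not a full reference):

```shell
docker-machine create --driver generic \
    --generic-ip-address [ip] \
    --generic-ssh-user [user] \
    --generic-ssh-key [key] \
    [name]
```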

– [name]: what you’d like to call your server in Docker
– [ip]: public IP address of your server
– [user]: user login on your server
– [key]: user private key, for example id_rsa or a key file generated by your hosting service

You may also need to update your server’s firewall configuration by opening port 2376 for the Docker daemon, along with any other ports the application uses (e.g. HTTP 80).

Now you can activate your server in Docker:
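This sets the environment variables that point your local docker client at the remote daemon:

```shell
eval $(docker-machine env [name])
```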

and you can run any docker image on it, for example nginx (adjust host port number as necessary):
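For example, mapping the container’s port 80 to the host’s port 80:

```shell
docker run -d -p 80:80 nginx
```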

Now visit your server in a web browser and you should see the familiar “Welcome to nginx!” page.



In my latest project I try to use the same application server in development and production environments (for simplicity, easier configuration, faster bugfixing, etc.). To achieve that, I switched from the embedded Flask / Werkzeug server in development and Apache mod_python in production to the embeddable, production-ready Waitress server.

One of the features I missed was automatically restarting the server whenever the code changes. I asked about this feature on GitHub and very helpful community members suggested various solutions, but none of the answers was satisfactory, so I decided to implement my own.


First I checked how other Python servers tackle this problem, and all the ones I came across work by spawning a monitor process (not a thread!) with the same parameters as the main process, but with a special environment variable to distinguish between them. Examples:
* Werkzeug Reloader (code)
* Pyramid pserve (code)

This can be confusing for a programmer when the server is embedded inside the application, because any code that runs before entering the main event loop (updating the database schema, opening a config file, displaying a “Starting…” message, etc.) will be executed twice. It seems much cleaner to me to use a dedicated program that monitors the server process and restarts it when necessary.
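To make the double execution concrete, here is a minimal sketch of that pattern (the variable name MY_RELOADER_ACTIVE and the restart exit code are made up for the example, not Werkzeug’s or Pyramid’s actual values):

```python
import os
import subprocess
import sys

def run_with_reloader(main):
    # Child process: the marker variable tells us we're the actual server.
    if os.environ.get("MY_RELOADER_ACTIVE") == "true":
        main()
        return
    # Parent (monitor) process: re-run the exact same command with the
    # marker set, and restart it whenever it exits with code 3
    # (the conventional "please restart me" status in this sketch).
    while True:
        env = {**os.environ, "MY_RELOADER_ACTIVE": "true"}
        code = subprocess.call([sys.executable] + sys.argv, env=env)
        if code != 3:
            sys.exit(code)
```

Note that anything the application does before calling run_with_reloader runs in both the monitor and the server process – which is exactly the confusion described above.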

I found some generic utilities that can do that (e.g. inotifywait, the watchmedo trick), but none of them behaves exactly the way I want, so I created reload.



It monitors the current directory and its subdirectories for any changes, ignoring paths specified as regular expressions in the .reloadignore file. Perhaps I will add support for reading .gitignore and other files later.
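The core idea can be sketched as a polling loop over file modification times (a simplified approximation for illustration, not reload’s actual implementation, which I haven’t reproduced here):

```python
import os
import re
import subprocess
import time

def snapshot(root, ignore_patterns):
    """Map each watched file to its mtime, skipping ignored paths."""
    ignored = [re.compile(p) for p in ignore_patterns]
    mtimes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if any(rx.search(path) for rx in ignored):
                continue
            try:
                mtimes[path] = os.path.getmtime(path)
            except OSError:
                pass  # file vanished between walk and stat
    return mtimes

def watch_and_restart(root, command, ignore_patterns=(), interval=1.0):
    """Run `command` and restart it whenever any watched file changes."""
    before = snapshot(root, ignore_patterns)
    proc = subprocess.Popen(command)
    while True:
        time.sleep(interval)
        after = snapshot(root, ignore_patterns)
        if after != before:
            before = after
            proc.terminate()
            proc.wait()
            proc = subprocess.Popen(command)
```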


In the end the implementation turned out to be pretty simple. reload is programming language and server independent, and it can be used when developing with anything that restarts reasonably quickly.


Logging is always a compromise between storing the maximum amount of context information and maintaining good performance and low disk usage. In other words, some errors occur only in the production environment or are very difficult to reproduce locally, but you can’t store all debug messages on production systems.

I’d like to propose an alternative solution to this dilemma: BurstLogging.

The idea is simple – log only some debug messages that were created shortly before an error message was logged.

This is achieved by storing all logs in a buffer, emitting only informational messages during normal operation, and dumping all buffered messages when an error occurs. The chronological order of messages is preserved (debug messages that are older than the currently logged info message are dropped) and there is no huge performance penalty (messages are formatted only when they are emitted).
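The Python standard library’s logging.handlers.MemoryHandler implements a similar (though simpler) idea: it buffers records and flushes them all to a target handler once a message at or above a threshold level arrives. It doesn’t drop stale debug messages or emit info messages immediately the way BurstLogging does, but it shows the buffering pattern:

```python
import io
import logging
import logging.handlers

# The target handler writes to an in-memory stream so we can inspect output.
stream = io.StringIO()
target = logging.StreamHandler(stream)
target.setFormatter(logging.Formatter("%(levelname)s %(message)s"))

# Buffer up to 100 records; flush them all once an ERROR arrives.
buffered = logging.handlers.MemoryHandler(
    100, flushLevel=logging.ERROR, target=target)

log = logging.getLogger("burst-demo")
log.setLevel(logging.DEBUG)
log.addHandler(buffered)

log.debug("query took 3ms")        # buffered, not written yet
log.debug("cache miss for key X")  # buffered, not written yet
log.error("request failed")        # triggers a flush of everything above

print(stream.getvalue())
```

All three messages appear in the output, in chronological order, even though the debug ones would normally never reach the target handler.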

The current implementation is written in Python, but it shouldn’t be difficult to port it to other programming languages or to implement a language-agnostic solution communicating via a port or a pipe.

The project is really new and it hasn’t been used in production yet. Please tell me what you think about this idea.