I am happy to announce Puffin, a project I have been working on since December.
The idea behind it is to create a lightweight web application catalog based on Docker containers, offering users a smooth experience à la mobile app store.
The reason I think it’s interesting is that containers allow packing hundreds of relatively well-isolated applications on a single server, which could bring the price of hosting them close to zero. In addition, easy-to-use orchestration technology lets developers describe complex applications in terms of microservices.
The whole thing is free / open source and I am hoping to build a small community around it – see README and CONTRIBUTING for details.
Docker Compose is great, among other things, for demoing your web applications. It allows you to have a consistent runtime environment, download dependencies without polluting the host system and automatically start external services like databases, search engines, mail servers, message queues, caches, etc. Many projects put a docker-compose.yml configuration file in the source repository root, to be able to start the app by just typing:
docker-compose up
However, during development, when you are constantly changing and debugging the code, it’s sometimes useful to run the app natively outside of Docker, while still running its dependencies via Docker. This can be achieved in the same docker-compose.yml file by adding a special deps service (or whatever you want to call it) that will just start the dependencies and exit. It can be invoked as follows:
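For illustration, such a deps service could look roughly like this (the service names, images and links below are made up for this example, they are not Puffin’s actual configuration):

```yaml
db:
  image: postgres

cache:
  image: redis

deps:
  image: busybox
  command: /bin/true   # do nothing and exit immediately
  links:               # starting deps brings these services up first
    - db
    - cache
```

Invoking it with, for example, docker-compose run deps brings up db and cache and returns immediately, leaving the dependencies running while you run the application itself natively.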
I am using this technique in my Puffin project – see my docker-compose.yml for an example.
Edit: I have changed how it’s done in Puffin, since I needed more configuration options. The link above points to the original file.
Some time ago I needed to update my CV / resume because I was searching for a new job. When I looked at my current one, created many years ago in Microsoft Word and later adapted to Open / Libre Office, I realized that it looked pretty outdated. Its maintenance is also time-consuming – it is based on a big fat table, page breaks appear in unexpected places and there is no option to conditionally show or hide parts of the document, so I need to maintain multiple versions. While I guess solutions to these problems exist in LibreOffice, I wanted to solve them in a non-WYSIWYG manner to gain greater control over the PDF generation process.
The resulting project is called CV; the source code can be found on GitHub and here’s an example document generated by it.
I styled the output after this website, which is powered by the Twenty Thirteen WordPress theme. I don’t think it’s particularly pretty, but the concept is interesting. For consistent, scriptable rendering I used wkhtmltopdf, which uses WebKit as its rendering engine.
I updated my two old, but relatively successful Java projects – segment and mALIGNa:
- migrated from SourceForge to GitHub
- migrated mALIGNa from ancient Ant to slightly less ancient Maven
- changed the root library package from
- prepared new releases 2.0.0 and 3.0.0 respectively, which are available on Maven Central under
Initially I wanted to write a detailed tutorial based on what I did a couple of months ago, but it turns out it’s no longer necessary. Docker Machine lets you configure your own server using a single command:
docker-machine create -d generic --generic-ip-address [ip] \
--generic-ssh-user [user] --generic-ssh-key ~/.ssh/[key] [name]
– [name]: the name you’d like to give your server in Docker
– [ip]: public IP address of your server
– [user]: user login on your server
– [key]: user private key, for example id_rsa or a key file generated by your hosting service
You may also need to update your server’s firewall configuration by opening port 2376 for Docker daemon, along with any other ports the application is using (e.g. HTTP 80).
Now you can activate your server in Docker:
eval "$(docker-machine env [name])"
and you can run any docker image on it, for example nginx (adjust host port number as necessary):
docker run --name nginx1 -d -p 80:80 nginx
Now visit your server in a web browser and you should see the familiar “Welcome to nginx!” page.
In my latest project I try to use the same application server in the development and production environments (for simplicity, easier configuration, faster bugfixing, etc.). To achieve that I switched from the embedded Flask / Werkzeug server in development and Apache mod_python in production to the embeddable, production-ready Waitress server.
One of the features I missed was automatically restarting the server whenever the code changes. I asked about this feature on GitHub and very helpful community members suggested various solutions, but none of the answers was satisfactory, so I decided to implement my own.
First I checked how other Python servers tackle this problem, and all the implementations I came across work by spawning a monitor process (not a thread!) with the same parameters as the main process, but with a special environment variable to distinguish between them. Examples:
* Werkzeug Reloader – code
* Pyramid pserve – code
This can be confusing for a programmer when the server is embedded inside the application, because any code before entering the main event loop (updating the database schema, opening a config file, displaying a “Starting…” message, etc.) will be executed twice. It seems much cleaner to me to use a dedicated program that monitors the server process and restarts it when necessary.
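The pattern boils down to something like this (a minimal sketch of my own – the APP_RELOADER_CHILD variable name is made up, though the exit-code-3 restart convention is the one Werkzeug actually uses):

```python
import os
import subprocess
import sys


def run_with_reloader(main):
    # The child process recognizes itself via the environment variable
    # and runs the actual server code.
    if os.environ.get("APP_RELOADER_CHILD") == "1":
        main()
        return
    # The parent process only supervises: it re-executes itself as a child
    # and restarts it whenever it exits with the "please restart" code.
    while True:
        env = dict(os.environ, APP_RELOADER_CHILD="1")
        code = subprocess.call([sys.executable] + sys.argv, env=env)
        if code != 3:  # any other exit code means a real shutdown
            sys.exit(code)
```

Note that everything before the run_with_reloader call runs in both the parent and the child – which is exactly the double-execution problem described above.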
I found some generic utilities that can do that (e.g. inotifywait, the watchmedo trick), but none of them behaves exactly how I want, so I created reload.
It monitors the current directory and its subdirectories for any changes, ignoring paths specified in the .reloadignore file as regular expressions. Perhaps I will add support for reading .gitignore and other files later.
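The ignore matching can be sketched in a few lines (this is my own illustration of the idea, not reload’s actual code):

```python
import re


def load_patterns(text):
    # One regular expression per non-empty line of .reloadignore.
    return [re.compile(line.strip()) for line in text.splitlines() if line.strip()]


def is_ignored(path, patterns):
    # A path is ignored if any pattern matches somewhere inside it.
    return any(p.search(path) for p in patterns)
```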
In the end the implementation turned out to be pretty simple. reload is programming language and server independent, and it can be used when developing with anything that restarts reasonably quickly.
Logging is always a compromise between storing the maximum amount of context information and maintaining good performance and low disk usage. In other words, some errors occur only in the production environment or are very difficult to reproduce locally, but you can’t store all debug messages on production systems.
I’d like to propose an alternative solution to this dilemma: BurstLogging.
The idea is simple – log only the debug messages that were created shortly before an error message was logged.
This is achieved by storing all logs in a buffer, logging only informational messages during normal operation and dumping all buffered messages when an error occurs. Chronological order of messages is preserved (debug messages that are older than the currently logged info message are dropped) and there is no huge performance penalty (messages are formatted only when they are actually emitted).
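As an illustration of the buffering logic, here is my own sketch in terms of the standard logging module – BurstHandler is a made-up name, not BurstLogging’s actual API:

```python
import logging
from collections import deque


class BurstHandler(logging.Handler):
    """Buffer debug records and flush them only when an error arrives.
    (A sketch of the idea, not the actual BurstLogging implementation.)"""

    def __init__(self, target, capacity=100):
        super().__init__(level=logging.DEBUG)
        self.target = target                  # the handler that actually writes
        self.buffer = deque(maxlen=capacity)  # recent debug records

    def emit(self, record):
        if record.levelno >= logging.ERROR:
            # An error: dump the buffered debug context, oldest first,
            # then the error record itself.
            while self.buffer:
                self.target.handle(self.buffer.popleft())
            self.target.handle(record)
        elif record.levelno >= logging.INFO:
            # Normal operation: emit the message and drop older debug
            # records, so chronological order is preserved.
            self.buffer.clear()
            self.target.handle(record)
        else:
            # Debug: keep it around in case an error follows shortly.
            self.buffer.append(record)
```

Because log records, not formatted strings, are buffered, the formatting cost is only paid for messages that actually get written.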
The current implementation is written in Python, but it shouldn’t be difficult to port it to other programming languages or to implement a language-agnostic solution communicating via a port or a pipe.
The project is really new and it hasn’t been used in production yet. Please tell me what you think about this idea.
Agile is less about doing things right than about doing the right things.
I managed to replace the upper case on a MacBook Air using this excellent manual from iFixit and saved about 400€. I thought it would be hard, but since everything is integrated, I found it quite easy. The downside is that replacement parts are quite expensive.
A while ago I closed my Facebook and LinkedIn accounts. These services were simply not bringing me any value. Apart from that, I don’t like the way they operate, gathering as much data about me as possible (for example, when you install their app on your phone, it tries to upload your entire address book with names, phone numbers and emails to a remote server) and manipulating what they show you (payment-driven newsfeed, targeted ads).
It doesn’t mean that I’ve given up social networking altogether, though. I am following a very interesting federated social networking standardisation initiative at the W3C Social Web Working Group, chaired by Tantek Çelik (of Indie Web Camp) and Evan Prodromou (of pump.io).
For my part, after unsuccessfully trying to deploy my own pump.io instance, I joined Diaspora* – you can find my profile here.