When going from an established company to a startup like ours, a lot of things change. For the most part for the better (more freedom! flexible working hours! no meetings!), but some changes are not as good. One bad thing is that you become fully responsible for your own IT environment. That’s not to say that fiddling with your IT setup is boring, rather the opposite. But it also takes quite a bit of time if one wants to do it properly, and it’s probably not your core business. One of the things I really wanted to avoid was the fragile setup that is often found at companies with few employees, usually revolving around a computer in a room or under someone’s desk, full of Post-It notes saying “Do NOT touch!”, “Don’t turn off! Critical!” etc.

When looking at deployed servers, a few things become critical to take care of:

  • Development of new features.
  • Backup of content.
  • Security and software upgrades.

[Figure: VMs vs Containers. Image taken from http://www.solidfire.com/]

Done improperly, you’ll end up with a server that is critical for production and hosts all the services you need. Upgrades become a chore, because you can’t take the machine offline, and dependencies start to conflict between different pieces of software, so you don’t know what you can safely install or uninstall. No good.

How do we solve this problem at Warpzone Studios? Every developer who hasn’t been living under a rock during the last three years knows the answer: virtualization through containers. More lightweight than actual virtual machines, they solve all our needs – close to the hardware, easy upgrades, isolated environments. A dream come true!


We use Docker’s script format (Docker easily has the largest user base among container platforms at the moment) for our servers. The script, placed in a file called “Dockerfile”, can look like the following:
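
The original Dockerfile isn’t reproduced here, so the following is a minimal sketch reconstructed from the description below – the maintainer line, user setup and exact commands are assumptions:

```dockerfile
# Sketch of a Dockerfile for a BIND name server; details are assumptions
# reconstructed from the description in this post.
FROM debian:jessie

MAINTAINER you@example.com

# Add a 'bind' system user and install the latest 'bind9' package
RUN useradd --system bind || true && \
    apt-get update && \
    apt-get install -y bind9 && \
    rm -rf /var/lib/apt/lists/*

# DNS serves requests on port 53 (both TCP and UDP)
EXPOSE 53/tcp 53/udp

# Run 'named' in the foreground as the unprivileged 'bind' user
CMD ["/usr/sbin/named", "-g", "-u", "bind"]
```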


This script downloads the latest release of Debian (pinned to the ‘jessie’ release, also known as Debian 8, currently 8.6), flags me as maintainer, adds a user called “bind”, downloads the latest ‘bind9’ software, opens up port 53 and finally runs ‘named’ (short for ‘name daemon’, a daemon able to serve DNS requests). Pretty simple stuff, but oh so awesome. Now that we have a Dockerfile, we can build an “image” (a prebuilt binary saved locally) from it. We can then instantly launch as many “container” instances from that image as we’d like. Let’s see how we fare against the three bullet points I brought up earlier:

  • Development of new features – Easy, just edit the Dockerfile. The Dockerfile is under version control, so all changes are tracked. Servers are isolated from each other, so there should be no interference to consider.
  • Backup of content – Using containers doesn’t really solve backup of user content, but the server configuration is under version control. We’ll look into backup of user content in a future blog post.
  • Security and software upgrades – Super simple. Rebuild the image from the Dockerfile and you have all the latest changes. You do need to boot new containers from this image, so you’ll have a few seconds of downtime per server, but that shouldn’t be a practical problem.
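
The build-and-replace cycle described above boils down to a handful of Docker commands. A rough sketch – the image tag ‘warpzone/dns’ and container name ‘dns01’ are made-up examples:

```shell
# Build (or rebuild) an image from the Dockerfile in the current directory
docker build -t warpzone/dns .

# Launch a container instance from that image
docker run -d --name dns01 -p 53:53/tcp -p 53:53/udp warpzone/dns

# Upgrade cycle: rebuild the image, then replace the running container
# (this is the few seconds of downtime mentioned above)
docker build -t warpzone/dns .
docker stop dns01 && docker rm dns01
docker run -d --name dns01 -p 53:53/tcp -p 53:53/udp warpzone/dns
```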

There you have it, a brief introduction to containers. Currently these are the servers that we host internally:

  • Name server (so we can refer to things like ‘git01.lan.warpzonestudios.com’ instead of using raw IPv4 addresses).
  • Git server (for version control). Why don’t we use github.com or similar? I’ll get back to that in another blog post.
  • Backup server x 2 (spoiler alert, this might have something to do with how we backup user content).
  • Various WordPress/Joomla/Jenkins/Phabricator/MySQL servers – we’ve experimented with these, but haven’t fully deployed them into production yet. Phabricator and Jenkins will be up and running within a month, though.

The last point highlights something I really like about containerization: Docker hosts a lot of official images themselves, which makes experimenting super easy. Want to try out Jenkins, for example? Run ‘docker run -p 8080:8080 -p 50000:50000 jenkins’ and Docker will download the latest Jenkins image and start a container on your host, accessible on port 8080. Neat!

Further reading for those that are interested:

  • Containers vs VMs

Part 2 can be found here.