IT environment, part 2: Practical server setup

In my first blog post I gave a very brief introduction to containerization and why we chose to spend some time on our initial IT setup. In this post, I will talk about how we’ve deployed our first container host server. If you’re used to deploying myriads of servers on your own and can configure network settings in your sleep, this post might be a bit basic for you. However, if you’re like me – some technical know-how, but rather inexperienced with larger network setups – then hopefully this post can help you if you one day decide to deploy your own container host.

First, a brief introduction to the hardware we use while working: two workstations, one dedicated server, one router and finally a NAS device. When it comes to business IT infrastructure, that’s about as simple as it can get. Our first decision was which OS the server should run. While numerous OSes can run Docker/rkt/other container engines nowadays, it’s probably advisable to go with an OS whose sole purpose is to run containers. We quickly went ahead and selected CoreOS.

Installation of CoreOS was a breeze: you simply make a bootable USB stick with the latest ISO, boot from it, then run coreos-install -d /dev/sda -C stable -c ~/cloud-config.yaml. Boom – it reformats your hard drive and you’re up and running, without any prompts about which timezone you’re in, what keyboard layout you use or what the name of your favorite pet is. If this is the first time you try CoreOS, you can skip the -c ~/cloud-config.yaml part – that provides CoreOS with a scripted setup, which we’ll get to in a bit.
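In practice the whole installation boils down to two commands. A rough sketch (assuming the USB stick shows up as /dev/sdb on the machine you prepare it on and that the server’s disk is /dev/sda – double-check with lsblk before running dd):

    # On another machine: write the CoreOS ISO to a USB stick
    sudo dd if=coreos_production_iso_image.iso of=/dev/sdb bs=4M && sync

    # Boot the server from the stick, then install to the local disk
    sudo coreos-install -d /dev/sda -C stable -c ~/cloud-config.yaml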

CoreOS is a slimmed-down Linux distribution. It comes with some essential tools pre-installed, like docker, rkt, vim, ssh and git. It lacks other features though, like a package manager (Whaaat?!?)! CoreOS’s development philosophy is sound, however: keep the host OS minimal, and if you need lots of tools, install them in a container instead of on the host. Another neat feature of CoreOS is that auto-updates are enabled by default. Good – one less thing to think about.


With the OS selected and installed, it was time to choose a container engine. The biggest player seems to be Docker, by far – so that was our initial choice. After using docker for a bit and trying to make it behave the way I wanted, I became a bit skeptical. docker prioritizes the user experience (which is good), but in doing so it does a lot of magic stuff in the background. If you use docker ‘the way it’s meant to be used’, that won’t be a problem – but we tried to push some buttons that weren’t meant to be pushed. Maybe it was due to inexperience, maybe we f*cked something up (probably), but long story short: we went with the underdog instead – rkt. rkt (pronounced “rocket”), in its own words, “follows the Unix tools philosophy: a single binary that integrates with init systems, scripts, and complex devops pipelines. Containers take their correct place in the PID hierarchy and can be managed with standard utilities.” A nice philosophy, if you ask me. Oh, by the way, rkt can run docker images, which means that you can switch between the two until you’ve made your decision.

With the choice of container engine made, it was time to get our hands dirty with the network setup. With both docker and rkt it’s really simple to launch 20 containers that no one but you is aware of, isolated from the world with only a port or two mapped to the host. Super neat if you are a web and/or server developer and need to test complex server setups before they go live. For us, it’s actually the exact opposite we want: containers that each have a fixed IP and, for all intents and purposes, act like physical servers on our network. So how do we accomplish that?

The solution is to use a network bridge. It does require two NICs on the host, but other than that, it’s really straightforward. The key idea is to use one NIC to talk to the host, while the other NIC is used to talk to all containers. CoreOS uses systemd as its init system, so configuring persistent network changes is just a matter of writing a couple of files to the /etc/systemd/network folder. Our setup consists of the following four files and uses two network cards named eno1 (which we assign a static IP) and eno2 (which we use as a bridge to our containers). These names can of course vary between systems and OSes, but you can always run ip link (or ifconfig, where available) to figure out what they are called. Our router can be reached at 10.0.10.1, and we assign eno1 an IP of 10.0.10.2 and eno2 an IP of 10.0.10.3. The “/16” at the end of each address is just the IP address in CIDR notation and indicates that we want a subnet mask of 255.255.0.0 (enabling us to put 65,534 computers on our LAN, which should be enough – for now!). The DNS IPs we use (8.8.8.8 and 8.8.4.4) point to Google’s name servers.
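A sketch of what the four files can look like (the file names and the bridge name rktbr0 are just examples; note that eno2’s address ends up on the bridge itself once eno2 is enslaved to it, and we only give eno1 a default gateway to keep the host’s routing unambiguous):

    # /etc/systemd/network/10-eno1.network – static IP for the host-facing NIC
    [Match]
    Name=eno1

    [Network]
    Address=10.0.10.2/16
    Gateway=10.0.10.1
    DNS=8.8.8.8
    DNS=8.8.4.4

    # /etc/systemd/network/20-rktbr0.netdev – create the bridge device
    [NetDev]
    Name=rktbr0
    Kind=bridge

    # /etc/systemd/network/30-eno2.network – attach eno2 to the bridge
    [Match]
    Name=eno2

    [Network]
    Bridge=rktbr0

    # /etc/systemd/network/40-rktbr0.network – give the bridge its IP
    [Match]
    Name=rktbr0

    [Network]
    Address=10.0.10.3/16
    DNS=8.8.8.8
    DNS=8.8.4.4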

Once we have the bridge up and running and a NIC connected to one end of it, we need to configure rkt to connect its containers to the other end. We do this by adding files in the /etc/rkt/net.d/ folder. The configuration can look like this:
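For example, in /etc/rkt/net.d/10-rktnet.conf (a sketch: the network name rktnet matches the rkt run command below, while the bridge name rktbr0 and the address range handed out by the host-local IPAM plugin are our own choices and should be adapted to your network):

    {
        "name": "rktnet",
        "type": "bridge",
        "bridge": "rktbr0",
        "ipam": {
            "type": "host-local",
            "subnet": "10.0.0.0/16",
            "rangeStart": "10.0.11.1",
            "rangeEnd": "10.0.11.254",
            "gateway": "10.0.10.1",
            "routes": [ { "dst": "0.0.0.0/0" } ]
        }
    }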

Once the bridge is up and running (either by rebooting the server or restarting systemd-networkd after we’ve written the files in /etc/systemd/network), we can start new containers with rkt run --net=rktnet:IP=10.0.11.123 docker://busybox. Now our other computers and the host can reach the container and vice versa. Nice!
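Put together, a quick end-to-end test can look something like this (the --insecure-options=image flag is there because Docker images can’t be signature-verified by rkt, and --interactive just gives us a shell in the container – both are details of our busybox test, not requirements):

    # Pick up the new network files without rebooting
    sudo systemctl restart systemd-networkd

    # Start a throwaway container with a fixed IP on the bridge
    sudo rkt run --interactive --insecure-options=image \
         --net=rktnet:IP=10.0.11.123 docker://busybox

    # From the host or any other machine on the LAN
    ping 10.0.11.123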


But now that we’ve done some configuration on the server, we’d also like to keep it somehow. What if our server crashes or we (for some reason) need to reinstall CoreOS? Here’s where cloud-config.yaml enters the picture. We can simply put everything we just wrote into it, and the next time we install CoreOS we just tell it to read the settings from that file. Following our example, cloud-config.yaml can look like this:
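A trimmed-down sketch (the hostname, user name and SSH key are placeholders, and the networkd units and rkt net config are the same ones we wrote above – only the first two units are spelled out here):

    #cloud-config

    hostname: containerhost

    users:
      - name: admin
        groups:
          - sudo
        ssh-authorized-keys:
          - ssh-rsa AAAA... your-public-key-here

    coreos:
      update:
        reboot-strategy: best-effort
      units:
        - name: 10-eno1.network
          content: |
            [Match]
            Name=eno1

            [Network]
            Address=10.0.10.2/16
            Gateway=10.0.10.1
            DNS=8.8.8.8
            DNS=8.8.4.4
        - name: 20-rktbr0.netdev
          content: |
            [NetDev]
            Name=rktbr0
            Kind=bridge
        # ...the two remaining .network files follow the same pattern

    write_files:
      - path: /etc/rkt/net.d/10-rktnet.conf
        content: |
          { "name": "rktnet", "type": "bridge", "bridge": "rktbr0",
            "ipam": { "type": "host-local", "subnet": "10.0.0.0/16",
                      "rangeStart": "10.0.11.1", "rangeEnd": "10.0.11.254",
                      "gateway": "10.0.10.1",
                      "routes": [ { "dst": "0.0.0.0/0" } ] } }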

As you can see, we can also add default users for the system, set update strategies, mount network shares and lots of other stuff if we want to. Also, once you’ve written your cloud-config.yaml, be sure to validate it, so you know the formatting is correct.
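If you have the file on the host you can validate it there as well; a sketch, assuming the -validate flag of the coreos-cloudinit binary shipped with the CoreOS version we used (there is also an online validator on the CoreOS website):

    coreos-cloudinit -validate --from-file=cloud-config.yaml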

In summary:

  • CoreOS features a super simple install process and comes with both docker and rkt pre-installed.
  • We’ve learned how to configure CoreOS and rkt so we can choose whether we’d like containers to be accessible by computers on our network or not.
  • We’ve written a cloud-config.yaml file (which of course is under version control) which contains our server configuration in case we ever need to reinstall CoreOS.

Hopefully some of this information will be of use to you, in case you ever decide to deploy your own container host.
