My Road to Docker – Part 4: Automated Container Builds with GitLab CI

This post may contain affiliate links. Please see the disclaimer for more information.

This post is part of a series on this project.

In the course of migrating most of the services I run over to Docker containers, I’ve mostly tried to steer clear of building my own images. Thanks to the popularity of Docker this has been pretty easy for the most part: many projects now ship an official Docker image as part of their releases. However, the inevitable happened when I had to build my own image in my recent TT-RSS migration.

The reason for my reluctance to build my own images is not some unfounded fear of Dockerfiles. It’s not building the images that I’m worried about, it’s maintaining them over time.

Every Docker image contains a mini Linux distribution, frozen in time from the day the image was built. Like any distribution of Linux these contain many software packages, some of which are going to need an update sooner or later in order to maintain their security. Of course you could just run the equivalent of an apt upgrade inside each container, but this defeats some of the immutability and portability guarantees which Docker provides.

The solution to this is basically no more sophisticated than ‘rebuild your images and re-deploy’. The problem is that no further help is offered: how do you go about doing that in a consistent and automated way?

Enter GitLab CI

If you have been following this blog you’ll know I’m a big fan of GitLab CI for these kinds of jobs. Luckily, GitLab already has support for building Docker images. The first thing I had to do was get this set up on my self-hosted GitLab runner. This was complicated by the fact that my runner was already running as a Docker container. It’s basically trying to start a Docker-in-Docker container on the host, from inside a Docker container, which is a little tricky (if you want to go further down, Docker itself is running from a Snap on a Virtual Machine! It really is turtles all the way down).

After lots of messing around and tweaking, I finally came up with a runner configuration which allowed the DinD service to start and the Docker client in the CI build to see it:

concurrent = 4
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "MyRunner"
  url = "https://gitlab.com/"
  token = "insert your token"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "ubuntu:18.04"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    environment = ["DOCKER_DRIVER=overlay2", "DOCKER_TLS_CERTDIR=/certs"]
    volumes = ["/certs/client", "/cache"]
    shm_size = 0
    dns = ["my internal DNS server"]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
  [runners.custom]
    run_exec = ""

The important parts here seem to be that the privileged flag must be true in order to allow the starting of privileged containers and that the DOCKER_TLS_CERTDIR should be set to a known location. The client subdirectory of this should then be shared as a volume in order to allow clients to connect via TLS.

You can ignore any setting of the DOCKER_HOST environment variable (unless you are using Kubernetes, maybe?). It should be populated with the correct value automatically. If you set it you will most likely get it wrong and the Docker client won’t be able to connect.

The CI Pipeline

I started the pipeline with a couple of static analysis jobs. First up is my old favourite yamllint. The configuration for this is pretty much the same as previously described. Here it’s only really checking the .gitlab-ci.yml file itself.
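
For reference, a minimal sketch of what such a job might look like (the preflight anchor and the image choice here are assumptions based on the hadolint job below; my real file may differ slightly):

yamllint:
  <<: *preflight
  image: python:3.7
  before_script:
    - pip install yamllint
    - yamllint --version
  script:
    - yamllint .gitlab-ci.yml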

Next up is a new one for me. hadolint is a linting tool for Dockerfiles. It seems to make sensible suggestions, though I disabled a couple of them due to personal preference or technical issues. Conveniently, there is a pre-built Docker image, which makes configuring this pretty simple:

hadolint:
  <<: *preflight
  image: hadolint/hadolint
  before_script:
    - hadolint --version
  script:
    - hadolint Dockerfile
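
For reference, disabling individual rules is just a matter of adding --ignore flags to the hadolint call in the script section above (the rule codes below are only examples, not necessarily the ones I skipped):

hadolint --ignore DL3008 --ignore DL3018 Dockerfile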

Building the Image

Next comes the actual meat of the pipeline, building the image and pushing it to our container registry:

variables:
  IMAGE_TAG: registry.gitlab.com/robconnolly/docker-ttrss:latest

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker info
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  tags:
    - docker

This is relatively straightforward if you are already familiar with the Docker build process. There are a couple of GitLab specific parts such as starting the docker:dind service, so that we have a usable Docker daemon in the build environment. The login to the GitLab registry uses the built-in $CI_BUILD_TOKEN variable as the password and from there we just build and push using the $IMAGE_TAG variable to name the image. Simple!
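
For completeness, the stage ordering this pipeline assumes is along these lines (a sketch; the full file is linked below):

stages:
  - preflight
  - build
  - notify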

The pipeline for this is actually rather simple and boring compared to some of my others!

I also added another pipeline stage to send me a notification via my Gotify server when the build is done. This pretty much just follows the Gotify docs to send a message via curl:

notify:
  stage: notify
  image: alpine:latest
  before_script:
    - apk add curl
  script:
    - |
      curl -X POST "https://$GOTIFY_HOST/message?token=$GOTIFY_TOKEN" \
           -F "title=CI Build Complete" \
           -F "message=Scheduled CI build for TT-RSS is complete."
  tags:
    - docker

Of course the $GOTIFY_HOST and $GOTIFY_TOKEN variables are defined as secrets in the project configuration.

You can find the full .gitlab-ci.yml file for this pipeline in the docker-ttrss project repository.

Scheduling Periodic Builds

GitLab actually has support for scheduled jobs built in. However, if you are running on gitlab.com as I am you will find that you have very little control over when your jobs actually run, since it runs all cron jobs at 19 minutes past the hour. This would be fine for this single job, but I’m planning on having more scheduled builds run in the future, so I’d like to distribute them in time a little better, in order to reduce the load on my runner.

Pipeline triggers are created via the Settings->CI/CD menu in GitLab.

In order to do this I’m using a pipeline trigger, kicked off by a cron job at my end:

# Run TT-RSS build on Tuesday and Friday mornings
16  10 *   *   2,5   /usr/bin/curl -s -X POST -F token=SECRET_TOKEN -F ref=master https://gitlab.com/api/v4/projects/2707270/trigger/pipeline > /dev/null

This will start my pipeline at 10:16am (NZ time) on Tuesdays and Fridays. If the build is successful there will be a new image published shortly after these times.

An Aside: Managing random cron jobs like these really sucks once you have more than one or two machines. More than once I’ve lost a job because I forgot which machine I put it on! I know Ansible can manage cron jobs, but you are still going to end up with them configured across multiple files. I also haven’t found any way of managing cron jobs which relate to a container deployment (e.g. running WordPress cron for a containerised instance). Currently I’m managing all this manually, but I can’t help feeling that there should be a better way. Please suggest any approaches you may have in the comments!

Updating Deployed Containers

I could extend the pipeline above to update the container on my server with the new image. However, this would leave the other containers on the server without automated updates. Therefore I’m using Watchtower to periodically check for updates and upgrade any out of date images. To do this I added the following to the docker-compose.yml file on that server:

watchtower:
    image: containrrr/watchtower
    environment:
      WATCHTOWER_SCHEDULE: 0 26,56 10,22 * * *
      WATCHTOWER_CLEANUP: "true"
      WATCHTOWER_NOTIFICATIONS: gotify
      WATCHTOWER_NOTIFICATION_GOTIFY_URL: ${WATCHTOWER_NOTIFICATION_GOTIFY_URL}
      WATCHTOWER_NOTIFICATION_GOTIFY_TOKEN: ${WATCHTOWER_NOTIFICATION_GOTIFY_TOKEN}
      TZ: Pacific/Auckland
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always

Here I’m just checking for container updates four times a day. However, these are divided into two pairs of sequential checks separated by 30 minutes. This is more than sufficient for this server since it’s only accessible from my local networks. One of the update times is scheduled to start ten minutes after my CI build to pick up the new TT-RSS container. The reason for the second check after 30 mins is to ensure that the container gets updated even if the CI pipeline is delayed (as sometimes happens on gitlab.com).
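
For reference, Watchtower uses a six-field cron expression with a leading seconds field, so that schedule breaks down as follows:

# fields: seconds  minutes  hours  day-of-month  month  day-of-week
#         0        26,56    10,22  *             *      *
# => checks at 10:26, 10:56, 22:26 and 22:56 each day (in the TZ set above)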

You’ll note that I’m also sending notifications from Watchtower via Gotify. I was pleasantly surprised to discover the Gotify support in Watchtower when I deployed this. However, I think the notifications will get old pretty quickly and I’ll probably disable them eventually.

Conclusion

With all this in place I no longer have to worry about keeping my custom Docker images up to date. They will just automatically build at regular intervals and be updated automatically on the server. It actually feels pretty magical watching this all work end-to-end since there are a lot of moving parts here.

I’ll be deploying this approach as I build more custom images. I also still have to deploy Watchtower on all my other Docker hosts. The good news is that this has cured my reluctance to build custom images, so there will hopefully be more to follow. In the meantime, the community can benefit from a well maintained and updated TT-RSS image. Just pull with the following command:

docker pull registry.gitlab.com/robconnolly/docker-ttrss:latest

Enjoy!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.

Self-Hosted RSS with Tiny Tiny RSS in Docker

With the rise of social media, RSS seems to have been largely forgotten. However, there are still those who are dedicated enough to keep curating their own list of feeds and plenty of software to support them. I’ve always been a fan of RSS and believe it’s probably time for a resurgence in use that would free us from our algorithmic overlords. As such I’ve run an instance of Tiny Tiny RSS for several years and the time has come to migrate it to Docker.

It’s relatively unknown that you can still get RSS feeds for most news sources on the Internet. For example, I use TT-RSS to keep up with Youtube and Reddit (just add .rss to the end of any subreddit URL) as well as the usual blogs and news sites.

About Tiny Tiny RSS

Tiny Tiny RSS is a web based RSS application (think Google Reader replacement). It’s PHP based and supports Postgres or MySQL(-like) databases. I’ve been using it for many years and although I’ve tried out other web based RSS readers (such as Miniflux), I’ve never found anything as good as TT-RSS.

My Tiny Tiny RSS install

A particular favourite feature of mine is the ability to generate feeds from any internal view, which makes it great for integrating with other systems which may consume RSS/Atom.

There is also an Android app which is available via the Play Store or F-Droid.

Finding a Tiny Tiny RSS Docker Image

I had initially planned to use the LinuxServer.io TT-RSS image, but it seems to have been deprecated. With a bit of searching I found this repo, however it’s pretty out of date and doesn’t build any more. After looking through my GitLab repos, it turned out I’d already tried upgrading that image as part of one of my previous attempts with Docker. I’ve finished off this migration and made the repo public so everyone can benefit from my efforts. Thanks to some CI magic and GitLab’s built in container registry you can pull the latest version like so:

docker pull registry.gitlab.com/robconnolly/docker-ttrss

Feel free to read through the project README to familiarise yourself with the options available in the image. It’s pretty much as it was in the original repository, so I’ll go through my setup below.

Setup with docker-compose

I’m integrating this with my existing Docker setup via docker-compose and Traefik. As such I added the following to my docker-compose.yml file:

ttrss:
    image: registry.gitlab.com/robconnolly/docker-ttrss:latest
    depends_on:
      - mariadb-ttrss
    environment:
      DB_NAME: ${TTRSS_DB_NAME}
      DB_USER: ${TTRSS_DB_USER}
      DB_PASS: ${TTRSS_DB_USER_PASSWD}
      DB_HOST: mariadb-ttrss
      DB_PORT: 3306
      DB_TYPE: mysql
      SELF_URL_PATH: https://ttrss.example.com
    volumes:
      - /mnt/docker-data/ttrss/plugins:/var/www/plugins.local
    labels:
      - 'traefik.enable=true'
      - "traefik.http.middlewares.ttrss_redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.ttrss_insecure.rule=Host(`ttrss.example.com`)"
      - "traefik.http.routers.ttrss_insecure.entrypoints=web"
      - "traefik.http.routers.ttrss_insecure.middlewares=ttrss_redirect@docker"
      - "traefik.http.routers.ttrss.rule=Host(`ttrss.example.com`)"
      - "traefik.http.routers.ttrss.entrypoints=websecure"
      - "traefik.http.routers.ttrss.tls.certresolver=mydnschallenge"
      - "traefik.http.services.ttrss.loadbalancer.server.port=80"
    networks:
      - external
      - internal
    restart: always

Breaking this down, we first create a new service using my TT-RSS image. We then define a dependency on the database container, which we will create later. The environment configuration uses another file env.sh in which we store our secrets. This is of the form:

export TTRSS_DB_ROOT_PASSWD=supersecret
export TTRSS_DB_USER=ttrss
export TTRSS_DB_USER_PASSWD=justalittlebitsecret
export TTRSS_DB_NAME=ttrss

In order to use this the file must be sourced before running docker-compose:

$ source env.sh
$ docker-compose .... # whatever you're doing

We can see that the database is configured entirely via environment variables as shown in the project README. We also set the SELF_URL_PATH variable so that TT-RSS knows where it is located (the URL should be updated for your configuration). I also chose to mount the plugins.local directory on the host machine to allow me to install plugins easily. The remainder of the configuration is for Traefik and is covered in my earlier post (you’ll need to update the hostnames used here too).

Database Setup

As mentioned earlier, we need a database container for TT-RSS to talk to. I’m using MariaDB for this because it’s what I’m familiar with. Also my original TT-RSS installation was in mysql and I wanted to migrate the data. The setup for this is pretty simple using the official MariaDB image:

mariadb-ttrss:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: ${TTRSS_DB_ROOT_PASSWD}
      MYSQL_USER: ${TTRSS_DB_USER}
      MYSQL_PASSWORD: ${TTRSS_DB_USER_PASSWD}
      MYSQL_DATABASE: ${TTRSS_DB_NAME}
    volumes:
      - /mnt/docker-data/mariadb-ttrss/init/ttrss.sql.gz:/docker-entrypoint-initdb.d/backup.sql.gz
      - /mnt/docker-data/mariadb-ttrss/data:/var/lib/mysql
    networks:
      - internal
    restart: always

As you can see, I re-use the previous environment variables to create the database and user. I also mount the mysql data directory locally and mount a compressed backup of my previous database. This backup will only be loaded the first time the database comes up. You can remove this line if you are doing a clean install.

If you are following my install you’ll also see that I use a couple of Docker networks:

networks:
  external:
  internal:

The external network connects the local service containers to Traefik, whilst the internal network is used between TT-RSS and the database container.

With all this in place you should be able to launch your TT-RSS server with:

docker-compose up -d

At this point, it’s usually a good idea to check the container logs for problems and adjust your configuration accordingly.
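
For example (using the service names defined above):

docker-compose logs -f ttrss mariadb-ttrss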

Conclusion

Aside from having to update the Docker image for TT-RSS (which took quite a while) this migration was relatively painless. I’m quite happy with my newly Dockerised TT-RSS server. In addition to migrating it into Docker, this step has also moved it off my ageing mailserver in preparation for its upcoming migration to something newer, and moved it from the cloud onto my home server. All positive steps!

Next Steps

I’m pretty keen to keep maintaining my new Docker image for Tiny Tiny RSS, since it seems to be a gap in the community that can be filled. I’m currently building this in CI, but the configuration is pretty basic. In the coming weeks, I’m intending to expand upon this a little and set up a scheduled build which will automatically keep the container up to date. This will hopefully be the topic of a follow up article, so stay tuned!

Automating My Infrastructure with Ansible and Gitlab CI: Part 2 – Deploying Stuff with Roles

In the first post of this series, I got started with Ansible running in Gitlab CI. This involved setting up the basic pipeline, configuring the remote machines for use with our system and making a basic playbook to perform package upgrades. In this post we’re going to build on top of this to create a re-usable Ansible role which deploys some software and configuration to our fleet of servers.

In last week’s post I described my monitoring system, based on checkmk. At the end of the post I briefly mentioned that it would be great to use Ansible to deploy the checkmk agent to all my systems. That’s what I’m going to describe in this post. The role I’ve created for this deploys the checkmk agent from the package download on my checkmk instance and configures it to be accessed via SSH. It also installs a couple of plugins to enable some extra checks on my systems.

A Brief Aside: ansible-lint

In my previous post I set up a job which ran all the playbooks in my repository with the --check flag. This performs a dry run of the playbooks and will alert me to any issues. In that post I mentioned that all I was really looking for was some kind of syntax/sanity checking on the playbooks and didn’t really need the full dry run. Several members of the community stepped forward to suggest ansible-lint – thanks to all those that suggested it!

I’ve now updated my CI configuration to run ansible-lint instead of the check job. The updated job is shown below:

ansible-lint:
  <<: *ansible
  stage: check
  script:
    - ansible-lint -x 403 playbooks/*.yml

This is a pretty basic use of ansible-lint. All I’m doing is running it on all the playbooks in my playbooks directory. I do skip a single rule (403) with the -x argument. The rule in question is about specifying latest in package installs, which conflicts with my upgrade playbook. Since I’m only tweaking this small thing I just pass this via the CLI rather than creating a config file.
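
If you’d rather keep that out of the CI file, the same thing can be expressed in an .ansible-lint config file (a sketch of the equivalent):

---
skip_list:
  - '403'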

I’ve carried the preflight jobs and the ansible-lint job over to the CI configuration for my new role (described below). Since this is pretty much an exact copy of that of my main repo, I’m not going to explain it any further.

Creating a Base Role

I decided that I wanted my roles self contained in their own git repositories. This keeps everything a bit tidier at the price of a little extra complexity. In my previous Ansible configuration I had all my roles in the same repo and it just got to be a big mess after a while.

To create a role, first initialise it with ansible-galaxy. Then create a new git repo in the resulting directory:

$ ansible-galaxy init ansible_role_checkmk_agent
$ cd ansible_role_checkmk_agent
$ git init .

I actually didn’t perform these steps and instead started from a copy of the old role I had for this in my previous configuration. This role has been tidied up and expanded upon for the new setup.

The ansible-galaxy command above will create a set of files and directories which provide a skeleton role. The first thing to do is to edit the README.md and meta/main.yml files for your role. Just update everything to suit what you are doing here; it’s pretty self-explanatory. Once you’ve done this, all the files can be added to git and committed to create the first version of your role.
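
As an example, a filled-in meta/main.yml might look something like this (the values here are illustrative rather than exactly what I used):

galaxy_info:
  author: robconnolly
  description: Installs and configures the checkmk agent
  license: MIT
  min_ansible_version: 2.8
  platforms:
    - name: Debian
      versions:
        - buster
    - name: EL
      versions:
        - 7

dependencies: []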

Installing the Role

Before I move on to exactly what my role does, I’m going to cover how we will use this role in our main infrastructure project. This is done by creating a requirements.yml file which will list the required roles. These will then be installed by passing the file to ansible-galaxy. Since the installation tool can install from git we will specify the git URL as the installation location. Here are the contents of my requirements.yml file:

---
 - name: checkmk_agent
   scm: git
   src: git+https://gitlab.com/robconnolly/ansible_role_checkmk_agent.git
   version: master

Pretty simple. In order to do the installation all we have to do is run the following command:

$ ansible-galaxy install --force -r requirements.yml -p playbooks/roles/

This will install the required Ansible roles to the playbooks/roles directory in our main project, where our playbooks can find them. The --force flag will ensure that the role always gets updated when we run the command. I’ve added this command in the before_script commands in the CI configuration to enable me to use the role in my CI jobs.
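
In practice that just means an extra line in the shared before_script of my Ansible job template (a sketch based on the template from part 1 of this series):

.ansible: &ansible
  image:
    name: python:3.7
    entrypoint: [""]
  before_script:
    - pip install ansible
    - ansible-galaxy install --force -r requirements.yml -p playbooks/roles/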

Now the role will be installed where we actually need it. However, we are not yet using it. I’ll come back to this later. Let’s make the role actually do something first!

What the Role Does

The main behaviour of the role is defined in the tasks/main.yml file. This file is rather long, so I won’t reproduce this here. Instead I’ll ask you to open the link and follow along with my description below:

  • The first task creates a checkmk user on the target system. This will be used by checkmk to log in and run the agent.
  • The next task creates a .ssh directory for the checkmk user and sets its permissions correctly.
  • Next we create an authorized_keys file for the user. This uses a template file which will restrict what the key can do. The actual key comes from the checkmk_pub_key variable which will be passed in from the main project. The template is as follows:
command="/usr/bin/sudo /usr/bin/check_mk_agent",no-port-forwarding,no-x11-forwarding,no-agent-forwarding {{ checkmk_pub_key }}
  • Next are a couple of tasks to install some dependent packages for the rest of the role. There is one task for Apt based systems and another for Yum based systems. I’m not sure if the monitoring-plugins package is actually required. I had it in my previous role and have just copied it over.
  • The next two tasks remove the xinetd package on both types of system. Since we are accessing the agent via SSH we don’t need this. I was previously using this package for my agent access, so I want to make sure it is removed. This behaviour can be disabled by setting the checkmk_purge_xinetd variable to false.
  • The next task downloads the checkmk agent deb file to the local machine. This is done to account for some of the remote servers not having direct access to the checkmk server. I then upload the file in the following task. The variables checkmk_server, checkmk_site_name and checkmk_agent_deb are used to specify the server address, monitoring instance (site) and deb file name. The address and site name are designed to be externally overridden by the main project.
  • The next two tasks repeat the download and upload process for the RPM version of the agent.
  • We then install the correct agent in the next two tasks.
  • The following task disables the systemd socket file for the agent to stop it being accessible over an unencrypted TCP port. Right now I don’t do this on my CentOS machines because they are too old to have systemd!
  • The final few tasks get in to installing the Apt and Docker plugins on systems that require it. I follow the same process of downloading then uploading the files and make them executable. The Docker plugin requires that the docker Python module be installed, which we achieve via pip. It also requires a config file, which as discussed in my previous post needs to be modified. I keep my modified copy in the repository and just upload it to the correct location.
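
To give a flavour of the above, here is a minimal sketch of how the first three tasks might look (names, paths and the template filename here are illustrative, so check the linked file for the real thing):

- name: Create checkmk user
  become: true
  user:
    name: checkmk
    comment: checkmk agent user

- name: Create .ssh directory
  become: true
  file:
    path: /home/checkmk/.ssh
    state: directory
    owner: checkmk
    group: checkmk
    mode: 0700

- name: Install restricted authorized_keys from template
  become: true
  template:
    src: authorized_keys.j2
    dest: /home/checkmk/.ssh/authorized_keys
    owner: checkmk
    group: checkmk
    mode: 0600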

The variables that are used in this are specified in the vars/main.yml and defaults/main.yml files. The default file contains the variables that should be overridden externally. I don’t specify a default for the SSH public key because I couldn’t think of a sensible value, so this at least must be specified for the role to run.

With all this in place our role is ready to go. Next we should try that from our main project.

Applying the Role

The first thing to do is to configure the role via the variables described above. I did this from my hosts.yml file which is encrypted, but the basic form is as follows:

all:
  vars:
    checkmk_server: <server_ip>
    checkmk_site_name: <mysitename>
    checkmk_pub_key: <mypubkey>

The public key has to be that which will be used by the checkmk server. As such the private key must be installed on the server. I’ll cover how to set this up in checkmk below.

Next we have the playbook which will apply our role. I’ve opted to create a playbook for applying common roles to all my systems (of which this is the first). This goes in the file playbooks/common.yml:

---
- hosts: all
  roles:
    - { role: checkmk_agent }

This is extremely basic, all it does is apply the checkmk_agent role to all servers.

The corresponding CI job is only marginally more complex:

common-roles:
  <<: *ansible
  stage: deploy
  script:
    - ansible-playbook playbooks/common.yml
  only:
    refs:
      - master

With those two in place a push to the server will start the pipeline and eventually deploy our role to our servers.

Our updated CI pipeline showing the ansible-lint job and the new common-roles job

Configuring Checkmk Agent Access via SSH

Of course the deployment on the remote servers is only one side of the coin. We also need to have our checkmk instance set up to access the agents via SSH. This is documented pretty well in the checkmk documentation. Basically it comes down to putting the private key corresponding to the public key used earlier in a known location on the server and then setting up an “Individual program call instead of agent access” rule in the “Hosts and Service Parameters” page of WATO.

I modified the suggested SSH call to specify the private key and user to use. Here is the command I ended up using in my configuration:

/usr/bin/ssh -i /omd/sites/site_name/.ssh/id_rsa -o StrictHostKeyChecking=no checkmk@$HOSTADDRESS$

When you create the rule you can apply it to as many hosts as you like. In my setup this is all of them, but you should adjust as you see fit.

The checkmk WATO rule screen for SSH agent access

Conclusion

If you’ve been following along you should now be able to add new hosts to your setup (via hosts.yml) and have the checkmk agent deployed on them automatically. You should also have an understanding of how to create reasonably complex Ansible roles in external repositories and how to use them in your main Ansible project.

There are loads of things about roles that I haven’t covered here (e.g. handlers). The best place to start learning more would be the Ansible roles documentation page. You can then fan out from there on other concepts as they arise.

Next Steps

So far on this adventure I’ve tested my playbooks and roles by just making sure they work against my servers (initially on a non-critical one). It would be nice to have a better way to handle this and to be able to run these tests and verify that the playbook is working from a CI job. I’ll be investigating this for the next part of this series.

The next instalment will probably be delayed by a few weeks. I have something else coming which will take up quite a bit of time. For my regular readers, there will still be blog posts – they just won’t be about Ansible or CI. This is probably a good thing, I’ve been covering a lot of CI stuff recently!

As always please get in contact if you have any feedback or improvements to suggest, or even if you just want to chat about your own Ansible roles.

Monitoring All The Things with checkmk

Once your number of connected devices grows to a certain size, it becomes difficult to keep track of them all manually. At this point you’re going to want to turn to software to do this job for you. Network monitoring software fills this need nicely. The old stable Nagios used to be my go-to for this. However, I switched to checkmk in the last couple of years and have found it quite superior. I’ve recently been doing some maintenance and upgrades to my monitoring setup, so now was a good time to write it up.

Checkmk is an Open Source infrastructure and application monitoring tool. It will keep track of various attributes of your networked devices and alert you if they fall outside of pre-programmed thresholds. At its core it wraps Nagios, but provides a nicer UI and GUI configuration tool. checkmk also supports autodiscovery of services and checks to be performed on each system. This means you can spin up a monitoring system with great coverage in less than half the time you would spend fiddling around with a Nagios configuration.

Monitoring vs. Metrics

I’m going to take a little time to discuss the modern trend towards “metrics” based monitoring and how it relates to more traditional monitoring approaches. Modern metrics based systems such as Prometheus and Influxdb collect a series of aggregated data points about a running system. Typically these are written to a time series database (Influxdb actually is purely a time series database and requires external data collection tools). This data would then be used to feed some graphing/dashboarding tool, such as Grafana and also to generate alerts.

This is a different approach to traditional monitoring. To my mind metrics gathering is more passive. The system is only asking what the other systems (or typically only the applications on those systems) know about themselves. It is not actually probing the network to check that things are working. Monitoring systems not only collect data from systems, but actually probe the network to create data, for example whether a given service is responding or not.

The metrics gathering approach works well if all you care about are the applications. Obviously the application knows everything about itself, so why not ask it? It’s particularly popular in cloud environments where someone else is caring about the underlying infrastructure. Anywhere you care about physical infrastructure you’d probably be better off with a monitoring system first. Of course, this doesn’t preclude adding a metrics system later!

I think it’s important not to get these tools mixed up; they serve different purposes, even if there is some overlap.

Let’s Start Monitoring

I’ve had checkmk installed inside an LXD container for some time. In that time it’s been running pretty much flawlessly and alerting me to issues with my network as they come up. I recently decided it was time for an update, since I was still running 1.4.x and 1.5 had been out for a while. In the process I thought I’d move it into Docker, as I’ve been moving everything else. However, I ran into some issues with that.

I had several issues moving my configuration to the new server and getting it to run in the Docker container. Most of these could have been solved if I’d migrated the configuration to the new version (described below) before moving it. I did hit one show stopper issue though. This came about because my Docker data volumes are stored on NFS. Basically it looks like NFS doesn’t support file locking very well. This causes checkmk to throw “Bad file descriptor” errors all over the place.

Oops, looks like checkmk doesn’t like its sites directory on NFS

In the end I decided to stick with my LXD container for this system. Eventually this will be migrated to an LXC container when I switch the host server over to Proxmox. The Docker setup is not actually recommended for a full monitoring install anyway, so I don’t feel too bad about this.

This is all a long winded way of saying, go and follow the Linux installation instructions if you are installing checkmk from scratch!

Upgrading My Configuration

Since I already had an existing system I just updated it by installing the new version. In the process checkmk 1.6 was released (good timing!), so I actually did this twice. To do the update I basically just downloaded the latest .deb file and installed it with dpkg -i.
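
In other words, something along these lines (the exact filename depends on the version and distribution you download):

$ sudo dpkg -i check-mk-raw-1.6.0p4_0.buster_amd64.deb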

It should be noted that this won’t overwrite your current install. There is a good reason for this, since checkmk has a configuration migration step which must be performed. This can be done using the omd tool. The steps to do so look like this:

$ sudo omd sites # list the monitoring instances we have available
SITE             VERSION          COMMENTS
mysitename       1.5.0p30.cre     default version
$ sudo omd stop mysitename # stop the instance in question
$ sudo omd update mysitename
# at this point omd will update the config, it will ask you
# questions along the way in a curses window, just follow the 
# prompts
$ sudo omd start mysitename # start the instance back up

Assuming that all went OK, you’re good to start actually monitoring other systems. We’ll cover the things I monitor in the next few sections.

Monitoring Standard Linux Systems

This is the easy one and pretty much standard fare for any monitoring system. Just install the checkmk agent for your distribution by following the official instructions and off you go. When you add a new host via WATO (the configuration GUI) checkmk will auto-discover as many services as it can for you. There are also a whole load of plugins that can be deployed to monitor other services.

Some of the service checks running against “eomer”, my home automation Docker host

In my system, Linux machines are the majority of the hosts. This includes physical host machines, VMs, LXC/LXD containers and Raspberry Pis (since these are just Debian machines really). The only thing I haven’t had much luck with are my Libreelec machines. This seems to be because Libreelec doesn’t include a shell capable of running the agent script. I’ve been wondering if running the agent in a sufficiently privileged Docker container would work. However for now I’m just monitoring them externally (see “Monitoring Other Devices” below).

Monitoring pfSense

Since pfSense is really just FreeBSD underneath the checkmk BSD agent works fine with it. pfSense also supports SNMP out of the box. Therefore you can get more monitoring coverage by using an Agent+SNMP approach. There is a great tutorial over at Open School Solutions, which is what I followed to set all that up.

pfSense will report the status of all its network interfaces among its checks

Monitoring OpenWRT Devices

OpenWRT devices can be monitored just like Linux machines. However you will need to install the OpenWRT specific agent from the agents page of your checkmk install. The agent can be run via either xinetd or SSH just as on a standard Linux machine.

OpenWRT devices will also return some extra information over SNMP if mini_snmpd is installed on them. This makes the same Agent+SNMP approach adopted for pfSense desirable.

Monitoring Docker Containers

This one is new to me since I’ve just set it up after updating to the new version. The basic idea here is that you set up monitoring on the Docker host as you would a standard Linux machine. However, you also install the mk_docker.py plugin. This will do two things. Firstly it will expose a load of Docker checks on the host the next time you run service discovery. These by themselves are pretty useful. Secondly it will allow you to add each Docker container on the host as a host in its own right. The check data for the Docker container host is piggybacked from the agent connection to its Docker host.

At first I thought this was going to be pretty unwieldy, since you have to manually add a host entry for each Docker container. However, checkmk provides us with some tools to make this easier, which I’ll cover below.

Before you finish the setup on the Docker host, I’d recommend that you set the container_id variable in the /etc/checkmk/docker.cfg config file to name. This will allow you to add your container hosts to checkmk by name rather than by hex ID, which is much more human readable. It also avoids the situation where the hex ID changes if the container is destroyed and re-created. This obviously happens during an update of the container.
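
From memory, the relevant part of that config file looks something like the following (option names may differ slightly between plugin versions, so check the shipped example):

[DOCKER]
# short (default), long or name
container_id: name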

Adding Docker Container Hosts Quickly

Setting up a host folder in WATO allows you to apply the same initial options to a bunch of hosts quickly

The next thing to do is to create a folder in WATO. Adding a host to a folder automatically populates the host with the configuration template used when the folder was created. This makes it super quick to just run through adding hosts with the same configuration, such as our Docker containers.

The third thing to do is to create a new Host Group for your Docker containers. In this way you can easily see separate listings of your containers and other hosts on the monitoring dashboards.

Creating a host group for my Docker containers allows me to view just the statuses of the containers

With these things in place I’m finding the Docker container monitoring pretty useful in my fairly static environment. Due to the manual steps involved it’s really not going to work in a highly dynamic environment, but it works for my needs.

My Gotify container is unhappy – because I artificially stopped it for the screenshot!

Monitoring Windows Machines

Obviously checkmk also has an agent for Windows, so you can monitor those machines too. If you have any. I don’t, so I can’t comment on how well it works!

Monitoring Other Devices

For monitoring other devices that don’t support any of the available agents, you’re pretty much limited to external probing. With this you can get basic information on whether the device is up or down (via ping) and can check the availability of services on any open ports via manual checks.

Manual checks are in fact useful against any of the systems listed above as an external verification that a service is responding. As a minimum I typically implement an SSH check as well as HTTP checks on any open web services.

I have this approach deployed against several devices on my network including the two Libreelec devices mentioned above, my HDHomeRun and various IoT devices. It works pretty well – especially as the usual problem with these devices is that someone has unplugged them!

Conclusion

I hope this has given you a glimpse into my monitoring setup. It’s impossible to describe every part of it in full detail so I’ve opted for presenting only an overview here. Even after nearly two years I’m still adding and changing stuff on this system, so I may write up some of the more interesting details in future.

I’m really happy with my choice of checkmk to drive my monitoring system. It’s a nice blend of the reliability and stability of Nagios, with a layer of UI which makes it much easier to get a fully featured system. The recent upgrades have provided some useful features and it’s great to see it under such active development!

Next Steps

I still have some corners of my network where my monitoring setup doesn’t cover. Mainly just through lack of time to get it all set up. I’ll be looking to Ansible to help with this soon.

I’d also like to add some kind of metrics gathering system, probably Influxdb. This will be for Home Assistant data and metrics from my Traefik proxies (as well as anything else I can feed to it). I’d also like to round out the trifecta of observation systems with a log storage/aggregation system. I just wish there was something more lightweight than an ELK stack!

If you have any comments or improvements to my setup, as always, let me know in the feedback channels. I’m also interested in how others are handling this, so get in touch and let me know.

Automating My Infrastructure with Ansible and Gitlab CI: Part 1 – Getting Started

This is the first part in a multi-part series following my adventures in automating my self-hosting infrastructure with Ansible, running from Gitlab CI. In this post I’ll cover setting up my Ansible project, setting up the remote machines for Ansible/CI deployment, some initial checks in CI and automating of routine updates via our new system.

I’ve used Ansible quite extensively in the past, but with my recent focus on Docker and Gitlab CI I thought it was worth having a clean break. Also my previous Ansible configurations were a complete mess, so it’s a good opportunity to do things better. I’ll still be pulling in parts of my old config where needed to prevent re-inventing the wheel.

Since my ongoing plan is to deploy as many of my applications as possible with Docker and docker-compose, I’ll be focusing mainly on tasks relating to the host machines. Some of these will be setup tasks which will deploy a required state to each machine. Others will be tasks to automate routine maintenance.

Inventory Setup

Before we get started, I’ll link you to the Gitlab repository for this post. You’ll find that some of the files are encrypted with ansible-vault, since they contain sensitive data. Don’t worry though, I’ll go through them as examples, starting with hosts.yml.

The hosts.yml file is my Ansible inventory and contains details of all the machines in my infrastructure. Previously, I’d only made use of inventories in INI format, so the YAML support is a welcome addition. The basic form of my inventory file is as follows:

---
all:
  vars:
      ansible_python_interpreter: /usr/bin/python3
      ansible_user: ci
      ansible_ssh_private_key_file: ./id_rsa
  children:
    vms:
      hosts:
        vm1.mydomain.com:
          ansible_sudo_pass: supersecret
          password_hash: "[hashed version of ansible_sudo_pass]"
          # use this if the hostname doesn't resolve
          #ansible_host: [ip address]
        vm2.mydomain.com:
          ...
    rpis:
      # Uncomment this for running before the CI user is created
      #vars:
      #  ansible_user: pi
      hosts:
        rpi1.mydomain.com:
          ...
    physical:
      hosts:
        phys1.mydomain.com:
          ...

To create this file we need to use the command ansible-vault create hosts.yml. This will ask for a password which you will need when running your playbooks. To edit the file later, replace the create subcommand with edit.

As you can see we start out with the top level group called all. This group has several variables set to configure all the hosts. The first of these ensures that we are using Python 3 on each remote system. The other two set the remote user and the SSH key used for authentication. I’m connecting to all of my systems with a specific user, called ci, which I will handle setting up in the next section.

The remainder of the file specifies the remote hosts in the infrastructure. For now I’ve divided these up into three groups, corresponding to VMs, Raspberry Pis and physical host machines. It’s worth noting that a host can be in multiple groups, so you can pretty much get as complicated as you like.

Each host has several variables associated with it. The first of these is the password for sudo operations. I like each of my hosts to have individual passwords for security purposes. The second variable (password_hash) is a hashed version of the same password which we will use later when setting up the user. These hashes are generated with the mkpasswd command as per the documentation. The final variable (ansible_host) I’m using here is optional and you only need to include it if the hostname of the server in question can’t be resolved via DNS. In this case you can specify an IP address for the server here.
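
For reference, the usual invocation is along these lines (on Debian/Ubuntu mkpasswd comes from the whois package):

mkpasswd --method=sha-512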

In order to use this inventory file we need to pass the -i flag to Ansible (along with the filename) on every run. Alternatively, we can configure Ansible to use this file by creating an ansible.cfg file in our current directory. To do this we download the template config file and edit the inventory line so it looks like this:

inventory      = hosts.yml

Setting Up the Remote Machines

At this point you should comment out the ansible_user and ansible_ssh_private_key_file lines in your inventory, since we don’t currently have a ci user on any of our machines and we haven’t added the key yet. This we will take care of now – via Ansible itself. I’ve created a playbook which will create that user and set it up for use with our Ansible setup:

---
- hosts: all
  tasks:
    - name: Create CI user
      become: true
      user:
        name: ci
        comment: CI User
        groups: sudo
        append: true
        password: "{{ password_hash }}"

    - name: Create .ssh directory
      become: true
      file:
        path: /home/ci/.ssh
        state: directory
        owner: ci
        group: ci
        mode: 0700

    - name: Copy authorised keys
      become: true
      copy:
        src: ci_authorized_keys
        dest: /home/ci/.ssh/authorized_keys
        owner: ci
        group: ci
        mode: 0600

Basically all we do here is create the user (with the password from the inventory file) and set it up for access via SSH with a key. I’ve put this in the playbooks subdirectory in the name of keeping things organised. You’ll also need a playbooks/files directory in which you should put the ci_authorized_keys file. This file will be copied to .ssh/authorized_keys on the server, so obviously has that format. In order to create your key generate it in the normal way with ssh-keygen and save it locally. Copy the public part into ci_authorized_keys and keep hold of the private part for later (don’t commit it to git though!).
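
For example (the key filename and comment here are arbitrary; just keep the private half out of the repository):

ssh-keygen -t rsa -b 4096 -f ./ci_deploy_key -C "ansible-ci"
cat ci_deploy_key.pub >> playbooks/files/ci_authorized_keys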

Now we should run that against our servers with the command:

ansible-playbook --ask-vault-pass playbooks/setup.yml

This will prompt for your vault password from earlier and then configure each of your servers with the playbook.

Spinning Up the CI

At this point we have our ci user created and configured, so we should uncomment those lines in our inventory file. We can now perform a connectivity check to check that this worked:

ansible all -m ping

If that works you should see success from each host.

Next comes our base CI setup. I’ve imported some of my standard preflight jobs from my previous CI pipelines, specifically the shellcheck, yamllint and markdownlint jobs. The next stage in the pipeline is a check stage. Here I’m putting jobs which check that Ansible itself is working and that the configuration is valid, but don’t make any changes to the remote systems.

I started off with a generic template for Ansible jobs:

.ansible: &ansible
  image:
    name: python:3.7
    entrypoint: [""]
  before_script:
    - pip install ansible
    - ansible --version
    - echo $ANSIBLE_VAULT_PASSWORD > vault.key
    - echo "$DEPLOYMENT_SSH_KEY" > id_rsa
    - chmod 600 id_rsa
  after_script:
    - rm vault.key id_rsa
  variables:
    ANSIBLE_CONFIG: $CI_PROJECT_DIR/ansible.cfg
    ANSIBLE_VAULT_PASSWORD_FILE: ./vault.key
  tags:
    - ansible

This sets up the CI environment for Ansible to run. We start from a base Python 3.7 image and install Ansible via Pip. This takes a little bit of time doing it on each run so it would probably be better to build a custom image which includes Ansible (all the ones I found were out of date).

The next stage is to set up the authentication tokens. First we write the ANSIBLE_VAULT_PASSWORD variable to a file, followed by the DEPLOYMENT_SSH_KEY variable. These variables are defined in the Gitlab CI configuration as secrets. The id_rsa file requires its permissions set to 0600 for the SSH connection to succeed. We also make sure to remove both these files after the job completes.

The last thing to set are a couple of environment variables to allow Ansible to pick up our config file (by default it won’t in the CI environment due to a permissions issue). We also need to tell it the vault key file to use.

Check Jobs

I’ve implemented two check jobs in the pipeline. The first of these performs the ping action which we tested before. This is to ensure that we have connectivity to each of our remote machines from the CI runner. The second iterates over each of the YAML files in the playbooks directory and runs them in check mode. This is basically a dry-run. I’d prefer if this was just a syntax/verification check without having to basically run through the whole thing, but there doesn’t seem to be a way to do that in Ansible.

The jobs for both of these are shown below:

ping-hosts:
  <<: *ansible
  stage: check
  script:
    - ansible all -m ping

check-playbooks:
  <<: *ansible
  stage: check
  script:
    - |
      for file in $(find ./playbooks -maxdepth 1 -iname "*.yml"); do
        ansible-playbook $file --check
      done

Before these will run on the CI machine, we need to make a quick modification to our ansible.cfg file. This will allow Ansible to accept the SSH host keys without prompting. Basically you just uncomment the host_key_checking line and ensure it is set to false:

host_key_checking = False

Doing Something Useful

At this stage we have our Ansible environment set up in CI and our remote machines are ready to accept instructions. We’ve also performed some verification steps to give us some confidence that we are doing something sensible. Sounds like we’re ready to make this do something useful!

Managing package updates has always been a pain for me. Once you get beyond a couple of machines, manually logging in and applying updates becomes pretty unmanageable. I’ve traditionally taken care of this on a Saturday morning, sitting down and applying updates to each machine whilst having my morning coffee. The main issue with this is just keeping track of which machines I already updated. Luckily there is a better way!

[[Side note: Yes, before anyone asks, I am aware of the unattended-upgrades package and I have it installed. However, I only use the default configuration of applying security updates automatically. This is with good reason. I want at least some manual intervention in performing other updates, so that if something critical goes wrong, I’m on hand to fix it.]]

With our shiny new Ansible setup encased in a layer of CI, we can quite easily take care of applying package upgrades to a whole fleet of machines. Here’s the playbook to do this (playbooks/upgrades.yml):

---
- hosts: all
  serial: 2
  tasks:
    - name: Perform apt upgrades
      become: true
      when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu'
      apt:
        update_cache: true
        upgrade: dist
      notify:
        - Reboot

    - name: Perform yum upgrades
      become: true
      when: ansible_distribution == 'CentOS'
      yum:
        name: '*'
        state: latest
      notify:
        - Reboot

  handlers:
    - name: Reboot
      when: "'physical' not in group_names"
      become: true
      reboot:

This is pretty simple: first we apply it to all hosts. Secondly we specify the line serial: 2 to run only on two hosts at a time (to even out the load on my VM hosts). Then we get into the tasks. These basically run an upgrade selectively based upon the OS in question (I still have a couple of CentOS machines knocking around). Each of the update tasks will perform the notify block if anything changes (i.e. any packages got updated). In this case all this does is execute the Reboot handler, which will reboot the machine. The when clause of that handler causes it not to execute if the machine is in the physical group (so that my physical host machines are not automatically rebooted). I still need to handle not rebooting the CI runner host, but so far I haven’t added it to this system.

We could take this further, for example snapshotting any VMs before doing the update, if our VM host supports that. For now this gets the immediate job done and is pretty simple.

I’ve added a CI job for this. The key thing to note about this is that it is a manual job, meaning it must be directly triggered from the Gitlab UI. This gives me the manual step I mentioned earlier:

package-upgrades:
  <<: *ansible
  stage: deploy
  script:
    - ansible-playbook playbooks/upgrades.yml
  only:
    refs:
      - master
  when: manual

Now all I have to do on a Saturday morning is click a button and wait for everything to be updated!

Our finished pipeline. Note the state of the final “package-upgrades” job which indicates a manual job. Gitlab helpfully provides a play button to run it.

Conclusion

I hope you’ve managed to follow along this far. We’ve certainly come a long way, from nothing to an end-to-end automated system to update all our servers (with a little manual step thrown in for safety).

Of course there is a lot more we can do now that we’ve got the groundwork set up. In the next instalment I’m going to cover installing and configuring software that you want to be deployed across a fleet of machines. In the process we’ll take a look at Ansible roles. I’d also like to have a look at testing my Ansible roles with Molecule, but that will probably have to wait for yet another post.

I’m interested to hear about anyone else’s Ansible setup, feel free to share via the feedback channels!
