Way back when I first started using Docker in earnest, I wrote about my web hosting stack. Recently, this has undergone an upgrade as I’m working on a new website which will be served from the same server. I took the opportunity to split the system up into multiple docker-compose projects, which makes deployment of further sites much easier. It allows me to manage the common containers from one docker-compose project and then each of the sites from their own project. This will be of further use in future as I move towards deploying these with Ansible.
The Approach
My basic approach here is to move my two common containers (my Traefik container and SMTP forwarder) into their own project. This project will create a couple of networks for interfacing to the containers from other projects. To create these networks I add the following to my common project docker-compose.yml:
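A minimal sketch of what that looks like, assuming compose file format 3.5 or later (which is needed for the explicit name key):

networks:
  gateway:
    # explicit name so other projects can reference it as simply "gateway"
    name: gateway
  smtp:
    name: smtp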
Here I create two networks as per normal. The key is to give them a proper name, rather than the auto-generated one that would be assigned by Docker. This will enable us to address them easily from our other projects. We then assign these to our common containers:
services:
  traefik:
    image: traefik:2.1
    command:
      ...
    volumes:
      ...
    ports:
      - "80:80"
      - "443:443"
      - "127.0.0.1:8080:8080"
    networks:
      gateway:
        aliases:
          # add hostnames you might want to refer to this container by
          - example.com
    restart: always

  postfix:
    image: boky/postfix
    ports:
      ...
    environment:
      ...
    volumes:
      ...
    networks:
      smtp:
        aliases:
          - postfix
    restart: always
Here I simply assign the relevant network to each container. The aliases section allows other containers on these networks to find our common containers by whatever name we specify. In the case of the postfix container this is used to connect via SMTP. For the traefik container, adding hostnames which internal apps may need to refer to can help (for example with the WordPress loopback test).
External Projects
With this in place, the other applications can be moved out into their own projects. For each one we need to access the gateway and smtp networks in order to have access to our common services. These are accessed as external networks via the docker-compose.yml file for our project:
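A cut-down sketch of the relevant parts (image names, hostnames and router names here are placeholders; other networks, volumes and environment from the previous article are omitted):

services:
  varnish:
    image: wodby/varnish            # placeholder; any Varnish image will do
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.mysite.rule=Host(`example.com`)"
      - "traefik.docker.network=gateway"   # tell Traefik which network to route over
    networks:
      - gateway
    restart: always

  wordpress:
    image: wordpress:latest
    networks:
      - smtp                        # access to the postfix forwarder for outgoing mail
    restart: always

networks:
  gateway:
    external: true                  # created by the common project
  smtp:
    external: true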
Here I add my varnish cache, as per my previous article. The key thing here is to specify the label traefik.docker.network=gateway to allow Traefik to reliably discover the container. We then also make sure the container is added to the gateway network. I’ve also added a WordPress container, which is on the smtp network. This will allow sending of email from WordPress via the SMTP forwarder.
Conclusion
This is a pretty simple approach for better management of my increasingly complex web stack. As I mentioned earlier, the next step will be to deploy these projects via Ansible. In this case the common containers will become part of a role which can be used across my infrastructure.
The splitting out of the apps into their own projects has enabled me to duplicate my current WordPress+Varnish+MariaDB setup for the new site I’m working on. There will be more info to come about that site as soon as I am ready to share!
I’ve mentioned before that I’m a big fan of RSS as a medium for consuming my daily news and for following the blogs of others. However, there are an increasing number of websites that don’t provide an RSS feed (or at least don’t advertise a feed if one exists). Luckily for us there is an awesome piece of self-hosted software which aims to fill in the gaps left by these missing feeds – RSS-bridge.
My use case for this was twofold. First, I wanted to follow some sites for which I couldn’t find RSS feeds, specifically The Guardian. Second, I wanted to get updates from some local groups, who only have a Facebook page. Obviously, I don’t actually want to check in to Facebook to do this; that would be intolerable. RSS-Bridge fills both these needs.
Installation
There are several public instances of RSS-bridge available, but of course I wanted to host my own. Doing so is extremely easy with Docker. I added the following to my docker-compose.yml file on the server in question:
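A sketch of the service definition, assuming the official rssbridge/rss-bridge image (the hostname is a placeholder, and the whitelist mount path should be checked against the image README):

rss-bridge:
  image: rssbridge/rss-bridge:latest
  volumes:
    # whitelist of enabled bridges, discussed below
    - ./whitelist.txt:/app/whitelist.txt:ro
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.rssbridge.rule=Host(`rssbridge.example.com`)"
    - "traefik.http.routers.rssbridge.tls=true"
  networks:
    - gateway                       # or whatever network your Traefik instance watches
  restart: always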
This uses Traefik, with my internal HTTPS setup to serve the bridge over HTTPS. You can also set up authentication for the bridge if you like. This isn’t really required unless you are hosting the bridge on a publicly available URL and would rather keep it private. I elected not to bother with authentication, since mine is on my internal network. It should also be noted that the bridge is totally stateless. All the parameters are sent in the URL, so there is no data to protect.
Grabbing Feeds
You’ll see above that we mounted a text file called whitelist.txt inside the container. This contains a list of all the bridges you want to use, from the full list of bridges. Here’s mine:
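The exact contents will depend on which bridges you want, but mine includes at least the ones used below (names must match the bridge names exactly):

TheGuardianBridge
FacebookBridge
TwitterBridge
YoutubeBridge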
I’ll demonstrate the use of a couple of these below, but it’s pretty simple. First up TheGuardianBridge, just select the section of the site you are interested in and click a button – couldn’t be easier!
Super simple!
I like to use the HTML button so that I can see that the bridge is working right there in the browser. You can then grab the (M)RSS or Atom links directly from the resulting page:
The resulting feed page
I’m also going to grab a feed of my local council news from their Facebook page, using the FacebookBridge:
The Facebook Bridge
Here we just enter the name of the page or user we are interested in. There is another dialogue below this for groups, but I haven’t tried that yet. I assume this only works for public pages, since it doesn’t ask for any login credentials. Of course, when we click through we are greeted by our feed:
The resultant Facebook feed
The Twitter bridge works similarly. I haven’t had much luck with the Youtube bridge, but I’m already using a well known trick to get RSS feeds of my favourite Youtube channels.
Setting Up Email Notifications
So far, this has all been very easy. Let’s step it up (just a little) and get notified when one of our feeds gets updated. I’m using this to be notified of events and goings on in my local area via some of the Facebook feeds. This closes the loop quite nicely and takes “social media” back to the promise it had in the early days.
To do this I’m using a tool called rss2email. This is a brilliant little tool, which I actually used as my primary RSS reader for some years, until I got too many feeds to get through all the emails! I’m glad to press it back into service for this.
I elected not to install rss2email in Docker, since I couldn’t find a nicely updated image and didn’t fancy building my own. It’s also kind of a personal tool, so fits nicely in a Unix user account as a cron job. On Ubuntu rss2email can be installed via APT:
$ sudo apt install rss2email
Next it’s best to follow the official documentation to get it up and running. You’ll need some access to an SMTP server to be able to send mail. One place where the documentation seems to differ is in enabling SMTP, where I had to use the line email-protocol = smtp rather than the use-smtp specified in the docs.
Once this is all set up you can add your feeds like so:
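For example (the addresses and feed URL are placeholders; the feed URL is just whatever RSS-bridge gives you for the Atom output):

$ r2e new me@example.com       # initialise rss2email, sending mail to this address
$ r2e add council "https://rssbridge.example.com/?action=display&bridge=FacebookBridge&..."
$ r2e run                      # fetch all feeds and email any new items; run this from cron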
This has been a really simple project (by my standards). Everything went according to plan, which almost never happens! Regardless, I’m very happy with the result and it’s something I’ll continue to make use of every day.
RSS-Bridge fills a much needed hole in the modern web. With the dominance of the big social media platforms and increasing “appification”, we’ve lost the real promise of the web to be an open and connected platform. RSS-bridge brings back at least some of this.
The addition of rss2email fulfils the hopes I had for social networks in the early days – that they would become notification platforms for events/people/things in the world around us. Instead, they’ve become locked down walled gardens which force you to use their app or website in order to engage with what’s going on.
Technology should come to us, on our own terms and via whatever medium we choose. This makes projects like RSS-Bridge, rss2email and the myriad of RSS readers out there incredibly important for those who refuse to be locked inside the gardens, but still require access to the information contained within.
In the course of migrating most of the services I run over to Docker containers, I’ve mostly tried to steer clear of building my own images. Thanks to the popularity of Docker this has been pretty easy for the most part. Many projects now ship an official Docker image as part of their releases. However, the inevitable happened when I had to build my own image in my recent TT-RSS migration.
The reason for my reluctance to build my own images is not some unfounded fear of Dockerfiles. It’s not building the images that I’m worried about, it’s maintaining them over time.
Every Docker image contains a mini Linux distribution, frozen in time from the day the image was built. Like any distribution of Linux these contain many software packages, some of which are going to need an update sooner or later in order to maintain their security. Of course you could just run the equivalent of an apt upgrade inside each container, but this defeats some of the immutability and portability guarantees which Docker provides.
The solution to this is basically no more sophisticated than ‘rebuild your images and re-deploy’. The problem is that no further help is offered: how do you go about doing that in a consistent and automated way?
Enter GitLab CI
If you have been following this blog you’ll know I’m a big fan of GitLab CI for these kinds of jobs. Luckily, GitLab already has support for building Docker images. The first thing I had to do was get this set up on my self-hosted GitLab runner. This was complicated by the fact that my runner was already running as a Docker container. It’s basically trying to start a Docker-in-Docker container on the host, from inside a Docker container, which is a little tricky (if you want to go further down, Docker itself is running from a Snap on a Virtual Machine! It really is turtles all the way down).
After lots of messing around and tweaking, I finally came up with a runner configuration which allowed the DinD service to start and the Docker client in the CI build to see it:
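The relevant parts of the runner’s config.toml ended up looking roughly like this (runner name, URL and token are elided placeholders):

[[runners]]
  name = "my-runner"
  url = "https://gitlab.com/"
  token = "..."
  executor = "docker"
  # make the generated TLS client certs available to job containers
  environment = ["DOCKER_TLS_CERTDIR=/certs"]
  [runners.docker]
    image = "docker:stable"
    privileged = true             # required to start the DinD service container
    volumes = ["/certs/client", "/cache"]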
The important parts here seem to be that the privileged flag must be true in order to allow the starting of privileged containers and that the DOCKER_TLS_CERTDIR should be set to a known location. The client subdirectory of this should then be shared as a volume in order to allow clients to connect via TLS.
You can ignore any setting of the DOCKER_HOST environment variable (unless you are using Kubernetes, maybe?). It should be populated with the correct value automatically. If you set it you will most likely get it wrong and the Docker client won’t be able to connect.
The CI Pipeline
I started the pipeline with a couple of static analysis jobs. First up is my old favourite yamllint. The configuration for this is pretty much the same as previously described. Here it’s only really checking the .gitlab-ci.yml file itself.
Next up is a new one for me. hadolint is a linting tool for Dockerfiles. It seems to make sensible suggestions, though I disabled a couple of them due to personal preference or technical issues. Conveniently, there is a pre-built Docker image, which makes configuring this pretty simple:
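A sketch of the job (the skipped rules here are only examples of the kind of thing you might choose to ignore):

hadolint:
  image: hadolint/hadolint:latest-debian
  script:
    # DL3007 = don't use the "latest" base image tag, DL3008 = pin apt package versions
    - hadolint --ignore DL3007 --ignore DL3008 Dockerfile
  tags:
    - docker

The build and push job then looks something like the following, with $IMAGE_TAG naming the image in the GitLab registry (the value shown is an assumption; name the image however you like):

build:
  image: docker:stable
  services:
    - docker:dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:latest
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  tags:
    - docker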
This is relatively straightforward if you are already familiar with the Docker build process. There are a couple of GitLab specific parts such as starting the docker:dind service, so that we have a usable Docker daemon in the build environment. The login to the GitLab registry uses the built-in $CI_BUILD_TOKEN variable as the password and from there we just build and push using the $IMAGE_TAG variable to name the image. Simple!
The pipeline for this is actually rather simple and boring compared to some of my others!
I also added another pipeline stage to send me a notification via my Gotify server when the build is done. This pretty much just follows the Gotify docs to send a message via curl:
notify:
  stage: notify
  image: alpine:latest
  before_script:
    - apk add curl
  script:
    - |
      curl -X POST "https://$GOTIFY_HOST/message?token=$GOTIFY_TOKEN" \
        -F "title=CI Build Complete" \
        -F "message=Scheduled CI build for TT-RSS is complete."
  tags:
    - docker
Of course the $GOTIFY_HOST and $GOTIFY_TOKEN variables are defined as secrets in the project configuration.
GitLab actually has support for scheduled jobs built in. However, if you are running on gitlab.com as I am you will find that you have very little control over when your jobs actually run, since it runs all cron jobs at 19 minutes past the hour. This would be fine for this single job, but I’m planning on having more scheduled builds run in the future, so I’d like to distribute them in time a little better, in order to reduce the load on my runner.
Pipeline triggers are created via the Settings->CI/CD menu in GitLab.
In order to do this I’m using a pipeline trigger, kicked off by a cron job at my end:
# Run TT-RSS build on Tuesday and Friday mornings
16 10 * * 2,5 /usr/bin/curl -s -X POST -F token=SECRET_TOKEN -F ref=master https://gitlab.com/api/v4/projects/2707270/trigger/pipeline > /dev/null
This will start my pipeline at 10:16am (NZ time) on Tuesdays and Fridays. If the build is successful there will be a new image published shortly after these times.
An Aside: Managing random cron jobs like these really sucks once you have more than one or two machines. More than once I’ve lost a job because I forgot which machine I put it on! I know Ansible can manage cron jobs, but you are still going to end up with them configured across multiple files. I also haven’t found any way of managing cron jobs which relate to a container deployment (e.g. running WordPress cron for a containerised instance). Currently I’m managing all this manually, but I can’t help feeling that there should be a better way. Please suggest any approaches you may have in the comments!
Updating Deployed Containers
I could extend the pipeline above to update the container on my server with the new image. However, this would leave the other containers on the server without automated updates. Therefore I’m using Watchtower to periodically check for updates and upgrade any out of date images. To do this I added the following to the docker-compose.yml file on that server:
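Something along these lines (the schedule shown is only illustrative of the pattern described below, the Gotify URL and token are placeholders, and the times assume the host clock is set to local time):

watchtower:
  image: containrrr/watchtower
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  environment:
    # 6-field cron expression (with seconds): 10:26, 10:56, 22:26 and 22:56
    WATCHTOWER_SCHEDULE: "0 26,56 10,22 * * *"
    WATCHTOWER_CLEANUP: "true"
    WATCHTOWER_NOTIFICATIONS: gotify
    WATCHTOWER_NOTIFICATION_GOTIFY_URL: "https://gotify.example.com"
    WATCHTOWER_NOTIFICATION_GOTIFY_TOKEN: "${GOTIFY_TOKEN}"
  restart: always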
Here I’m just checking for container updates four times a day. However, these are divided into two pairs of sequential checks separated by 30 minutes. This is more than sufficient for this server since it’s only accessible from my local networks. One of the update times is scheduled to start ten minutes after my CI build to pick up the new TT-RSS container. The reason for the second check after 30 mins is to ensure that the container gets updated even if the CI pipeline is delayed (as sometimes happens on gitlab.com).
You’ll note that I’m also sending notifications from Watchtower via Gotify. I was pleasantly surprised to discover the Gotify support in Watchtower when I deployed this. However, I think the notifications will get old pretty quickly and I’ll probably disable them eventually.
Conclusion
With all this in place I no longer have to worry about keeping my custom Docker images up to date. They will just automatically build at regular intervals and be updated automatically on the server. It actually feels pretty magical watching this all work end-to-end since there are a lot of moving parts here.
I’ll be deploying this approach as I build more custom images. I also still have to deploy Watchtower on all my other Docker hosts. The good news is that this has cured my reluctance to build custom images, so there will hopefully be more to follow. In the meantime, the community can benefit from a well maintained and updated TT-RSS image. Just pull with the following command:
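The image lives in the project’s GitLab container registry, so the pull takes this general form (substitute the real namespace and project path):

$ docker pull registry.gitlab.com/<namespace>/<project>:latest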
With the rise of social media, RSS seems to have been largely forgotten. However, there are still those who are dedicated enough to keep curating their own list of feeds and plenty of software to support them. I’ve always been a fan of RSS and believe it’s probably time for a resurgence in use that would free us from our algorithmic overlords. As such I’ve run an instance of Tiny Tiny RSS for several years and the time has come to migrate it to Docker.
It’s relatively unknown that you can still get RSS feeds for most news sources on the Internet. For example, I use TT-RSS to keep up with Youtube and Reddit (just add .rss to the end of any subreddit URL) as well as the usual blogs and news sites.
About Tiny Tiny RSS
Tiny Tiny RSS is a web based RSS application (think Google Reader replacement). It’s PHP based and supports Postgres or MySQL-compatible databases. I’ve been using it for many years and although I’ve tried out other web based RSS readers (such as Miniflux), I’ve never found anything as good as TT-RSS.
My Tiny Tiny RSS install
A particular favourite feature of mine is the ability to generate feeds from any internal view, which makes it great for integrating with other systems which may consume RSS/Atom.
There is also an Android app which is available via the Play Store or F-Droid.
Finding a Tiny Tiny RSS Docker Image
I had initially planned to use the LinuxServer.io TT-RSS image, but it seems to have been deprecated. With a bit of searching I found this repo; however, it’s pretty out of date and doesn’t build any more. After looking through my GitLab repos, it turned out I’d already tried upgrading that image as part of one of my previous attempts with Docker. I’ve finished off this migration and made the repo public so everyone can benefit from my efforts. Thanks to some CI magic and GitLab’s built-in container registry you can pull the latest version like so:
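The pull command takes this general form (the actual registry path is listed on the project’s registry page):

$ docker pull registry.gitlab.com/<namespace>/<project>:latest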
Feel free to read through the project README to familiarise yourself with the options available in the image. It’s pretty much as it was in the original repository, so I’ll go through my setup below.
Setup with docker-compose
I’m integrating this with my existing Docker setup via docker-compose and Traefik. As such I added the following to my docker-compose.yml file:
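A sketch of the service (the environment variable names come from the image’s README, so treat the ones here as illustrative; hostnames and paths are placeholders):

ttrss:
  image: registry.gitlab.com/<namespace>/<project>:latest
  depends_on:
    - db
  environment:
    - DB_HOST=db
    - DB_NAME=${TTRSS_DB_NAME}
    - DB_USER=${TTRSS_DB_USER}
    - DB_PASS=${TTRSS_DB_PASS}
    - SELF_URL_PATH=https://ttrss.example.com/   # update for your own URL
  volumes:
    # adjust the container path to wherever the image installs TT-RSS
    - ./plugins.local:/var/www/ttrss/plugins.local
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.ttrss.rule=Host(`ttrss.example.com`)"
  networks:
    - external
    - internal
  restart: always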
Breaking this down, we first create a new service using my TT-RSS image. We then define a dependency on the database container, which we will create later. The environment configuration uses another file env.sh in which we store our secrets. This is of the form:
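Something like this, with the values replaced by your own secrets (it gets sourced before running docker-compose so the variable substitution above picks the values up; a .env file would work just as well):

export TTRSS_DB_NAME=ttrss
export TTRSS_DB_USER=ttrss
export TTRSS_DB_PASS=supersecret
export TTRSS_DB_ROOT_PASS=evenmoresecret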
We can see that the database is configured entirely via environment variables as shown in the project README. We also set the SELF_URL_PATH variable so that TT-RSS knows where it is located (the URL should be updated for your configuration). I also chose to mount the plugins.local directory on the host machine to allow me to install plugins easily. The remainder of the configuration is for Traefik and is covered in my earlier post (you’ll need to update the hostnames used here too).
Database Setup
As mentioned earlier, we need a database container for TT-RSS to talk to. I’m using MariaDB for this because it’s what I’m familiar with. Also, my original TT-RSS installation used MySQL and I wanted to migrate the data. The setup for this is pretty simple using the official MariaDB image:
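A sketch of the database service, reusing the variables from env.sh (the backup file and directory names are placeholders):

db:
  image: mariadb:10.4
  environment:
    - MYSQL_ROOT_PASSWORD=${TTRSS_DB_ROOT_PASS}
    - MYSQL_DATABASE=${TTRSS_DB_NAME}
    - MYSQL_USER=${TTRSS_DB_USER}
    - MYSQL_PASSWORD=${TTRSS_DB_PASS}
  volumes:
    - ./mysql:/var/lib/mysql
    # loaded only on first initialisation; remove this line for a clean install
    - ./ttrss-backup.sql.gz:/docker-entrypoint-initdb.d/ttrss-backup.sql.gz
  networks:
    - internal
  restart: always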
As you can see, I re-use the previous environment variables to create the database and user. I also mount the mysql data directory locally and mount a compressed backup of my previous database. This backup will only be loaded the first time the database comes up. You can remove this line if you are doing a clean install.
If you are following my install you’ll also see that I use a couple of Docker networks:
networks:
  external:
  internal:
The external network connects the local service containers to Traefik, whilst the internal network is used between TT-RSS and the database container.
With all this in place you should be able to launch your TT-RSS server with:
docker-compose up -d
At this point, it’s usually a good idea to check the container logs for problems and adjust your configuration accordingly.
Conclusion
Aside from having to update the Docker image for TT-RSS (which took quite a while) this migration was relatively painless. I’m quite happy with my newly Dockerised TT-RSS server. In addition to moving it into Docker, this step has also moved it off my ageing mailserver in preparation for its upcoming migration to something newer, and moved it from the cloud onto my home server. All positive steps!
Next Steps
I’m pretty keen to keep maintaining my new Docker image for Tiny Tiny RSS, since it seems to be a gap in the community that can be filled. I’m currently building this in CI, but the configuration is pretty basic. In the coming weeks, I’m intending to expand upon this a little and set up a scheduled build which will automatically keep the container up to date. This will hopefully be the topic of a follow up article, so stay tuned!
In the first post of this series, I got started with Ansible running in GitLab CI. This involved setting up the basic pipeline, configuring the remote machines for use with our system and making a basic playbook to perform package upgrades. In this post we’re going to build on top of that to create a re-usable Ansible role which deploys some software and configuration to our fleet of servers.
In last week’s post I described my monitoring system, based on checkmk. At the end of the post I briefly mentioned that it would be great to use Ansible to deploy the checkmk agent to all my systems. That’s what I’m going to describe in this post. The role I’ve created for this deploys the checkmk agent from the package download on my checkmk instance and configures it to be accessed via SSH. It also installs a couple of plugins to enable some extra checks on my systems.
A Brief Aside: ansible-lint
In my previous post I set up a job which ran all the playbooks in my repository with the --check flag. This performs a dry run of the playbooks and will alert me to any issues. In that post I mentioned that all I was really looking for was some kind of syntax/sanity checking on the playbooks and didn’t really need the full dry run. Several members of the community stepped forward to suggest ansible-lint – thanks to all those that suggested it!
I’ve now updated my CI configuration to run ansible-lint instead of the check job. The updated job is shown below:
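A sketch of the job (the image choice and stage name are assumptions; any image with ansible-lint available will do):

ansible-lint:
  stage: check
  image: python:3.7
  before_script:
    - pip install ansible ansible-lint
  script:
    # rule 403 complains about "latest" in package installs, which the upgrade playbook needs
    - ansible-lint -x 403 playbooks/*.yml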
This is a pretty basic use of ansible-lint. All I’m doing is running it on all the playbooks in my playbooks directory. I do skip a single rule (403) with the -x argument. The rule in question is about specifying latest in package installs, which conflicts with my upgrade playbook. Since I’m only tweaking this small thing I just pass this via the CLI rather than creating a config file.
I’ve carried the preflight jobs and the ansible-lint job over to the CI configuration for my new role (described below). Since this is pretty much an exact copy of the one in my main repo, I’m not going to explain it any further.
Creating a Base Role
I decided that I wanted my roles self contained in their own git repositories. This keeps everything a bit tidier at the price of a little extra complexity. In my previous Ansible configuration I had all my roles in the same repo and it just got to be a big mess after a while.
To create a role, first initialise it with ansible-galaxy. Then create a new git repo in the resulting directory:
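Roughly (the remote URL is a placeholder for wherever you host the role):

$ ansible-galaxy init checkmk_agent
$ cd checkmk_agent
$ git init
$ git remote add origin git@gitlab.com:<namespace>/checkmk_agent.git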
I actually didn’t perform these steps and instead started from a copy of the old role I had for this in my previous configuration. This role has been tidied up and expanded upon for the new setup.
The ansible-galaxy command above will create a set of files and directories which provide a skeleton role. The first thing to do is to edit the README.md and meta/main.yml files for your role. Just update everything to suit what you are doing; it’s pretty self-explanatory. Once you’ve done this all the files can be added to git and committed to create the first version of your role.
Installing the Role
Before I move on to exactly what my role does, I’m going to cover how we will use this role in our main infrastructure project. This is done by creating a requirements.yml file which will list the required roles. These will then be installed by passing the file to ansible-galaxy. Since the installation tool can install from git we will specify the git URL as the installation location. Here are the contents of my requirements.yml file:
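It’s just a list of roles with their git locations (the URL here is a placeholder):

---
- src: https://gitlab.com/<namespace>/checkmk_agent.git
  scm: git
  version: master
  name: checkmk_agent

The roles are then installed with something like:

$ ansible-galaxy install -r requirements.yml -p playbooks/roles --force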
This will install the required Ansible roles to the playbooks/roles directory in our main project, where our playbooks can find them. The --force flag will ensure that the role always gets updated when we run the command. I’ve added this command in the before_script commands in the CI configuration to enable me to use the role in my CI jobs.
Now the role will be installed where we actually need it. However, we are not yet using it. I’ll come back to this later. Let’s make the role actually do something first!
What the Role Does
The main behaviour of the role is defined in the tasks/main.yml file. This file is rather long, so I won’t reproduce this here. Instead I’ll ask you to open the link and follow along with my description below:
The first task creates a checkmk user on the target system. This will be used by checkmk to log in and run the agent.
The next task creates a .ssh directory for the checkmk user and sets its permissions correctly.
Next we create an authorized_keys file for the user. This uses a template file which will restrict what the key can do. The actual key comes from the checkmk_pub_key variable which will be passed in from the main project. The template is as follows:
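The template is essentially a single authorized_keys line that forces the connection to run the agent and nothing else (the restriction options shown here are my assumption of a sensible set):

command="/usr/bin/check_mk_agent",no-port-forwarding,no-x11-forwarding,no-agent-forwarding,no-pty {{ checkmk_pub_key }}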
Next are a couple of tasks to install some dependent packages for the rest of the role. There is one task for Apt based systems and another for Yum based systems. I’m not sure if the monitoring-plugins package is actually required. I had it in my previous role and have just copied it over.
The next two tasks remove the xinetd package on both types of system. Since we are accessing the agent via SSH we don’t need this. I was previously using this package for my agent access so I want to make sure it is removed. This behaviour can be disabled by setting the checkmk_purge_xinetd variable to false.
The next task downloads the checkmk agent deb file to the local machine. This is done to account for some of the remote servers not having direct access to the checkmk server. I then upload the file in the following task. The variables checkmk_server, checkmk_site_name and checkmk_agent_deb are used to specify the server address, monitoring instance (site) and deb file name. The address and site name are designed to be externally overridden by the main project.
The next two tasks repeat the download and upload process for the RPM version of the agent.
We then install the correct agent in the next two tasks.
The following task disables the systemd socket file for the agent to stop it being accessible over an unencrypted TCP port. Right now I don’t do this on my CentOS machines because they are too old to have systemd!
The final few tasks get into installing the Apt and Docker plugins on systems that require them. I follow the same process of downloading and then uploading the files, and make them executable. The Docker plugin requires that the docker Python module be installed, which we achieve via pip. It also requires a config file, which as discussed in my previous post needs to be modified. I keep my modified copy in the repository and just upload it to the correct location.
The variables that are used in this are specified in the vars/main.yml and defaults/main.yml files. The default file contains the variables that should be overridden externally. I don’t specify a default for the SSH public key because I couldn’t think of a sensible value, so this at least must be specified for the role to run.
With all this in place our role is ready to go. Next we should try that from our main project.
Applying the Role
The first thing to do is to configure the role via the variables described above. I did this from my hosts.yml file which is encrypted, but the basic form is as follows:
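The values themselves are placeholders here, but the shape is:

all:
  vars:
    checkmk_server: checkmk.example.com        # address of the monitoring server
    checkmk_site_name: mysite                  # the monitoring instance (site) name
    checkmk_pub_key: "ssh-ed25519 AAAA... checkmk@monitoring"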
The public key has to be that which will be used by the checkmk server. As such the private key must be installed on the server. I’ll cover how to set this up in checkmk below.
Next we have the playbook which will apply our role. I’ve opted to create a playbook for applying common roles to all my systems (of which this is the first). This goes in the file playbooks/common.yml:
---
- hosts: all
  roles:
    - { role: checkmk_agent }
This is extremely basic, all it does is apply the checkmk_agent role to all servers.
The corresponding CI job is only marginally more complex:
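A sketch of the job, assuming the inventory and vault handling from the first post in the series (stage, tag and inventory names will depend on your pipeline):

common-roles:
  stage: deploy
  before_script:
    # pull in the external roles before running the playbook
    - ansible-galaxy install -r requirements.yml -p playbooks/roles --force
  script:
    - ansible-playbook -i hosts.yml playbooks/common.yml
  only:
    - master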
With those two in place a push to the server will start the pipeline and eventually deploy our role to our servers.
Our updated CI pipeline showing the ansible-lint job and the new common-roles job
Configuring Checkmk Agent Access via SSH
Of course the deployment on the remote servers is only one side of the coin. We also need to have our checkmk instance set up to access the agents via SSH. This is documented pretty well in the checkmk documentation. Basically it comes down to putting the private key corresponding to the public key used earlier in a known location on the server and then setting up an “Individual program call instead of agent access” rule in the “Hosts and Service Parameters” page of WATO.
I modified the suggested SSH call to specify the private key and user to use. Here is the command I ended up using in my configuration.
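It’s essentially just an SSH call with the right key and user, something like this (the key path is a placeholder; $HOSTADDRESS$ is expanded by checkmk for each host):

ssh -i ~/.ssh/checkmk_key checkmk@$HOSTADDRESS$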
When you create the rule you can apply it to as many hosts as you like. In my setup this is all of them, but you should adjust as you see fit.
The checkmk WATO rule screen for SSH agent access
Conclusion
If you’ve been following along you should now be able to add new hosts to your setup (via hosts.yml) and have the checkmk agent deployed on them automatically. You should also have an understanding of how to create reasonably complex Ansible roles in external repositories and how to use them in your main Ansible project.
There are loads of things about roles that I haven’t covered here (e.g. handlers). The best place to start learning more would be the Ansible roles documentation page. You can then fan out from there on other concepts as they arise.
Next Steps
So far on this adventure I’ve tested my playbooks and roles by just making sure they work against my servers (initially on a non-critical one). It would be nice to have a better way to handle this and to be able to run these tests and verify that the playbook is working from a CI job. I’ll be investigating this for the next part of this series.
The next instalment will probably be delayed by a few weeks. I have something else coming which will take up quite a bit of time. For my regular readers, there will still be blog posts – they just won’t be about Ansible or CI. This is probably a good thing, I’ve been covering a lot of CI stuff recently!
As always please get in contact if you have any feedback or improvements to suggest, or even if you just want to chat about your own Ansible roles.