
Reconnecting the Web with RSS-Bridge

This post may contain affiliate links. Please see the disclaimer for more information.

I’ve mentioned before that I’m a big fan of RSS as a medium for consuming my daily news and for following the blogs of others. However, there are an increasing number of websites that don’t provide an RSS feed (or at least don’t advertise a feed if one exists). Luckily for us there is an awesome piece of self-hosted software which aims to fill in the gaps left by these missing feeds – RSS-bridge.

My use case for this was twofold. First, I wanted to follow some sites for which I couldn’t find RSS feeds, specifically The Guardian. Second, I wanted to get updates from some local groups who only have a Facebook page. Obviously, I don’t actually want to check in to Facebook to do this; that would be intolerable. RSS-Bridge fills both these needs.

Installation

There are several public instances of RSS-bridge available, but of course I wanted to host my own. Doing so is extremely easy with Docker. I added the following to my docker-compose.yml file on the server in question:

services:
  rss-bridge:
    image: rssbridge/rss-bridge:latest
    volumes:
      - /mnt/docker-data/rss-bridge/whitelist.txt:/app/whitelist.txt
    labels:
      - 'traefik.enable=true'
      - "traefik.http.middlewares.rssbridge_redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.rssbridge_insecure.rule=Host(`rssbridge.example.com`)"
      - "traefik.http.routers.rssbridge_insecure.entrypoints=web"
      - "traefik.http.routers.rssbridge_insecure.middlewares=rssbridge_redirect@docker"
      - "traefik.http.routers.rssbridge.rule=Host(`rssbridge.example.com`)"
      - "traefik.http.routers.rssbridge.entrypoints=websecure"
      - "traefik.http.routers.rssbridge.tls.certresolver=mydnschallenge"
      - "traefik.http.services.rssbridge.loadbalancer.server.port=80"
    networks:
      - external
    restart: always

This uses Traefik with my internal HTTPS setup to serve the bridge over HTTPS. You can also set up authentication for the bridge if you like. This isn’t really required unless you are hosting the bridge on a publicly available URL and would rather keep it private; I elected not to bother, since mine is on my internal network. It should also be noted that the bridge is totally stateless: all the parameters are sent in the URL, so there is no data to protect.
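
If you did want authentication, Traefik’s basicauth middleware can be layered on with a couple of extra labels. Here’s an untested sketch – the middleware name is arbitrary and the user/hash pair would come from htpasswd (note that $ characters in the hash must be doubled in docker-compose files):

    labels:
      # Hypothetical basic auth for the bridge. Generate the credentials with:
      #   htpasswd -nb someuser somepassword
      - "traefik.http.middlewares.rssbridge_auth.basicauth.users=someuser:$$apr1$$replace$$withyourhash"
      - "traefik.http.routers.rssbridge.middlewares=rssbridge_auth@docker"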

Grabbing Feeds

You’ll see above that we mounted a text file called whitelist.txt inside the container. This contains a list of all the bridges you want to use, from the full list of bridges. Here’s mine:

FacebookBridge
TheGuardianBridge
TwitterBridge
YoutubeBridge

I’ll demonstrate the use of a couple of these below, but it’s pretty simple. First up is TheGuardianBridge: just select the section of the site you are interested in and click a button – couldn’t be easier!

[Image: Super simple!]

I like to use the HTML button so that I can see that the bridge is working right there in the browser. You can then grab the (M)RSS or Atom links directly from the resulting page:

[Image: The resulting feed page]
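
The links it produces are plain GET URLs with the bridge name and all of its parameters in the query string, which is what makes the bridge stateless. They look something like the following (the hostname and exact parameters here are illustrative and vary per bridge):

https://rssbridge.example.com/?action=display&bridge=TheGuardianBridge&format=Atom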

I’m also going to grab a feed of my local council news from their Facebook page, using the FacebookBridge:

[Image: The Facebook Bridge]

Here we just enter the name of the page or user we are interested in. There is another dialogue below this for groups, but I haven’t tried that yet. I assume this only works for public pages, since it doesn’t ask for any login credentials. Of course, when we click through we are greeted by our feed:

[Image: The resultant Facebook feed]

The Twitter bridge works similarly. I haven’t had much luck with the Youtube bridge, but I’m already using a well-known trick to get RSS feeds of my favourite Youtube channels.
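
For anyone who hasn’t come across it, that trick is YouTube’s own (unadvertised) feed endpoint. Given a channel or playlist ID from the relevant page URL, you can construct a feed directly:

# Feed for a channel:
https://www.youtube.com/feeds/videos.xml?channel_id=<CHANNEL_ID>
# Feed for a playlist:
https://www.youtube.com/feeds/videos.xml?playlist_id=<PLAYLIST_ID>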

Setting Up Email Notifications

So far, this has all been very easy. Let’s step it up (just a little) and get notified when one of our feeds gets updated. I’m using this to be notified of events and goings on in my local area via some of the Facebook feeds. This closes the loop quite nicely and takes “social media” back to the promise it had in the early days.

To do this I’m using a tool called rss2email. This is a brilliant little tool, which I actually used as my primary RSS reader for some years, until I got too many feeds to get through all the emails! I’m glad to press it back into service for this.

I elected not to install rss2email in Docker, since I couldn’t find a nicely updated image and didn’t fancy building my own. It’s also kind of a personal tool, so fits nicely in a Unix user account as a cron job. On Ubuntu rss2email can be installed via APT:

$ sudo apt install rss2email

Next it’s best to follow the official documentation to get it up and running. You’ll need some access to an SMTP server to be able to send mail. One place where the documentation seems to differ is in enabling SMTP, where I had to use the line email-protocol = smtp rather than the use-smtp specified in the docs.
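
For illustration, the SMTP part of the config (~/.config/rss2email.cfg on recent versions) ends up looking something like this – the addresses and server details are placeholders, and option names may vary between rss2email versions, so check the documentation for yours:

[DEFAULT]
from = rss2email@example.com
to = me@example.com
email-protocol = smtp
smtp-server = smtp.example.com:587
smtp-auth = True
smtp-username = rss2email@example.com
smtp-password = supersecret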

Once this is all set up you can add your feeds like so:

$ r2e add FeedName https://rss-bridge.example.com/.....

Of course you can add non-RSS-bridge feeds too. Just add whatever feeds you’d like to receive notifications on!
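
A few other r2e subcommands are handy for managing things once you have some feeds set up (the exact flags may vary between versions, so check r2e --help for yours):

$ r2e list            # list configured feeds with their indices
$ r2e delete 2        # remove feed number 2
$ r2e run --no-send   # fetch and mark entries read without sending email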

The last thing is to schedule this as a cron job:

14  *  *   *   *     /usr/local/bin/log-output "/usr/bin/r2e run"

I’m using the wrapper script I’ve mentioned previously. Done!

Conclusion

This has been a really simple project (by my standards). Everything went according to plan, which almost never happens! Regardless, I’m very happy with the result and it’s something I’ll continue to make use of every day.

RSS-Bridge fills a much-needed gap in the modern web. With the dominance of the big social media platforms and increasing “appification”, we’ve lost the real promise of the web to be an open and connected platform. RSS-bridge brings back at least some of this.

The addition of rss2email fulfils the hopes I had for social networks in the early days – that they would become notification platforms for events/people/things in the world around us. Instead, they’ve become locked down walled gardens which force you to use their app or website in order to engage with what’s going on.

Technology should come to us, on our own terms and via whatever medium we choose. This makes projects like RSS-Bridge, rss2email and the myriad of RSS readers out there incredibly important for those who refuse to be locked inside the gardens, but still require access to the information contained within.

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


My Road to Docker – Part 4: Automated Container Builds with GitLab CI

This post may contain affiliate links. Please see the disclaimer for more information.

This post is part of a series on this project.

In the course of migrating most of the services I run over to running as Docker containers I’ve mostly tried to steer clear of building my own images. Thanks to the popularity of Docker this has been pretty easy for the most part. Many projects now ship an official Docker image as part of their releases. However, the inevitable happened when I had to build my own image in my recent TT-RSS migration.

The reason for my reluctance to build my own images is not some unfounded fear of Dockerfiles. It’s not building the images that I’m worried about, it’s maintaining them over time.

Every Docker image contains a mini Linux distribution, frozen in time from the day the image was built. Like any distribution of Linux these contain many software packages, some of which are going to need an update sooner or later in order to maintain their security. Of course you could just run the equivalent of an apt upgrade inside each container, but this defeats some of the immutability and portability guarantees which Docker provides.

The solution to this is basically no more sophisticated than ‘rebuild your images and re-deploy’. The problem is that no further help is offered: how do you go about doing that in a consistent and automated way?

Enter GitLab CI

If you have been following this blog you’ll know I’m a big fan of GitLab CI for these kinds of jobs. Luckily, GitLab already has support for building Docker images. The first thing I had to do was get this set up on my self-hosted GitLab runner. This was complicated by the fact that my runner was already running as a Docker container. It’s basically trying to start a Docker-in-Docker container on the host, from inside a Docker container, which is a little tricky (if you want to go further down, Docker itself is running from a Snap on a Virtual Machine! It really is turtles all the way down).

After lots of messing around and tweaking, I finally came up with a runner configuration which allowed the DinD service to start and the Docker client in the CI build to see it:

concurrent = 4
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "MyRunner"
  url = "https://gitlab.com/"
  token = "insert your token"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "ubuntu:18.04"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    environment = ["DOCKER_DRIVER=overlay2", "DOCKER_TLS_CERTDIR=/certs"]
    volumes = ["/certs/client", "/cache"]
    shm_size = 0
    dns = ["my internal DNS server"]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
  [runners.custom]
    run_exec = ""

The important parts here seem to be that the privileged flag must be true in order to allow the starting of privileged containers and that the DOCKER_TLS_CERTDIR should be set to a known location. The client subdirectory of this should then be shared as a volume in order to allow clients to connect via TLS.

You can ignore any setting of the DOCKER_HOST environment variable (unless you are using Kubernetes, maybe?). It should be populated with the correct value automatically. If you set it you will most likely get it wrong and the Docker client won’t be able to connect.

The CI Pipeline

I started the pipeline with a couple of static analysis jobs. First up is my old favourite yamllint. The configuration for this is pretty much the same as previously described. Here it’s only really checking the .gitlab-ci.yml file itself.
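
For reference, a minimal version of that job looks something like the following (the stage name and image choice here are illustrative):

yamllint:
  stage: preflight
  image: python:3-alpine
  before_script:
    - pip install yamllint
  script:
    - yamllint .gitlab-ci.yml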

Next up is a new one for me. hadolint is a linting tool for Dockerfiles. It seems to make sensible suggestions, though I disabled a couple of them due to personal preference or technical issues. Conveniently, there is a pre-built Docker image, which makes configuring this pretty simple:

hadolint:
  <<: *preflight
  image: hadolint/hadolint
  before_script:
    - hadolint --version
  script:
    - hadolint Dockerfile
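
If you also want to disable rules, hadolint can read a .hadolint.yaml file from the repository root. As a sketch – the rule codes below are common examples, not necessarily the ones I disabled:

# .hadolint.yaml
ignored:
  - DL3008   # "pin versions in apt-get install"
  - DL3018   # "pin versions in apk add"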

Building the Image

Next comes the actual meat of the pipeline, building the image and pushing it to our container registry:

variables:
  IMAGE_TAG: registry.gitlab.com/robconnolly/docker-ttrss:latest

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker info
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  tags:
    - docker

This is relatively straightforward if you are already familiar with the Docker build process. There are a couple of GitLab specific parts such as starting the docker:dind service, so that we have a usable Docker daemon in the build environment. The login to the GitLab registry uses the built-in $CI_BUILD_TOKEN variable as the password and from there we just build and push using the $IMAGE_TAG variable to name the image. Simple!
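
One possible refinement, which I haven’t made here, would be to push a per-commit tag alongside latest using GitLab’s predefined $CI_COMMIT_SHORT_SHA variable, so that older images remain pullable after a rebuild. A sketch, assuming an extra IMAGE_BASE variable holding the untagged image name:

variables:
  IMAGE_BASE: registry.gitlab.com/robconnolly/docker-ttrss

# ...and in the build job's script:
  script:
    - docker build -t $IMAGE_TAG -t $IMAGE_BASE:$CI_COMMIT_SHORT_SHA .
    - docker push $IMAGE_TAG
    - docker push $IMAGE_BASE:$CI_COMMIT_SHORT_SHA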

[Image: The pipeline for this is actually rather simple and boring compared to some of my others!]

I also added another pipeline stage to send me a notification via my Gotify server when the build is done. This pretty much just follows the Gotify docs to send a message via curl:

notify:
  stage: notify
  image: alpine:latest
  before_script:
    - apk add curl
  script:
    - |
      curl -X POST "https://$GOTIFY_HOST/message?token=$GOTIFY_TOKEN" \
           -F "title=CI Build Complete" \
           -F "message=Scheduled CI build for TT-RSS is complete."
  tags:
    - docker

Of course the $GOTIFY_HOST and $GOTIFY_TOKEN variables are defined as secrets in the project configuration.

You can find the full .gitlab-ci.yml file for this pipeline in the docker-ttrss project repository.

Scheduling Periodic Builds

GitLab actually has support for scheduled jobs built in. However, if you are running on gitlab.com as I am you will find that you have very little control over when your jobs actually run, since it runs all cron jobs at 19 minutes past the hour. This would be fine for this single job, but I’m planning on having more scheduled builds run in the future, so I’d like to distribute them in time a little better, in order to reduce the load on my runner.

[Image: Pipeline triggers are created via the Settings->CI/CD menu in GitLab.]

In order to do this I’m using a pipeline trigger, kicked off by a cron job at my end:

# Run TT-RSS build on Tuesday and Friday mornings
16  10 *   *   2,5   /usr/bin/curl -s -X POST -F token=SECRET_TOKEN -F ref=master https://gitlab.com/api/v4/projects/2707270/trigger/pipeline > /dev/null

This will start my pipeline at 10:16am (NZ time) on Tuesdays and Fridays. If the build is successful there will be a new image published shortly after these times.

An Aside: Managing random cron jobs like these really sucks once you have more than one or two machines. More than once I’ve lost a job because I forgot which machine I put it on! I know Ansible can manage cron jobs, but you are still going to end up with them configured across multiple files. I also haven’t found any way of managing cron jobs which relate to a container deployment (e.g. running WordPress cron for a containerised instance). Currently I’m managing all this manually, but I can’t help feeling that there should be a better way. Please suggest any approaches you may have in the comments!
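
For what it’s worth, the Ansible version of a job like the pipeline trigger shown above would look something like this hypothetical task – though jobs still end up spread across multiple playbooks, so it only half-solves the problem:

# Hypothetical Ansible task managing the pipeline trigger cron job.
- name: Trigger TT-RSS image build on Tuesday and Friday mornings
  ansible.builtin.cron:
    name: ttrss-image-build
    minute: "16"
    hour: "10"
    weekday: "2,5"
    job: >-
      /usr/bin/curl -s -X POST -F token=SECRET_TOKEN -F ref=master
      https://gitlab.com/api/v4/projects/2707270/trigger/pipeline > /dev/null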

Updating Deployed Containers

I could extend the pipeline above to update the container on my server with the new image. However, this would leave the other containers on the server without automated updates. Therefore I’m using Watchtower to periodically check for updates and upgrade any out of date images. To do this I added the following to the docker-compose.yml file on that server:

watchtower:
    image: containrrr/watchtower
    environment:
      WATCHTOWER_SCHEDULE: 0 26,56 10,22 * * *
      WATCHTOWER_CLEANUP: "true"
      WATCHTOWER_NOTIFICATIONS: gotify
      WATCHTOWER_NOTIFICATION_GOTIFY_URL: ${WATCHTOWER_NOTIFICATION_GOTIFY_URL}
      WATCHTOWER_NOTIFICATION_GOTIFY_TOKEN: ${WATCHTOWER_NOTIFICATION_GOTIFY_TOKEN}
      TZ: Pacific/Auckland
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always

Here I’m just checking for container updates four times a day. However, these are divided into two pairs of sequential checks separated by 30 minutes. This is more than sufficient for this server since it’s only accessible from my local networks. One of the update times is scheduled to start ten minutes after my CI build to pick up the new TT-RSS container. The reason for the second check after 30 mins is to ensure that the container gets updated even if the CI pipeline is delayed (as sometimes happens on gitlab.com).

You’ll note that I’m also sending notifications from Watchtower via Gotify. I was pleasantly surprised to discover the Gotify support in Watchtower when I deployed this. However, I think the notifications will get old pretty quickly and I’ll probably disable them eventually.

Conclusion

With all this in place I no longer have to worry about keeping my custom Docker images up to date. They will just automatically build at regular intervals and be updated automatically on the server. It actually feels pretty magical watching this all work end-to-end since there are a lot of moving parts here.

I’ll be deploying this approach as I build more custom images. I also still have to deploy Watchtower on all my other Docker hosts. The good news is that this has cured my reluctance to build custom images, so there will hopefully be more to follow. In the meantime, the community can benefit from a well maintained and updated TT-RSS image. Just pull with the following command:

docker pull registry.gitlab.com/robconnolly/docker-ttrss:latest

Enjoy!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Self-Hosted RSS with Tiny Tiny RSS in Docker

This post may contain affiliate links. Please see the disclaimer for more information.

With the rise of social media, RSS seems to have been largely forgotten. However, there are still those who are dedicated enough to keep curating their own list of feeds and plenty of software to support them. I’ve always been a fan of RSS and believe it’s probably time for a resurgence in use that would free us from our algorithmic overlords. As such I’ve run an instance of Tiny Tiny RSS for several years and the time has come to migrate it to Docker.

It’s not widely known that you can still get RSS feeds for most news sources on the Internet. For example, I use TT-RSS to keep up with Youtube and Reddit (just add .rss to the end of any subreddit URL), as well as the usual blogs and news sites.
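
For example, the Reddit trick is literally just a URL suffix (any subreddit works; this one is just an example):

https://www.reddit.com/r/selfhosted   ->   https://www.reddit.com/r/selfhosted.rss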

About Tiny Tiny RSS

Tiny Tiny RSS is a web-based RSS application (think Google Reader replacement). It’s PHP-based and supports PostgreSQL or MySQL-like databases. I’ve been using it for many years and although I’ve tried out other web-based RSS readers (such as Miniflux), I’ve never found anything as good as TT-RSS.

[Image: My Tiny Tiny RSS install]

A particular favourite feature of mine is the ability to generate feeds from any internal view, which makes it great for integrating with other systems which may consume RSS/Atom.

There is also an Android app which is available via the Play Store or F-Droid.

Finding a Tiny Tiny RSS Docker Image

I had initially planned to use the LinuxServer.io TT-RSS image, but it seems to have been deprecated. With a bit of searching I found this repo; however, it’s pretty out of date and doesn’t build any more. When I looked through my GitLab repos, it turned out I’d already tried upgrading that image as part of one of my previous attempts with Docker. I’ve finished off this migration and made the repo public so everyone can benefit from my efforts. Thanks to some CI magic and GitLab’s built-in container registry you can pull the latest version like so:

docker pull registry.gitlab.com/robconnolly/docker-ttrss

Feel free to read through the project README to familiarise yourself with the options available in the image. It’s pretty much as it was in the original repository, so I’ll go through my setup below.

Setup with docker-compose

I’m integrating this with my existing Docker setup via docker-compose and Traefik. As such I added the following to my docker-compose.yml file:

ttrss:
    image: registry.gitlab.com/robconnolly/docker-ttrss:latest
    depends_on:
      - mariadb-ttrss
    environment:
      DB_NAME: ${TTRSS_DB_NAME}
      DB_USER: ${TTRSS_DB_USER}
      DB_PASS: ${TTRSS_DB_USER_PASSWD}
      DB_HOST: mariadb-ttrss
      DB_PORT: 3306
      DB_TYPE: mysql
      SELF_URL_PATH: https://ttrss.example.com
    volumes:
      - /mnt/docker-data/ttrss/plugins:/var/www/plugins.local
    labels:
      - 'traefik.enable=true'
      - "traefik.http.middlewares.ttrss_redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.ttrss_insecure.rule=Host(`ttrss.example.com`)"
      - "traefik.http.routers.ttrss_insecure.entrypoints=web"
      - "traefik.http.routers.ttrss_insecure.middlewares=ttrss_redirect@docker"
      - "traefik.http.routers.ttrss.rule=Host(`ttrss.example.com`)"
      - "traefik.http.routers.ttrss.entrypoints=websecure"
      - "traefik.http.routers.ttrss.tls.certresolver=mydnschallenge"
      - "traefik.http.services.ttrss.loadbalancer.server.port=80"
    networks:
      - external
      - internal
    restart: always

Breaking this down, we first create a new service using my TT-RSS image. We then define a dependency on the database container, which we will create later. The environment configuration uses another file, env.sh, in which we store our secrets. This is of the form:

export TTRSS_DB_ROOT_PASSWD=supersecret
export TTRSS_DB_USER=ttrss
export TTRSS_DB_USER_PASSWD=justalittlebitsecret
export TTRSS_DB_NAME=ttrss

In order to use this the file must be sourced before running docker-compose:

$ source env.sh
$ docker-compose .... # whatever you're doing

We can see that the database is configured entirely via environment variables as shown in the project README. We also set the SELF_URL_PATH variable so that TT-RSS knows where it is located (the URL should be updated for your configuration). I also chose to mount the plugins.local directory on the host machine to allow me to install plugins easily. The remainder of the configuration is for Traefik and is covered in my earlier post (you’ll need to update the hostnames used here too).

Database Setup

As mentioned earlier, we need a database container for TT-RSS to talk to. I’m using MariaDB for this because it’s what I’m familiar with. Also, my original TT-RSS installation used MySQL and I wanted to migrate the data. The setup for this is pretty simple using the official MariaDB image:

mariadb-ttrss:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: ${TTRSS_DB_ROOT_PASSWD}
      MYSQL_USER: ${TTRSS_DB_USER}
      MYSQL_PASSWORD: ${TTRSS_DB_USER_PASSWD}
      MYSQL_DATABASE: ${TTRSS_DB_NAME}
    volumes:
      - /mnt/docker-data/mariadb-ttrss/init/ttrss.sql.gz:/docker-entrypoint-initdb.d/backup.sql.gz
      - /mnt/docker-data/mariadb-ttrss/data:/var/lib/mysql
    networks:
      - internal
    restart: always

As you can see, I re-use the previous environment variables to create the database and user. I also mount the MySQL data directory locally, along with a compressed backup of my previous database. This backup will only be loaded the first time the database comes up. You can remove this line if you are doing a clean install.
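
If you’re doing a similar migration, a backup in the right format can be produced on the old host with a standard mysqldump along these lines (the database name and credentials will obviously differ):

$ mysqldump -u root -p ttrss | gzip > ttrss.sql.gz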

If you are following my install you’ll also see that I use a couple of Docker networks:

networks:
  external:
  internal:

The external network connects the local service containers to Traefik, whilst the internal network is used between TT-RSS and the database container.

With all this in place you should be able to launch your TT-RSS server with:

docker-compose up -d

At this point, it’s usually a good idea to check the container logs for problems and adjust your configuration accordingly.
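
Something like the following will tail the logs for the two containers defined above:

$ docker-compose logs -f ttrss
$ docker-compose logs -f mariadb-ttrss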

Conclusion

Aside from having to update the Docker image for TT-RSS (which took quite a while), this migration was relatively painless. I’m quite happy with my newly Dockerised TT-RSS server. In addition to migrating it into Docker, this step has also moved it off my ageing mailserver in preparation for its upcoming migration to something newer, and moved it from the cloud onto my home server. All positive steps!

Next Steps

I’m pretty keen to keep maintaining my new Docker image for Tiny Tiny RSS, since it seems to be a gap in the community that can be filled. I’m currently building this in CI, but the configuration is pretty basic. In the coming weeks, I’m intending to expand upon this a little and set up a scheduled build which will automatically keep the container up to date. This will hopefully be the topic of a follow up article, so stay tuned!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.