Quick Project: Splitting Docker Compose Projects

Way back when I first started using Docker in earnest, I wrote about my web hosting stack. Recently this has undergone an upgrade, as I’m working on a new website which will be served from the same server. I took the opportunity to split the system up into multiple docker-compose projects, which makes deploying further sites much easier. It allows me to manage the common containers from one docker-compose project and each of the sites from their own project. This will be of further use in future as I move towards deploying these with Ansible.

The Approach

My basic approach here is to move my two common containers (my Traefik container and SMTP forwarder) into their own project. This project will create a couple of networks for interfacing to the containers from other projects. To create these networks I add the following to my common project docker-compose.yml:

networks:
  gateway:
    name: 'gateway'
  smtp:
    name: 'smtp'

Here I create two networks as normal. The key is to give each a fixed name, rather than letting Docker assign an auto-generated one. This enables us to address the networks easily from our other projects. We then assign them to our common containers:

services:
  traefik:
    image: traefik:2.1
    command:
      ...
    volumes:
      ...
    ports:
      - "80:80"
      - "443:443"
      - "127.0.0.1:8080:8080"
    networks:
      gateway:
        aliases:
          # add hostnames you might want to refer to this container by
          - example.com
    restart: always

  postfix:
    image: boky/postfix
    ports:
      ...
    environment:
      ...
    volumes:
      ...
    networks:
      smtp:
        aliases:
          - postfix
    restart: always

Here I simply assign the relevant network to each container. The aliases section allows other containers on these networks to find our common containers by whatever names we specify. In the case of the postfix container this is used to connect via SMTP. For the traefik container, adding hostnames which internal apps may need to refer to can help (for example with the WordPress loopback test).
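
A quick way to check that the aliases resolve is to attach a throwaway container to one of the networks once the common project is up (a simple sanity check, relying on Docker’s built-in DNS):

$ docker run --rm --network smtp alpine nslookup postfix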

External Projects

With this in place, the other applications can be moved out into their own projects. Each one needs access to the gateway and smtp networks in order to reach our common services. These are referenced as external networks in the docker-compose.yml file for our project:

networks:
  gateway:
    external: true
  smtp:
    external: true

We then go ahead and add our services to access these networks:

services:
  varnish:
    image: wodby/varnish:latest
    depends_on:
      - wordpress
    environment:
      ...
    labels:
      - 'traefik.enable=true'
      - "traefik.docker.network=gateway"
      ...
    networks:
      - gateway
      - cache
    restart: always

  wordpress:
    image: wordpress:latest
    depends_on:
      - mariadb
    environment:
      ...
    volumes:
      ...
    networks:
      smtp:
      cache:
      database:
    restart: always

Here I add my Varnish cache, as per my previous article. The key thing here is to specify the label traefik.docker.network=gateway, which allows Traefik to reliably discover the container, and to make sure the container is actually attached to the gateway network. I’ve also added a WordPress container, which is on the smtp network. This allows WordPress to send email via the SMTP forwarder.
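
For completeness, the cache and database networks used above are ordinary project-local networks, so the full networks section of this project would look something like the following sketch (extending the external declarations shown earlier):

networks:
  gateway:
    external: true
  smtp:
    external: true
  cache:
  database: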

Conclusion

This is a pretty simple approach which allows much better management of my increasingly complex web stack. As I mentioned earlier, the next step will be to deploy these projects via Ansible. At that point the common containers will become part of a role which can be used across my infrastructure.

The splitting out of the apps into their own projects has enabled me to duplicate my current WordPress+Varnish+Mariadb setup for the new site I’m working on. There will be more info to come about that site as soon as I am ready to share!

My Road to Docker: Sorting Out SMTP

This post is part of a series on this project.

Having said in my last ‘Road to Docker’ post that I didn’t have any current plans for another post, something came up which warrants a write-up here. I managed to solve the problem in question in a Dockerised fashion and I’m quite pleased with the solution. Let’s get into it…

The Backstory

After converting my web stack over to Docker recently, I had installed the WP Mail SMTP plugin in WordPress to handle mail from the site. This was required since WordPress could no longer send via a local mail setup. I configured the plugin to send via my existing mail server. This worked well for a while – then I encountered a speed bump on my road to Docker!

For some reason (I think due to an update of the WordPress container), I just stopped receiving email from the site. Upon investigation, it seemed that the TLS connection could not be established correctly. I got the following debug log when testing mail via WordPress:

SMTP Debug:

2019-07-30 08:38:11	Connection: opening to mail.webworxshop.com:25, timeout=300, options=array (
                   	                  )
2019-07-30 08:38:11	Connection: opened
2019-07-30 08:38:11	SERVER -> CLIENT: 220 mail.webworxshop.com ESMTP Postfix
2019-07-30 08:38:11	CLIENT -> SERVER: EHLO webworxshop.com
2019-07-30 08:38:11	SERVER -> CLIENT: 250-mail.webworxshop.com
                   	                  250-PIPELINING
                   	                  250-SIZE 30720000
                   	                  250-VRFY
                   	                  250-ETRN
                   	                  250-STARTTLS
                   	                  250-ENHANCEDSTATUSCODES
                   	                  250-8BITMIME
                   	                  250 DSN
2019-07-30 08:38:11	CLIENT -> SERVER: STARTTLS
2019-07-30 08:38:11	SERVER -> CLIENT: 220 2.0.0 Ready to start TLS
2019-07-30 08:38:12	SMTP Error: Could not connect to SMTP host.
2019-07-30 08:38:12	CLIENT -> SERVER: QUIT
2019-07-30 08:38:12	SERVER -> CLIENT: 
2019-07-30 08:38:12	SMTP ERROR: QUIT command failed: 
2019-07-30 08:38:12	Connection: closed
2019-07-30 08:38:12	SMTP Error: Could not connect to SMTP host.

Looking into the logs on the mail server, I found the following corresponding error:

warning: TLS library problem: 27261:error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure:s3_pkt.c:1275:SSL alert number 40:

I googled the error, but after attempting a few of the suggested fixes, I gave up and decided to solve the problem a different way.

Fixing it, Take 1

My first attempt involved installing Postfix on the host in a smarthost configuration, relaying to my main mail server. This is the same setup I use on most of my servers for system mail from cron and the like (via a custom Ansible role).
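
For context, a smarthost setup boils down to a few lines in Postfix’s main.cf, something like the following sketch (the hostname is a placeholder; the SASL map holds the relay credentials):

# /etc/postfix/main.cf (excerpt)
relayhost = [mail.example.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt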

After getting the mail system running and able to send mail from the host, I tried to configure it in WordPress. However, I was unable to connect to the host machine from the container via the Docker host IP address. I investigated this and found that, because I was using a private network, the IP address was different from the standard Docker interface address.

Trying this address didn’t work either. Perhaps this is a security feature in Docker, or perhaps I was doing it wrong. Either way it pushed me on to a better solution.

Fixing it, Take 2

My next plan involved putting Postfix into a container. This would be put on the same private network as the WordPress container to allow access. I needed to keep the smarthost configuration to talk to the main mailserver. A quick search turned up a suitable image in the form of boky/postfix. This image is intended exactly for this purpose and I was able to set it up without too much trouble.

To spin this up I added the following to my previous docker-compose.yml file:

postfix:
    image: boky/postfix
    ports:
      - "127.0.0.1:587:587"
    environment:
      HOSTNAME: ${POSTFIX_HOSTNAME}
      RELAYHOST: ${POSTFIX_RELAYHOST}
      RELAYHOST_USERNAME: ${POSTFIX_RELAYHOST_USERNAME}
      RELAYHOST_PASSWORD: ${POSTFIX_RELAYHOST_PASSWORD}
      RELAYHOST_TLS_LEVEL: "verify"
      ALLOWED_SENDER_DOMAINS: webworxshop.com
    volumes:
      - /home/rob/docker-data/postfix/spool:/var/spool/postfix
    networks:
      - internal
    restart: always

Pretty simple! As per the previous post I put all the secrets into an env.sh file to keep them separate from the stack.
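
For reference, that env.sh is just a series of exports matching the variables above (a sketch with placeholder values):

# env.sh - sourced before running docker-compose
export POSTFIX_HOSTNAME="mail.example.com"
export POSTFIX_RELAYHOST="[mail.example.com]:587"
export POSTFIX_RELAYHOST_USERNAME="user@example.com"
export POSTFIX_RELAYHOST_PASSWORD="changeme"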

The mail forwarder is available both on the internal Docker network and on the host system, replacing the native mail forwarding setup. Setting this up in WordPress ended up being trivial (see the screenshot caption below). However, some further configuration was required to make mail from the host system work.

We just use the container name as the hostname and 587 as the port! Authentication and TLS aren’t required for the local connection.

MSMTP Setup

In order to redirect the system mail from the host via the Dockerised mail forwarder, I had to set up MSMTP. This is well documented elsewhere, so I won’t go into details. The only differences in this setup are that we don’t require authentication or TLS to the mail forwarder, because it’s only available locally. The mail forwarder itself already handles authentication and TLS to the main mail server.

For reference, here is the msmtprc file I ended up with:

# Set default values for all following accounts.
defaults
auth           off
tls            off
logfile        /var/log/msmtp/msmtp.log

# Local mailserver
account        local
host           localhost
port           587
from           someone@example.com

# Set a default account
account default : local

aliases         /etc/aliases
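
With this in place, a quick test from the host confirms that mail flows through the forwarder (substitute a real recipient address):

$ printf 'Subject: msmtp test\n\nHello from the host.\n' | msmtp recipient@example.com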

Conclusion

Here we’ve seen how to quickly deploy a mail forwarding server for our Dockerised applications. We’ve also configured the host system to use it, so we don’t need to run two forwarders.

I feel that this is the best solution to this problem (although I still don’t quite know what the original problem was!). It’s a nicer approach than running the mail forwarder directly on the host, and it has resulted in one more Dockerised application!

I’m intending to convert my other Docker servers over to this approach in the near future. For the system mail part, I still need to create an Ansible role to push out the MSMTP configuration. Actually, the whole question of Ansible/host configuration and how it fits with Dockerised services is still something I need to work out. If anyone has any ideas feel free to share in the comments.

As I said in the last post, I don’t have any more ‘Road to Docker’ posts planned in the immediate future. However, the migration is ongoing so there will be more at some point!

My Road to Docker – Part 1: My Web Stack

This post is part of a series on this project.

I’ve been following and using Docker since its early days; I think 0.6 was one of the first releases I tried. That said, I’ve never had much luck getting it deployed on meaningful parts of my infrastructure. Usually I would start some new deployment with the intention of using Docker, then hit some issue which made it difficult (or at least more difficult than it should be). Either that, or I’d run some Docker containers as a testing deployment for a while and then pull them down. It’s safe to say my road to Docker has been long, winding and full of potholes!

Recently, I’ve been giving my infrastructure choices more thought. My main goals are to reduce maintenance and keep the high level of separation and security I have between my different services. As detailed in my last post on my self-hosting setup, I’ve been using LXD fairly successfully. The problem is really that LXD containers are more like virtual machines: each container is a little Linux system which needs to be kept up to date, monitored and generally looked after. With one container per service this becomes a chore. In actuality I don’t have one LXD container per service; instead I group dependent services, such as databases, with their parent. Even so, I’ve still ended up with quite a few containers.

Decisions, decisions…

I really like the separation I’m able to have by keeping my LXD containers on separate VLANs. Docker does support this via macvlan, but the last time I played with it I had to manually assign IPs to each container. It looks like Docker will now do this automatically, but it allocates from its own IP pool and won’t use your DHCP server, which means I can’t assign IPs from my pfSense firewall. Having one public (LAN) IP per Docker service also kinda sucks, and I don’t want to rely on just one project for the security of my system.
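
For reference, a macvlan network bound to a VLAN interface looks something like this (the subnet, gateway and parent interface here are placeholders for your own network):

$ docker network create -d macvlan \
    --subnet=192.168.10.0/24 --gateway=192.168.10.1 \
    -o parent=eth0.10 vlan10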

Additionally, I need to run at least one VM anyway for my virtualised pfSense firewall. This brings me to the idea of running Docker inside VMs. Each VM can be assigned to the relevant VLAN interface and get its IP from DHCP. I also get the benefit of two levels of isolation using different technologies. I haven’t yet decided whether to use Proxmox or Ubuntu Server+Libvirt+Cockpit for the host systems; hopefully this will be the subject of a future post.

On To Today

All the above is pretty much background, because I actually don’t want to talk about my locally hosted stuff today. Instead I’m going to talk about my stuff in the cloud – specifically the hosting for this site.

I’ve used Linode since around 2011, when I set up my self-hosted mail server. Since then, I’ve hosted this site on a fairly standard LAMP setup on that same server. The mail server install is now getting on a bit, but being CentOS based it’s all still supported for updates until next year. However, in preparation for rebuilding the mail server, I decided to move the web stuff out onto another server and run it in Docker. This basically gives me the same Docker-on-VM setup as I have on my local infrastructure; just the hypervisor UI differs – it’s all KVM underneath.

Since I was moving the site to a new server, I also decided to move it closer to my audience, which according to my Matomo statistics is predominantly US based. To this end I spun up a new $5 Linode (Nanode) running Ubuntu 18.04 in their New Jersey datacenter. This should also be pretty fast for European visitors (to be honest it’s lightning fast even from NZ).

Getting Going

After doing some pretty standard ‘first 10 minutes on a server’ stuff, I set about installing Docker. Instead of installing via my usual method of adding the APT repository, I decided to see how installing from a Snap would work in production:

$ sudo snap install docker

A few seconds later Docker was installed. However, I ran into a wrinkle when trying to add myself to the docker group: even after doing so, I still couldn’t run Docker commands without root access. The solution is to add the docker group before installing the Snap, so I removed the Snap, added the group and reinstalled. The full instructions for installing Docker via Snap should therefore be:

$ sudo addgroup --system docker
$ sudo adduser $USER docker
$ newgrp docker
$ sudo snap install docker
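
A quick smoke test confirms that both the install and the group change took effect:

$ docker run --rm hello-world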

Stacking Containers

I’d resolved to run the blog inside the official WordPress container, with MariaDB for the database. Since I was moving my existing install over, it was slightly more complex than a clean install would have been, but this article put me on the right track.

The main issue I encountered didn’t crop up until the site was running and I was trying to get HTTPS working. Due to Let’s Encrypt’s HTTP verification this had to be done after the DNS settings were updated, i.e. once the site was live. The issue manifested as WordPress serving HTTP URLs for all links and embedded content, which leads to mixed-content warnings/blocks in Firefox and Chrome. It seems to be common to any setup where HTTPS is handled by a reverse proxy: WordPress isn’t aware that it should be using HTTPS and needs to be told. Adding the following to wp-config.php fixes the problem:

$_SERVER['HTTPS']='on';
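
A slightly more defensive variant, commonly suggested for reverse-proxy setups, only switches HTTPS on when the proxy reports that the original request used it (only safe if your proxy always sets or strips this header):

if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
}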

I decided to use Traefik as my reverse proxy, and I’m quite pleased with it. The automated service discovery is pretty awesome and will really come into its own on my other self-hosted infrastructure. I also whacked a Varnish cache (from this Docker image) in between. So far I haven’t done much with Varnish, but it’s there for when I get time to tweak it.

Moving Matomo

I also moved my Matomo Analytics instance to the new server using the official Docker image. This gave me much less trouble than WordPress since I just allowed it to use the embedded Matomo version with my configuration file and database from the old install.

I connected Matomo up directly to my Traefik instance, without running it through Varnish, to avoid any potential issues with the cache interfering with my statistics. Traefik just worked with this setup, which was pretty refreshing.

My Traefik Dashboard

PSA: Docker will Ignore Your Firewall!

Along the way, I ran into another issue. Before I changed the DNS settings to make the site live, I was binding Traefik to port 8080 and accessing it via an SSH tunnel for making changes in WordPress. Since I had configured UFW to block everything (except SSH) when I set up the server, I thought this was a nice secure setup for debugging.

I was wrong.

I only noticed something was off because I happened to have the log output from Traefik open and saw random IPs in the logs. Luckily, the only virtual host I had configured at that point was localhost:8080 so all the unwanted visitors got 404 responses. Needless to say I pulled down all the containers until I could work out what was going on.

This appears to be a known interaction between Docker and any firewall utility (including both UFW and Firewalld). The issue is inherent in the way Docker uses iptables to route traffic to your containers: the port-forwarding rules go in the nat table, so incoming traffic is re-routed before it ever hits the INPUT chain containing the rules from UFW/Firewalld.

I tried the suggested fix of disabling iptables support in Docker, but this completely broke inter-container connectivity (unsurprising, when you think about it). My solution for now is just to be really careful when binding ports: make sure that any ports you don’t want exposed to the outside world are bound only to 127.0.0.1. There is also the DOCKER-USER iptables chain if you need more flexibility, but that means writing raw iptables rules.
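
For example, something like the following (adapted from the Docker documentation; the interface name and port here are placeholders) blocks outside access to a published port while leaving inter-container traffic alone. Rules in DOCKER-USER are evaluated after DNAT, hence the conntrack match on the original destination port:

$ sudo iptables -I DOCKER-USER -i eth0 -p tcp -m conntrack --ctorigdstport 8080 --ctdir ORIGINAL -j DROP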

This issue is a major security flaw and needs to be given more attention. Breaking administrator expectations of security like this is going to lead to loads of services being exposed to the big bad Internet that really shouldn’t be!

My Full Web Stack

Below is the docker-compose.yml file for my full web stack. I store any secret variables in an env.sh file which I source before running my docker-compose commands.
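
In practice the workflow is simply (env.sh being a plain series of export statements, as sketched in the SMTP post above):

$ source env.sh
$ docker-compose up -d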

version: '3'
 
services:
  traefik:
    image: traefik:latest
    volumes:
      - /home/rob/docker-data/traefik/traefik.toml:/etc/traefik/traefik.toml
      - /home/rob/docker-data/traefik/acme.json:/etc/traefik/acme.json
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "80:80"
      - "443:443"
      - "127.0.0.1:8080:8080"
    networks:
      external:
        aliases:
          - webworxshop.com
          - analytics.webworxshop.com
    restart: always

  varnish:
    image: wodby/varnish:latest
    depends_on:
      - wordpress
    environment:
      VARNISH_SECRET: ${VARNISH_SECRET}
      VARNISH_BACKEND_HOST: wordpress
      VARNISH_BACKEND_PORT: 80
      VARNISH_CONFIG_PRESET: wordpress
      VARNISH_ALLOW_UNRESTRICTED_PURGE: 1
    labels:
      - 'traefik.enable=true'
      - 'traefik.backend=varnish'
      - 'traefik.port=6081'
      - 'traefik.frontend.rule=Host:webworxshop.com'
    networks:
      - external
    restart: always

  wordpress:
    image: wordpress:apache
    depends_on:
      - mariadb-wp
    environment:
      WORDPRESS_DB_HOST: ${WP_DB_HOST}
      WORDPRESS_DB_USER: ${WP_DB_USER}
      WORDPRESS_DB_PASSWORD: ${WP_DB_USER_PASSWD}
      WORDPRESS_DB_NAME: ${WP_DB_NAME}
    volumes:
      - /home/rob/docker-data/wordpress:/var/www/html
    networks:
      - external
      - internal
    restart: always
 
  mariadb-wp:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: ${WP_DB_ROOT_PASSWD}
      MYSQL_USER: ${WP_DB_USER}
      MYSQL_PASSWORD: ${WP_DB_USER_PASSWD}
      MYSQL_DATABASE: ${WP_DB_NAME}
    volumes:
      - /home/rob/docker-data/mariadb-wp/init/webworxshop.com.sql.gz:/docker-entrypoint-initdb.d/backup.sql.gz
      - /home/rob/docker-data/mariadb-wp/data:/var/lib/mysql
    networks:
      - internal
    restart: always

  matomo:
    image: matomo:apache
    depends_on:
      - mariadb-matomo
    volumes:
      - /home/rob/docker-data/matomo:/var/www/html
    labels:
      - 'traefik.enable=true'
      - 'traefik.backend=matomo'
      - 'traefik.port=80'
      - 'traefik.frontend.rule=Host:analytics.webworxshop.com'
    networks:
      - external
      - internal
    restart: always
 
  mariadb-matomo:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: ${MATOMO_DB_ROOT_PASSWD}
      MYSQL_USER: ${MATOMO_DB_USER}
      MYSQL_PASSWORD: ${MATOMO_DB_USER_PASSWD}
      MYSQL_DATABASE: ${MATOMO_DB_NAME}
    volumes:
      - /home/rob/docker-data/mariadb-matomo/init/analytics.webworxshop.com.sql.gz:/docker-entrypoint-initdb.d/backup.sql.gz
      - /home/rob/docker-data/mariadb-matomo/data:/var/lib/mysql
    networks:
      - internal
    restart: always

networks:
  external:
  internal:

This is all pretty much as described; however, there are two notable points:

  • I went for separate mariadb containers for the WordPress and Matomo databases. This is because the official mariadb container only supports creating a single database/user via environment variables. I’m not super happy with this arrangement, but it’s working well and doesn’t seem to use too much memory.
  • The aliases portion under the network configuration for the Traefik container allows the other containers to route internal requests back to themselves. This helps with things such as the loopback check in WordPress, which would otherwise fail.

That’s pretty much all there is to it!

Conclusion

Overall, I’m really happy with the way this migration has worked out. I’ve even been able to downgrade the Linode plan for the original server to the $5/month plan, since it now has less load. This means I’m paying the same as I was for the single server, but now for two servers, albeit with half the RAM each. I think I paid an extra $2-3 during the migration period, since that took me a while to complete. Even that isn’t too bad.

I’ve already started on further Docker migrations on some of the infrastructure I have on my home servers. These should be the subject of further posts.

Why email will never be Free…

Hello! This blog is not dead and neither am I. It’s been a long while since I’ve written anything here and the site is starting to look a bit abandoned. I’ve been seriously busy with what I refer to as ‘life stuff’ over the last few months and my ‘tech time’ has been a bit squeezed, so I haven’t had much to write about. I’m now getting back into the swing of things and starting to gather ideas for new posts.

Email. We all use it. It works, right? Well for the most part. It’s probably one of the most used and simplest forms of digital communication available today. Shame it’s so horribly complicated then.

My major task over the last couple of weeks has been getting my new server set up so that it hosts my own email and provides accounts for members of my family. I’ve actually been doing this on and off pretty much since I got the server, but with the release of CentOS 6 I decided to cut my losses and upgrade sooner rather than later.

So I set out installing Postfix and Dovecot, following the instructions on the CentOS wiki pages. I managed to get through all the configuration of Amavisd, ClamAV and SpamAssassin, and I installed Roundcube for webmail access. Then I started thinking about adding users for my family and came to the decision that I didn’t want to add shell users for every email account, so I would modify my setup to incorporate virtual users and domains. After following a dead end of setting this up using plain text files (which turns out to be more fiddly to administer than shell accounts), I settled on Postfix Admin with a database-backed system. Then there were my adventures with sieve, managesieve and Roundcube, which turned into a whole afternoon of getting nowhere.

Now, my gripe here isn’t with the software. Far from it: all the previously mentioned software is excellent. Nor is it even with the documentation; although that is lacking in some areas, usually a quick web search will find you a useful forum post. No, my problem is that I had to do all this in the first place in order to set up an autonomous, Free Software based mail system.

Given that I did this partly for the fun of it (yes, I appear to like the pain of making tea), this isn’t really a problem for me. When I changed over my DNS everything pretty much worked, but I’m not your average user. Now, I know there are things like iRedMail which can automate all this stuff, and if I hadn’t wanted to learn how a mail system works I probably would have tried it. However, I’m skeptical as to the reliability of such a system.

This brings us to the crux of the problem: email is too complicated! Think about all those disparate components I listed above, each of which is developed separately and has to interact seamlessly in order for the system to keep going. If one breaks, you’re screwed. I think this is probably worse in an iRedMail based system, as the administrator would have no expertise in the inner workings of the system.

This leads me to my actual point (finally!). This is why people use Gmail, Hotmail, etc. It’s not because they would otherwise have to provide their own hosting; it’s because these services make it easy. I think what I’m basically asking for is “WordPress for email”: something I can just unpack, point at a database and go, without having to know what an RBL is. Yes, this wouldn’t be as easy as Gmail, but Blogger or Tumblr are easier than self-hosted WordPress and there are still tons of WordPress blogs out there. It would put Free Software based email within the reach of the ordinary computer-literate person.

My fear is that the Free Software community treats email as a largely solved problem. We have loads of great software, which works for those of us with beards. However, until we make it easy to use, simple, cohesive and pretty, email is destined to languish in the land of the non-Free.