Integrating Remote Servers Into My Local Network

This post may contain affiliate links. Please see the disclaimer for more information.

I’ve been using Linode for many years to host what I consider to be my most “production grade” self-hosted services, namely this blog and my mail server. My initial Linode server was built in 2011 on CentOS 6. This is approaching end of life, so I’ve started building its replacement. Since originally building this server my home network has grown up and now provides a myriad of services. When starting to build the new server, I thought it would be nice to be able to make use of these more easily from my remote servers. So I’ve begun some work to integrate the two networks more closely.

Integration Points

There are a few integration points I’m targeting here, some of which I’ve done already and others are still to be done:

  • Get everything onto the same network, or at least on different subnets of my main network so I can control traffic between networks via my pfSense firewall. This gives me the major benefit of being able to access selected services on my local network from the cloud without having to make that service externally accessible. In order to do this securely you want to make sure the connection is encrypted – i.e. you want a VPN. I’m using OpenVPN.
  • Use ZFS snapshots for backing up the data on the remote systems. I’d previously been using plain old rsync for copying the data down locally where it gets rolled into my main backups using restic. There is nothing wrong with this approach, but using ZFS snapshots gives more flexibility for restoring back to a certain point without having to extract the whole backup.
  • Integrate into my check_mk setup for monitoring. I’m already doing this and deploying the agent via Ansible/CI. However, now the agent connection will go via the VPN tunnel (it’s SSH anyway, so this doesn’t make a huge difference).
  • Deploy the configuration to everything with Ansible and Gitlab CI – I’m still working on this!
  • Build a centralised logging server and send all my logs to it. This will be a big win, but sits squarely in the to-do column. However, it will benefit from the presence of the VPN connection, since the syslog protocol isn’t really suitable for running over the big-bad Internet.

Setting Up OpenVPN

I’m setting this up with the server being my local pfSense firewall and the clients being the remote cloud machines. This is somewhat the reverse of what you’d expect, since the remote machines have static IPs. My local IP is dynamic, but DuckDNS does a great job of not making this a problem.

The server setup is simplified somewhat due to using pfSense with the OpenVPN Client Export package. I’m not going to run through the full server setup here – the official documentation does a much better job. One thing worth noting is that I set this up as the second OpenVPN server running on my pfSense box. This is important as I want these clients to be on a different IP range, so that I can firewall them well. The first VPN server runs my remote access VPN which has unrestricted access, just as if I were present on my LAN.

In order to create the second server, I just had to select a different UDP port and set the IP range I wanted in the wizard. It should also be noted that the VPN configuration is set up not to route any traffic through it by default. This prevents all the traffic from the remote server trying to go via my local network.

On the client side, I’m using the standard OpenVPN package from the Ubuntu repositories:

$ sudo apt install openvpn

After that you can extract the configuration zip file from the server and test with OpenVPN in your terminal:

$ unzip <your_config>.zip
$ cd <your_config>
$ sudo openvpn --config <your_config>.ovpn

After a few seconds you should see the client connect and should be able to ping the VPN address of the remote server from your local network.

Always On VPN Connection

To make this configuration persistent we first move the files into /etc/openvpn/client, renaming the config file to give it the .conf extension:

$ sudo mv <your_config>.key /etc/openvpn/client/client.key
$ sudo mv <your_config>.p12 /etc/openvpn/client/client.p12
$ sudo mv <your_config>.ovpn /etc/openvpn/client/client.conf

You’ll want to update the pkcs12 and tls-auth lines to point to the new .p12 and .key files. I used full paths here just to make sure it would work later. I also added a route to my local network in the client config:

route 10.0.0.0 255.255.0.0
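
Putting those changes together, the relevant lines of the client config end up looking something like the following sketch (the tls-auth key direction and your local subnet may differ for your setup):

pkcs12 /etc/openvpn/client/client.p12
tls-auth /etc/openvpn/client/client.key 1
route 10.0.0.0 255.255.0.0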

You should then be able to activate the OpenVPN client service via systemctl:

$ sudo systemctl start openvpn-client@client.service
$ sudo systemctl enable openvpn-client@client.service

If you check your system logs, you should see the connection come up again. It’ll now persist across reboots and should also reconnect if the connection goes down for any reason. So far it’s been 100% stable for me.
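
On a systemd-based distribution the easiest way to check is to follow the unit’s journal:

$ sudo journalctl -u openvpn-client@client.service -f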

At this point I added a DNS entry on my pfSense box to allow me to access the remote machine via its hostname from my local network. This isn’t required, but it’s quite nice to have. The entry points to the VPN address of the machine, so all traffic will go via the tunnel.

Firewall Configuration

Since these servers have publicly available services running on them, I don’t want them to have unrestricted access to my local network. Therefore, I’m blocking all incoming traffic from the new VPN’s IP range in pfSense. I’ll then add specific exceptions for the services I want them to access. This is pretty much how you would set up a standard DMZ.

The firewall rules for the OpenVPN interface, note the SSH rule to allow traffic for our ZFS snapshot sync later

To do this I added an alias for the IP range in question and then added a block rule on the OpenVPN firewall tab in pfSense. This is slightly different to the way my DMZ is set up, since I don’t want to block all traffic on the OpenVPN interface, just traffic from that specific IP range (to allow my remote access VPN to continue working!).

You’ll probably also want to configure the remote server to accept traffic from the VPN so that you can access any services on the server from your local network. Do this with whatever Linux firewall tool you usually use (I use ufw).
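
As a rough sketch with ufw, assuming the tunnel comes up as tun0 and your local network sits in the 10.0.0.0/16 range used in the route above, that might look like:

$ sudo ufw allow in on tun0 from 10.0.0.0/16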

Storing Data on ZFS

And now for something completely different….

As discussed before, I was previously backing up the data on these servers with rsync. However, I was missing the snapshotting I get on my local systems. These local systems mount their data directories via NFS to my main home server, which then takes care of the snapshot duties. I didn’t want to use NFS over the VPN connection for performance reasons, so I opted for local snapshots and ZFS replication.

In order to mount a ZFS pool on our cloud VM we need a device to store our data on. I could add some block storage to my Linodes (and I may in future), but I can also use a loopback file in ZFS (and not have to pay for extra space). To do this I just created a 15G blank file and created the zpool on top of that:

$ sudo mkdir /zpool
$ sudo dd if=/dev/zero of=/zpool/storage bs=1G count=15
$ sudo apt install zfsutils-linux
$ sudo zpool create -m /storage storage /zpool/storage

I can then go about creating my datasets (one for the mail storage and one for docker volumes):

$ sudo zfs create storage/mail
$ sudo zfs create storage/docker-data

Automating ZFS Snapshots

To automate my snapshots I’m using Sanoid. To install it (and its companion tool Syncoid) I did the following:

$ sudo apt install pv lzop mbuffer libconfig-inifiles-perl git
$ git clone https://github.com/jimsalterjrs/sanoid
$ sudo mv sanoid /opt/
$ sudo chown -R root:root /opt/sanoid
$ sudo ln /opt/sanoid/sanoid /usr/sbin/
$ sudo ln /opt/sanoid/syncoid /usr/sbin/

Basically all we do here is install a few dependencies and then download Sanoid and install it in /opt. I then hard link the sanoid and syncoid executables into /usr/sbin so that they are on the path. We then need to copy over the default configuration:

$ sudo mkdir /etc/sanoid
$ sudo cp /opt/sanoid/sanoid.conf /etc/sanoid/sanoid.conf
$ sudo cp /opt/sanoid/sanoid.defaults.conf /etc/sanoid/sanoid.defaults.conf

I then edited the sanoid.conf file for my setup. My full configuration is shown below:

[storage/mail]
        use_template=production

[storage/docker-data]
        use_template=production
        recursive=yes

#############################
# templates below this line #
#############################

[template_production]
        frequently = 0
        hourly = 36
        daily = 30
        monthly = 12
        yearly = 2
        autosnap = yes
        autoprune = yes

This is pretty self-explanatory. Right now I’m keeping loads of snapshots; I’ll pare this down later if I start to run out of disk space. The storage/docker-data dataset has recursive snapshots enabled because I will most likely make each Docker volume its own dataset.

This is all capped off with a cron job in /etc/cron.d/zfs-snapshots:

*  *    * * *   root    TZ=UTC /usr/local/bin/log-output '/usr/sbin/sanoid --cron'

Since my rant a couple of weeks ago, I’ve been trying to assemble some better practices around cron jobs. The log-output script is one of these, from this excellent article.
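
I won’t reproduce the script from that article here, but conceptually it only needs to run the quoted command and push its output somewhere more useful than cron’s default mail handling. A minimal sketch might be:

#!/bin/bash
# Minimal sketch of a log-output style wrapper (the script from the linked
# article is more thorough). The command arrives as a single quoted string,
# is run via a shell, and all of its output is sent to syslog.
bash -c "$1" 2>&1 | logger -t "cron: $1"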

Syncing the Snapshots Locally

The final part of the puzzle is using Sanoid’s companion tool Syncoid to sync these down to my local machine. This seems difficult to do in a secure way, due to the permissions that zfs receive needs. I tried to use delegated permissions, but it looks like the mount permission doesn’t work on Linux.

The best I could come up with was to add a new unprivileged user and allow it to only run the zfs command with sudo by adding the following via visudo:

syncoid ALL=(ALL) NOPASSWD:/sbin/zfs

I also set up an SSH key on the remote machine and added it to the syncoid user on my home server. Usually I would restrict the commands that could be run via this key for added security, but it looks like Syncoid does quite a bit so I wasn’t sure how to go about this (if any one has any idea let me know).
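
The key setup is along these lines, run as root on the remote server since that’s the user the cron job below runs as (adjust hostnames and key types to taste):

$ sudo ssh-keygen -t ed25519
$ sudo ssh-copy-id syncoid@<MY HOME SERVER>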

With that in place we can test our synchronisation:

$ sudo syncoid -r storage/mail syncoid@<MY HOME SERVER>:storage/backup/mail
$ sudo syncoid -r storage/docker-data syncoid@<MY HOME SERVER>:storage/docker/`hostname`

For this to work you should make sure that the parent datasets are created on the receiving server, but not the destination datasets themselves; Syncoid likes to create those itself.
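
In my case that meant making sure something like the following existed on the home server beforehand, matching the destination paths used above:

$ sudo zfs create storage/backup
$ sudo zfs create storage/docker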

I then wrote a quick script to automate this which I dropped in /root/replicator.sh:

#!/bin/bash

USER=syncoid
HOST=<MY HOME SERVER>

HOSTNAME=$(hostname)

/usr/sbin/syncoid -r storage/mail $USER@$HOST:storage/backup/mail 2>&1
/usr/sbin/syncoid -r storage/docker-data $USER@$HOST:storage/docker/$HOSTNAME 2>&1

Then another cron job in /etc/cron.d/zfs-snapshots finishes the job:

56 *    * * *   root    /usr/local/bin/log-output '/root/replicator.sh'

Conclusion

Phew! There was quite a bit there. Thanks for sticking with me if you made it this far!

With this setup I’ve come a pretty long way towards my goal of better integrating my remote servers. So far I’ve only deployed this to a single server (which will become the new mailserver). There are a couple of others to go, so the next step will be to automate as much as possible of this via Ansible roles.

I hope you’ve enjoyed this journey with me. I’m interested to hear how others are integrating remote and local networks together. Let me know if you have anything to add via the feedback channels.

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Keeping Baddies Away with Occupancy Simulation and Home Assistant

This post may contain affiliate links. Please see the disclaimer for more information.

Home security is one of the more interesting and useful applications of home automation technologies. Of course, it’s a whole field and industry in itself, with all manner of products and services available. The first part of any security system should be good physical security (good doors, windows and locks). Once you get past this it seems like you can go two ways: prevention or detection.

Most traditional systems are in the detection camp. They aim to detect an intruder and alert someone, either via the traditional loud noise or other means. The presence of an alarm box or sign may provide a minimal amount of prevention capability, but not much. Camera systems on the other hand provide both pretty robust prevention as well as (increasingly) advanced detection capabilities.

The third option is to ramp up the prevention angle by making it look like someone is home. It’s this that I’m going to tackle here, with an occupancy simulation application for Home Assistant called OccuSim.

Note: I’m not a home security specialist and you should definitely consult a professional. This post is provided for informational purposes and should not be relied upon to provide a full security system. The author accepts no responsibility for any consequence of you relying on this information and/or not consulting someone who actually does this stuff for a living. YOU HAVE BEEN WARNED!

What is Occupancy Simulation?

Basically it’s pretending you are home, by switching on and off things as you would when you’re there. The reasoning behind it is that potential intruders are less likely to invade a house with people inside. I’m sure there are parts of the world where that doesn’t hold. However, I’d guess it applies in enough places to be useful.

Methods of occupancy simulation range from the very low tech “light on a timer” to the high tech approach we’re going to take here, using Home Assistant. The key is to make the sequence as close to your normal usage pattern as possible, so that anyone potentially watching your house can’t tell the difference.

Introducing OccuSim

OccuSim is a tool for defining complex occupancy simulation rules, for use with Home Assistant. These rules are defined as sequences which proceed from beginning to end (with potentially some randomness applied). In this way you can program in a full day’s operation. There is also a fully random mode, which works better for non-scheduled events. Both modes can be combined to give a good representation of your normal home operation.

OccuSim is actually an AppDaemon app, so you will need AppDaemon installed as a dependency. I’m not going to cover that now. You can either see the link above or read my previous post on the subject.

Once you have a functional AppDaemon install you can install OccuSim by cloning the repository into the apps subdirectory of your AppDaemon configuration. Rather than just a pure clone, I prefer to add it as a submodule in my AppDaemon configuration repository. This makes it easy to reinstall if you move your configuration around:

cd apps
git submodule add https://github.com/acockburn/occusim.git occusim
git commit -m "Add OccuSim"

With this setup, updating OccuSim is a little more involved. This is because you also need to commit the submodule change:

cd apps/occusim
git pull origin
cd ../..
git add apps/occusim
git commit -m "Update OccuSim"

Initial Configuration

OccuSim is configured as an app in your apps.yaml file. There is some initial config before you get down to creating your simulation sequences:

occupancy_simulator:
  class: OccuSim
  module: occusim
  log: '1'
  notify: '1'
  enable: input_boolean.vacation_mode,on
  test: '0'
  dump_times: '1'
  reset_time: '02:00:00'

Here I add a new app to my setup, called occupancy_simulator. We set the Python class to load as OccuSim from the module occusim, since we installed it in its own subdirectory. I set OccuSim to log its activity and to notify when a sequence step is activated. This will use the default notify.notify service in HASS, so you’d better have that set up to go to the right place. I haven’t found a way to change the notifier that it uses.

The enable setting takes an input_boolean entity and a state in which OccuSim should be active. Here I use my vacation mode toggle. I actuate mine manually, but it’s perfectly possible to set this from a HASS automation if you so desire.

The last few settings are some basic housekeeping. Test mode is disabled to ensure that OccuSim does its thing (though this can be useful when debugging your sequences). I also set the dump_times option to true so that I can see the times of the steps in the logs. I then set the time when I want to re-calculate the steps for the upcoming day. In my case this is set to 2am.

Setting up Sequences

A simple sequence might be the following:

step_evening_name: Evening
step_evening_start: 'sunset'
step_evening_on_1: scene.main_lights

This sequence simply turns on my main lights scene at sunset. The configuration variables take the form step_<id>_<variable> where <id> is a custom identifier that needs to be common across all variables for the step. Multiple entities can be turned on or off by appending numbers to the variables in question, e.g. step_evening_on_1, step_evening_on_2, etc.

You can add an offset to the times for sunset or sunrise, such as sunset - 00:20:00 to trigger 20 minutes before sunset. You can of course also specify an absolute time in the form HH:MM:SS.
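
For example, a hypothetical step like this (the entity name is made up for illustration) would switch on a porch light 20 minutes before sunset:

step_porch_name: Porch
step_porch_start: 'sunset - 00:20:00'
step_porch_on_1: light.porch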

More Advanced Sequences

So far that’s great, but we could have just turned on a light at sunset with a basic HASS automation rule. The real power of OccuSim comes in randomising the sequence times (within bounds) and creating multi-step sequences.

Expanding on our previous example:

step_evening_name: Evening
step_evening_start: 'sunset - 00:40:00'
step_evening_end: 'sunset - 00:10:00'
step_evening_on_1: scene.main_lights

This will turn on the lights at a random time between 40 minutes before sunset and ten minutes before sunset.

Let’s now create a multi-step sequence:

step_movie1_name: Movie Scene
step_movie1_start: '20:00:00'
step_movie1_end: '20:30:00'
step_movie1_on_1: scene.movie

step_movie2_name: Movie Scene Pause
step_movie2_relative: Movie Scene
step_movie2_start_offset: '00:35:00'
step_movie2_end_offset: '00:45:00'
step_movie2_on_1: script.downlights_bright

step_movie3_name: Movie Scene Play
step_movie3_relative: Movie Scene Pause
step_movie3_start_offset: '00:03:00'
step_movie3_end_offset: '00:06:00'
step_movie3_on_1: scene.movie

Here I start a sequence that will execute my movie scene sometime between 8pm and 8.30pm. I then specify a second step which will emulate us pausing the movie and putting the kitchen lights on. This happens sometime between 35 and 45 minutes later and is relative to the previous step. This means that whatever time the previous step is executed the second step will always come 35-45 minutes after that. The third step in the sequence is another relative one. This will execute 3-6 minutes after the previous one and emulates us starting up the movie again.

Random Mode

As mentioned earlier, you can also create totally random events. For example you might use the following to simulate overnight bathroom trips:

random_bathroom_name: Overnight Bathroom
random_bathroom_start: Bedtime
random_bathroom_end: Morning
random_bathroom_minduration: 00:02:00
random_bathroom_maxduration: 00:05:00
random_bathroom_number: 2
random_bathroom_on_1: light.bathroom
random_bathroom_off_1: light.bathroom

step_bedtime_name: Bedtime
step_bedtime_start: '21:30:00'
step_bedtime_end: '22:30:00'

step_morning_name: Morning
step_morning_start: 'sunrise + 00:20:00'

This configuration creates two randomised events lasting 2-5 minutes, which turn on and off the bathroom light. These are defined as starting and ending relative to other steps in the sequence. I’ve defined the Bedtime and Morning steps in my example to illustrate this. These steps can of course contain actions of their own just like the steps above. The full configuration basically says “sometime starting between 9.30pm and 10.30pm and lasting until 20 minutes after sunrise there will be two instances of the bathroom light coming on for 2-5 minutes”. Pretty cool!

Conclusion

OccuSim is a pretty powerful tool, which allows you to create complex occupancy simulation rules with Home Assistant. I’ve only really just begun to map out my own simulations. You can find these on GitLab. Some of the examples here are based upon my rules, but the real times have been changed. I’m going to continue expanding upon this in the coming weeks. Hopefully I’ll end up with a pretty accurate simulation.

I’m also exploring further home security options and will hopefully be expanding my existing ZoneMinder setup this year. Stay tuned to the blog for more info when that happens.

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


My Road to Docker – Part 4: Automated Container Builds with GitLab CI

This post may contain affiliate links. Please see the disclaimer for more information.

This post is part of a series on this project. Here is the series so far:


In the course of migrating most of the services I run over to Docker containers, I’ve mostly tried to steer clear of building my own images. Thanks to the popularity of Docker this has been pretty easy for the most part. Many projects now ship an official Docker image as part of their releases. However, the inevitable happened when I had to build my own image in my recent TT-RSS migration.

The reason for my reluctance to build my own images is not some unfounded fear of Dockerfiles. It’s not building the images that I’m worried about, it’s maintaining them over time.

Every Docker image contains a mini Linux distribution, frozen in time from the day the image was built. Like any distribution of Linux these contain many software packages, some of which are going to need an update sooner or later in order to maintain their security. Of course you could just run the equivalent of an apt upgrade inside each container, but this defeats some of the immutability and portability guarantees which Docker provides.

The solution to this is basically no more sophisticated than ‘rebuild your images and re-deploy’. The problem is that no further help is offered: how do you go about doing that in a consistent and automated way?

Enter GitLab CI

If you have been following this blog you’ll know I’m a big fan of GitLab CI for these kinds of jobs. Luckily, GitLab already has support for building Docker images. The first thing I had to do was get this set up on my self-hosted GitLab runner. This was complicated by the fact that my runner was already running as a Docker container. It’s basically trying to start a Docker-in-Docker container on the host, from inside a Docker container, which is a little tricky (if you want to go further down, Docker itself is running from a Snap on a Virtual Machine! It really is turtles all the way down).

After lots of messing around and tweaking, I finally came up with a runner configuration which allowed the DinD service to start and the Docker client in the CI build to see it:

concurrent = 4
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "MyRunner"
  url = "https://gitlab.com/"
  token = "insert your token"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "ubuntu:18.04"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    environment = ["DOCKER_DRIVER=overlay2", "DOCKER_TLS_CERTDIR=/certs"]
    volumes = ["/certs/client", "/cache"]
    shm_size = 0
    dns = ["my internal DNS server"]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
  [runners.custom]
    run_exec = ""

The important parts here seem to be that the privileged flag must be true in order to allow the starting of privileged containers and that the DOCKER_TLS_CERTDIR should be set to a known location. The client subdirectory of this should then be shared as a volume in order to allow clients to connect via TLS.

You can ignore any setting of the DOCKER_HOST environment variable (unless you are using Kubernetes, maybe?). It should be populated with the correct value automatically. If you set it you will most likely get it wrong and the Docker client won’t be able to connect.

The CI Pipeline

I started the pipeline with a couple of static analysis jobs. First up is my old favourite yamllint. The configuration for this is pretty much the same as previously described. Here it’s only really checking the .gitlab-ci.yml file itself.

Next up is a new one for me. hadolint is a linting tool for Dockerfiles. It seems to make sensible suggestions, though I disabled a couple of them due to personal preference or technical issues. Conveniently, there is a pre-built Docker image, which makes configuring this pretty simple:

hadolint:
  <<: *preflight
  image: hadolint/hadolint
  before_script:
    - hadolint --version
  script:
    - hadolint Dockerfile
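
The exact rules I’ve disabled are in the project repository, but the general mechanism is a .hadolint.yaml file in the repository root, along the lines of the following (the rule IDs here are only illustrative, not necessarily the ones I ignore):

ignored:
  - DL3008
  - DL3018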

Building the Image

Next comes the actual meat of the pipeline, building the image and pushing it to our container registry:

variables:
  IMAGE_TAG: registry.gitlab.com/robconnolly/docker-ttrss:latest

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker info
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  tags:
    - docker

This is relatively straightforward if you are already familiar with the Docker build process. There are a couple of GitLab specific parts such as starting the docker:dind service, so that we have a usable Docker daemon in the build environment. The login to the GitLab registry uses the built-in $CI_BUILD_TOKEN variable as the password and from there we just build and push using the $IMAGE_TAG variable to name the image. Simple!

The pipeline for this is actually rather simple and boring compared to some of my others!

I also added another pipeline stage to send me a notification via my Gotify server when the build is done. This pretty much just follows the Gotify docs to send a message via curl:

notify:
  stage: notify
  image: alpine:latest
  before_script:
    - apk add curl
  script:
    - |
      curl -X POST "https://$GOTIFY_HOST/message?token=$GOTIFY_TOKEN" \
           -F "title=CI Build Complete" \
           -F "message=Scheduled CI build for TT-RSS is complete."
  tags:
    - docker

Of course the $GOTIFY_HOST and $GOTIFY_TOKEN variables are defined as secrets in the project configuration.

You can find the full .gitlab-ci.yml file for this pipeline in the docker-ttrss project repository.

Scheduling Periodic Builds

GitLab actually has support for scheduled jobs built in. However, if you are running on gitlab.com as I am you will find that you have very little control over when your jobs actually run, since it runs all cron jobs at 19 minutes past the hour. This would be fine for this single job, but I’m planning on having more scheduled builds run in the future, so I’d like to distribute them in time a little better, in order to reduce the load on my runner.

Pipeline triggers are created via the Settings->CI/CD menu in GitLab.

In order to do this I’m using a pipeline trigger, kicked off by a cron job at my end:

# Run TT-RSS build on Tuesday and Friday mornings
16  10 *   *   2,5   /usr/bin/curl -s -X POST -F token=SECRET_TOKEN -F ref=master https://gitlab.com/api/v4/projects/2707270/trigger/pipeline > /dev/null

This will start my pipeline at 10:16am (NZ time) on Tuesdays and Fridays. If the build is successful there will be a new image published shortly after these times.

An Aside: Managing random cron jobs like these really sucks once you have more than one or two machines. More than once I’ve lost a job because I forgot which machine I put it on! I know Ansible can manage cron jobs, but you are still going to end up with them configured across multiple files. I also haven’t found any way of managing cron jobs which relate to a container deployment (e.g. running WordPress cron for a containerised instance). Currently I’m managing all this manually, but I can’t help feeling that there should be a better way. Please suggest any approaches you may have in the comments!
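
For what it’s worth, the pipeline trigger cron job above could at least be captured in an Ansible task, something like this sketch (with the trigger token coming from a vaulted variable):

- name: Schedule TT-RSS image build trigger
  cron:
    name: ttrss-image-build
    minute: "16"
    hour: "10"
    weekday: "2,5"
    user: root
    job: "/usr/bin/curl -s -X POST -F token={{ gitlab_trigger_token }} -F ref=master https://gitlab.com/api/v4/projects/2707270/trigger/pipeline > /dev/null"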

Updating Deployed Containers

I could extend the pipeline above to update the container on my server with the new image. However, this would leave the other containers on the server without automated updates. Therefore I’m using Watchtower to periodically check for updates and upgrade any out of date images. To do this I added the following to the docker-compose.yml file on that server:

watchtower:
    image: containrrr/watchtower
    environment:
      WATCHTOWER_SCHEDULE: 0 26,56 10,22 * * *
      WATCHTOWER_CLEANUP: "true"
      WATCHTOWER_NOTIFICATIONS: gotify
      WATCHTOWER_NOTIFICATION_GOTIFY_URL: ${WATCHTOWER_NOTIFICATION_GOTIFY_URL}
      WATCHTOWER_NOTIFICATION_GOTIFY_TOKEN: ${WATCHTOWER_NOTIFICATION_GOTIFY_TOKEN}
      TZ: Pacific/Auckland
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always

Here I’m just checking for container updates four times a day. However, these are divided into two pairs of sequential checks separated by 30 minutes. This is more than sufficient for this server since it’s only accessible from my local networks. One of the update times is scheduled to start ten minutes after my CI build to pick up the new TT-RSS container. The reason for the second check after 30 mins is to ensure that the container gets updated even if the CI pipeline is delayed (as sometimes happens on gitlab.com).

You’ll note that I’m also sending notifications from Watchtower via Gotify. I was pleasantly surprised to discover the Gotify support in Watchtower when I deployed this. However, I think the notifications will get old pretty quickly and I’ll probably disable them eventually.

Conclusion

With all this in place I no longer have to worry about keeping my custom Docker images up to date. They will just automatically build at regular intervals and be updated automatically on the server. It actually feels pretty magical watching this all work end-to-end since there are a lot of moving parts here.

I’ll be deploying this approach as I build more custom images. I also still have to deploy Watchtower on all my other Docker hosts. The good news is that this has cured my reluctance to build custom images, so there will hopefully be more to follow. In the meantime, the community can benefit from a well maintained and updated TT-RSS image. Just pull with the following command:

docker pull registry.gitlab.com/robconnolly/docker-ttrss:latest

Enjoy!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Smart Outdoor Solar Powered Lighting with the ESP32

This post may contain affiliate links. Please see the disclaimer for more information.

The primary form of heating in our home is a wood burning stove. You’re probably thinking that this isn’t very ‘smart’ and you’d be right. It’s the least smart part of my smarthome, requiring daily lighting and monitoring during the winter months. What it is though is eco-friendly and cheap to run. Also, nothing quite beats a wood fire for warmth and hygge. The point of all this is that we have a wood shed (two actually) at the back of our house. During dark winter nights, trips to these have been fraught with danger and mystery due to a lack of lighting. That is until now: this winter my completely over-engineered, ESP32 driven, solar powered, smart LED lights will light the way!

Of course, I could just use a head torch for my after dark trips to the wood shed. Or buy any number of (doubtless cheap and nasty) solar powered lights from the local DIY store. However, that didn’t seem very fun and wouldn’t integrate with Home Assistant and the rest of my smarthome.

Components

For these lights I decided to use a WS2812B light strip driven by an ESP32. The solar power system would utilise an off-the-shelf solar controller and lead-acid battery, along with a 5W solar panel which I acquired cheaply from work.

Full Parts List

The specs of the solar panel

GCode for the Ender 3 is also available for the 3D printed parts. If you have another printer you’ll need to slice the STL files yourself. I had a few issues with some of the small brackets adhering to the bed, but I was able to get a high enough yield thanks to the OctoPrint-Cancelobject plugin.

The LED brackets
The battery bracket

The charge controller is rather over-spec’d for the LEDs I’m actually running (which pull less than 2A at the 12V battery voltage, on full brightness). This is because I originally thought I would use the full 5m of LED strip that I ordered. However when it arrived and I saw how bright they are I downgraded to just 2m. I had also ordered a beefier power supply for this purpose. The one I ended up using was left over from my failed tablet salvaging efforts. This means I have 3m of LED strip and a decent power supply left over for a future project.

Putting it Together

I spent the better part of a day putting together the components and soldering all the cabling. I then spent at least another day mounting things on the shed and fixing issues. All the joins between cables are double covered with heatshrink tubing. The first layer insulates the cable from any neighbouring cables. The second layer is for waterproofing and has roof sealant injected into it. The same is done on the joins between the cable and the LED strip. Hopefully this will stand up to the intense New Zealand rain. Seriously, you have not seen rain until you’ve seen NZ rain! Luckily most of the components are mounted out of the worst of the weather. The cables enter the box through the IP68 cable gland which should be weather tight.

Mounting the electronics in the case
The battery is held securely with that bracket
Mounting the first LED bracket
The LED strip mounted to the shed

When I tested the LEDs after soldering the lead out wires to them I found that the colours were off and transitions and effects were flickery. This could have been due to either a power issue or an issue on the data line. Measuring the 5V line showed minimal voltage drop there. Since the LEDs were OK before adding the lead out I went with the data issue.

The sacrificial pixel assembly before mounting in the case

The issue turned out to be due to voltage drop on the 3.3V signal from the ESP32 along the lead out cable. The data line is quite sensitive to voltage drop here because the LEDs are supposed to receive a 5V data signal. They work with 3.3V, but not much lower. In order to solve this I added a single extra pixel at the microcontroller end to boost the signal voltage from 3.3V to 5V. This solves the issue because each pixel in the chain regenerates the data signal at its own supply voltage.

ESPHome Code

The ESPHome code to drive the lights is fairly simple. First we start with the standard setup:

---
esphome:
  name: shed_lights
  platform: ESP32
  board: esp32doit-devkit-v1

wifi:
  ssid: !secret wifi_ssid2
  password: !secret wifi_passwd2
  use_address: !secret shed_lights_ip

mqtt:
  broker: !secret mqtt_broker
  username: !secret mqtt_user
  password: !secret mqtt_passwd

# Enable logging
logger:

ota:
  password: !secret ota_passwd

Here we define the board type, set up the wifi and MQTT connections, enable logging and set up OTA updates. If you’re wondering why I’m using MQTT rather than the ESPHome API, it’s for no other reason than I like MQTT!

power_supply:
  - id: 'led_power'
    pin:
      number: GPIO25
      inverted: true

Next I set up a power supply component. This is used along with the single channel relay to automatically power up and down the power supply to the LEDs. This saves a bit of power and also makes sure that there is no power flowing in the cables outside the box for most of the time, which may help in the event of a leaky connection. In order to do this I’m not running the ESP32 from the same 5V supply, instead using one of the USB ports from the solar controller.

The LED strip configuration is then pretty standard:

light:
  - platform: fastled_clockless
    chipset: WS2812B
    pin: GPIO23
    num_leds: 61
    rgb_order: GRB
    name: "Shed Lights"
    effects:
      - addressable_rainbow:
      - addressable_color_wipe:
      - addressable_scan:
      - addressable_twinkle:
      - addressable_random_twinkle:
      - addressable_fireworks:
      - addressable_flicker:
    power_supply: 'led_power'

Note that there are 61 pixels here: that’s 2m at 30 pixels per metre, plus one sacrificial voltage-boosting pixel. The addition of the effects is a bit of a gimmick since I’m mostly interested in white light for the application. I only bought the RGB LEDs because the price difference wasn’t enough to justify only buying white.

The finished electronics mounted to the shed
The solar panel is mounted to the fence behind the shed
The electronics box and panel on the end of the shed

Voltage Sensing and Health Monitoring

After putting all this together and mounting it on the shed I decided that I’d like to have some form of monitoring for the voltages from the battery and solar panel. The solar controller obviously monitors these but there is no way to get this data out.

In the end I soldered up a couple of voltage divider circuits and added these to the setup in the box. I also added a DHT22 sensor for temperature and humidity sensing inside the box.

Schematic of the voltage dividers used for the power monitoring circuits
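
The multiply filters in the ESPHome configuration below are just the inverse of each divider’s ratio. As a quick illustration (these resistor values are assumptions for the example, not necessarily what I fitted):

Vadc = Vbatt * R2 / (R1 + R2)
multiply = (R1 + R2) / R2
e.g. R1 = 100k, R2 = 30k  =>  multiply = 130 / 30 ≈ 4.33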

The ESPHome configuration for these follows. I also added a binary status sensor and a WiFi signal sensor to allow me to monitor the system remotely.

binary_sensor:
  - platform: status
    name: "Shed Lights Status"

sensor:
  - platform: wifi_signal
    name: "Shed WiFi Signal"
    update_interval: 180s
  - platform: dht
    pin: GPIO15
    model: AM2302
    temperature:
      name: "Shed Battery Box Temperature"
    humidity:
      name: "Shed Battery Box Humidity"
    update_interval: 180s
  - platform: adc
    pin: GPIO39
    name: "Shed Battery Voltage"
    icon: "mdi:car-battery"
    attenuation: "11db"
    filters:
      - multiply: 4.24
      - sliding_window_moving_average:
          window_size: 12
          send_every: 12
    update_interval: 15s
  - platform: adc
    pin: GPIO36
    name: "Shed Solar Panel Voltage"
    icon: "mdi:solar-panel"
    attenuation: "11db"
    filters:
      - multiply: 5.73
      - filter_out: 0.00
      - sliding_window_moving_average:
          window_size: 12
          send_every: 12
    update_interval: 15s

The voltage readings I am getting from the two voltage sensors are a little weird. The battery voltage is higher than I would expect and the solar panel voltage is lower. I double checked the multiplication factors against the raw ADC readings and the resistors used and the readings make sense. Initially I thought this could be due to the temperature inside the box (~50°C when it’s in full sun!), but now I’m not so sure since this is also the case at lower temperatures. It could be the behaviour of the charge controller. I’ll continue to monitor it over different charge states.

The card in Home Assistant showing the system status and lighting controls

The Finished Product

Now the moment you’ve all been waiting for – gratuitous photos of the LEDs in fancy colours!

In white, the intended operation mode
Red is insanely bright!
Green
Blue
Purple/pink, the favourite of some people in the household
Finally the rainbow effect, the other effects don’t come out so well in a still photo

Conclusion

This is actually my first time using the WS2812B LED strip and I have to say I’m really impressed. You can be sure there will be other LED lighting projects coming in future now that I’ve dipped my toes in!

I’m really pleased with the final product. The LEDs look awesome and provide ample light for their task. The ESPHome base has so far been rock solid in terms of stability, which is what I’ve come to expect from using it in other projects.

The initial intention was to make these lights motion activated. However, I couldn’t find a motion sensor which was suitable for outdoor use. I’d also have to locate the sensor at the other end of the sheds from the battery box, which would mean a whole load of wiring. As such I’ve decided to build my own wireless outdoor motion sensor and connect it to my MySensors network. I’ll then trigger the LEDs via an automation in Home Assistant. I’ll post an update on this when I have it running.

This has been a really fun and interesting project. As always, please let me know what you think in the feedback channels and feel free to share your own LED lighting projects.

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Quick Project: MySensors MQTT Proxy with Node-RED

This post may contain affiliate links. Please see the disclaimer for more information.

In my previous exploration of MySensors, I encountered issues with the range of the nrf24l01+ radio. One of the simplest solutions to this problem is to move the gateway closer to the intended deployment of the sensors. In my case the sensor is positioned on the back wall of our house, just outside the master bedroom. Connected to the TV inside the master bedroom is a Raspberry Pi running Libreelec. I reasoned that I should be able to use this Pi to proxy the MySensors serial data to Home Assistant via MQTT, using the MySensors Node-RED nodes. With this approach I can make a new MQTT gateway using hardware I already have deployed.

It was also suggested that adding a 10-100uF 6.3/10V capacitor across the power line to the radio would also help with the range issues. I’m definitely going to do this, but the components haven’t arrived yet!

Hardware Setup

If you followed my previous post all you need to do here is plug in the serial gateway to the Pi! Otherwise you’re going to need to build a gateway (and maybe some sensors). Go check out my original post for more details.

It’s probably worth going through how this is going to work. Basically, the idea is that the RF data will be received by our existing MySensors gateway and sent via serial to the RPi. This will be read by Node-RED, re-encoded and sent via MQTT to Home Assistant which will still act as the MySensors controller:

Serial GW ⇒ Serial Connection ⇒ Node-RED (on RPi) ⇒ MQTT ⇒ HASS (Controller)

In the return direction this dataflow operates in reverse:

HASS ⇒ MQTT ⇒ Node-RED ⇒ Serial Connection ⇒ Serial GW

Deploying Node-RED on Libreelec

Libreelec is a cut down appliance-like Linux distribution, specifically for running Kodi. At first glance it seems like installing a third party program such as Node-RED would be difficult. However, Libreelec has Docker support via an add-on, which makes things as easy as running the command:

$ docker run -d --restart=always --device=/dev/ttyUSB0:/dev/ttyUSB0 --group-add dialout -v /storage/node-red-data:/data -p 1880:1880 --name node-red nodered/node-red:latest

This starts the Node-RED Docker container with access to the serial port for our plugged in Arduino board. You’ll need to adjust the path to the TTY device depending on how the Arduino gets enumerated. We are also storing the data for Node-RED (including flows) in the Libreelec storage directory, so that we can re-create the container without losing our flows.
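
If you’re not sure how the gateway has been enumerated, listing the serial devices by ID from an SSH session on the Pi is a quick way to check:

$ ls -l /dev/serial/by-id/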

Once this starts up, you should be able to access a nice clean Node-RED instance on port 1880 of your Libreelec Pi. At this point it’s probably a good idea to secure your instance and set up projects for backing up your flows via Git.

MySensors Proxy Flow

So here’s what you came here for. Using the MySensors nodes I’ve created a flow which receives MySensors data via the serial port, decodes it to a JavaScript object and then immediately re-encodes it and sends it via MQTT. To do this we make use of the serial decode and MQTT encode nodes. In the opposite direction we receive MQTT data on the mysensors/in/# wildcard and decode it to another JS object. This is then immediately encoded for serial and sent out the serial port. The finished flow looks like this:

The MySensors proxy flow

The full JSON for this is below. You’ll need to update the details for the MQTT broker and serial port to match your installation:

[{"id":"750924cf.e87e14","type":"tab","label":"MySensors MQTT Proxy","disabled":false,"info":""},{"id":"e7a50a55.c700d8","type":"mysdecode","z":"750924cf.e87e14","database":"","name":"Decode Serial","mqtt":false,"enrich":false,"x":244,"y":125,"wires":[["1fe04ed1.423d09"]]},{"id":"1fe04ed1.423d09","type":"mysencode","z":"750924cf.e87e14","name":"Encode MQTT","mqtt":true,"mqtttopic":"mysensors/out","x":429,"y":125,"wires":[["73eb8bcf.3be6bc"]]},{"id":"73eb8bcf.3be6bc","type":"mqtt out","z":"750924cf.e87e14","name":"","topic":"","qos":"","retain":"","broker":"61f77eeb.b75e2","x":584,"y":125,"wires":[]},{"id":"1a5ca295.7db125","type":"serial in","z":"750924cf.e87e14","name":"serial in","serial":"31aea0d6.e5a438","x":87,"y":125,"wires":[["e7a50a55.c700d8"]]},{"id":"5a8336a4.b9d398","type":"mqtt in","z":"750924cf.e87e14","name":"","topic":"mysensors/in/#","qos":"2","datatype":"auto","broker":"61f77eeb.b75e2","x":117,"y":179,"wires":[["b8c6fd6f.3ecf08"]]},{"id":"b8c6fd6f.3ecf08","type":"mysdecode","z":"750924cf.e87e14","database":"","name":"Decode MQTT","mqtt":true,"enrich":false,"x":320,"y":180,"wires":[["440bfad4.44eadc"]]},{"id":"440bfad4.44eadc","type":"mysencode","z":"750924cf.e87e14","name":"Encode Serial","mqtt":false,"mqtttopic":"","x":508,"y":179,"wires":[["e8c5f4bd.bc3d3"]]},{"id":"e8c5f4bd.bc3d3","type":"serial out","z":"750924cf.e87e14","name":"serial out","serial":"31aea0d6.e5a438","x":676,"y":179,"wires":[]},{"id":"61f77eeb.b75e2","type":"mqtt-broker","z":"","name":"Home Broker","broker":"mqtt.example.com","port":"1883","tls":"b8f1023d.4df4","clientid":"","usetls":false,"compatmode":true,"keepalive":"60","cleansession":true,"birthTopic":"nodered/status","birthQos":"2","birthRetain":"true","birthPayload":"online","closeTopic":"nodered/status","closeQos":"2","closeRetain":"true","closePayload":"offline","willTopic":"nodered/status","willQos":"2","willRetain":"true","willPayload":"offline"},{"id":"31aea0d6.e5a438","type":"serial-port","z":"","serialport":"/dev/ttyUSB0","serialbaud":"115200","databits":"8","parity":"none","stopbits":"1","waitfor":"","dtr":"none","rts":"none","cts":"none","dsr":"none","newline":"\\n","bin":"false","out":"char","addchar":"","responsetimeout":"10000"},{"id":"b8f1023d.4df4","type":"tls-config","z":"","name":"Home Broker","cert":"","key":"","ca":"","certname":"","keyname":"","caname":"","verifyservercert":true}]

That’s it for the flow. We can integrate this into HASS by changing our previous gateway configuration to use MQTT:

mysensors:
  gateways:
    - device: mqtt
      persistence_file: '/config/mysensors.json'
      topic_in_prefix: 'mysensors/out'
      topic_out_prefix: 'mysensors/in'

Conclusion

With this in place I’m now able to receive data from my sensor in the greenhouse without the reception issues I was experiencing. I like the use of Node-RED for these kinds of protocol conversion jobs. Its dataflow model is supremely suited to these operations.

Other than the range issue, the MySensors node I built in my previous post has been pretty reliable. I did make a minor hardware change to the setup I described earlier, which was to move the positive battery connection from the raw power line to the VCC line of the Arduino. This is because the raw line is regulated to 3.3V. The voltage regulator will consume some power and also won’t work once the battery voltage drops below about 3.3V. Connecting to VCC powers the micro directly from the battery without the extra power loss. Since this modification the sensor has been running flawlessly for several weeks.

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.