
My Road To Docker – Part 2: My Home Automation Stack

This post may contain affiliate links. Please see the disclaimer for more information.

This post is part of a series on this project. Here is the series so far:


In my first post of this series, I outlined my plan to convert my infrastructure over to a layered setup. This would consist of virtual machines (in various VLANs), with most of the services running in Docker. This post details the second stage of my road to Docker, although really it was the first stage, since I’m writing these out of order! I actually converted my home automation systems over to Docker before tackling the web stack.

The motivation behind upgrading the home automation system first was to do it at the same time as I did a large update to Home Assistant, since I’d been holding back on updating. The main reason for this was the switch to Lovelace as the default UI, which I was dreading. As it turned out, I waited long enough for the awesome HASS developers to make all my problems go away (or at least the Lovelace related ones).

System Summary

I’ve written about my home automation setup before, but here is a brief recap of what I’m running (only the server side stuff):

I had also been running InfluxDB and Grafana. However, something broke in my setup and I hadn’t got around to fixing it. I therefore decided to cut my losses with that and not reinstall it (for now).

Finding Docker Images

Luckily for me, the four main components of my system all have official/recommended Docker images available. This was useful, as I’m always pretty reluctant to use some questionably maintained image from Docker Hub, mainly due to the lack of security updates. I also wanted to avoid building custom Docker images for now, until I work out a decent update strategy.

In addition to the four services above, I wanted to run the ESPHome dashboard in order to manage my devices better. I had previously just been using the command line tool to build the firmware and upload it to them. This also has an official Docker image.

[Screenshot: the ESPHome dashboard. Looks like I have a few devices to update!]

I also ended up running a MaryTTS container to replace the PicoTTS engine I had been using for my voice announcements, since PicoTTS isn’t included in the HASS Docker image. It was recommended that I use the silversniper/marytts image. Looking at this image, it hasn’t been updated in three years, which reinforces my point about random images from Docker Hub. Luckily, this isn’t an externally facing application, so it isn’t too critical from a security standpoint. However, I think I’ll look into updating it at some point.

Stacking Containers, again…

I set up a new clean VM inside my home automation VLAN. I was a little hesitant about doing this, since it means everything in that VLAN (most of which is blocked from the internet) can see the full HA server. However, my main worry over not doing that was the mDNS used by ESPHome. If I can get Avahi/mDNS working across VLANs at some point, I will move it. I still have a big network re-organisation to do, so hopefully it will get done then.

The full docker-compose.yml file for my new stack is given below:
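In outline, it looks something like the sketch below. The image tags, service layout and in-container paths are from memory, so treat them as illustrative rather than an exact copy; the key details are the silversniper/marytts image, the voice file mount, and everything living under /mnt/docker-data:

```yaml
version: "3"

services:
  homeassistant:
    image: homeassistant/home-assistant:latest
    network_mode: host              # easiest way to keep device discovery working
    volumes:
      - /mnt/docker-data/home-assistant:/config
    restart: unless-stopped

  esphome:
    image: esphome/esphome:latest
    network_mode: host              # the dashboard relies on mDNS to find devices
    volumes:
      - /mnt/docker-data/esphome:/config
    restart: unless-stopped

  marytts:
    image: silversniper/marytts:latest
    ports:
      - "59125:59125"               # MaryTTS default HTTP port
    volumes:
      # mount my preferred voice into the container
      # (file name and in-container path are illustrative)
      - /mnt/docker-data/marytts/voice-dfki-prudence-hsmm-5.2.jar:/marytts/lib/voice-dfki-prudence-hsmm-5.2.jar
    restart: unless-stopped

  # ...plus the remaining services in the stack, omitted here
```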

There’s nothing particularly earth shattering here. The main point of interest is that I mount the voice file for my preferred MaryTTS voice inside the container. Actually finding the voices is a little tricky: the official way to download them is a GUI tool that won’t run inside the container. I eventually found the XML file which lists all the available voices and extracted the URL of the one I wanted (the online demo helps in deciding).

The only other parts worth noting are that I mount all my volumes under /mnt/docker-data, which is an NFS share onto the ZFS array of the virtual machine host. This then gets rolled into my normal backups. I also didn’t bother with a reverse proxy for any of this, since I already have one for HASS sitting in my DMZ (yet to be Dockerised). The other services just get accessed via the machine hostname and port since they are only used internally.

Sometimes Waiting Pays

I didn’t run into any issues particular to setting this up in Docker. At this point I think it’s a pretty well trodden path and this setup is pretty much standard. I did however run into several issues with upgrading to the latest Home Assistant.

First, let’s tackle the elephant in the room – Lovelace. I was really worried going into this that it was going to be a huge amount of work. My mind was somewhat put at rest by seeing the UI editor in action via misperry’s video. When I actually came to it, the migration process automatically re-created my existing UI pretty much perfectly. Lesson learned: there is something to be said for waiting for mature software, rather than jumping on the new shiny thing immediately!

Lovelace itself is awesome! The ease of configuration has made me actually focus on making my HASS UI nicer, rather than settling for the bare minimum I could get away with, as I had previously. In the screenshot below, you can see my new “Outdoors” panel. This contains weather information, outdoor related sensor readings and a couple of local webcam views.

[Screenshot: my new “Outdoors” panel. I probably should have taken this screenshot during the day]

Remaining Issues

Most of my remaining issues were due to the Home Assistant “Great Migration”, which changed a load of entity IDs in various components. Obviously, this meant updating my configuration to use all the new names. It took a little while to troubleshoot, because an automation which refers to a renamed entity has to actually fire before it produces an error. In many cases it just won’t fire at all, if the renamed entity is used in its triggers.

The final major issue I encountered was with my frankly awesome vacuuming robot, which appeared to stop reacting to service calls in HASS. The underlying issue appears to be the Botvac D3 returning a different error message than the D7 that the library was tested with. So far this hasn’t been fixed, but I’m currently using the category: 2 workaround suggested in the issue and that’s working fine. I think I’ll have a look into fixing it and submit a PR when I get time.

Managing Updates

Managing updates to Docker images has always been a bit of an issue for me. In the past I’ve used Watchtower with some success. However, due to the potential for breaking changes, I want to manage HASS updates more carefully. It was suggested to me to just use a bash script which I can run periodically to do this. This isn’t something that had occurred to me before, probably because it’s so simple! Here’s the script I’m using:
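It boils down to a pull followed by an up (the compose file location below is illustrative):

```bash
#!/bin/bash
set -e

# location of the docker-compose.yml for the stack (illustrative path)
cd /mnt/docker-data/compose/home-automation

# pull the latest version of every image referenced in the compose file,
# then recreate only those containers whose image actually changed
docker-compose pull
docker-compose up -d

# clean up the superseded images
docker image prune -f
```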

This works beautifully and allows me to easily keep up with releases of HASS and the other components, once I’ve verified that it’s reasonably safe to update.

Conclusion and Next Steps

Overall I’m pretty happy with how this move has turned out. Once the initial teething issues were all worked out the system has been very stable. I’m appreciating the extra utility of the ESPHome dashboard, which makes it very convenient to update my devices. It’s also great to be back on the latest version of Home Assistant.

In terms of next steps, I would like to give InfluxDB and Grafana another try. My main issue here has always been building the dashboards. It seems to be pretty tricky to get something both good looking and useful in Grafana. I also haven’t seen any pre-built dashboards for use with data from Home Assistant. Perhaps this is because they are so peculiar to individual setups.

I also have an LXD container running ZoneMinder. I’d like to re-deploy this as a Docker container on the same VM. Previously, I’ve not had too much luck running ZoneMinder in Docker. I’ll see if the situation has improved when I tackle this migration.

I’m not actively working on any further migrations of other services to Docker at the moment, so there will probably be a break in this series for now. However, given my current success I’ll definitely be continuing on with this migration. I just want to work on some other projects for a while!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


HDMI CEC for Home Assistant with Node-RED

I set out on a Sunday morning thinking this was going to be a quick project and, not having decided on a blog topic for this week, it seemed like the ideal candidate. I was wrong – about it being a quick project, hopefully not about it being a reasonable subject for a blog post.

This post is brought to you by issue #12846 in Home Assistant (and the letter ‘C’). That is to say, one of my automations was broken by this issue, which has been sitting open on GitHub since the beginning of March with no progress. I don’t want this to sound like the usual “user of Open Source application complains about free stuff”, because I’m not actually complaining. I understand that software breaks and sometimes there aren’t the resources available to fix it. The solution to this is to get more developers paid to work on Free and Open Source Software (but that’s entirely a discussion for outside of this post).

Actually, this post is here to offer a solution (or at least a temporary one) to the issue, outside of Home Assistant, since I couldn’t fix it myself (I took a look at the code in question and I couldn’t work it out – it needs to be done by someone with more familiarity with the Home Assistant core).

My solution is to use Node-RED along with the HDMI CEC nodes to create an auto-discovered MQTT switch with which I can turn on and off my TV. So, let’s get into the flow…

The Flow

[Image: The HDMI CEC switch flow]

This flow runs on an instance of Node-RED running on my OSMC based Raspberry Pi sitting behind my TV (for those keeping up at home, this makes two NR instances on my network – so far). Currently, this is the only flow running on this instance, but I’m considering what else I can run now that I have Node-RED available there. I installed Node-RED on OSMC using the official install/upgrade script. I had fully expected installing Node-RED under OSMC to be a major pain, but it turned out to just amount to running that command.
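For reference, the official script is the same one the Node-RED docs recommend for the Pi; its current incarnation lives in the node-red/linux-installers repository, so the exact URL I used at the time may have differed:

```bash
# Official Node-RED install/upgrade script for Debian-based systems
bash <(curl -sL https://raw.githubusercontent.com/node-red/linux-installers/master/deb/update-nodejs-and-nodered)
```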

After the install had finished, I created a user for Node-RED since I like it to run under it’s own user and updated the systemd unit file accordingly. I then installed the CEC nodes linked above from the palette manager. Here I ran into a minor bump in the road in that the CEC nodes couldn’t execute the cec-client program. As it turned out the location of that binary is in a weird location on OSMC, so I added the following in the systemd file to set this up:
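Something along these lines did the trick; I’m assuming here that cec-client lives in /usr/osmc/bin (check with find / -name cec-client on your install):

```ini
# In the [Service] section of the Node-RED systemd unit:
# prepend OSMC's binary directory so the CEC nodes can find cec-client
[Service]
Environment="PATH=/usr/osmc/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
```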

I also needed to add my new Node-RED user to the video group to allow access to the CEC device:
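A standard usermod does the job (my Node-RED user is called nodered here; adjust to suit):

```bash
# give the Node-RED user access to the CEC/video devices
sudo usermod -a -G video nodered
```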

Where I really got stuck was playing around with the example flow for the CEC nodes. It wasn’t that it didn’t work as advertised; it was that it broke the CEC command passthrough to Kodi running on the same machine, rendering my TV remote useless within Kodi. Many hours, much futile searching and playing with cec-client later, I still wasn’t any closer to a solution. I knew it must work, because somehow the pycec script I was using previously is able to send and receive CEC packets without interfering with Kodi.

The breakthrough was dropping both a CEC-In and a CEC-Out node into my flow and only grabbing a few CEC packet types in the filter of the input node. I say ‘breakthrough’ – this works most of the time, but it throws a few errors and warnings on start up. I found it to be most reliable when I immediately restarted Kodi after deploying it – this also helps Kodi to regain its CEC connection if necessary.

So How Does It Work?

Oh, yeah. I was going to talk about the flow, but I kinda got sidetracked there.

Well, it’s pretty simple: there are two sequences in the flow. One handles the switch state and MQTT discovery configuration (bottom), and the other handles the incoming commands over MQTT and sends the corresponding CEC commands (top).

Let’s start with the bottom one:

This sequence has two input paths, the bottom of these executes on start up (or at deploy time) and sends the Home Assistant MQTT Discovery configuration, using the same technique I used in my volcano sensors. The start up message also passes through a 3 second delay before passing to an exec node, which restarts Kodi. I added the following to my sudoers file (via visudo) to allow this:
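The entry looks something like this (the user name is an assumption, and mediacenter is the name of the Kodi service on OSMC):

```
nodered ALL=(ALL) NOPASSWD: /bin/systemctl restart mediacenter
```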

The top input path receives incoming CEC messages of the type REPORT_POWER_STATUS. In my setup, this only receives power messages from the TV, but you may receive messages from other devices on the bus; in that case you can add a check on the source address of the packet in the following function node (clue: the TV is usually address 0).

The message passes through a function node, which converts the power status to the switch status expected by HASS and also sets the MQTT topic:
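A sketch of that function node is below. The exact shape of the incoming message depends on the CEC nodes, so the args[0] access and the topic layout here are assumptions:

```javascript
// CEC power status: 0x00 = on, 0x01 = standby (0x02/0x03 = transitioning)
var status = msg.payload.args[0];

// convert to the ON/OFF payload expected by the HASS MQTT switch
msg.payload = (status === 0) ? "ON" : "OFF";

// state topic must match the one sent in the discovery config
msg.topic = "homeassistant/switch/tv/state";
return msg;
```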

Both input paths are connected to a common MQTT output node to send their respective messages (config and state) out to Home Assistant.

The top sequence simply subscribes to the command topic from HASS and determines whether the command was on or off. The JSON payload for the CEC command is then set respectively in either branch – this JSON is taken directly from the example flow linked above. Then we pass this out to the CEC adapter – done. When the device acts upon the CEC command it should send its new power state back through and update the state of our switch. The state will also be updated if you turn on the TV by other means, e.g. the remote.

Pure JSON

This JSON was made in clean, green New Zealand from 100% natural ingredients (electrons):

Bonus: Home Assistant Automation Rules

Here are the Home Assistant automations that I’m using with this. Basically I’m turning off the TV five minutes after either Kodi or the Chromecast stops playing, unless it started again in the meantime:
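Something like the following, with all the entity names being assumptions for illustration:

```yaml
- alias: TV idle timer start
  trigger:
    # either player leaving the 'playing' state kicks off the timer
    - platform: state
      entity_id: media_player.kodi, media_player.chromecast
      from: 'playing'
  action:
    - service: timer.start
      entity_id: timer.tv_idle

- alias: TV idle timer cancel
  trigger:
    # playback starting again cancels the pending switch off
    - platform: state
      entity_id: media_player.kodi, media_player.chromecast
      to: 'playing'
  action:
    - service: timer.cancel
      entity_id: timer.tv_idle

- alias: TV off when idle
  trigger:
    # fires when the timer runs out without being cancelled
    - platform: event
      event_type: timer.finished
      event_data:
        entity_id: timer.tv_idle
  action:
    - service: switch.turn_off
      entity_id: switch.tv
```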

This uses a timer, which is defined as:
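A five minute duration is about right for us (the entity name matches the automations above):

```yaml
timer:
  tv_idle:
    duration: '00:05:00'
```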

Done. Now we can be lazy/forgetful about leaving the TV on and also not waste power. Mission Accomplished.

Conclusion

Hopefully, someone will find time to fix the bug above. I’m probably going to stick with this regardless, because I had some other issues running pyCEC on top of OSMC – mainly because they don’t build the libcec bindings for Python 3 by default. I had some custom patches to do this, but they would break (in one way or another) on every update. Hopefully, this solution will be more robust. Also, the MQTT connection used in this solution runs over TLS (rather than the unencrypted TCP of the pyCEC network mode), so there is a little security win. Plus, as I already mentioned, now I have a Node-RED instance on a Pi in my living room.


Home Assistant MQTT Discovery Sensors in Node-RED

Alternatively Titled: How I Made Home Assistant Aware of the Volcano Next Door

[Image: Mt. Taranaki. If this guy blows, we’re gonna have a bad day]

As I’ve previously mentioned, I’m a big fan of the Home Assistant MQTT Discovery feature. I’ve also historically been a fan of Node-RED and have recently been getting back into it. This has been mostly due to the uptick in interest in the platform in the HASS community. So, I decided to have a play around and come up with an implementation of an auto-discovered MQTT sensor in Node-RED. This post documents using this approach to pull some interesting data into Home Assistant.

Since moving to a different part of New Zealand last year, I’ve wanted to implement a sensor in HASS which would monitor the state of the local volcano. Luckily, GeoNet provide a nice API for getting volcanic alert levels for all the volcanic fields in NZ. I was initially going to write a custom component for doing this (and at some point contribute it back). However, being generally even shorter on time than usual at the moment, I never quite got there. That was until I was playing around with Node-RED and had a brain wave.

The Flow

I’m going to cut straight to the chase and show a screenshot of the flow I came up with. I’ll then explain it below (the flow JSON can be found later in the post):

[Image: The full volcano data flow]

The start of the flow is pretty basic – a simple inject node which injects a timestamp every 6 hours. The payload to this is irrelevant since it’s just used to kick off the flow. I didn’t want to hit the API endpoint too often since I’ve so far never seen the data change. If the mountain suddenly goes boom, I think I’ll have more pressing issues than whether my data is up to date.

Next, we have the HTTP Request node which goes out and performs a GET request to the URL given in the API documentation above. I enabled TLS support and opted to get the response data back as a parsed JSON object.

Filtering Data

Since the API returns data for all the volcanic fields in New Zealand, I needed to filter the data. The next node just selects the Taranaki/Egmont field that I am interested in. I used the following code in a function node to do this:
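Here’s a sketch of that function; the property names follow the GeoNet volcanic alert level API, but double check them against the live response:

```javascript
// find the Taranaki/Egmont feature in the GeoJSON returned by GeoNet
for (var i = 0; i < msg.payload.features.length; i++) {
    var feature = msg.payload.features[i];
    if (feature.properties.volcanoID === "taranakiegmont") {
        // substitute the volcano's data in as the payload...
        msg.payload = feature.properties;
        // ...and build the MQTT state topic from the volcano ID
        msg.topic = "volcano/" + feature.properties.volcanoID;
        return msg;
    }
}
return null;  // drop the message if the field wasn't found
```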

Basically this just iterates over all the features in the data and finds the one with the ID taranakiegmont, then substitutes its data in as the message payload. I also build the topic for the subsequent publish to MQTT based on the volcano ID.

The output of this function branches to another function node on one branch and a delay node on the other. The delay node here is used to make sure that the function node above runs and sends its output before the original message passes to the MQTT publish node.

Building HASS Configuration

The top function node is responsible for building the required configuration payloads and topics for the three sensors this will create via Home Assistant MQTT Discovery. Here I create one sensor for each of the quantities in the data from the API. This is achieved with the following snippet of code:
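The snippet below is a sketch of that node (it has three outputs, one per sensor); the sensor names and the exact fields pulled from the API data are assumptions:

```javascript
// state topic built by the previous function node
var stateTopic = "volcano/taranakiegmont";

// helper: build an MQTT Discovery config message for one sensor
function configMsg(id, name, template) {
    return {
        topic: "homeassistant/sensor/volcano_" + id + "/config",
        payload: JSON.stringify({
            name: name,
            state_topic: stateTopic,
            value_template: template
        })
    };
}

// one message per output, one output per sensor
return [
    configMsg("level", "Volcanic Alert Level", "{{ value_json.level }}"),
    configMsg("activity", "Volcanic Activity", "{{ value_json.activity }}"),
    configMsg("hazards", "Volcanic Hazards", "{{ value_json.hazards }}")
];
```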

This does the same thing for three new message objects, building a payload and topic for each. I use the ability of HASS to grab data from the payload of the main publish by specifying the state topic. I set this to the topic I built in the previous function node. A value template is also specified for each, pretty much exactly as in the Home Assistant MQTT Discovery documentation.

Output Via MQTT

All three outputs of this node are passed to the MQTT publish node, which publishes with QoS 2 and the retain flag set. This means that whenever Home Assistant comes up after a restart it will see the values in both the configuration and state topics for these sensors and re-create them automatically.

Attentive readers will also have noticed that I publish the configuration messages whenever I publish the state (every 6 hours). This doesn’t matter, as HASS will just ignore the configuration messages for sensors which it has already discovered.

So, that’s it. With this in place the sensors should appear in Home Assistant:

[Screenshot: Home Assistant volcano sensors. Note the reassuring zero for activity level!]

The JSON:

As promised, here is the full JSON for the flow. To add this to your Node-RED instance copy it to your clipboard and go to Hamburger->Import->Clipboard in Node-RED and paste the JSON. You can select whether to import to the current flow or a new flow and then hit ‘import’ and you should see the nodes:

If you are importing this directly, you will need to configure your MQTT broker settings under the MQTT publish node before hitting ‘deploy’.

Wrap Up

That’s pretty much all there is to it. I hope this has demonstrated the concept of using Node-RED to create sensors in Home Assistant, without any changes to the HASS configuration. The flow presented is pretty simple but actually serves a useful purpose. Hopefully, you can come up with some uses of your own for this approach. Please feel free to share them in the comments below if you do, so that others may benefit from your ideas.

Thanks for reading. I’m working on a few more things with Node-RED so hopefully I’ll post about them soon. Bye!