
Continuous Integration for Home Assistant, ESPHome and AppDaemon

This post may contain affiliate links. Please see the disclaimer for more information.

Recently I set up continuous integration and deployment from my Home Assistant configuration. This setup has been nothing short of awesome! It’s liberated me from worrying about editing my configuration – all I do is git push and relax. Either HASS will notify me when it restarts or I’ll get an email from Gitlab telling me the pipeline failed.

I wanted to take this configuration further and expand it to other parts of my Home Automation infrastructure. In this post I’ll cover expanding it to perform deployments of my HA stack with Docker, building and deploying to ESPHome devices and unit testing and deploying my AppDaemon apps.

Let’s get on with it!

Automating Docker Deployment

I’d originally held off doing this because I wasn’t looking forward to building custom Docker images in Gitlab CI. However, I managed to complete the original pipeline without having to add any extra dependencies to the HASS containers (such as git, which I thought might be required). This makes the job of deploying my HA stack much easier, especially as I already had it mostly scripted. The first step was to add my update.sh script to my repo and tweak it to suit:
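(What follows is a minimal sketch of that script rather than a verbatim listing; the -p project name and --remove-orphans flag are explained below.)

```bash
#!/bin/bash
set -e

# Run from the repository directory containing docker-compose.yml.
cd "$(dirname "$0")"

# Keep the old project name ("ha") even though the directory is now
# home-assistant, and clean up containers removed from the compose file.
docker-compose -p ha pull
docker-compose -p ha up -d --remove-orphans
```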

This is a pretty simple modification to my previous script. The main addition is the -p argument, which sets the project name used by docker-compose. By default this is taken from the directory name, but I wanted it to match the name of my previous project even though the directory has changed from ha to home-assistant. The other main modification is that I’ve added the --remove-orphans argument to clean up any lingering containers. This is useful if I remove a container from the docker-compose.yml file. In addition, I’ve removed the apt commands and cleaned up the script a bit so that it passes my shellcheck job.

The next step was simply to add the docker-compose.yml file to the repo. Then I continued by editing the CI configuration.

Updated Home Assistant CI Jobs

I first split up my previous deployment job into two jobs. The first of these is the main deployment job which pulls the new configuration. The second restarts HASS. The restart job goes in a new pipeline stage and will only be run when the docker-compose.yml or update.sh files haven’t changed:
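(In outline the two jobs look something like this; the runner tag, stage names, paths and the exact restart command are illustrative rather than exact.)

```yaml
deploy:
  stage: deploy
  tags:
    - deploy
  only:
    - master
  script:
    # The runner lives on the HASS host, so "deployment" is just getting the
    # freshly checked out configuration into the live config directory
    # (path illustrative).
    - rsync -a --delete --exclude=.git ./ /mnt/docker-data/home-assistant/

restart:
  stage: restart
  tags:
    - deploy
  only:
    - master
  except:
    changes:
      - docker-compose.yml
      - update.sh
  script:
    # Restart the HASS container to pick up the new configuration
    # (service name as defined in docker-compose.yml).
    - docker-compose -p ha restart home-assistant
```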

I then added another job (again in another pipeline stage) which performs our Docker deployment. This runs only when either the docker-compose.yml or update.sh file changes:
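(Again in outline; it simply runs the update script, gated on changes to those two files.)

```yaml
docker_deploy:
  stage: docker_deploy
  tags:
    - deploy
  only:
    refs:
      - master
    changes:
      - docker-compose.yml
      - update.sh
  script:
    - ./update.sh
```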

A full pipeline run with a deployment of the Docker containers running in the final stage.

With that in place I can now redeploy my HA stack by modifying either of those files, committing to git and pushing. In order to facilitate HASS updates with this workflow, I changed the tag of the HASS Docker image to the explicit version number. That way I can simply update the version number and redeploy for each new release.

Continuous Integration for ESPHome

Inspired by the previous configs I have seen for checking ESPHome files, I wanted to implement the same checks. However, I wanted to go further and have a full continuous deployment setup which would build the relevant firmware when its configuration was changed and send an OTA update to the corresponding device. As it turned out this was relatively easy.

I started out by importing my ESPHome configs into Git, which I hadn’t previously done. You can find the resulting repository on Gitlab. For the CI configuration I first copied over the markdownlint and yamllint jobs from my Home Assistant CI configuration.

I then borrowed the ESPHome config check jobs from Frenck’s configuration. These check against both the current release of ESPHome and the next beta release. The beta release job is allowed to fail and is designed only to provide a heads up for potential future issues.
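(Roughly, the pair of jobs looks like the sketch below; the loop, image tags and CLI invocation are my approximation of the borrowed configuration rather than a verbatim copy.)

```yaml
# Check every device config against the current ESPHome release and, allowed
# to fail, against the upcoming beta.
.esphome_config: &esphome_config
  stage: test
  script:
    - |
      for file in $(find . -maxdepth 1 -name "*.yaml" -not -name "secrets.yaml"); do
        esphome "$file" config
      done

esphome:
  <<: *esphome_config
  image: esphome/esphome:latest

esphome_beta:
  <<: *esphome_config
  image: esphome/esphome:beta
  allow_failure: true
```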

Then I came to implement the build and deployment job. Traditionally these would be performed in separate steps, but since ESPHome can do this in a single step with its run subcommand I decided to do it the easy way. This also removes the requirement to manage build artifacts between steps. I created the following template job to manage this:
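(The listing below is a representative sketch: the key filename, openssl parameters, package install step and esphome flags are illustrative, but the structure matches the description that follows.)

```yaml
.esphome_deploy: &esphome_deploy
  stage: deploy
  image: esphome/esphome:latest
  before_script:
    # git-crypt isn't in the ESPHome image, so pull it in first.
    - apt-get update && apt-get install -y git-crypt
    # Decrypt the git-crypt key using the passphrase stored in a CI variable,
    # then unlock the repo so the encrypted secrets file can be read.
    - openssl enc -d -aes-256-cbc -k "$OPENSSL_PASSPHRASE" -in git-crypt.key.enc -out git-crypt.key
    - git-crypt unlock git-crypt.key
  after_script:
    # Make sure the decrypted key doesn't hang around on the runner.
    - rm -f git-crypt.key
  script:
    # Build the firmware and send the OTA update in a single step.
    - esphome "${DEVICE}.yaml" run --no-logs
```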

Most of the complexity here is in unlocking the git-crypt repository so that we can read the encrypted secrets file. I opted to store the git-crypt key in the repository, encrypted with openssl. The passphrase used for openssl is in turn stored in a Gitlab variable, in this case $OPENSSL_PASSPHRASE. Once the decryption of the key is complete, we can unlock the repo and get on with things. We remove the key after we are done in the after_script step.

Per-Device Jobs

Using the template configuration, I then created a job for each device I want to deploy to. These jobs are executed only when the corresponding YAML file (or secrets.yaml) is changed. This ensures that I only update devices that I need to on each run. The general form of these jobs is:
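(This is a sketch built on the template job above; the DEVICE variable tells the template which config to build.)

```yaml
my_device:
  <<: *esphome_deploy
  variables:
    DEVICE: my_device
  only:
    refs:
      - master
    changes:
      - my_device.yaml
      - secrets.yaml
```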

Of course you need to replace my_device with the name of your device file.

A run of the ESPHome pipeline with deployments to two devices

With these jobs in place I have a full end-to-end pipeline for ESPHome, which lints and checks my configuration before deploying it only to devices which need updating. Nice! You can check out the full pipeline configuration on Gitlab. I now no longer have need to run the ESPHome dashboard, so I’ve removed it from my server.

Continuous Integration for AppDaemon

I mentioned previously that I wanted to split out my AppDaemon apps and configuration into a separate repo from my HASS config. I did this as a prerequisite step of this setup and you can again find the new repo on Gitlab.

The inspiration for this configuration came mostly from @bachya on the HASS forum, whose post in reply to my earlier setup provided most of the details. Thanks for sharing!

I started out by copying across the now ubiquitous markdownlint and yamllint jobs. I then added jobs for pylint, mypy, flake8 and black:
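(Sketched below; the apps/ path, Python image and tool options are illustrative.)

```yaml
pylint:
  stage: test
  image: python:3.7
  before_script:
    - pip install appdaemon pylint
  script:
    - pylint apps/*.py

mypy:
  stage: test
  image: python:3.7
  before_script:
    - pip install appdaemon mypy
  script:
    - mypy --ignore-missing-imports apps/

flake8:
  stage: test
  image: python:3.7
  before_script:
    - pip install flake8
  script:
    - flake8 apps/

black:
  stage: test
  image: python:3.7
  before_script:
    - pip install black
  script:
    - black --check apps/
```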

Although this ends up being very verbose, I decided to implement these all as separate jobs so that I get individual pass/fail states for each. I’m also pretty sure the mypy job doesn’t do anything right now, because I’m not using any type hints in my Python code. However, the job is there for when I start adding those.

Unit Testing AppDaemon

Another thing that @bachya introduced me to was Appdaemontestframework. This provides a pytest based framework for unit testing your AppDaemon apps. Although I’m still working on the unit tests for my (so far pretty minimal) AD setup, I did manage to get the framework up and running, which was a little tricky. I had some issues with setting up the initial configuration for the app, but I managed to work it out eventually.

The unit testing CI job is pretty simple:
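(Something like the following; the requirements filename is illustrative.)

```yaml
unit_tests:
  stage: test
  image: python:3.7
  script:
    # Pull in pytest, appdaemontestframework and friends, then run the tests.
    - pip install -r test-requirements.txt
    - py.test
```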

All we do here is install the requirements that I need for the tests and then call py.test. Easy!

The deployment job for AppDaemon was also trivial, since it is pretty much a copy of the HASS one. Since AD detects changes to your apps automatically, there’s no need to restart. For more details you can check out the full CI pipeline on Gitlab.

A run of the AppDaemon pipeline – lots of preflight checks here!

Conclusion

Phew, that was a lot of work, but it was all the logical follow on from work I’d done before or that others had done. I now have a full set of CI pipelines for the three main components of my home automation setup. I’m really happy with each of them, but especially the ESPHome pipeline. As an embedded engineer in my day job I find it really cool that I can update a YAML file locally, commit/push it and then my CI takes over and ends up flashing a physical device! That this is even possible is a testament to all the pieces of software used.

Next Steps

I’m keen to keep going with CI as a means of automating my operations. I think my next target will be sprucing up my Ansible configurations and running them automatically from CI. Stay tuned for that in the hopefully near future!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


My Road To Docker – Part 2: My Home Automation Stack

This post may contain affiliate links. Please see the disclaimer for more information.

This post is part of a series on this project. Here is the series so far:


In my first post of this series, I outlined my plan to convert my infrastructure over to a layered setup. This would consist of virtual machines (in various VLANs), with most of the services running in Docker. This post details the second stage of my road to Docker, although really it was the first stage since I’m writing these out of order! I actually converted my home automation systems over to Docker before tackling the web stack.

The motivation behind upgrading the home automation system first was to do it at the same time as I did a large update to Home Assistant, since I’d been holding back on updating. The main reason for this was the switch to Lovelace as the default UI, which I was dreading. As it turned out, I waited long enough for the awesome HASS developers to make all my problems go away (or at least the Lovelace related ones).

System Summary

I’ve written about my home automation setup before, but here is a brief recap of what I’m running (only the server side stuff):

I had also been running InfluxDB and Grafana. However, something broke in my setup and I hadn’t got around to fixing it. I therefore decided to cut my losses with that and not reinstall it (for now).

Finding Docker Images

Luckily for me, the four main components of my system all have official/recommended Docker images available. This was useful as I’m always pretty reticent to use some questionably maintained image from the Docker Hub, mainly due to the lack of security updates. I also wanted to avoid building custom Docker images for now, until I work out a decent update strategy.

In addition to the four services above I wanted to run the ESPHome dashboard in order to manage my devices better. I had previously just been using the command line tool to build and upload to them. This also has an official Docker image.

Looks like I have a few devices to update!

I also ended up running a MaryTTS container to replace the PicoTTS setup I had been using for my voice announcements. This is due to the lack of PicoTTS inside the HASS Docker image. It was recommended that I use the silversniper/marytts image. Looking at this image, it hasn’t been updated in three years, which reinforces my point about random images from Docker Hub. Luckily, this isn’t an externally facing application so it isn’t too critical from a security standpoint. However, I think I’ll look into updating that at some point.

Stacking Containers, again…

I set up a new clean VM inside my home automation VLAN. I was a little reserved about doing this, since it means everything in that VLAN (most of which is blocked from the internet) can see the full HA server. However, my main worry over not doing that was the mDNS used by ESPHome. If I can get Avahi/mDNS working across VLANs at some point I will move it. I still have a big network re-organisation to do, so hopefully it will get done then.

The full docker-compose.yml file for my new stack is given below:
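(What follows is a trimmed-down sketch of that file rather than a verbatim copy, covering the containers discussed in this post; image tags, ports, commands and paths are illustrative.)

```yaml
version: "3"

services:
  home-assistant:
    image: homeassistant/home-assistant:0.96.5  # tag illustrative, pinned to an explicit version
    volumes:
      - /mnt/docker-data/home-assistant:/config
      - /etc/localtime:/etc/localtime:ro
    network_mode: host
    restart: unless-stopped

  esphome:
    image: esphome/esphome:latest
    command: /config dashboard  # dashboard invocation for the ESPHome CLI of the time
    volumes:
      - /mnt/docker-data/esphome:/config
    network_mode: host  # host networking so mDNS discovery of devices works
    restart: unless-stopped

  marytts:
    image: silversniper/marytts:latest
    volumes:
      # Mount the downloaded voice file into the container (path inside the
      # image is illustrative).
      - /mnt/docker-data/marytts/voice-cmu-slt-hsmm-5.2.jar:/marytts/lib/voice-cmu-slt-hsmm-5.2.jar
    ports:
      - "59125:59125"  # default MaryTTS port
    restart: unless-stopped
```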

There’s nothing particularly earth shattering here. The main point of interest is that I mount the voice file for my preferred MaryTTS voice inside the container. Actually finding the voices is a little interesting. The official way to download them is a GUI tool that won’t run inside the container. I eventually found the XML file which lists all the available voices and extracted the URL of the one I wanted (the online demo helps to decide).

The only other parts worth noting are that I mount all my volumes under /mnt/docker-data, which is an NFS share onto the ZFS array of the virtual machine host. This then gets rolled into my normal backups. I also didn’t bother with a reverse proxy for any of this, since I already have one for HASS sitting in my DMZ (yet to be Dockerised). The other services just get accessed via the machine hostname and port since they are only used internally.

Sometimes Waiting Pays

I didn’t run into any issues particular to setting this up in Docker. At this point I think it’s a pretty well trodden path and this setup is pretty much standard. I did however run into several issues with upgrading to the latest Home Assistant.

First, let’s tackle the elephant in the room – Lovelace. I was really worried going into this that it was going to be a huge amount of work. My mind was somewhat put at rest by seeing the UI editor in action via misperry’s video. When I actually came to it, the migration process automatically re-created my existing UI pretty much perfectly. Lesson learned: there is something to be said for waiting for mature software, rather than jumping on the new shiny thing immediately!

Lovelace itself is awesome! The ease of configuration has made me actually focus on making my HASS UI nicer rather than just the bare minimum I could get away with that I had previously. In the screenshot below, you can see my new “Outdoors” panel. This contains weather information, outdoor related sensor readings and a couple of local webcam views.

I probably should have taken this screenshot during the day

Remaining Issues

Most of my remaining issues were due to the Home Assistant “Great Migration”. This resulted in a load of entity IDs changing in various components and, obviously, in my having to update my configuration to change all the names. It took a little while to troubleshoot, because an automation that uses a changed name has to actually fire before it raises an error, and in many cases it won’t fire at all if the renamed entity is used in its triggers.

The final major issue I encountered, was with my frankly awesome vacuuming robot, which appeared to stop reacting to service calls in HASS. The underlying issue appears to be the Botvac D3 returning a different error message than the D7 that the library was tested with. So far this hasn’t been fixed, but I’m currently using the category: 2 workaround suggested and that’s working fine. I think I’ll have a look into fixing that issue and submit a PR when I get time.

Managing Updates

Managing updates to Docker images has always been a bit of an issue for me. In the past I’ve used Watchtower with some success. However, due to the capacity for breaking changes I want to manage HASS updates more carefully. It was suggested to me to just use a bash script which I can run periodically to do this. This isn’t something that had occurred to me before, probably because it’s so simple! Here’s the script I’m using:
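(Or at least a close approximation of it; the final cleanup step is optional.)

```bash
#!/bin/bash
set -e

# Update the host OS first...
sudo apt update
sudo apt upgrade -y

# ...then pull any updated images and recreate the affected containers.
docker-compose pull
docker-compose up -d

# Clean out superseded images so they don't pile up on disk.
docker image prune -f
```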

This works beautifully and allows me to easily keep up with releases of HASS and the other components, once I’ve verified that it’s reasonably safe to update.

Conclusion and Next Steps

Overall I’m pretty happy with how this move has turned out. Once the initial teething issues were all worked out the system has been very stable. I’m appreciating the extra utility of the ESPHome dashboard, which makes it very convenient to update my devices. It’s also great to be back on the latest version of Home Assistant.

In terms of next steps, I would like to give InfluxDB and Grafana another try. My main issue here has always been building the dashboards. It seems to be pretty tricky to get something both good looking and useful in Grafana. I also haven’t seen any pre-built dashboards for use with data from Home Assistant. Perhaps this is because they are so peculiar to individual setups.

I also have an LXD container running ZoneMinder. I’d like to re-deploy this as a Docker container on the same VM. Previously, I’ve not had much luck running ZoneMinder in Docker, so I’ll have to see if the situation has improved when I tackle this migration.

I’m not actively working on any further migrations of other services to Docker at the moment, so there will probably be a break in this series for now. However, given my current success I’ll definitely be continuing on with this migration. I just want to work on some other projects for a while!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Room Sensor Project: Part 2 – Infrastructure and Mounting

This post may contain affiliate links. Please see the disclaimer for more information.

This post is part of a series on this project. Here is the series so far:


Looking back, it seems to have been a ridiculously long time since my last room sensor post. It’s well over a year, but it doesn’t seem that long ago. This project has been majorly delayed by a few issues and generally hasn’t been top of my todo list. However, I’m now at the point where I have the prototype sensor installed and working. Actually, it’s been working for several months, but I hadn’t got around to writing it up! In this post I’ll mostly be detailing the infrastructure used to get power to the sensor. There will also be some discussion on the case and mounting as well as a few words on the software.

Finding a Case

It would have been really nice just to 3D print a case and mounting bracket for the sensors. Unfortunately, I don’t (yet) have a 3D printer and it was cheaper to buy cases than to get a 3D printing service to print them. I settled on a 100x60x25mm case and ordered 15 of them. Once they arrived I was able to fit all the electronics inside and cut a slot in the bottom for the DHT22 sensor. A dremel-like tool would have helped a lot here, but I managed to do it manually and it looks OK. I actually reversed the case so that the lid became the rear as this looks a little nicer and helped with the mounting of the components.

I also fitted a light sensor based on an LDR voltage divider circuit to the front of the case. Unfortunately I had issues with the ADC pin on the bare ESP8266 module I used for the prototype. It’s odd because I’ve had this working with the Wemos D1 modules before (which have their own voltage divider on the input also). In the end I didn’t manage to get this working and have resolved to replace the prototype board with a Wemos based one when I do the sensors for the rest of the house.

I drilled a hole in the front of the case for the PIR sensor and mounted the diffuser over the hole. The PIR board itself was hot glued to the reverse of this. This works really well – in fact if anything the sensor is a little too sensitive. I need to tweak the pots a little to dial this down.

You can see pictures from during the assembly as well as the fully assembled case below:

Main room sensor board mounted to case lid.
PIR diffuser mounted to case (front view)
PIR diffuser mounted to case (top view)
PIR sensor mounted to inside of case.
Side view of room sensor board showing power connections
Fully assembled case (upside down)
Fully assembled case (side view)
Fully assembled case (standing on the DHT sensor)

Power Setup and Wiring

In order to get power to the sensors I had decided on running 12V lines through the roof space of the house. These would then come down through the ceilings in the corner of each room for the sensors. The cables would be fed from a central distribution board mounted near the loft hatch in the ceiling. Since the power requirements for the 12 sensors I wish to eventually install are minimal, a single 2A supply is enough to power the whole lot with some room to spare. Some pictures of the distribution board (before and after installation) are shown below:

Unwired room sensor power distribution board (before installation)
Room sensor power distribution board (post installation)

I had initially wanted to mount the power supply down in the ‘server rack’ and run the power up through an (existing) hole in the wall to the loft. However, after an abortive attempt at running a cable through the wall (in which I was foiled by pesky insulation and there was much swearing), I eventually came to the conclusion that mains power was needed in the roof space.

Time passes…

It took a while to really commit to and allocate funds to this option. Eventually the electrician came and installed four shiny new power points in the roof space just next to the loft hatch. Four power points is obviously overkill for this project. However, in the intervening time some other projects had come up for which the remaining plugs would be useful.

Once the power points were in place I ran the cabling for the 12V line to the room in which the sensor was to be installed and connected it all up. As if by magic power flowed and the sensor sprung into life! (barring various frustrating issues with loose connections).

I had quite an interesting time working out how to mount the sensor in the corner of the room. I initially stuck it up with 3M double sided sticky pads, but ended up pulling it up and down several times so that I ran out of these. Eventually I opted for good old fashioned blu-tack as a temporary solution! I’m intending to replace this with a 3D printed bracket which will fit the oddly shaped space between the case and the wall. This will allow the sensor to be attached much more permanently. However, for now the blu-tack does the job and proves the concept.

Installed room sensor

Relay Power Control

I mentioned above that I had installed several extra power sockets in the roof space for other projects. One of those other projects required putting a Raspberry Pi in the roof space. That project is now completed and I’m hoping to document it soon. For this I pressed into duty my old Raspberry Pi Model B (the one with 512MB of RAM). Although old, this hardware is sufficient for running a small Node-RED instance as well as performing its other intended duty.

This gave me a nice way to control the power supply to the room sensor distribution board and hence all the sensors. That meant that if a sensor were to go offline I could cycle the power remotely without having to climb up into the roof space. To do this I inserted a relay into the 12V line between the power supply and the power distribution board.

The Raspberry Pi with the relay assembly

I connected the 12V line to the normally closed contact of the relay, so that the relay must be switched on to kill the power. The state is then inverted in Node-RED. In this way the switch in Home Assistant shows as on most of the time. I only had 5V relays sitting in my parts box, so I soldered up a quick transistor circuit on a breadboard to allow me to drive the relay from the 3.3V logic of the Pi. Doing this is left as an exercise for the reader, since I forgot to document what I built!

Driving the Relay in Node-RED

To drive the relay I use a variation of my MQTT discovery switch in Node-RED. This is implemented via the following flow:

The room sensor relay control flow

The JSON for this is shown below (copy it and import into Node-RED):

Over the following months of usage, I noted several (infrequent) instances where the sensor stopped responding. In these cases it needed a manual (though remote controlled) power cycle. In order to automate this and so reduce downtime, I wrote the following Home Assistant automation:
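(A representative version is below; the entity IDs and the notification service are placeholders.)

```yaml
- alias: Power cycle room sensor when unavailable
  trigger:
    # Fires when the sensor has been reporting "unavailable" for 10 minutes.
    - platform: state
      entity_id: binary_sensor.bedroom_motion
      to: "unavailable"
      for:
        minutes: 10
  action:
    # Turn the relay-backed power switch off and on again, with a pause in between.
    - service: switch.turn_off
      entity_id: switch.room_sensor_power
    - delay: "00:00:30"
    - service: switch.turn_on
      entity_id: switch.room_sensor_power
    # Let me know this happened.
    - service: notify.notify
      data:
        message: "Room sensor was unavailable for 10 minutes; power cycled it."
```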

Basically, this will trigger after the sensor has been offline for 10 minutes. Once triggered it will turn the sensor off and on again (with a 30 second delay in between). It will also send a notification to inform me that this has happened. I think this has been triggered twice and, as a result, the sensor hasn’t been unavailable for any significant length of time.

Software Changes

Since installing the prototype sensor, I haven’t actually been running my Micropython Room Sensor software on it. Instead I’ve been trying out ESPHome on this and another project, since it’s been getting a lot of attention in the HASS community recently. I specifically wanted to see if ESPHome was an easier/maintenance free option for these types of projects.

My takeaway from this is that ESPHome is really nice and very easy if you don’t want to do anything complicated. If all you have are a few sensors or actuators that you want to connect, it’s great! In fact it’s almost perfect for this kind of project. You can even do some moderately complicated data conversions and on-device automation using the lambda syntax. For this reason, I’d put it in the same basket as the likes of ESPeasy, although it has some advantages over other systems, especially if you are already running Home Assistant. Kudos to Otto Winter for coming up with such a great piece of software!

However, it does get more difficult when you want to do more complicated things. I ran into some of these issues in my other project, which I’ll detail when I eventually write it up. For now, though, ESPHome gets my wholehearted recommendation.

ESPHome Configuration

I especially like that the configuration for ESPHome devices is just YAML, which makes it really easy to store in git. I haven’t got a cleaned up git repo for my projects ready to publish. However, since the configuration for this project is so simple, I can post the whole thing here:
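(The listing below is a representative sketch rather than a byte-for-byte copy; the board, pin assignments and names are illustrative.)

```yaml
esphome:
  name: bedroom_sensor
  platform: ESP8266
  board: esp12e  # bare ESP8266 module on the prototype board

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

# Still using the MQTT transport rather than the native API (see below).
mqtt:
  broker: !secret mqtt_broker
  username: !secret mqtt_username
  password: !secret mqtt_password

logger:
ota:

sensor:
  # DHT22 temperature/humidity sensor poking out of the slot in the case.
  - platform: dht
    pin: GPIO5
    model: DHT22
    temperature:
      name: "Bedroom Temperature"
    humidity:
      name: "Bedroom Humidity"
    update_interval: 60s

binary_sensor:
  # PIR motion sensor behind the diffuser on the front of the case.
  - platform: gpio
    pin: GPIO4
    name: "Bedroom Motion"
    device_class: motion
```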

You’ll notice that I’m still using the MQTT transport rather than the native API component with the Home Assistant ESPHome component. This is mainly because I built this before the native API was released and I didn’t need to update it. I understand there are some advantages to using the native API, so I probably will try it at some point, especially if I want to try an esp-cam project.

What’s Next

So far, I’m really happy with the performance of the sensor. I’ve been using it in a few automations which I’m intending to detail in a further post. The next thing to do is build further sensors for the remainder of the house. In order to make this less error prone I’ve decided to design an adapter PCB in Kicad for the Wemos D1 Mini clones I’ve been using in other projects. This will get me away from the fiddling with bare ESP modules and hopefully mean that the light sensor will work.

As mentioned above, I also want to design a 3D printed bracket to fit the oddly shaped space in the corner behind the sensor. This will have to wait until I get a 3D printer, which will hopefully happen later this year.

Aside from that the only other job will be deploying the new sensors once they are built. This will mean running all the remaining power cables through the roof space, so lots of crawling around up there (yay! /s).

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.