
Continuous Integration for Home Assistant, ESPHome and AppDaemon

This post may contain affiliate links. Please see the disclaimer for more information.

Recently I set up continuous integration and deployment from my Home Assistant configuration. This setup has been nothing short of awesome! It’s liberated me from worrying about editing my configuration – all I do is git push and relax. Either HASS will notify me when it restarts or I’ll get an email from Gitlab telling me the pipeline failed.

I wanted to take this configuration further and expand it to other parts of my Home Automation infrastructure. In this post I’ll cover expanding it to perform deployments of my HA stack with Docker, building and deploying to ESPHome devices and unit testing and deploying my AppDaemon apps.

Let’s get on with it!

Automating Docker Deployment

I’d originally held off doing this because I wasn’t looking forward to building custom Docker images in Gitlab CI. However, I managed to complete the original pipeline without having to add any extra dependencies to the HASS containers (such as git, which I thought might be required). This makes the job of deploying my HA stack much easier, especially as I already had it mostly scripted. The first step was to add my update.sh script to my repo and tweak it to suit:
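
(A minimal sketch of the tweaked script; the directory layout is an assumption, but the flags are the ones discussed below.)

```bash
#!/usr/bin/env bash
set -euo pipefail

# Run from the directory containing docker-compose.yml.
cd "$(dirname "$0")"

# -p keeps the docker-compose project name as "ha", matching the old
# directory name even though the repo is now called home-assistant.
docker-compose -p ha pull

# --remove-orphans cleans up containers that have been removed from
# docker-compose.yml since the last deployment.
docker-compose -p ha up -d --remove-orphans
```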

This is a pretty simple modification to my previous script. The main additions are that I use the -p argument to set the project name used by docker-compose. By default this is taken from the directory name, but I wanted it to match the name of my previous project even though the directory has changed from ha to home-assistant. The other main modification is that I’ve added the --remove-orphans argument to clean up any lingering containers. This is useful if I remove a container from the docker-compose.yml file. In addition I’ve removed the apt commands and cleaned up the script a bit so that it passes my shellcheck job.

The next step was simply to add the docker-compose.yml file to the repo. Then I continued by editing the CI configuration.

Updated Home Assistant CI Jobs

I first split up my previous deployment job into two jobs. The first of these is the main deployment job which pulls the new configuration. The second restarts HASS. The restart job goes in a new pipeline stage and will only be run when the docker-compose.yml or update.sh files haven’t changed:
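
The restart job, for example, looks roughly like this (stage names, variable names and the exact restart call are assumptions based on the description, not a verbatim copy of my pipeline):

```yaml
restart_hass:
  stage: restart
  image: alpine:latest
  script:
    # Restart HASS via its REST API so the newly pulled configuration is loaded.
    - apk add --no-cache curl
    - 'curl -X POST -H "Authorization: Bearer $HASS_TOKEN" "$HASS_URL/api/services/homeassistant/restart"'
  only:
    refs:
      - master
  except:
    changes:
      # Skip the restart when the Docker deployment job (below) will run instead.
      - docker-compose.yml
      - update.sh
```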

I then added another job (again in another pipeline stage) which performs our Docker deployment. This will be run only when either the docker-compose.yml or update.sh file changes:
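
(Again a hedged sketch; the job essentially just re-runs update.sh on the server over SSH, with the details assumed.)

```yaml
docker_deploy:
  stage: docker-deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client
    - echo "$DEPLOYMENT_SSH_KEY" > id_rsa && chmod 600 id_rsa
  script:
    # Re-run update.sh on the server to pull new images and recreate the stack.
    - ssh -i id_rsa -o StrictHostKeyChecking=no "$DEPLOYMENT_SSH_LOGIN" "cd /mnt/docker-data/home-assistant && ./update.sh"
  after_script:
    - rm -f id_rsa
  only:
    refs:
      - master
    changes:
      - docker-compose.yml
      - update.sh
```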

A full pipeline run with a deployment of the Docker containers running in the final stage.

With that in place I can now redeploy my HA stack by modifying either of those files, committing to git and pushing. In order to facilitate HASS updates with this workflow, I changed the tag of the HASS Docker image to the explicit version number. That way I can simply update the version number and redeploy for each new release.

Continuous Integration for ESPHome

Inspired by the previous configs I have seen for checking ESPHome files, I wanted to implement the same checks. However, I wanted to go further and have a full continuous deployment setup which would build the relevant firmware when its configuration was changed and send an OTA update to the corresponding device. As it turned out this was relatively easy.

I started out by importing my ESPHome configs into Git, which I hadn’t previously done. You can find the resulting repository on Gitlab. For the CI configuration I first copied over the markdownlint and yamllint jobs from my Home Assistant CI configuration.

I then borrowed the ESPHome config check jobs from Frenck’s configuration. These check against both the current release of ESPHome and the next beta release. The beta release job is allowed to fail and is designed only to provide a heads up for potential future issues.

Then I came to implement the build and deployment job. Traditionally these would be performed in separate steps, but since ESPHome can do this in a single step with its run subcommand I decided to do it the easy way. This also removes the requirement to manage build artifacts between steps. I created the following template job to manage this:
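
(Roughly; the image, key file name, cipher and git-crypt invocation are my best reconstruction of what is described below, not the exact job.)

```yaml
.esphome-deploy: &esphome-deploy
  stage: deploy
  image: esphome/esphome:latest
  before_script:
    # Decrypt the git-crypt key (stored encrypted in the repo) using the
    # passphrase held in the $OPENSSL_PASSPHRASE CI variable, then unlock
    # the repo so the encrypted secrets.yaml can be read.
    # (Assumes git-crypt and openssl are available in the image; install them here if not.)
    - openssl enc -d -aes-256-cbc -in git-crypt.key.enc -out git-crypt.key -k "$OPENSSL_PASSPHRASE"
    - git-crypt unlock git-crypt.key
  after_script:
    # Always remove the decrypted key, even if the build/deploy fails.
    - rm -f git-crypt.key
```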

Most of the complexity here is in unlocking the git-crypt repository so that we can read the encrypted secrets file. I opted to store the git-crypt key in the repository, encrypted with openssl. The passphrase used for openssl is in turn stored in a Gitlab variable, in this case $OPENSSL_PASSPHRASE. Once the decryption of the key is complete, we can unlock the repo and get on with things. We remove the key after we are done in the after_script step.

Per-Device Jobs

Using the template configuration, I then created a job for each device I want to deploy to. These jobs are executed only when the corresponding YAML file (or secrets.yaml) is changed. This ensures that I only update devices that I need to on each run. The general form of these jobs is:
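
Something like this, extending the template above (the exact esphome invocation varies between ESPHome versions):

```yaml
my_device:
  <<: *esphome-deploy
  script:
    # Build the firmware and upload it OTA to the device in one step.
    # (Depending on the ESPHome version you may need a flag to skip
    # tailing the device logs so the job exits.)
    - esphome run my_device.yaml
  only:
    changes:
      - my_device.yaml
      - secrets.yaml
```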

Of course you need to replace my_device with the name of your device file.

A run of the ESPHome pipeline with deployments to two devices

With these jobs in place I have a full end-to-end pipeline for ESPHome, which lints and checks my configuration before deploying it only to devices which need updating. Nice! You can check out the full pipeline configuration on Gitlab. I no longer need to run the ESPHome dashboard, so I’ve removed it from my server.

Continuous Integration for AppDaemon

I mentioned previously that I wanted to split out my AppDaemon apps and configuration into a separate repo from my HASS config. I did this as a prerequisite step of this setup and you can again find the new repo on Gitlab.

The inspiration for this configuration came mostly from @bachya on the HASS forum, whose post in reply to my earlier setup provided most of the details. Thanks for sharing!

I started out by copying across the now ubiquitous markdownlint and yamllint jobs. I then added jobs for pylint, mypy, flake8 and black:
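
All four follow the same pattern; a hedged sketch (the stage name and the apps/ path are assumptions about my layout):

```yaml
pylint:
  stage: preflight
  image: python:3
  script:
    - pip install pylint
    - pylint apps/*.py

flake8:
  stage: preflight
  image: python:3
  script:
    - pip install flake8
    - flake8 apps/

mypy:
  stage: preflight
  image: python:3
  script:
    - pip install mypy
    - mypy apps/

black:
  stage: preflight
  image: python:3
  script:
    - pip install black
    # --check makes black fail the job instead of rewriting files.
    - black --check apps/
```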

Although this ends up being very verbose, I decided to implement these all as separate jobs so that I get individual pass/fail states for each. I’m also pretty sure the mypy job doesn’t do anything right now, because I’m not using any type hints in my Python code. However, the job is there for when I start adding those.

Unit Testing AppDaemon

Another thing that @bachya introduced me to was Appdaemontestframework. This provides a pytest-based framework for unit testing your AppDaemon apps. Although I’m still working on the unit tests for my so far pretty minimal AD setup, I did manage to get the framework up and running, which was a little tricky. I had some issues with setting up the initial configuration for the app, but I managed to work it out eventually.

The unit testing CI job is pretty simple:
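
(The requirements file name here is an assumption; the point is just to install the test dependencies and invoke pytest.)

```yaml
unit-tests:
  stage: test
  image: python:3
  script:
    # Install AppDaemon, pytest and Appdaemontestframework, then run the tests.
    - pip install -r requirements-test.txt
    - py.test
```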

All we do here is install the requirements that I need for the tests and then call py.test. Easy!

The deployment job for AppDaemon was also trivial, since it is pretty much a copy of the HASS one. Since AD detects changes to your apps automatically, there’s no need to restart. For more details you can check out the full CI pipeline on Gitlab.

A run of the AppDaemon pipeline – lots of preflight checks here!

Conclusion

Phew, that was a lot of work, but it was all the logical follow on from work I’d done before or that others had done. I now have a full set of CI pipelines for the three main components of my home automation setup. I’m really happy with each of them, but especially the ESPHome pipeline. As an embedded engineer in my day job I find it really cool that I can update a YAML file locally, commit/push it and then my CI takes over and ends up flashing a physical device! That this is even possible is a testament to all the pieces of software used.

Next Steps

I’m keen to keep going with CI as a means of automating my operations. I think my next target will be sprucing up my Ansible configurations and running them automatically from CI. Stay tuned for that in the hopefully near future!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Self-Hosted GPS Tracking with Traccar and Home Assistant

This post may contain affiliate links. Please see the disclaimer for more information.

One of the outstanding issues I’ve had with my Home Automation system in recent months has been to fix my presence detection. I was using a combination of Owntracks and SNMP to query my Pfsense firewall for devices on the network.

This worked well until my Owntracks setup broke due to internal network changes. This was due to separating my HA/MQTT server from the reverse proxy. In turn, this meant that the MQTT server wasn’t able to renew its Let’s Encrypt certificate.

The solution to this would have been to use DNS validation to get the certificate issued. However, my DNS provider (Namecheap) doesn’t allow API access unless you have a large enough account with them.

Eventually I will migrate my DNS to a supported provider and get internal TLS working. However, it’s quite a bit of work to migrate over. Also, DNS is pretty critical to the operation of, well, everything – so I want to take my time and get it right. In the meantime I was looking into other options for GPS based presence detection.

GPS Based Presence Options

As well as MQTT mode, Owntracks also supports HTTP mode (which is now actually the recommended mode). This is directly supported in HASS. I hadn’t switched over to it because I wanted to try out the Owntracks recorder for logging position data over time. Sending data to both HASS and the recorder would be difficult via HTTP. However, it’s exactly the use case that MQTT is made for!

While I was dithering around not doing very much about this problem, another alternative cropped up: Traccar. Traccar is a self-hosted GPS tracking system which supports a multitude of different devices and has mobile apps available for both major OSes. The main plus point for me is that it is a stand-alone server which I could host in my DMZ. There is also a Home Assistant integration for Traccar.

Installation

It seems to be rather badly documented, but there is an official Docker image for Traccar on Docker Hub. When reading that documentation I initially thought I had to build the image myself. Don’t! Just use the official one (unless you have a good reason not to).

I followed along the instructions in the Readme, until it came to the final docker run command, where I wanted to put it into docker-compose. Here’s what I came up with:
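
(Host paths here are examples; the ports and first two volumes follow the official docker run command, with the extra data volume described below.)

```yaml
version: "3"

services:
  traccar:
    image: traccar/traccar:latest
    restart: always
    ports:
      - "8082:8082"
      - "5000-5150:5000-5150"
      - "5000-5150:5000-5150/udp"
    volumes:
      - /mnt/docker-data/traccar/logs:/opt/traccar/logs:rw
      - /mnt/docker-data/traccar/traccar.xml:/opt/traccar/conf/traccar.xml:ro
      # Persist the default H2 database files to the host (see note below).
      - /mnt/docker-data/traccar/data:/opt/traccar/data:rw
```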

This is pretty much the command in the docs translated over, except for the second to last line. This mounts the data volume needed for persisting the default H2 database files to the host. I’m not sure why this isn’t mentioned in the docs. Perhaps they expect that you will use MySQL for the database, but I didn’t want to do that for my initial test setup.

The Traccar Web UI

After running the docker-compose up -d command I had Traccar working on port 8082 of my server and was able to log in as the default user (username admin, password admin!). The first thing I did after logging in was change that password and disable user registration, before exposing my instance via the reverse proxy.

Deselect “Registration” in the Server Settings dialog (Right Hand Gear Icon->Server)

I found that the server memory usage was reasonably high, which seems to be due to the memory options passed to the Java VM in the Dockerfile. There doesn’t seem to be any way to change this except by building a custom image.

Reverse Proxy Setup

I’m intending to migrate the reverse proxy on my home network to Traefik at some point after my previous success with it. However, for now it’s still running good old Nginx. I couldn’t find an example Nginx config for Traccar, so I copied my HASS one and modified it to suit:
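
(A minimal sketch rather than my exact config; the server name, upstream address and certificate settings are placeholders. The websocket headers are the important part for the Traccar web UI.)

```nginx
server {
    listen 443 ssl;
    server_name traccar.example.com;

    # ssl_certificate / ssl_certificate_key etc. as per your existing proxy setup

    location / {
        proxy_pass http://192.168.1.10:8082;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Allow the web UI's websocket connection through the proxy.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```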

Adding Devices

Once your server is up and running, adding devices is easy. Just click the plus icon next to devices and give your device a name. For the ID field, the Android app will generate a six-digit identifier, which you can copy over. This ID is the only security token used to update the position, so you may like to use something more secure. I recommend generating a pseudo-random string with pwgen and using that.

Setup on the app side is pretty trivial too. Just enter the full URL to the server, e.g. https://traccar.example.com, and make sure the device identifier matches the one you entered on the server. Toggling the service status will then start sending location updates. The Android app seems to put an annoying persistent notification in the notification pull-down. For some reason this doesn’t go into the ongoing section (which would make it less annoying). I just hid the notifications from the Traccar app via the relevant Android setting and the app still sends location updates.

As an aside, you can easily send location updates from the command line with curl. For example, I made the screenshot above by artificially positioning a device at New Plymouth’s Wind Wand:
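
Something along these lines, using Traccar’s OsmAnd-compatible HTTP protocol (which listens on port 5055 by default; adjust the URL to match your setup). The device ID and coordinates are examples only.

```bash
# Report a single position for device 123456; lat/lon are roughly the
# Wind Wand on New Plymouth's coastal walkway.
curl "http://traccar.example.com:5055/?id=123456&lat=-39.057&lon=174.072"
```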

Make sure to update the hostname and device ID to match your setup and update the other fields to reflect the position you want to log.

Integrating Traccar with Home Assistant

The integration with HASS also proved to be relatively easy and works well. There were a couple of things which tripped me up, which I’ll come to shortly. First we need an entry for Traccar in our Home Assistant config:
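
At the time of writing this is a device_tracker platform; a hedged sketch of the entry (hostname and secret names are placeholders):

```yaml
device_tracker:
  - platform: traccar
    host: traccar.example.com
    port: 443
    ssl: true
    username: !secret traccar_username
    password: !secret traccar_password
```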

The port and ssl variables are required for the above setup with the reverse proxy; aside from that it’s reasonably obvious.

After adding that to my config file, I expected my devices to just show up in HASS. They didn’t. This turned out to be due to two problems. The first of these was that I had created a user in Traccar specifically for Home Assistant (which you can nicely give read-only permissions). However, this user didn’t have access to my devices. The solution is to grant access via the ‘Devices’ section of the user management panel (the icon looks like two little picture frames).

The second issue tripped me up for a bit longer. Even after I had set the permissions correctly, I still couldn’t see my devices in the HASS UI. It turns out they were being added to the known_devices.yaml file on the server, but not enabled by default. It wasn’t until I logged into the server and checked this file that I noticed this. This is more problematic if you are editing your config locally and deploying it via git, since the change obviously won’t be made to your local copy.

In the end I added the entry in my local copy and deployed it to the server:
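
The important part is track: true; the entity name and friendly name below are examples:

```yaml
my_phone:
  name: My Phone
  mac:
  icon:
  picture:
  track: true
  hide_if_away: false
```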

Once that was done for each device, the Traccar device_tracker entities appeared in Home Assistant just fine.

Battery Usage Issues

Now comes the big potential problem: the power usage of the Traccar Android app. The first day I had this installed my phone was down to 15% by 9-10pm. However, the results have been less worrying over the last couple of days. I’m not sure why it was so bad for the first day, although there were a few points that were different from my subsequent usage:

  • I was using the version from F-Droid. After noticing the battery issue I switched to the version from the play store. I don’t know if that version uses some proprietary Google Services API, or if they are identical. However, it is a possible reason for the difference.
  • I had changed a couple of the default settings – I lengthened the frequency setting to 15 minutes and changed the distance setting to 50 meters. I’m not sure what effect/interactions these have since I can’t find any documentation about them. I’ve since gone back to the default settings.
  • I moved around quite a bit that first day (some driving around town and a long walk) with less moving in the subsequent two days. If this is really the reason for the battery usage then that would be disappointing. Obviously not moving is not a solution!

This is all based on three days of usage, so I’m going to continue monitoring the situation and potentially testing other variations in settings. I’ve asked about the battery usage on the Traccar forum, but have so far had no answer. One worrying issue is that the Android battery usage panel shows the app as keeping the device awake for long periods.

Power usage from the Traccar app

There is another option besides the Traccar app, since OwnTracks can also send data to a Traccar server. However, at this point you may as well just use the HASS integration directly, unless you want the extra features of the Traccar server.

Conclusion

Traccar seems like a pretty solid piece of software, though due to the power issue I’m not completely sold on it yet. The location updates are certainly more frequent than OwnTracks was (at least over MQTT). The server side UI is nice and responsive and there are quite a few capabilities that I haven’t explored yet. For these reasons I’m going to stick with it for now and continue testing.

Next Steps

Obviously I need to solve the battery issues in order to go much further. If the usage proves acceptable after further testing I’m also going to see if I can dial down the memory usage of the server by editing the Dockerfile.

I also have quite a bit of work to do on the HASS presence detection front. I’d specifically like to try a combination of Bayesian Sensors and the “Not so Binary” approach.

Hopefully I can come up with something which is both responsive to changes and also robust against false readings. I’ll be sure to write a post about that when I do. However, that’s it for now. Thanks for reading!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Quick Project: Rubbish Collection Panel for Home Assistant

This post may contain affiliate links. Please see the disclaimer for more information.

With my local council rolling out an increasingly complex system of rubbish bins and collection, I’ve been thinking about getting this integrated with Home Assistant so that we don’t have to remember which bins go out when. I was pleasantly surprised to find the garbage_collection custom component whilst browsing HACS the other day. This component does exactly what I need, so I decided to give it a go.

Rubbish Collection Configuration

After installing the component via HACS, I set about configuring it. Here is the configuration I came up with:
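
(Roughly; the option names here are reproduced from memory of the component’s documentation, so treat them as assumptions and check the readme. The collection day and the even/odd assignment are examples.)

```yaml
sensor:
  - platform: garbage_collection
    name: rubbish
    frequency: weekly
    collection_days:
      - thu
  - platform: garbage_collection
    name: recycling
    frequency: even-weeks
    collection_days:
      - thu
  - platform: garbage_collection
    name: glass
    frequency: odd-weeks
    collection_days:
      - thu
```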

That was easy! The only non-obvious thing is working out which bins are collected on even and odd weeks, which is easy to look up online.

Getting it into Lovelace

The garbage_collection component page in HACS has a nice screenshot of the sensors in Lovelace (which doesn’t seem to be in the repository readme). However, the sensors themselves have a state based on whether the bin is due to be put out. The state is nice and machine readable, but I wanted to recreate the panel from the screenshot for the humans that have to look at it. In the end I actually decided to simplify it down to just show “today”, “this week” or “next week” plus the number of days, since the actual date is pretty irrelevant.

The finished rubbish collection panel in my Lovelace UI

This proved to be more difficult than I’d expected, since Lovelace doesn’t support templates in cards natively. I had to install the Lovelace Card Templater plugin via HACS. This plugin in turn requires the card-tools plugin, which I couldn’t find in HACS. I ended up installing it by adding it as a git submodule to my configuration repository. I then added the following to my Lovelace config to load the plugins:
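
(Paths and resource types here are assumptions: the card-tools path reflects the git submodule living under www/, the templater path is where HACS put it on my setup, and whether you need js or module depends on the plugin version.)

```yaml
resources:
  - url: /local/card-tools/card-tools.js
    type: js
  - url: /community_plugin/lovelace-card-templater/lovelace-card-templater.js
    type: js
```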

Full Panel YAML

The panel itself is made up of a vertical stack card in which I put two horizontal stack cards. These in turn contain two of the templater cards. The configuration of the templater cards is a little involved since you need to specify the entity twice (which seems to be due to some internal limitation of Lovelace). My template cards are based on the sensor card to show just the data from the template. I use a state_template to do this.

Anyway, here’s the full YAML:
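
(A trimmed sketch showing the structure and one of the four templater cards; the remaining three are identical apart from the entity ID. The days attribute and the template itself are illustrative, so adjust them to whatever attributes the sensor actually exposes.)

```yaml
type: vertical-stack
cards:
  - type: horizontal-stack
    cards:
      - type: custom:card-templater
        # The entity appears twice: once in the card itself and once in the
        # entities list that tells the plugin when to re-render.
        card:
          type: sensor
          entity: sensor.rubbish
          state_template: >-
            {% if state_attr('sensor.rubbish', 'days') == 0 %}
              Today
            {% elif state_attr('sensor.rubbish', 'days') <= 7 %}
              This week ({{ state_attr('sensor.rubbish', 'days') }} days)
            {% else %}
              Next week ({{ state_attr('sensor.rubbish', 'days') }} days)
            {% endif %}
        entities:
          - sensor.rubbish
```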

Each of the templater cards is pretty much the same; I just change the entity ID for each. The only improvement I would make would be to add a line break between the “Today”/“This week”/“Next week” text and the day count, since this would look slightly better. However, I couldn’t work out how to do that.

Conclusion

I think I’ve achieved my goal of simplifying the task of remembering which bins go out when. Now I can quickly check that info with a glance at my HASS UI. Of course, now that I have the rubbish collection data in Home Assistant I can use it for other things such as notifications or reminder lights. I already have some ideas for status lighting, so that might become part of a larger project.

I’d like to say thanks to the authors of the custom components and plugins that I’ve used to achieve this. The HASS community really is thriving with all these third-party addons at the moment!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Getting Started with AppDaemon for Home Assistant

This post may contain affiliate links. Please see the disclaimer for more information.

Continuing on from last week’s post, I was also recently persuaded to try out AppDaemon for Home Assistant (again). I have previously tried out AppDaemon, so that I could use the excellent OccuSim app. I never got as far as writing any apps of my own and I hadn’t reinstalled it in my latest HASS migration. This post is going to detail my first steps in getting started with AppDaemon more seriously.

I should probably start with a run down of what AppDaemon is for anyone that doesn’t know. The AppDaemon website provides a high level description:

AppDaemon is a loosely coupled, multithreaded, sandboxed python execution environment for writing automation apps for home automation projects, and any environment that requires a robust event driven architecture.

In plain English, that basically means it provides an environment for writing home automation rules in Python. The supported home automation platforms are Home Assistant and plain MQTT. For HASS, this forms an alternative to both the built in YAML automation functionality and 3rd party systems such as Node-RED. Since AppDaemon is Python based, it also opens up the entirety of the Python ecosystem for use in your automations.

AppDaemon also provides dashboarding functionality (known as HADashboard). I’ve decided not to use this for now because I currently have no use for it. I also think the dashboards look a little dated next to the shiny new HASS Lovelace UI.

Installation

I installed AppDaemon via Docker by following the tutorial. The install went pretty much as expected. However, I had to clean up the config files from my old install before proceeding. The documentation doesn’t provide an example docker-compose configuration, so here’s mine:
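
Something like the following (host paths and the network name are specific to my setup and shown here as examples):

```yaml
version: "3"

services:
  appdaemon:
    image: acockburn/appdaemon:latest
    restart: always
    volumes:
      # AppDaemon expects its configuration (appdaemon.yaml, apps/) under /conf.
      - /mnt/docker-data/appdaemon/conf:/conf
    networks:
      - internal

networks:
  internal:
    external: true
```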

I’ve linked the AppDaemon container to an internal network, on which I’ve also placed my HomeAssistant instance. That way AppDaemon can talk to HASS pretty easily.

You’ll note that I’m not passing any environment variables as per the documentation. This is because my configuration is passed only via the appdaemon.yaml file, since it allows me to use secrets:
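
Roughly, for the AppDaemon 3.x config format (location, time zone and secret names are placeholders; the docker0 address is explained below):

```yaml
secrets: /conf/secrets.yaml
appdaemon:
  threads: 10
  latitude: !secret latitude
  longitude: !secret longitude
  elevation: 10
  time_zone: Pacific/Auckland
  plugins:
    HASS:
      type: hass
      # Reach HASS via the docker0 bridge, since the HASS container uses host networking.
      ha_url: http://172.17.0.1:8123
      token: !secret appdaemon_hass_token
```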

You’ll see here that I use the docker0 interface IP to connect to HASS. I tried using the internal hostname (which should be homeassistant on my setup), but it didn’t seem to work. I think this is due to the HASS container being configured with host networking.

Writing My First App

I wanted to test the capabilities and ease of use of AppDaemon, so I decided to convert one of my existing automations into app form. I chose my bathroom motion light automation, because it’s reasonably complex but simple enough to complete quickly.

I started out by copying the motion light example from the tutorial. Then I updated it to take configuration parameters for the motion sensor, light and off timeout:
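
A sketch of the app along those lines (the restriction logic in is_light_times is a placeholder, and the parameter names are simply whatever you choose in apps.yaml):

```python
import appdaemon.plugins.hass.hassapi as hass


class MotionLight(hass.Hass):
    """Turn a light on when motion is detected and off after a timeout."""

    def initialize(self):
        # Configuration parameters, passed in from apps.yaml.
        self.sensor = self.args["sensor"]
        self.light = self.args["light"]
        self.timeout = self.args["timeout"]
        self.timer = None

        self.listen_state(self.motion_callback, self.sensor, new="on")

    def is_light_times(self):
        # Restrict when the light is allowed to come on; placeholder logic.
        return self.now_is_between("sunset - 00:30:00", "sunrise + 00:30:00")

    def set_timer(self):
        # Cancel any running timer first so repeated motion events don't
        # schedule multiple turn-off callbacks.
        if self.timer is not None:
            self.cancel_timer(self.timer)
        self.timer = self.run_in(self.light_off, self.timeout)

    def motion_callback(self, entity, attribute, old, new, kwargs):
        if self.is_light_times():
            self.turn_on(self.light)
            self.set_timer()

    def light_off(self, kwargs):
        self.timer = None
        self.turn_off(self.light)
```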

I’ve also added a couple of utility methods to manage the timer better and also to specify more complex logic to restrict when the light will come on. Encapsulating both of these in their own methods will allow re-use of them later on.

The timer logic of the example app is particularly problematic in the case of multiple motion events. In the original logic one timer will be set for each motion event. This leads to the light being turned off even if there is still motion in the room. It also caused some general flickering of the light between motion events and triggered callbacks. I mitigate this in the set_timer method here by first cancelling the timer if it is active before starting a new timer with the full timeout.

At this point, we have a fully functional and re-usable motion activated light. We can instantiate as many of these as we would like in our apps/apps.yaml file, like so:
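
For example (entity IDs are placeholders):

```yaml
bathroom_motion_light:
  module: motion_light
  class: MotionLight
  sensor: binary_sensor.bathroom_motion
  light: light.bathroom
  timeout: 120
```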

Note that we haven’t yet recreated the functionality of my original automation. In that automation, the brightness was controlled by the door state. We’ll tackle this next.

Extending the App

Since our previous MotionLight app is just a Python object, we can take advantage of the object-oriented capabilities of the Python language to extend it with further functionality. Doing so allows us to maintain the original behaviour for some instances, whilst also customising for more complex functionality.

Our subclassed light looks like this:
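
In outline it looks like this (the class and module names, time window and brightness values are illustrative):

```python
from motion_light import MotionLight  # hypothetical module name for the app above


class BrightnessControlledMotionLight(MotionLight):
    """Motion light whose brightness depends on which door was opened last."""

    def initialize(self):
        # Load the parent's configuration parameters and callbacks first.
        super().initialize()
        # Lists of door sensors, passed from apps.yaml.
        self.bedroom_doors = self.args["bedroom_doors"]
        self.other_doors = self.args["other_doors"]
        self.last_door = None

        for door in self.bedroom_doors:
            self.listen_state(self.bedroom_door_callback, door, new="on")
        for door in self.other_doors:
            self.listen_state(self.other_door_callback, door, new="on")

    def bedroom_door_callback(self, entity, attribute, old, new, kwargs):
        self.last_door = "bedroom"
        self.log("Last door opened: bedroom")

    def other_door_callback(self, entity, attribute, old, new, kwargs):
        self.last_door = "other"
        self.log("Last door opened: other")

    def motion_callback(self, entity, attribute, old, new, kwargs):
        if not self.is_light_times():
            return
        if self.get_state(self.light) == "off":
            # Dim if someone came from a bedroom (outside the "always bright"
            # hours), bright otherwise.
            if self.last_door == "bedroom" and self.now_is_between("22:00:00", "07:00:00"):
                self.turn_on(self.light, brightness_pct=10)
            else:
                self.turn_on(self.light, brightness_pct=100)
        self.set_timer()
```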

Here we can see that the initialize method loads only the new configuration parameters. The existing parameters from the parent class are loaded from the parent’s initialize method via the super call. The new configuration options are passed as lists, allowing us to specify several bedroom or other doors. In order to set the relevant callbacks I loop over each list and set the callback. The callback is the same for each entry in the list since it only matters what type they are. The specifics of each door are irrelevant.

Next we have the actual callback methods for when the doors open. These just set the internal variable last_door to the relevant value and log it for debugging purposes.

Most of the new logic comes in the motion_callback method. Here I have reused the is_light_times and set_timer methods from the parent class. The remainder of the logic first checks that the light is off and then recreates the operation of the template I used in my original automation. This sets the light to dim if the last door opened was to one of the bedrooms and bright otherwise. There are also some time based restrictions on this for times when I always want the light bright.

The configuration is pretty similar to the previous example, with the addition of the lists for the doors:
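
For example (again, entity IDs are placeholders):

```yaml
bathroom_motion_light:
  module: bathroom_light
  class: BrightnessControlledMotionLight
  sensor: binary_sensor.bathroom_motion
  light: light.bathroom
  timeout: 120
  bedroom_doors:
    - binary_sensor.bedroom_door
    - binary_sensor.spare_bedroom_door
  other_doors:
    - binary_sensor.hall_door
    - binary_sensor.toilet_door
```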

Conclusion and Next Steps

The previous automation (or, more rightly, set of automations) totalled 78 lines. The Python code for the app is only 56 lines long. However, there is another 11 lines of configuration required. By this measure, the two are similar in complexity. However, we now have an easily re-usable implementation of two types of motion-controlled light with the AppDaemon implementation. Further instances can be called into being with only a few lines of simple configuration, whereas the YAML automation would need to be duplicated wholesale and tweaked to fit.

This power makes me keen to continue with AppDaemon. I’m also keen to integrate it with my CI pipeline, although I’m actually thinking of separating it out from my HASS configuration. With this I’d like to try out some more modern Python development tooling, since it’s been quite some time since I’ve had the opportunity to do any serious Python development.

I hope you’ve enjoyed reading this post. For anyone already using AppDaemon, this isn’t anything groundbreaking. However, for those who haven’t tried it or are on the fence, I’d highly recommend you give it a go. Please feel free to show off anything you’ve made in the feedback channels for this post!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Continuous Integration/Deployment for Home Assistant with Gitlab CI

This post may contain affiliate links. Please see the disclaimer for more information.

One of the best things about writing this blog is the interactions I have with other people. Of course it’s always nice to get feedback, whether it’s positive or (constructively) negative. It’s even better to see what similar projects other people are undertaking. Sometimes comments even start me off in a different direction than I had been taking.

This project was inspired by one such conversation. This started out rather gruff but actually ended up being really positive and motivated me to go further with an approach that I’d mostly dropped. The conversation in question was in relation to my recent “Seven Home Assistant Tips and Best Practices” post. Specifically it was around testing and deploying your Home Assistant config with Continuous Integration (via Gitlab CI).

I’m already familiar with Continuous Integration and Continuous Deployment through work. I had developed a minimal setup for validating my HASS config. However, I’d previously given up on it due to the slowness of Gitlab’s shared CI runners. What follows is my new attempt. Thanks to /u/automate_the_things and all the other commenters on that thread for the inspiration/persuasion to do this!

Setting Up a Local Runner

Ideally, I’d like to fully self host my own Gitlab instance. However, the recommended RAM requirements are between 4 and 8GB, which is a little ridiculous. Perhaps when I upgrade my server I’ll be able to spare enough RAM for this. For now I’m just running a local runner and connecting it to the cloud version of Gitlab.

I decided to go with deploying the runner as a Docker container and also executing my jobs within their own containers. This fits with my recent Docker shenanigans and I’m familiar with this setup, having deployed it for work. CI systems are one area where I’ve always felt that using containers makes sense, even before my recent Docker adventures. Each build running in its own container removes a lot of the complexity in managing build environments for different projects. It also means that your build machines only require Docker.

I set up my runner in a new VM on my main server. Then I pretty much just followed the official instructions to install the runner. I did convert the docker run command to a minimal docker-compose.yml file for ease of reproduction however:
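
Essentially a direct translation of the documented docker run command:

```yaml
version: "3"

services:
  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    restart: always
    volumes:
      - /srv/gitlab-runner/config:/etc/gitlab-runner
      # The runner talks to the host Docker daemon to spin up job containers.
      - /var/run/docker.sock:/var/run/docker.sock
```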

Once that was done, I finished by getting the runner registered and connected to my project.

Once your runner is active it should show up in your Gitlab runners page (accessible via Settings->CI/CD->Runners)

Build Pipeline Configuration

In my research for this project, I came across Frenck’s Gitlab CI configuration, which is really awesome (thanks @Frenck). I’ve based mine heavily on his with some tweaks for my configuration and environment. The finished pipeline runs the following jobs:

  • shellcheck – this job performs various checks on any shell scripts in the repository
  • yamllint – this job performs a full lint check of all my YAML files. Since I’d never run this before, it threw up loads of errors. I started fixing a few of these, but eventually marked the job with allow_failure: true to allow the pipeline to continue even with these errors. I’ll work on fixing these issues bit by bit over the next few weeks. I also exclude files which are encrypted with git-crypt from this check.
  • jsonlint – pretty much the same as yamllint, but for JSON files. Any files that are encrypted are excluded from this check.
  • markdownlint – similar as the previous jobs, but for checking of markdown files (such as the README.md file)
  • ha-latest – checks the HASS configuration against the current release of Home Assistant (there’s a sketch of this job below)
  • ha-rc – runs a HASS configuration check against the next release candidate for Home Assistant
  • ha-dev – checks the HASS configuration against the development version of Home Assistant. Both this job and ha-rc are configured to allow failure; they are just intended to give me advance warning of any breaking configuration that may prevent HASS from starting up in a future release.
  • deploy – deploys the configuration to my HASS server. I’ll discuss this in more detail below.
My full pipeline (note the yellow status of the failing yamllint job)

You can find the finished configuration in my hass-config repository.
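
To give a flavour of these jobs, here is a minimal sketch of the ha-latest configuration check mentioned above (the ha-rc and ha-dev jobs are the same apart from the image tag and allow_failure: true; depending on the image version you may also need to override its entrypoint):

```yaml
ha-latest:
  stage: test
  image: homeassistant/home-assistant:latest
  script:
    # Run Home Assistant's built-in configuration check against this repo.
    - hass --script check_config --config .
```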

Deployment Approaches

There are several ways I could have done the deployment, which may suit different scenarios:

  • We could use the Home Assistant Gitlab CI sensor to poll the pipeline status. We would then trigger an automation to pull down the configuration when the pipeline passes. I haven’t tried this approach. However, it is potentially useful if your HASS server and Gitlab runner are on different networks and your HASS server is not publicly available.
  • We could use a pipeline webhook from Gitlab to a webhook handler in HASS. This would trigger an automation similar to that above. Again, I haven’t tried this. It would be useful if your Home Assistant instance and Gitlab CI runner are on different networks. However, it would require your HASS instance to be publicly available.
  • Similar to the approach above, you could trigger the HASS webhook handler from a job in your Gitlab runner directly with curl (rather than using the built-in webhooks). This has the advantage over the previous two approaches that it gives you an explicit deploy stage in your pipeline. This in turn gives you the ability to track deployments via environments. It’s also potentially a lot simpler, since it would only be triggered if the previous stages in the pipeline passed. You also wouldn’t have to parse the JSON payload.
  • The approach I have taken is to deploy directly from the runner container via SSH. This is because my runner and HASS machines are running in the same network and so I can easily SSH between them without having to open any ports through my firewall. It also centralises all the deployment logic in the CI configuration, without any HASS automations needed.

My Deployment Job

As per my CI configuration, the deployment job is as follows:
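
(What follows is a sketch reconstructed from the description below; the $HASS_TOKEN and $HASS_URL variable names are placeholders I’ve chosen, the rest matches the description.)

```yaml
deploy:
  stage: deploy
  image: alpine:latest
  environment:
    name: home-assistant
  tags:
    - hass
  only:
    - master
  before_script:
    - apk add --no-cache openssh-client curl
    # Write out the password-less deployment key held in the CI variables.
    - echo "$DEPLOYMENT_SSH_KEY" > id_rsa
    - chmod 600 id_rsa
  script:
    # Update the checkout on the HASS server to exactly this commit.
    - ssh -i id_rsa -o StrictHostKeyChecking=no "$DEPLOYMENT_SSH_LOGIN" "cd /mnt/docker-data/home-assistant && git fetch && git checkout $CI_COMMIT_SHA"
    # Restart Home Assistant via the REST API so the new config is loaded.
    - 'curl -X POST -H "Authorization: Bearer $HASS_TOKEN" "$HASS_URL/api/services/homeassistant/restart"'
  after_script:
    - rm -f id_rsa
```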

As you can see, this job runs in a plain Alpine Linux container and deploys to the home-assistant environment. This allows me to track what versions were deployed and when from the Gitlab UI.

The before_script portion installs a couple of dependencies which we need for later and pulls the (password-less) SSH key we need for logging into the HASS server from the project variables. This is stored in the $DEPLOYMENT_SSH_KEY variable in the Gitlab configuration. The resulting file must have its permissions set to 600 to allow the SSH client to use it.

Moving on to the script portion. The first step performs the actual deployment of the repository to the server via SSH. Here we use the SSH key that we wrote out above. The public portion of this is installed on the HASS server for the ci user. We also disable strict host key checking to prevent the SSH client prompting us to accept the fingerprint.

The Gitlab CI variables page (accessible via Settings->CI/CD->Variables)

The SSH command connects to the server specified in $DEPLOYMENT_SSH_LOGIN, which is again set in the Gitlab variables configuration. This has the form ci@<hass host IP>. It should be noted here that the Alpine container defaults to using Google’s DNS. This means that resolving internal hostnames for your network will fail. I’m using the IP addresses for now to get around this.

Remote Control Commands

The SSH command sends a sequence of commands to be run on the HASS server. These commands are as follows:

  • cd /mnt/docker-data/home-assistant – change directory to the configuration directory on the server
  • git fetch – fetch all the new stuff via git
  • git checkout $CI_COMMIT_SHA – checkout the exact commit that we are running a pipeline for via one of Gitlab’s built in variables

This arrangement of commands allows me to control exactly what gets deployed to the server for each pipeline run. In this way we won’t accidentally deploy the wrong version if new code is checked into the server whilst our pipeline is running.

In order for the git fetch command to work another password-less SSH key is required. This time this is for the ci user on the HASS system. The public portion of this is installed as a deploy key for the project in Gitlab. I suppose it’s equally valid to pull changes via HTTPS (for public repos), but since the remote on my repository was already set up to use SSH I decided to continue using it.

Restarting Home Assistant from Gitlab CI

The second command in our script section is to restart Home Assistant after the configuration has been updated. Here we use CURL to call the homeassistant.restart service as per the API docs. The Home Assistant authentication token and URL are stored in Gitlab CI variables again.

Finally, we enter the after_script section, which will be executed even in the case that one of the above commands fails. Here we simply delete the id_rsa SSH key file.

I’ve restricted my deploy job to run only on pushes to the master branch. This allows me to use other branches in the repo as I please without having them deployed to the server accidentally. I’ve also used a tag of hass to prevent running on runners not intended for this job. Since I only have one runner right now this isn’t a concern, but having it in place makes things easier if/when I add more runners.

Conclusion

I’m really pleased with how this CI pipeline has turned out. However, I’m still a little concerned at how long all these steps take. The latest pipeline took 6 minutes and 42 seconds to run (plus the time it takes HASS to restart). This isn’t very much if it’s just a fire-and-forget operation. It is, however, a long time if I am trying to iterate on my configuration. I’m going to play around with the runner configuration to see if I can get this down further. I also want to investigate options for testing on my local machine. In my previous attempt at this, my builds could sit for longer than that waiting for one of Gitlab’s shared runners, so I’ve at least made progress in the speed department.

Next Steps

In terms of further improvements, I’d like better notifications of the pipeline progress and notifications when HASS completes its restart. I will implement these with Gotify. Right now I only get an email from Gitlab if the build fails. I’m also going to integrate the pipeline status into Home Assistant with the previously mentioned Gitlab CI sensor. I’m even tempted to turn one of my smart lights into a build light to alert me to problems!

I also want to take my use of CI in my infrastructure further. My next target will be building some modified Docker images for a couple of my services in CI. Deployment of the relevant Docker stacks is another thing I’d like to try out. I’ve not had chance to play with building containers via Gitlab CI before so that will be interesting.

I hope that you’ve enjoyed this post and that it’s inspired you to give something similar a go. Thanks to this pipeline I’ll never have to worry about HASS being unable to start due to a broken configuration again. I’m open to further improvements to this process. Please feel free to share any you may have via the feedback channels. Thanks for reading!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.