
Continuous Integration for Home Assistant, ESPHome and AppDaemon

This post may contain affiliate links. Please see the disclaimer for more information.

Recently I set up continuous integration and deployment from my Home Assistant configuration. This setup has been nothing short of awesome! It’s liberated me from worrying about editing my configuration – all I do is git push and relax. Either HASS will notify me when it restarts or I’ll get an email from Gitlab telling me the pipeline failed.

I wanted to take this configuration further and expand it to other parts of my Home Automation infrastructure. In this post I’ll cover expanding it to perform deployments of my HA stack with Docker, building and deploying to ESPHome devices, and unit testing and deploying my AppDaemon apps.

Let’s get on with it!

Automating Docker Deployment

I’d originally held off doing this because I wasn’t looking forward to building custom Docker images in Gitlab CI. However, I managed to complete the original pipeline without having to add any extra dependencies to the HASS containers (such as git which I thought may be required). This makes the job of deploying my HA stack much easier, especially as I already had it mostly scripted. The first step was to add my update.sh script to my repo and tweak it to suit:
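As a rough sketch (the exact paths are my reconstruction, not the script verbatim), the tweaked update.sh looks something like this:

```shell
#!/bin/sh
# update.sh – redeploy the Home Assistant stack (a sketch; paths are assumptions)
set -e

# Run relative to the repository, wherever it happens to be checked out
cd "$(dirname "$0")"

# -p pins the docker-compose project name to the old directory name ("ha"),
# so existing containers are reused even though the directory is now
# called home-assistant
PROJECT="ha"

if command -v docker-compose >/dev/null 2>&1; then
    docker-compose -p "$PROJECT" pull
    # --remove-orphans cleans up containers removed from docker-compose.yml
    docker-compose -p "$PROJECT" up -d --remove-orphans
else
    echo "docker-compose not found; skipping deployment"
fi
```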

This is a pretty simple modification to my previous script. The main additions are that I use the -p argument to set the project name used by docker-compose. By default this is taken from the directory name, but I wanted it to match the name of my previous project even though the directory has changed from ha to home-assistant. The other main modification is that I’ve added the --remove-orphans argument to clean up any lingering containers. This is useful if I remove a container from the docker-compose.yml file. In addition I’ve removed the apt commands and cleaned up the script a bit so that it passes my shellcheck job.

The next step was simply to add the docker-compose.yml file to the repo. Then I continued by editing the CI configuration.

Updated Home Assistant CI Jobs

I first split up my previous deployment job into two jobs. The first of these is the main deployment job which pulls the new configuration. The second restarts HASS. The restart job goes in a new pipeline stage and will only be run when the docker-compose.yml or update.sh files haven’t changed:
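The split looks roughly like this (job names, the pull command and runner tags here are illustrative, not my exact config):

```yaml
deploy:
  stage: deploy
  script:
    # the runner on the server pulls the new configuration
    - git pull
  tags:
    - deployment

restart_hass:
  stage: restart
  script:
    - docker restart home-assistant
  # skip the restart when the Docker deployment will run instead
  except:
    changes:
      - docker-compose.yml
      - update.sh
  tags:
    - deployment
```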

I then added another job (again in another pipeline stage) which performs our Docker deployment. This will be run only when either the docker-compose.yml or update.sh files changes:
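The Docker deployment job uses the inverse changes condition (again, names are illustrative):

```yaml
docker_deploy:
  stage: docker-deploy   # a new, final pipeline stage
  script:
    - ./update.sh
  # run only when the stack definition or deployment script changes
  only:
    changes:
      - docker-compose.yml
      - update.sh
  tags:
    - deployment
```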

A full pipeline run with a deployment of the Docker containers running in the final stage.

With that in place I can now redeploy my HA stack by modifying either of those files, committing to git and pushing. In order to facilitate HASS updates with this workflow, I changed the tag of the HASS Docker image to the explicit version number. That way I can simply update the version number and redeploy for each new release.

Continuous Integration for ESPHome

Inspired by the previous configs I have seen for checking ESPHome files, I wanted to implement the same checks. However, I wanted to go further and have a full continuous deployment setup which would build the relevant firmware when its configuration was changed and send an OTA update to the corresponding device. As it turned out this was relatively easy.

I started out by importing my ESPHome configs into Git, which I hadn’t previously done. You can find the resulting repository on Gitlab. For the CI configuration I first copied over the markdownlint and yamllint jobs from my Home Assistant CI configuration.

I then borrowed the ESPHome config check jobs from Frenck’s configuration. These check against both the current release of ESPHome and the next beta release. The beta release job is allowed to fail and is designed only to provide a heads up for potential future issues.

Then I came to implement the build and deployment job. Traditionally these would be performed in separate steps, but since ESPHome can do this in a single step with its run subcommand I decided to do it the easy way. This also removes the requirement to manage build artifacts between steps. I created the following template job to manage this:
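The general shape is as follows (the image name and key file path are assumptions; git-crypt and openssl need to be available in whatever image you use):

```yaml
.esphome_deploy:
  stage: deploy
  image: esphome/esphome   # assumed; any image with esphome, git-crypt and openssl works
  before_script:
    # decrypt the git-crypt key; $OPENSSL_PASSPHRASE is a Gitlab CI variable
    - openssl enc -d -aes-256-cbc -in git-crypt.key.enc -out git-crypt.key -k "$OPENSSL_PASSPHRASE"
    # unlock the repository so the encrypted secrets.yaml can be read
    - git-crypt unlock git-crypt.key
  after_script:
    # never leave the plaintext key lying around on the runner
    - rm -f git-crypt.key
```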

Most of the complexity here is in unlocking the git-crypt repository so that we can read the encrypted secrets file. I opted to store the git-crypt key in the repository, encrypted with openssl. The passphrase used for openssl is in turn stored in a Gitlab variable, in this case $OPENSSL_PASSPHRASE. Once the decryption of the key is complete, we can unlock the repo and get on with things. We remove the key after we are done in the after_script step.

Per-Device Jobs

Using the template configuration, I then created a job for each device I want to deploy to. These jobs are executed only when the corresponding YAML file (or secrets.yaml) is changed. This ensures that I only update devices that I need to on each run. The general form of these jobs is:
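Each per-device job extends the template job described above and builds/flashes in a single step via the run subcommand (on newer ESPHome releases the invocation is `esphome run my_device.yaml`; `.esphome_deploy` is my placeholder name for the template):

```yaml
my_device:
  extends: .esphome_deploy   # the template job from the previous section
  script:
    # compile the firmware and push it to the device over OTA in one step
    - esphome my_device.yaml run --no-logs
  only:
    changes:
      - my_device.yaml
      - secrets.yaml
```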

Of course you need to replace my_device with the name of your device file.

A run of the ESPHome pipeline with deployments to two devices

With these jobs in place I have a full end-to-end pipeline for ESPHome, which lints and checks my configuration before deploying it only to devices which need updating. Nice! You can check out the full pipeline configuration on Gitlab. I now no longer have need to run the ESPHome dashboard, so I’ve removed it from my server.

Continuous Integration for AppDaemon

I mentioned previously that I wanted to split out my AppDaemon apps and configuration into a separate repo from my HASS config. I did this as a prerequisite step of this setup and you can again find the new repo on Gitlab.

The inspiration for this configuration came mostly from @bachya on the HASS forum, whose post in reply to my earlier setup provided most of the details. Thanks for sharing!

I started out by copying across the now ubiquitous markdownlint and yamllint jobs. I then added jobs for pylint, mypy, flake8 and black:
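Each of these jobs follows the same pattern; roughly (the apps/ path and image tag are assumptions about my repo layout):

```yaml
pylint:
  stage: lint
  image: python:3
  script:
    - pip install pylint
    - pylint apps/*.py

flake8:
  stage: lint
  image: python:3
  script:
    - pip install flake8
    - flake8 apps/

black:
  stage: lint
  image: python:3
  script:
    - pip install black
    - black --check apps/

mypy:
  stage: lint
  image: python:3
  script:
    - pip install mypy
    - mypy apps/
```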

Although this ends up being very verbose, I decided to implement these all as separate jobs so that I get individual pass/fail states for each. I’m also pretty sure the mypy job doesn’t do anything right now, because I’m not using any type hints in my Python code. However, the job is there for when I start adding those.

Unit Testing AppDaemon

Another thing that @bachya introduced me to was Appdaemontestframework. This provides a pytest-based framework for unit testing your AppDaemon apps. Although I’m still working on the unit tests for my so-far pretty minimal AD setup, I did manage to get the framework up and running, which was a little tricky. I had some issues with setting up the initial configuration for the app, but I managed to work it out eventually.

The unit testing CI job is pretty simple:
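Something along these lines (the requirements file name is an assumption):

```yaml
unit_tests:
  stage: test
  image: python:3
  script:
    - pip install -r requirements.txt
    - py.test
```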

All we do here is install the requirements that I need for the tests and then call py.test. Easy!

The deployment job for AppDaemon was also trivial, since it is pretty much a copy of the HASS one. Since AD detects changes to your apps automatically, there’s no need to restart. For more details you can check out the full CI pipeline on Gitlab.

A run of the AppDaemon pipeline – lots of preflight checks here!

Conclusion

Phew, that was a lot of work, but it was all the logical follow on from work I’d done before or that others had done. I now have a full set of CI pipelines for the three main components of my home automation setup. I’m really happy with each of them, but especially the ESPHome pipeline. As an embedded engineer in my day job I find it really cool that I can update a YAML file locally, commit/push it and then my CI takes over and ends up flashing a physical device! That this is even possible is a testament to all the pieces of software used.

Next Steps

I’m keen to keep going with CI as a means of automating my operations. I think my next target will be sprucing up my Ansible configurations and running them automatically from CI. Stay tuned for that in the hopefully near future!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Building a Smarthome Network with Open Source Software

This post may contain affiliate links. Please see the disclaimer for more information.

I’ve recently been doing some network restructuring and clean up in order to better separate devices on my network and to remove a bit of the cruft that builds up over time. In the process, I realised I haven’t written about how my smarthome network is structured. My network setup is somewhat different to lots of other setups I’ve seen documented. This is because I’m mostly using Open Source software to drive my firewall and wireless access points.

This article will introduce the technologies I’m using on my network and give you an overview of the structure. It’s not intended to be a full how-to, but I’ll try to include links which will guide you through any parts I don’t cover in detail. Let’s get into it…

What’s special about a Smarthome Network?

The typical smarthome today includes a myriad of devices, probably from a variety of different manufacturers. Obviously, there will be your smarthome devices – smart bulbs, switches, hubs/gateways, vacuums, IP cameras, etc. However, there are all the standard devices too – smartphones, tablets, laptops, desktops, etc. Additionally, it’s likely that there will be some devices specifically used for media consumption.

We can see that there are obviously different classes of devices in the lists above. These differ both in function and in the level of trust that you give to them. It’s this attribute of trust that separates a smarthome network from a standard home network for me.

Consider this: are there any devices on your home network that you maybe don’t trust all that much? What about that light bulb over there? When did it last get a security update? Do you really want that sitting on the same network as the devices you trust with your personal communications, important documents and web searches?

Using existing technologies, we can partition our network in such a way as to separate our trusted and untrusted devices from one another. We don’t need to stop there! We can separate devices that require Internet access from those that don’t or along any other lines that we want. The technologies in question are well trusted and have been around for years: VLANs and inter-network firewalling.

Open Source Components

I’ve been operating a partitioned network setup for some time, pretty much since I got seriously into Home Automation technologies. Recently I’ve seen renewed interest in this in the community. The recent series from Rob at The Hook Up on YouTube was particularly good (part 1, part 2, part 3). A large part of the community appears to be using the Unifi line of products from Ubiquiti. I’m not questioning the quality or performance of these products – I’ve never tried them. However, I prefer to use Open Source alternatives where possible.

You shall not pass!

The primary components of my smarthome network are running Open Source software. The first of these is my firewall, running pfSense. The second of these are my wireless access points. For these I use a couple of consumer grade TP-Link wireless routers, running OpenWRT. Since these are running solely as access points and Ethernet switches they don’t suffer performance issues from the consumer hardware. I also don’t put excessive strain on my wireless, preferring to use Ethernet wherever possible.

The road goes ever on…

Unfortunately, there doesn’t appear to be an Open Source OS for off-the-shelf Ethernet switches. As an alternative I’m using a couple of TP-Link switches (the TL-SG1024DE and TL-SG105E). These are great switches for the price. Crucially, they come with the VLAN capability that we need for our partitioned smarthome network.

Network Partitioning With VLANs

For those unfamiliar with VLANs, they feel like magic when you discover them! A VLAN is a virtual network, which runs over the top of your physical network cables. Several of these networks can be run over the same connection. This means we can run several networks on the same physical hardware. VLANs work by tagging each Ethernet frame with a network identifier so that the receiving hardware knows which network to send it on to. This usually requires hardware support in your switching hardware, although software support is available in most operating systems.

I subdivide my network into several VLANs, based around both trust and functionality:

  • The main LAN network, this houses the trusted client devices (laptops, smartphones, etc). This network has access to most of the others for maintenance purposes.
  • The IOT network, which houses smart devices which absolutely require Internet access. At the moment this is just my Smart TV, Neato Botvac and a Chromecast Ultra. The Neato is the only device that uses the associated WLAN. This network is firewalled from all the others but is allowed to access the Internet.
  • The NOT (Network of Things – name stolen from The Hook Up video series, linked above). This houses smart devices which are locally controlled, such as my ESPHome devices. Some of the devices on this network (such as my Yeelight bulbs, Milight Gateway and Broadlink RM Mini) want to get out to the Internet but are blocked by the firewall. For ease of use I put the VM hosting my Home Automation services on this network and make an exception for it in the firewall. I’m hoping to change this eventually. This network is blocked both from the other local networks and the Internet, aside from a few exceptions.
  • The Media network, this houses all the devices and servers which stream media around my house. This includes both my Kodi systems, an older HDHomerun and the Emby, tvheadend and Mopidy/Snapcast servers. For now the RPi driving my outdoor speakers is living in the NOT network. This is because it’s on wifi and I didn’t want to create another AP for it. Eventually that will be migrated over, once I run a cable into the roof for it. Having the media devices in a separate VLAN should allow me to do some traffic shaping in the future to prioritise their traffic (if needed, I don’t have any problems right now). This network isn’t blocked from any of the others and has Internet access.
  • The DMZ, this hosts any services which are available from outside my network. It’s blocked from all local networks, but has Internet access.
  • The Guest network, which is tied directly to a specific WLAN only for guest devices. This allows me to provide Internet access to guests without giving them access to anything else. Blocked from all the local networks, but obviously has Internet access.
  • The Infrastructure network, this one is new and I’m still migrating devices over to it. The idea is that it will house all the network infrastructure devices, including switches, access points and the physical host servers. Everything in here will have a static IP address. Right now the firewall is open. I will probably lock it down so that admin can only be performed from trusted devices.
  • The Servers network, this one is also new and so far empty. Eventually it will contain all the internal server VMs (i.e. non-DMZ, media or HA). The idea is that I can use firewall rules to control access to these from other parts of the network. Most definitely a work in progress.
  • The Work network, which houses my company workstation and any other devices used for work, since I work from home. This is blocked from the other networks, except for a few ports. It obviously has Internet access.

Phew… that’s quite a few!

VLANs With pfSense

VLANs with pfSense are fairly easy to configure. It’s basically a two step process:

  1. Create the VLAN in Interfaces->Assignments->VLANs
  2. Add a new interface which uses the VLAN in Interfaces->Assignments
The pfSense VLANs page
The pfSense interfaces page, this maps VLANs to interfaces used in firewall rules

The pfSense documentation gives a better overview of this.

Once your VLANs and interfaces are available you should be able to configure the firewall rules to control traffic between them. You’re probably going to want to block access to the local networks from your secure VLANs and potentially allow Internet access. If you want to also deny Internet access just deny access from that network to all destinations.

The firewall rules for my IOT network

OpenWRT: VLANs and Multiple APs

The main reason for using OpenWRT on my wireless APs is to unlock capabilities of the underlying hardware that aren’t available in the stock firmware. Specifically, this will allow you to create multiple wireless access points and assign them to different VLANs.

Multiple WLAN Access Points in OpenWRT

The basic process here is to create a bridge interface. This can be used to group your VLAN and the wireless network together. Adding the VLANs themselves is pretty trivial. There is even a nice GUI editor which shows each port and the corresponding VLANs.
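On the command line, the same setup is reflected in /etc/config/network and /etc/config/wireless. A sketch in the older (pre-21.02) UCI syntax, where the VLAN number, port assignments, SSID and key are all placeholders:

```
# /etc/config/network – tag VLAN 20 on the trunk port (0t) and bridge it
config switch_vlan
        option device 'switch0'
        option vlan '20'
        option ports '0t 4'

config interface 'iot'
        option type 'bridge'
        option ifname 'eth0.20'
        option proto 'none'

# /etc/config/wireless – attach an access point to the bridged network
config wifi-iface
        option device 'radio0'
        option mode 'ap'
        option network 'iot'
        option ssid 'IoT'
        option encryption 'psk2'
        option key 'changeme'
```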

The OpenWRT VLAN assignments interface is the best I’ve found so far

The main gotcha of this setup is to make sure that the dnsmasq service is disabled in System->Startup to prevent it interfering with the DNS and DHCP from the firewall. You can also check the “Ignore Interface” DHCP setting in the interface config for each interface.

You can also delete the default created WAN interface to allow you to use the fifth switch port on the router as another port on the network. This should also disable NATing between the ports which you also don’t need with this setup. Basically, once you are finished tweaking all the settings the device will only act as an access point (for multiple wireless networks) and managed switch.

Special Considerations

There aren’t too many issues that you should run into with this setup, once you get all the pieces into place. There were a few minor things which tripped me up however (two of these are specific to the switch I’m using):

  • My TL-SG1024DE switch is a little funny about which VLAN its management interface will be available on. It seems like it’s VLAN 1 or nothing. Additionally, it adds VLAN 1 as untagged to every port. I eventually just made VLAN 1 into my Infrastructure network to work around this.
  • The TL-SG1024DE switch also requires you to set the PVID setting on each port, even if the port will never receive any untagged frames. I just set it to whatever the primary VLAN of the port is.
  • I held off moving the Chromecast into the IOT network for a long time, since I was worried about the discovery not working across subnets. As it turns out, I shouldn’t have worried. All you have to do is install and enable the Avahi package in pfSense and you’re good to go. This video helped me out, though the settings seem to have been simplified since then (see the screenshot below for my settings).
Avahi settings in pfSense

Core Network and Topology

I’m not going to cover how I setup the main switch on my network, since the purpose of this article is to focus on the Open Source components. Your configuration will also vary depending on what switch you have.

I will say a few words on the physical topology of my smarthome network. My firewall and main switch are located in my ‘rack’. The firewall machine has two network interfaces. One is used as the WAN port and connected to my fibre ONT. The other is the main VLAN trunk to the switch. In my case my pfSense install is actually installed as a virtual machine hosted by Proxmox which runs on the host machine. This is somewhat irrelevant to this discussion and probably deserves a post all of its own.

From the main switch, connections go out to the various rooms in the house. In three of these I have additional switches where I need the extra ports. These are the two OpenWRT devices and the TL-SG105E switch. This layout also gives nice wireless coverage around the house. The uplinks to each of these are configured as VLAN trunks for whichever networks are required at the other end.

Conclusions

Hopefully this post has shown you that it’s possible to create a fully featured and secure smarthome network using Open Source components. I’ve probably glossed over tons of the details of my setup in the process of writing this article. If I put them all in, it would probably be three times as long! If you have any questions, please ask them in the feedback channels. I’m also not a professional network engineer, so feel free to provide improvements and constructive criticism!

My network is a constant work in progress. However, once this latest round of tidying up is complete I think I’ll be in good shape for quite a while. Hopefully this network will happily support all the future projects I have planned for my smarthome!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Self-Hosted GPS Tracking with Traccar and Home Assistant

This post may contain affiliate links. Please see the disclaimer for more information.

One of the outstanding issues I’ve had with my Home Automation system in recent months has been to fix my presence detection. I was using a combination of Owntracks and SNMP to query my pfSense firewall for devices on the network.

This worked well until my Owntracks setup broke due to internal network changes, specifically the separation of my HA/MQTT server from the reverse proxy. In turn, this meant that the MQTT server wasn’t able to renew its Let’s Encrypt certificate.

The solution to this would have been to use DNS validation to get the certificate issued. However, my DNS provider (Namecheap) doesn’t allow API access unless you have a large enough account with them.

Eventually I will migrate my DNS to a supported provider and get internal TLS working. However, it’s quite a bit of work to migrate over. Also, DNS is pretty critical to the operation of, well, everything – so I want to take my time and get it right. In the meantime I was looking into other options for GPS based presence detection.

GPS Based Presence Options

Owntracks supports HTTP mode as well as MQTT mode (HTTP is now actually the recommended mode). This is directly supported in HASS. I hadn’t switched over to it because I wanted to try out the Owntracks recorder for logging position data over time. Sending data to both HASS and the recorder would be difficult via HTTP. However, it’s exactly the use case that MQTT is made for!

While I was dithering around not doing very much about this problem, another alternative cropped up: Traccar. Traccar is a self-hosted GPS tracking system which supports a multitude of different devices and has mobile apps available for both major OSes. The main plus point for me is that it is a stand-alone server which I could host in my DMZ. There is also a Home Assistant integration for Traccar.

Installation

It seems to be rather badly documented, but there is an official Docker image for Traccar on Docker Hub. When reading that documentation I initially thought I had to build the image myself. Don’t! Just use the official one (unless you have a good reason not to).

I followed along the instructions in the Readme, until it came to the final docker run command, where I wanted to put it into docker-compose. Here’s what I came up with:
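A sketch of that compose file (the host paths are assumptions; the official README shows the equivalent docker run command):

```yaml
version: "3"

services:
  traccar:
    image: traccar/traccar:latest
    container_name: traccar
    restart: unless-stopped
    ports:
      - "8082:8082"   # web UI
      - "5055:5055"   # OsmAnd protocol, used by the mobile apps
    volumes:
      - ./logs:/opt/traccar/logs:rw
      - ./traccar.xml:/opt/traccar/conf/traccar.xml:ro
      - ./data:/opt/traccar/data:rw   # persist the default H2 database files
```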

This is pretty much the command in the docs translated over, except for the second-to-last line. This mounts the data volume needed for persisting the default H2 database files to the host. I’m not sure why this isn’t mentioned in the docs. Perhaps they expect that you will use MySQL for the database, but I didn’t want to do that for my initial test setup.

The Traccar Web UI

After running the docker-compose up -d command I had Traccar working on port 8082 of my server and was able to log in as the default user (admin and password admin!). The first thing I did after logging in was change that password and disable user registration, before exposing my instance via the reverse proxy.

Deselect “Registration” in the Server Settings dialog (Right Hand Gear Icon->Server)

I found that the server memory usage was reasonably high, which seems to be due to the memory options passed to the Java VM in the Dockerfile. There doesn’t seem to be any way to change this except for building a custom image.

Reverse Proxy Setup

I’m intending to migrate the reverse proxy on my home network to Traefik at some point after my previous success with it. However, for now it’s still running good old Nginx. I couldn’t find an example Nginx config for Traccar, so I copied my HASS one and modified it to suit:
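The resulting server block looks something like this (the hostname and certificate paths are placeholders; the WebSocket headers are needed for the live-updating web UI):

```nginx
server {
    listen 443 ssl;
    server_name traccar.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8082;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # WebSocket support for live position updates in the UI
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```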

Adding Devices

Once your server is up and running, adding devices is easy. Just click the plus icon next to devices and give your device a name. For the ID field, the Android app will generate a six-digit identifier, which you can copy over. This ID is the only security token used to update the position, so you may like to use something more secure. I recommend generating a pseudo-random string with pwgen and using that.

Setup on the app side is pretty trivial too. Just enter the full URL to the server, e.g. https://traccar.example.com, and make sure the device identifier matches the one you entered on the server. Toggling the service status will then start sending location updates. The Android app seems to put an annoying persistent notification in the notification pull down. For some reason this doesn’t go into the ongoing section (which would make it less annoying). I just hid the notifications from the Traccar app via the relevant Android setting and the app still sends location updates.

As an aside, you can easily send location updates from the command line with curl, for example I made the screenshot above by artificially positioning a device at New Plymouth’s Wind Wand:
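Something like the following, assuming the default OsmAnd protocol port (5055) is reachable on your server. The host, device ID and timestamp handling are placeholders, and the coordinates are only approximately the Wind Wand’s location:

```shell
# Hypothetical host and device ID – replace with your own
TRACCAR="http://traccar.example.com:5055"
DEVICE_ID="123456"
# Approximate coordinates of the Wind Wand, New Plymouth
LAT="-39.0564"
LON="174.0734"

# Traccar accepts OsmAnd-style GET parameters for position updates
URL="${TRACCAR}/?id=${DEVICE_ID}&lat=${LAT}&lon=${LON}&timestamp=$(date +%s)"
curl --silent --max-time 5 "$URL" || echo "request failed (expected with the placeholder hostname)"
```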

Make sure to update the hostname and device ID to match your setup and update the other fields to reflect the position you want to log.

Integrating Traccar with Home Assistant

The integration with HASS also proved to be relatively easy and works well. There were a couple of things which tripped me up, which I’ll come to shortly. First we need an entry for Traccar in our Home Assistant config:
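The relevant block looks like this (hostname and credentials are placeholders):

```yaml
device_tracker:
  - platform: traccar
    host: traccar.example.com
    port: 443
    ssl: true
    verify_ssl: true
    username: homeassistant
    password: !secret traccar_password
```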

The port and ssl variables are required for the above setup with the reverse proxy, aside from that it’s reasonably obvious.

After adding that to my config file, I expected my devices just to show up in HASS. They didn’t. This turned out to be due to two problems. The first of these was that I had created a user in Traccar specifically for Home Assistant (which you can give read-only permissions, nicely). However, this user didn’t have access to my devices. The solution is to grant access via the ‘Devices’ panel of the user management panel (the icon looks like two little picture frames).

The second issue tripped me up for a bit longer. Even after I set the permissions correctly, I still couldn’t see my devices in the HASS UI. It turns out they were being added to the known_devices.yaml on the server, but not being enabled by default. It wasn’t until I logged into the server and checked this file that I noticed this. This is more problematic if you are editing your config locally and deploying it via git since the change obviously won’t be made to your local copy.

In the end I added the entry in my local copy and deployed it to the server:
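The entry follows the usual known_devices.yaml format; the key fix is setting track: true (the device name here is a placeholder):

```yaml
my_phone:
  name: My Phone
  mac:
  icon:
  picture:
  track: true
  hide_if_away: false
```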

Once that was done for each device, the Traccar device_tracker entities appeared in Home Assistant just fine.

Battery Usage Issues

Now comes the big potential problem: the power usage of the Traccar Android app. The first day I had this installed my phone was down to 15% by 9-10pm. However, the results have been less worrying over the last couple of days. I’m not sure why it was so bad for the first day, although there were a few points that were different from my subsequent usage:

  • I was using the version from F-Droid. After noticing the battery issue I switched to the version from the play store. I don’t know if that version uses some proprietary Google Services API, or if they are identical. However, it is a possible reason for the difference.
  • I had changed a couple of the default settings – I lengthened the frequency setting to 15 minutes and changed the distance setting to 50 meters. I’m not sure what effect/interactions these have since I can’t find any documentation about them. I’ve since gone back to the default settings.
  • I moved around quite a bit that first day (some driving around town and a long walk) with less moving in the subsequent two days. If this is really the reason for the battery usage then that would be disappointing. Obviously not moving is not a solution!

This is all based on three days of usage, so I’m going to continue monitoring the situation and potentially testing other variations in settings. I’ve asked about the battery usage on the Traccar forum, but have so far had no answer. One worrying issue is that the Android battery usage panel shows the app as keeping the device awake for long periods.

Power usage from the Traccar app

There is an alternative to the Traccar app, since OwnTracks can also be used. However, at this point you may as well just use the HASS integration directly, unless you want the extra features of the Traccar server.

Conclusion

Traccar seems like a pretty solid piece of software, though due to the power issue I’m not completely sold on it yet. The location updates are certainly more frequent than OwnTracks was (at least over MQTT). The server side UI is nice and responsive and there are quite a few capabilities that I haven’t explored yet. For these reasons I’m going to stick with it for now and continue testing.

Next Steps

Obviously I need to solve the battery issues in order to go much further. If the usage proves acceptable after further testing I’m also going to see if I can dial down the memory usage of the server by editing the Dockerfile.

I also have quite a bit of work to do on the HASS presence detection front. I’d specifically like to try a combination of Bayesian Sensors and the “Not so Binary” approach.

Hopefully I can come up with something which is both responsive to changes and also robust against false readings. I’ll be sure to write a post about that when I do. However, that’s it for now. Thanks for reading!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Quick Project: Rubbish Collection Panel for Home Assistant

This post may contain affiliate links. Please see the disclaimer for more information.

With my local council rolling out an increasingly complex system of rubbish bins and collection, I’ve been thinking about getting this integrated with Home Assistant so that we don’t have to remember which bins go out when. I was pleasantly surprised to find the garbage_collection custom component whilst browsing HACS the other day. This component does exactly what I need, so I decided to give it a go.

Rubbish Collection Configuration

After installing the component via HACS, I set about configuring it. Here is the configuration I came up with:
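It was a configuration along these lines; the bin names, frequencies and collection day below are illustrative rather than my actual schedule:

```yaml
sensor:
  - platform: garbage_collection
    name: rubbish
    frequency: "weekly"
    collection_days:
      - wed
  - platform: garbage_collection
    name: recycling
    frequency: "even-weeks"
    collection_days:
      - wed
  - platform: garbage_collection
    name: glass
    frequency: "odd-weeks"
    collection_days:
      - wed
```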

That was easy! The only non-obvious thing is working out which bins are collected on even and odd weeks, which is easy to look up online.

Getting it into Lovelace

The garbage_collection component page in HACS has a nice screenshot of the sensors in Lovelace (which doesn’t seem to be in the repository readme). However, the sensors themselves have a state based on whether the bin is due to be put out. The state is nice and machine readable, but I wanted to recreate the panel from the screenshot for the humans that have to look at it. In the end I actually decided to simplify it down to just show “today”, “this week” or “next week” plus the number of days, since the actual date is pretty irrelevant.

The finished rubbish collection panel in my Lovelace UI

This proved to be more difficult than I’d expected, since Lovelace doesn’t support templates in cards natively. I had to install the Lovelace Card Templater plugin via HACS. This plugin in turn requires the card-tools plugin, which I couldn’t find in HACS. I ended up installing it by adding it as a git submodule to my configuration repository. I then added the following to my Lovelace config to load the plugins:
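The resource entries look something like the following; the exact paths depend on where HACS and the git submodule placed the files in your config directory:

```yaml
resources:
  - url: /local/card-tools/card-tools.js
    type: js
  - url: /community_plugin/lovelace-card-templater/lovelace-card-templater.js
    type: js
```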

Full Panel YAML

The panel itself is made up of a vertical stack card in which I put two horizontal stack cards. These in turn contain two of the templater cards. The configuration of the templater cards is a little involved since you need to specify the entity twice (which seems to be due to some internal limitation of Lovelace). My template cards are based on the sensor card to show just the data from the template. I use a state_template to do this.

Anyway, here’s the full YAML:

Each of the templater cards is pretty much the same; I just change the entity ID for each. The only improvement I would make would be to add a line break between the “Today”/“This week”/“Next week” text and the day count, since this would look slightly better. However, I couldn’t work out how to do that.

Conclusion

I think I’ve achieved my goal of simplifying the task of remembering which bins go out when. Now I can quickly check that info with a glance at my HASS UI. Of course, now that I have the rubbish collection data in Home Assistant I can use it for other things such as notifications or reminder lights. I already have some ideas for status lighting, so that might become part of a larger project.

I’d like to say thanks to the authors of the custom components and plugins that I’ve used to achieve this. The HASS community really is thriving will all these third party addons at the moment!



Getting Started with AppDaemon for Home Assistant


Continuing on from last week’s post, I was also recently persuaded to try out AppDaemon for Home Assistant (again). I have previously tried out AppDaemon, so that I could use the excellent OccuSim app. I never got as far as writing any apps of my own and I hadn’t reinstalled it in my latest HASS migration. This post is going to detail my first steps in getting started with AppDaemon more seriously.

I should probably start with a run down of what AppDaemon is for anyone that doesn’t know. The AppDaemon website provides a high level description:

AppDaemon is a loosely coupled, multithreaded, sandboxed python execution environment for writing automation apps for home automation projects, and any environment that requires a robust event driven architecture.

In plain English, that basically means it provides an environment for writing home automation rules in Python. The supported home automation platforms are Home Assistant and plain MQTT. For HASS, this forms an alternative to both the built in YAML automation functionality and 3rd party systems such as Node-RED. Since AppDaemon is Python based, it also opens up the entirety of the Python ecosystem for use in your automations.

AppDaemon also provides dashboarding functionality (known as HADashboard). I’ve decided not to use this for now because I currently have no use for it. I also think the dashboards look a little dated next to the shiny new HASS Lovelace UI.

Installation

I installed AppDaemon via Docker by following the tutorial. The install went pretty much as expected. However, I had to clean up the config files from my old install before proceeding. The documentation doesn’t provide an example docker-compose configuration, so here’s mine:
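It looks something like this; the image tag, volume path and network name are specific to my setup and shown here illustratively:

```yaml
version: "3"

services:
  appdaemon:
    image: acockburn/appdaemon:latest
    restart: always
    volumes:
      - /mnt/docker-data/appdaemon:/conf
    networks:
      - internal

networks:
  internal:
```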

I’ve linked the AppDaemon container to an internal network, on which I’ve also placed my HomeAssistant instance. That way AppDaemon can talk to HASS pretty easily.

You’ll note that I’m not passing any environment variables as per the documentation. This is because my configuration is passed only via the appdaemon.yaml file, since it allows me to use secrets:

You’ll see here that I use the docker0 interface IP to connect to HASS. I tried using the internal hostname (which should be homeassistant on my setup), but it didn’t seem to work. I think this is due to the HASS container being configured with host networking.

Writing My First App

I wanted to test out the capabilities and ease of use of AppDaemon. So, I decided to convert one of my existing automations into app form. I chose my bathroom motion light automation, because it’s reasonably complex but simple enough to complete quickly.

I started out by copying the motion light example from the tutorial. Then I updated it to take configuration parameters for the motion sensor, light and off timeout:
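A sketch of the resulting app is below. The argument names (`sensor`, `light`, `delay`) and the exact sunset/sunrise restriction in `is_light_times` are my illustrative choices rather than a verbatim copy of the app:

```python
try:
    import appdaemon.plugins.hass.hassapi as hass
except ImportError:
    # Stub base class so the sketch can be read outside an AppDaemon install.
    class hass:
        class Hass:
            pass


class MotionLight(hass.Hass):
    """Turn a light on when motion is detected, off after a timeout."""

    def initialize(self):
        # Configuration parameters, supplied from apps/apps.yaml.
        self.sensor = self.args["sensor"]
        self.light = self.args["light"]
        self.delay = int(self.args["delay"])
        self.handle = None
        self.listen_state(self.motion_callback, self.sensor, new="on")

    def is_light_times(self):
        # Restrict when the light may come on; kept in its own method so
        # more complex logic can be substituted and re-used later.
        return self.now_is_between("sunset - 00:30:00", "sunrise + 00:30:00")

    def set_timer(self):
        # Cancel any timer already running before starting a new one, so
        # repeated motion events extend the timeout instead of stacking
        # multiple turn-off callbacks.
        if self.handle is not None:
            self.cancel_timer(self.handle)
        self.handle = self.run_in(self.light_off, self.delay)

    def motion_callback(self, entity, attribute, old, new, kwargs):
        if self.is_light_times():
            self.turn_on(self.light)
            self.set_timer()

    def light_off(self, kwargs):
        self.turn_off(self.light)
        self.handle = None
```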

I’ve also added a couple of utility methods to manage the timer better and also to specify more complex logic to restrict when the light will come on. Encapsulating both of these in their own methods will allow re-use of them later on.

The timer logic of the example app is particularly problematic in the case of multiple motion events. In the original logic one timer will be set for each motion event. This leads to the light being turned off even if there is still motion in the room. It also caused some general flickering of the light between motion events and triggered callbacks. I mitigate this in the set_timer method here by first cancelling the timer if it is active before starting a new timer with the full timeout.

At this point, we have a fully functional and re-usable motion activated light. We can instantiate as many of these as we would like in our apps/apps.yaml file, like so:
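Assuming the app code lives in apps/motion_lights.py, an instance looks like this (the entity IDs are illustrative):

```yaml
bathroom_motion_light:
  module: motion_lights
  class: MotionLight
  sensor: binary_sensor.bathroom_motion
  light: light.bathroom
  delay: 120
```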

Note that we haven’t yet recreated the functionality of my original automation. In that automation, the brightness was controlled by the door state. We’ll tackle this next.

Extending the App

Since our previous MotionLight app is just a Python class, we can take advantage of the object-oriented capabilities of the Python language to extend it with further functionality. Doing so allows us to maintain the original behaviour for some instances, whilst also customising for more complex functionality.

Our subclassed light looks like this:
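The sketch below includes a condensed copy of the parent class so that it stands alone; the argument names, brightness values and the “always bright” time window are illustrative assumptions:

```python
try:
    import appdaemon.plugins.hass.hassapi as hass
except ImportError:
    # Stub base class so the sketch can be read outside an AppDaemon install.
    class hass:
        class Hass:
            pass


class MotionLight(hass.Hass):
    # Condensed version of the parent class from earlier.
    def initialize(self):
        self.sensor = self.args["sensor"]
        self.light = self.args["light"]
        self.delay = int(self.args["delay"])
        self.handle = None
        self.listen_state(self.motion_callback, self.sensor, new="on")

    def is_light_times(self):
        return self.now_is_between("sunset - 00:30:00", "sunrise + 00:30:00")

    def set_timer(self):
        if self.handle is not None:
            self.cancel_timer(self.handle)
        self.handle = self.run_in(self.light_off, self.delay)

    def motion_callback(self, entity, attribute, old, new, kwargs):
        if self.is_light_times():
            self.turn_on(self.light)
            self.set_timer()

    def light_off(self, kwargs):
        self.turn_off(self.light)
        self.handle = None


class BrightnessControlledMotionLight(MotionLight):
    """Motion light whose brightness depends on which door opened last."""

    def initialize(self):
        # Load the parent's parameters, then the new list-valued ones.
        super().initialize()
        self.bedroom_doors = self.args["bedroom_doors"]
        self.other_doors = self.args["other_doors"]
        self.last_door = "other"
        # The same callback serves every door in a list: only the type of
        # door matters, not which specific door was opened.
        for door in self.bedroom_doors:
            self.listen_state(self.bedroom_door_callback, door, new="on")
        for door in self.other_doors:
            self.listen_state(self.other_door_callback, door, new="on")

    def bedroom_door_callback(self, entity, attribute, old, new, kwargs):
        self.last_door = "bedroom"
        self.log("Last door opened: {}".format(entity))

    def other_door_callback(self, entity, attribute, old, new, kwargs):
        self.last_door = "other"
        self.log("Last door opened: {}".format(entity))

    def motion_callback(self, entity, attribute, old, new, kwargs):
        if not self.is_light_times():
            return
        if self.get_state(self.light) == "off":
            # Dim if we probably came from a bedroom, except during the
            # window when the light should always be bright.
            always_bright = self.now_is_between("10:00:00", "20:00:00")
            if self.last_door == "bedroom" and not always_bright:
                self.turn_on(self.light, brightness_pct=10)
            else:
                self.turn_on(self.light, brightness_pct=100)
        self.set_timer()
```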

Here we can see that the initialize method loads only the new configuration parameters; the existing parameters from the parent class are loaded by its initialize method via the super call. The new configuration options are passed as lists, allowing us to specify several bedroom (or other) doors. To set the relevant callbacks I loop over each list, registering the same callback for every entry in a list, since only the type of door matters; the specifics of each door are irrelevant.

Next we have the actual callback methods for when the doors open. These just set the internal variable last_door to the relevant value and log it for debugging purposes.

Most of the new logic comes in the motion_callback method. Here I have reused the is_light_times and set_timer methods from the parent class. The remainder of the logic first checks that the light is off and then recreates the operation of the template I used in my original automation. This sets the light to dim if the last door opened was to one of the bedrooms and bright otherwise. There are also some time based restrictions on this for times when I always want the light bright.

The configuration is pretty similar to the previous example, with the addition of the lists for the doors:
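Again with illustrative entity IDs, the configuration becomes:

```yaml
bathroom_motion_light:
  module: motion_lights
  class: BrightnessControlledMotionLight
  sensor: binary_sensor.bathroom_motion
  light: light.bathroom
  delay: 120
  bedroom_doors:
    - binary_sensor.bedroom_door
    - binary_sensor.spare_room_door
  other_doors:
    - binary_sensor.hallway_door
    - binary_sensor.toilet_door
```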

Conclusion and Next Steps

The previous automation (or, more accurately, set of automations) totalled 78 lines. The Python code for the app is only 56 lines long, though there is another 11 lines of configuration required. By this measure the two are similar in complexity. However, with the AppDaemon implementation we now have an easily re-usable implementation for two types of motion controlled light. Further instances can be called into being with only a few lines of simple configuration, whereas the YAML automation would need to be duplicated wholesale and tweaked to fit.

This power makes me keen to continue with AppDaemon. I’m also keen to integrate it with my CI pipeline, although I’m actually thinking of separating it out from my HASS configuration. With this I’d like to try out some more modern Python development tooling, since it’s been quite some time since I’ve had the opportunity to do any serious Python development.

I hope you’ve enjoyed reading this post. For anyone already using AppDaemon, this isn’t anything groundbreaking. However, for those who haven’t tried it or are on the fence, I’d highly recommend you give it a go. Please feel free to show off anything you’ve made in the feedback channels for this post!
