
Quick Project: Follow-up to my Home Assistant Rubbish Collection Panel

This post may contain affiliate links. Please see the disclaimer for more information.

Last month, I wrote a quick post about the Home Assistant Rubbish Collection panel I made for the Lovelace UI. Well, it looks like amaximus was inspired to create his own custom card to do a similar thing. [NOTE: the author of this card hasn’t contacted me directly (I came across it via HACS); I’m only claiming to be the inspiration based on the relative dates.]

This card has a couple of cool capabilities that my previous panel didn’t have. Specifically, you can set colour-coded icons for your different bins, and the icon will change style and colour (to red) when the bin is due to go out within the next day. You can also hide the card completely if a bin is not due to go out within X days.

[Screenshot: cards for all four of my sensors, with colour-coded icons]
[Screenshot: if the collection is the next day, the icons change to red]

Installation and Setup

I installed the card by adding a git submodule to my configuration repository in the www/plugins directory, but you can also install directly from HACS. I’m switching over to adding all my custom components and cards as submodules in order to make my config more easily deployable.
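For anyone wanting to do the same, the submodule setup looks something like this (the repo URL here is my assumption of the card’s GitHub project):

git submodule add https://github.com/amaximus/garbage-collection-card.git www/plugins/garbage-collection-card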

After installation, you need to add the path to the garbage-collection-card.js file in the resources section of your Lovelace UI config:

resources:
  - type: js
    url: /local/plugins/garbage-collection-card/garbage-collection-card.js

Once that’s done you can add cards to the UI. I just put mine in a vertical stack to group them together:

type: vertical-stack
cards:
  - type: 'custom:garbage-collection-card'
    entity: sensor.food_scraps
    hide_date: true
    icon_color: green
    icon_size: 48px
  - type: 'custom:garbage-collection-card'
    entity: sensor.general_recycling
    hide_date: true
    icon_color: yellow
    icon_size: 48px
  - type: 'custom:garbage-collection-card'
    entity: sensor.glass_recycling
    hide_date: true
    icon_color: blue
    icon_size: 48px  
  - type: 'custom:garbage-collection-card'
    entity: sensor.landfill
    hide_date: true
    icon_color: red
    icon_size: 48px

…and that’s it! If you want to hide the card for bins that don’t need to go out soon, use hide_before: x (where x is the number of days), as in the example below. I’ll probably use this to hide bins that don’t need to go out in the current week, but I wanted to show all the cards in the screenshots 😉
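For example, adding hide_before to the landfill card above would keep it hidden until the collection is three days away or less:

- type: 'custom:garbage-collection-card'
  entity: sensor.landfill
  hide_date: true
  hide_before: 3
  icon_color: red
  icon_size: 48px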

Conclusion

I think this is a great improvement on my previous panel, so I’m going to stick with it. Thanks to the author and other contributors for taking the time to make it!

It’s kinda cool to think that this blog may have inspired someone else to go out and write some code! If you are inspired to make and share something as a result of one of my posts, please get in contact! Your work will most likely get featured in a future blog post.


Continuous Integration for Home Assistant, ESPHome and AppDaemon


Recently I set up continuous integration and deployment from my Home Assistant configuration. This setup has been nothing short of awesome! It’s liberated me from worrying about editing my configuration – all I do is git push and relax. Either HASS will notify me when it restarts or I’ll get an email from Gitlab telling me the pipeline failed.

I wanted to take this configuration further and expand it to other parts of my home automation infrastructure. In this post I’ll cover expanding it to deploy my HA stack with Docker, to build and deploy firmware to ESPHome devices, and to unit test and deploy my AppDaemon apps.

Let’s get on with it!

Automating Docker Deployment

I’d originally held off doing this because I wasn’t looking forward to building custom Docker images in Gitlab CI. However, I managed to complete the original pipeline without having to add any extra dependencies to the HASS containers (such as git, which I thought might be required). This makes the job of deploying my HA stack much easier, especially as I already had it mostly scripted. The first step was to add my update.sh script to my repo and tweak it to suit:

#!/bin/bash
set -e

cd /mnt/docker-data/home-assistant || exit
docker-compose -p ha pull
docker-compose -p ha down
docker-compose -p ha up -d --remove-orphans
docker system prune -fa
docker volume prune -f
exit

This is a pretty simple modification to my previous script. The main additions are that I use the -p argument to set the project name used by docker-compose. By default this is taken from the directory name, but I wanted it to match the name of my previous project even though the directory has changed from ha to home-assistant. The other main modification is that I’ve added the --remove-orphans argument to clean up any lingering containers. This is useful if I remove a container from the docker-compose.yml file. In addition I’ve removed the apt commands and cleaned up the script a bit so that it passes my shellcheck job.
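That shellcheck job isn’t shown in this post, but for reference it’s just a one-liner in the preflight stage, roughly like this (a sketch using the public koalaman/shellcheck-alpine image):

shellcheck:
  stage: preflight
  image:
    name: koalaman/shellcheck-alpine:latest
    entrypoint: [""]
  script:
    - shellcheck update.sh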

The next step was simply to add the docker-compose.yml file to the repo. Then I continued by editing the CI configuration.

Updated Home Assistant CI Jobs

I first split up my previous deployment job into two jobs. The first of these is the main deployment job which pulls the new configuration. The second restarts HASS. The restart job goes in a new pipeline stage and will only be run when the docker-compose.yml or update.sh files haven’t changed:

deploy:
  stage: deploy
  image:
    name: alpine:latest
    entrypoint: [""]
  environment:
    name: home-assistant
  before_script:
    - apk --no-cache add openssh-client
    - echo "$DEPLOYMENT_SSH_KEY" > id_rsa
    - chmod 600 id_rsa
  script:
    - ssh -i id_rsa -o "StrictHostKeyChecking=no" $DEPLOYMENT_SSH_LOGIN "cd /mnt/docker-data/home-assistant && git fetch && git checkout $CI_COMMIT_SHA && git submodule sync --recursive && git submodule update --init --recursive"
  after_script:
    - rm id_rsa
  only:
    refs:
      - master
  tags:
    - hass

restart-hass:
  stage: postflight
  image:
    name: alpine:latest
    entrypoint: [""]
  environment:
    name: home-assistant
  before_script:
    - apk --no-cache add curl
  script:
    - "curl -X POST -H \"Authorization: Bearer $DEPLOYMENT_HASS_TOKEN\" -H \"Content-Type: application/json\" $DEPLOYMENT_HASS_URL/api/services/homeassistant/restart"
  only:
    refs:
      - master
  except:
    changes:
      - docker-compose.yml
      - update.sh
  tags:
    - hass

I then added another job (again in another pipeline stage) which performs the Docker deployment. This will be run only when either the docker-compose.yml or update.sh files change:

docker-deploy:
  stage: docker-deploy
  image:
    name: alpine:latest
    entrypoint: [""]
  environment:
    name: home-assistant
  before_script:
    - apk --no-cache add openssh-client
    - echo "$DEPLOYMENT_SSH_KEY" > id_rsa
    - chmod 600 id_rsa
  script:
    - ssh -i id_rsa -o "StrictHostKeyChecking=no" $DEPLOYMENT_SSH_LOGIN "cd /mnt/docker-data/home-assistant && ./update.sh"
  after_script:
    - rm id_rsa
  only:
    refs:
      - master
    changes:
      - docker-compose.yml
      - update.sh
  tags:
    - hass

[Screenshot: a full pipeline run with a deployment of the Docker containers running in the final stage]

With that in place I can now redeploy my HA stack by modifying either of those files, committing to git and pushing. In order to facilitate HASS updates with this workflow, I changed the tag of the HASS Docker image to the explicit version number. That way I can simply update the version number and redeploy for each new release.
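For example, the relevant service in docker-compose.yml now pins an explicit tag rather than latest (the version number here is purely illustrative):

services:
  homeassistant:
    image: homeassistant/home-assistant:0.96.5

Updating HASS is then just a one-line change followed by a git push.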

Continuous Integration for ESPHome

Inspired by the previous configs I have seen for checking ESPHome files, I wanted to implement the same checks. However, I wanted to go further and have a full continuous deployment setup which would build the relevant firmware when its configuration was changed and send an OTA update to the corresponding device. As it turned out this was relatively easy.

I started out by importing my ESPHome configs into Git, which I hadn’t previously done. You can find the resulting repository on Gitlab. For the CI configuration I first copied over the markdownlint and yamllint jobs from my Home Assistant CI configuration.

I then borrowed the ESPHome config check jobs from Frenck’s configuration. These check against both the current release of ESPHome and the next beta release. The beta release job is allowed to fail and is designed only to provide a heads up for potential future issues.
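I won’t reproduce those jobs in full, but the shape of the release-channel check is roughly this (a sketch only; the fake-secrets swap is my assumption of how secrets are handled in CI):

esphome-config:
  stage: preflight
  image:
    name: esphome/esphome:latest
    entrypoint: [""]
  before_script:
    - mv fake_secrets.yaml secrets.yaml
  script:
    - |
      for file in $(find . -maxdepth 1 -name "*.yaml" -not -name "secrets.yaml"); do
        esphome "$file" config
      done

The beta variant is the same job pointed at the esphome/esphome:beta image, with allow_failure: true set.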

Then I came to implement the build and deployment job. Traditionally these would be performed in separate steps, but since ESPHome can do both in a single step with its run subcommand, I decided to do it the easy way. This also removes the requirement to manage build artifacts between steps. I created the following template job to manage this:

# Generic deployment template
.esphome-deploy: &esphome-deploy
  stage: deploy
  variables:
    PYTHONPATH: "/usr/src/app:$PYTHONPATH"
  image:
    name: esphome/esphome:latest
    entrypoint: [""]
  before_script:
    - apt update && apt install -y git-crypt openssl
    - |
      openssl enc -aes-256-cbc -pbkdf2 -d -in git-crypt.key.enc -out \
          git-crypt.key -k $OPENSSL_PASSPHRASE
    - git-crypt unlock git-crypt.key
    - esphome dummy.yaml version
  after_script:
    - rm -f git-crypt.key
  retry: 2
  tags:
    - hass

Most of the complexity here is in unlocking the git-crypt repository so that we can read the encrypted secrets file. I opted to store the git-crypt key in the repository, encrypted with openssl. The passphrase used for openssl is in turn stored in a Gitlab variable, in this case $OPENSSL_PASSPHRASE. Once the decryption of the key is complete, we can unlock the repo and get on with things. We remove the key after we are done in the after_script step.
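For reference, the encrypted key checked into the repository was produced up front with the matching openssl invocation (run once, locally):

openssl enc -aes-256-cbc -pbkdf2 -in git-crypt.key -out git-crypt.key.enc -k $OPENSSL_PASSPHRASE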

Per-Device Jobs

Using the template configuration, I then created a job for each device I want to deploy to. These jobs are executed only when the corresponding YAML file (or secrets.yaml) is changed. This ensures that I only update devices that I need to on each run. The general form of these jobs is:

my_device:
  <<: *esphome-deploy
  script:
    - esphome my_device.yaml run --no-logs
  only:
    refs:
      - master
    changes:
      - my_device.yaml
      - secrets.yaml

Of course you need to replace my_device with the name of your device file.

[Screenshot: a run of the ESPHome pipeline with deployments to two devices]

With these jobs in place I have a full end-to-end pipeline for ESPHome, which lints and checks my configuration before deploying it only to devices which need updating. Nice! You can check out the full pipeline configuration on Gitlab. I no longer need to run the ESPHome dashboard, so I’ve removed it from my server.

Continuous Integration for AppDaemon

I mentioned previously that I wanted to split out my AppDaemon apps and configuration into a separate repo from my HASS config. I did this as a prerequisite step of this setup and you can again find the new repo on Gitlab.

The inspiration for this configuration came mostly from @bachya on the HASS forum, whose post in reply to my earlier setup provided most of the details. Thanks for sharing!

I started out by copying across the now ubiquitous markdownlint and yamllint jobs. I then added jobs for pylint, mypy, flake8 and black:

pylint:
  <<: *preflight
  image:
    name: python:3
    entrypoint: [""]
  before_script:
    - pip install pylint
    - pylint --version
    - mv fake_secrets.yaml secrets.yaml
  script:
    - pylint --rcfile pylintrc apps/

mypy:
  <<: *preflight
  image:
    name: python:3
    entrypoint: [""]
  before_script:
    - pip install mypy
    - mypy --version
    - mv fake_secrets.yaml secrets.yaml
  script:
    - mypy --ignore-missing-imports apps/

flake8:
  <<: *preflight
  image:
    name: python:3
    entrypoint: [""]
  before_script:
    - pip install flake8
    - flake8 --version
    - mv fake_secrets.yaml secrets.yaml
  script:
    - flake8 --exclude=apps/occusim --max-line-length=88 apps/

black:
  <<: *preflight
  image:
    name: python:3
    entrypoint: [""]
  before_script:
    - pip install black
    - black --version
    - mv fake_secrets.yaml secrets.yaml
  script:
    - black --exclude=apps/occusim --check --fast apps/

Although this ends up being very verbose, I decided to implement these all as separate jobs so that I get individual pass/fail states for each. I’m also pretty sure the mypy job doesn’t do anything right now, because I’m not using any type hints in my Python code. However, the job is there for when I start adding those.

Unit Testing AppDaemon

Another thing that @bachya introduced me to was AppDaemon Test Framework. This provides a pytest-based framework for unit testing your AppDaemon apps. Although I’m still working on the unit tests for my so far pretty minimal AD setup, I did manage to get the framework up and running, which was a little tricky. I had some issues with setting up the initial configuration for the app, but I managed to work it out eventually.
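To give a flavour, here’s a minimal sketch of a test for a motion-light style app, based on the framework’s README. The app fixture and configuration wiring (the part that tripped me up) live in conftest.py and are omitted here, and the entity names are just illustrative:

def test_motion_turns_light_on(motion_light, assert_that):
    # Simulate a motion event by invoking the app's callback directly
    motion_light.motion_callback('binary_sensor.motion_sensor_1', 'state', 'off', 'on', {})
    # The framework records calls to AppDaemon's API so we can assert on them
    assert_that('light.light_1').was.turned_on()

Here assert_that is a fixture provided by the framework, while motion_light is a fixture (defined in conftest.py) that constructs the app under test.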

The unit testing CI job is pretty simple:

# Unit test jobs
unit-tests:
  stage: test
  image:
    name: acockburn/appdaemon:latest
    entrypoint: [""]
  before_script:
    - pip install -r apps/test/test_requirements.txt
    - py.test --version
    - mv fake_secrets.yaml secrets.yaml
  script:
    - py.test
  tags:
    - hass

All we do here is install the requirements that I need for the tests and then call py.test. Easy!

The deployment job for AppDaemon was also trivial, since it is pretty much a copy of the HASS one. Since AD detects changes to your apps automatically, there’s no need to restart. For more details you can check out the full CI pipeline on Gitlab.

[Screenshot: a run of the AppDaemon pipeline – lots of preflight checks here!]

Conclusion

Phew, that was a lot of work, but it was all the logical follow on from work I’d done before or that others had done. I now have a full set of CI pipelines for the three main components of my home automation setup. I’m really happy with each of them, but especially the ESPHome pipeline. As an embedded engineer in my day job I find it really cool that I can update a YAML file locally, commit/push it and then my CI takes over and ends up flashing a physical device! That this is even possible is a testament to all the pieces of software used.

Next Steps

I’m keen to keep going with CI as a means of automating my operations. I think my next target will be sprucing up my Ansible configurations and running them automatically from CI. Stay tuned for that in the hopefully near future!


Self-Hosted GPS Tracking with Traccar and Home Assistant


One of the outstanding issues I’ve had with my Home Automation system in recent months has been to fix my presence detection. I was using a combination of OwnTracks and SNMP to query my pfSense firewall for devices on the network.

This worked well until my OwnTracks setup broke due to internal network changes, specifically separating my HA/MQTT server from the reverse proxy. In turn, this meant that the MQTT server wasn’t able to renew its Let’s Encrypt certificate.

The solution to this would have been to use DNS validation to get the certificate issued. However, my DNS provider (Namecheap) doesn’t allow API access unless you have a large enough account with them.

Eventually I will migrate my DNS to a supported provider and get internal TLS working. However, it’s quite a bit of work to migrate over. Also, DNS is pretty critical to the operation of, well, everything – so I want to take my time and get it right. In the meantime I was looking into other options for GPS based presence detection.

GPS Based Presence Options

Aside from using OwnTracks in MQTT mode, it also supports HTTP mode (which is now actually the recommended mode). This is directly supported in HASS. I hadn’t switched over to it because I wanted to try out the OwnTracks recorder for logging position data over time. Sending data to both HASS and the recorder would be difficult via HTTP. However, it’s exactly the use case that MQTT is made for!

While I was dithering around not doing very much about this problem, another alternative cropped up: Traccar. Traccar is a self-hosted GPS tracking system which supports a multitude of different devices and has mobile apps available for both major OSes. The main plus point for me is that it is a stand-alone server which I could host in my DMZ. There is also a Home Assistant integration for Traccar.

Installation

It seems to be rather badly documented, but there is an official Docker image for Traccar on Docker Hub. When reading that documentation I initially thought I had to build the image myself. Don’t! Just use the official one (unless you have a good reason not to).

I followed along the instructions in the Readme, until it came to the final docker run command, where I wanted to put it into docker-compose. Here’s what I came up with:

---
version: '3'

services:
  traccar:
    image: traccar/traccar:latest
    restart: always
    ports:
      - "8082:8082"
      - "5000-5150:5000-5150"
      - "5000-5150:5000-5150/udp"
    volumes:
      - "/mnt/docker-data/traccar/logs:/opt/traccar/logs:rw"
      - "/mnt/docker-data/traccar/data:/opt/traccar/data:rw"
      - "/mnt/docker-data/traccar/traccar.xml:/opt/traccar/conf/traccar.xml:rw"

This is pretty much the command in the docs translated over, except for the second-to-last line. This mounts the data volume needed for persisting the default H2 database files to the host. I’m not sure why this isn’t mentioned in the docs. Perhaps they expect that you will use MySQL for the database, but I didn’t want to do that for my initial test setup.

[Screenshot: the Traccar web UI]

After running the docker-compose up -d command I had Traccar working on port 8082 of my server and was able to log in as the default user (admin and password admin!). The first thing I did after logging in was change that password and disable user registration, before exposing my instance via the reverse proxy.

[Screenshot: deselect “Registration” in the Server Settings dialog (right-hand gear icon -> Server)]

I found that the server memory usage was reasonably high, which seems to be due to the memory options passed to the Java VM in the dockerfile. There doesn’t seem to be any way to change this except for building a custom image.

Reverse Proxy Setup

I’m intending to migrate the reverse proxy on my home network to Traefik at some point after my previous success with it. However, for now it’s still running good old Nginx. I couldn’t find an example Nginx config for Traccar, so I copied my HASS one and modified it to suit:

server {
    # Update this line to be your domain
    server_name traccar.example.com;

    # Ensure these lines point to your SSL certificate and key
    ssl_certificate /etc/letsencrypt/live/traccar.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/traccar.example.com/privkey.pem;

    # Ensure this line points to your dhparams file
    ssl_dhparam /etc/nginx/ssl/dhparams.pem;

    # These shouldn't need to be changed
    listen 443;
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
    ssl on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;

    proxy_buffering off;

    location / {
        # Update the address of your backend server here
        proxy_pass http://backend-server:8082;
        proxy_set_header Host $host;
        proxy_redirect http:// https://;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

Adding Devices

Once your server is up and running, adding devices is easy. Just click the plus icon next to devices and give your device a name. For the ID field, the Android app will generate a six-digit identifier, which you can copy over. This ID is the only security token used to update the position, so you may like to use something more secure. I recommend generating a pseudo-random string with pwgen and using that.
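For example, the following generates a single secure 20-character token:

pwgen -s 20 1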

Setup on the app side is pretty trivial too. Just enter the full URL to the server (e.g. https://traccar.example.com) and make sure the device identifier matches the one you entered on the server. Toggling the service status will then start sending location updates. The Android app seems to put an annoying persistent notification in the notification pull-down. For some reason this doesn’t go into the ongoing section (which would make it less annoying). I just hid the notifications from the Traccar app via the relevant Android setting and the app still sends location updates.

As an aside, you can easily send location updates from the command line with curl, for example I made the screenshot above by artificially positioning a device at New Plymouth’s Wind Wand:

curl -X POST "https://<insert-hostname>/?id=<insert-device-id>&lat=-39.056056&lon=174.071736&hdop=0&altitude=4&speed=0"

Make sure to update the hostname and device ID to match your setup and update the other fields to reflect the position you want to log.

Integrating Traccar with Home Assistant

The integration with HASS also proved to be relatively easy and works well. There were a couple of things which tripped me up, which I’ll come to shortly. First we need an entry for Traccar in our Home Assistant config:

device_tracker:
  - platform: traccar
    host: !secret traccar_host
    port: 443
    username: !secret traccar_user
    password: !secret traccar_password
    ssl: true

The port and ssl variables are required for the above setup with the reverse proxy; aside from that it’s reasonably obvious.

After adding that to my config file, I expected my devices just to show up in HASS. They didn’t. This turned out to be due to two problems. The first of these was that I had created a user in Traccar specifically for Home Assistant (which you can nicely give read-only permissions). However, this user didn’t have access to my devices. The solution is to grant access via the ‘Devices’ section of the user management panel (the icon looks like two little picture frames).

The second issue tripped me up for a bit longer. Even after I set the permissions correctly, I still couldn’t see my devices in the HASS UI. It turns out they were being added to the known_devices.yaml on the server, but not being enabled by default. It wasn’t until I logged into the server and checked this file that I noticed this. This is more problematic if you are editing your config locally and deploying it via git since the change obviously won’t be made to your local copy.

In the end I added the entry in my local copy and deployed it to the server:

robs_phone:
  hide_if_away: false
  icon:
  mac:
  name: Rob
  picture:
  track: true

Once that was done for each device, the Traccar device_tracker entities appeared in Home Assistant just fine.

Battery Usage Issues

Now comes the big potential problem: the power usage of the Traccar Android app. The first day I had this installed my phone was down to 15% by 9-10pm. However, the results have been less worrying over the last couple of days. I’m not sure why it was so bad for the first day, although there were a few points that were different from my subsequent usage:

  • I was using the version from F-Droid. After noticing the battery issue I switched to the version from the play store. I don’t know if that version uses some proprietary Google Services API, or if they are identical. However, it is a possible reason for the difference.
  • I had changed a couple of the default settings – I lengthened the frequency setting to 15 minutes and changed the distance setting to 50 meters. I’m not sure what effect/interactions these have since I can’t find any documentation about them. I’ve since gone back to the default settings.
  • I moved around quite a bit that first day (some driving around town and a long walk) with less moving in the subsequent two days. If this is really the reason for the battery usage then that would be disappointing. Obviously not moving is not a solution!

This is all based on three days of usage, so I’m going to continue monitoring the situation and potentially testing other variations in settings. I’ve asked about the battery usage on the Traccar forum, but have so far had no answer. One worrying issue is that the Android battery usage panel shows the app as keeping the device awake for long periods.

[Screenshot: power usage from the Traccar app]

There is an alternative to the Traccar app, since OwnTracks can also be used as the client. However, at that point you may as well just use the HASS OwnTracks integration directly, unless you want the extra features of the Traccar server.

Conclusion

Traccar seems like a pretty solid piece of software, though due to the power issue I’m not completely sold on it yet. The location updates are certainly more frequent than OwnTracks was (at least over MQTT). The server side UI is nice and responsive and there are quite a few capabilities that I haven’t explored yet. For these reasons I’m going to stick with it for now and continue testing.

Next Steps

Obviously I need to solve the battery issues in order to go much further. If the usage proves acceptable after further testing I’m also going to see if I can dial down the memory usage of the server by editing the Dockerfile.

I also have quite a bit of work to do on the HASS presence detection front. I’d specifically like to try a combination of Bayesian Sensors and the “Not so Binary” approach.
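For the Bayesian side, the Traccar device_tracker entity could become one observation feeding a sensor along these lines (a sketch only; the priors and probabilities are placeholders I’d need to tune):

binary_sensor:
  - platform: bayesian
    name: Rob Home
    prior: 0.5
    probability_threshold: 0.9
    observations:
      - platform: state
        entity_id: device_tracker.robs_phone
        to_state: 'home'
        prob_given_true: 0.95
        prob_given_false: 0.1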

Hopefully I can come up with something which is both responsive to changes and also robust against false readings. I’ll be sure to write a post about that when I do. However, that’s it for now. Thanks for reading!


Quick Project: Rubbish Collection Panel for Home Assistant


With my local council rolling out an increasingly complex system of rubbish bins and collections, I’ve been thinking about getting this integrated with Home Assistant so that we don’t have to remember which bins go out when. I was pleasantly surprised to find the garbage_collection custom component whilst browsing HACS the other day. This component does exactly what I need, so I decided to give it a go.

Rubbish Collection Configuration

After installing the component via HACS, I set about configuring it. Here is the configuration I came up with:

sensor:
  - platform: garbage_collection
    name: Food Scraps
    frequency: "weekly"
    collection_days: !secret rubbish_collection_day
  - platform: garbage_collection
    name: General Recycling
    frequency: "odd-weeks"
    collection_days: !secret rubbish_collection_day
  - platform: garbage_collection
    name: Glass Recycling
    frequency: "even-weeks"
    collection_days: !secret rubbish_collection_day
  - platform: garbage_collection
    name: Landfill
    frequency: "even-weeks"
    collection_days: !secret rubbish_collection_day

That was easy! The only non-obvious thing is working out which bins are collected on even and odd weeks, which is easy to look up online.

Getting it into Lovelace

The garbage_collection component page in HACS has a nice screenshot of the sensors in Lovelace (which doesn’t seem to be in the repository readme). However, the sensors themselves have a state based on whether the bin is due to be put out. The state is nice and machine readable, but I wanted to recreate the panel from the screenshot for the humans that have to look at it. In the end I actually decided to simplify it down to just show “today”, “this week” or “next week” plus the number of days, since the actual date is pretty irrelevant.

[Screenshot: the finished rubbish collection panel in my Lovelace UI]

This proved to be more difficult than I’d expected, since Lovelace doesn’t support templates in cards natively. I had to install the Lovelace Card Templater plugin via HACS. This plugin in turn requires the card-tools plugin, which I couldn’t find in HACS. I ended up installing it by adding it as a git submodule to my configuration repository. I then added the following to my Lovelace config to load the plugins:

resources:
  - type: js
    url: /local/plugins/lovelace-card-tools/card-tools.js?v=1
  - type: js
    url: /community_plugin/lovelace-card-templater/lovelace-card-templater.js
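
For reference, adding card-tools as a submodule was along these lines (the repo URL is my assumption of the plugin’s GitHub project):

git submodule add https://github.com/thomasloven/lovelace-card-tools.git www/plugins/lovelace-card-tools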

Full Panel YAML

The panel itself is made up of a vertical stack card in which I put two horizontal stack cards. These in turn contain two of the templater cards. The configuration of the templater cards is a little involved since you need to specify the entity twice (which seems to be due to some internal limitation of Lovelace). My template cards are based on the sensor card to show just the data from the template. I use a state_template to do this.

Anyway, here’s the full YAML:

cards:
  - cards:
      - card:
          entity: sensor.food_scraps
          type: sensor
        entities:
          - entity: sensor.food_scraps
            state_template: >-
              {% if state_attr("sensor.food_scraps", "days") == 0 %} Today {%
              elif state_attr("sensor.food_scraps", "days") < 7 %} This Week {%
              else %} Next Week {% endif %} ({{ state_attr("sensor.food_scraps",
              "days") }} days)
        type: 'custom:card-templater'
      - card:
          entity: sensor.general_recycling
          type: sensor
        entities:
          - entity: sensor.general_recycling
            state_template: >-
              {% if state_attr("sensor.general_recycling", "days") == 0 %} Today
              {% elif state_attr("sensor.general_recycling", "days") < 7 %} This
              Week {% else %} Next Week {% endif %} ({{
              state_attr("sensor.general_recycling", "days") }} days)
        type: 'custom:card-templater'
    type: horizontal-stack
  - cards:
      - card:
          entity: sensor.glass_recycling
          type: sensor
        entities:
          - entity: sensor.glass_recycling
            state_template: >-
              {% if state_attr("sensor.glass_recycling", "days") == 0 %} Today
              {% elif state_attr("sensor.glass_recycling", "days") < 7 %} This
              Week {% else %} Next Week {% endif %} ({{
              state_attr("sensor.glass_recycling", "days") }} days)
        type: 'custom:card-templater'
      - card:
          entity: sensor.landfill
          type: sensor
        entities:
          - entity: sensor.landfill
            state_template: >-
              {% if state_attr("sensor.landfill", "days") == 0 %} Today {% elif
              state_attr("sensor.landfill", "days") < 7 %} This Week {% else %}
              Next Week {% endif %} ({{ state_attr("sensor.landfill", "days") }}
              days)
        type: 'custom:card-templater'
    type: horizontal-stack
type: vertical-stack

Each of the templater cards is pretty much the same; I just change the entity ID for each. The only improvement I would make would be to add a line break between the “Today”/“This Week”/“Next Week” text and the day count, since this would look slightly better, but I couldn’t work out how to do that.

Conclusion

I think I’ve achieved my goal of simplifying the task of remembering which bins go out when. Now I can quickly check that info with a glance at my HASS UI. Of course, now that I have the rubbish collection data in Home Assistant I can use it for other things such as notifications or reminder lights. I already have some ideas for status lighting, so that might become part of a larger project.

I’d like to say thanks to the authors of the custom components and plugins that I’ve used to achieve this. The HASS community really is thriving with all these third-party addons at the moment!


Getting Started with AppDaemon for Home Assistant


Continuing on from last week’s post, I was also recently persuaded to try out AppDaemon for Home Assistant (again). I had previously tried AppDaemon so that I could use the excellent OccuSim app, but I never got as far as writing any apps of my own, and I hadn’t reinstalled it in my latest HASS migration. This post details my first steps in getting started with AppDaemon more seriously.

I should probably start with a run down of what AppDaemon is for anyone that doesn’t know. The AppDaemon website provides a high level description:

AppDaemon is a loosely coupled, multithreaded, sandboxed python execution environment for writing automation apps for home automation projects, and any environment that requires a robust event driven architecture.

In plain English, that basically means it provides an environment for writing home automation rules in Python. The supported home automation platforms are Home Assistant and plain MQTT. For HASS, this forms an alternative to both the built-in YAML automation functionality and third-party systems such as Node-RED. Since AppDaemon is Python based, it also opens up the entirety of the Python ecosystem for use in your automations.

AppDaemon also provides dashboarding functionality (known as HADashboard). I’ve decided not to use this for now because I currently have no use for it. I also think the dashboards look a little dated next to the shiny new HASS Lovelace UI.

Installation

I installed AppDaemon via Docker by following the tutorial. The install went pretty much as expected. However, I had to clean up the config files from my old install before proceeding. The documentation doesn’t provide an example docker-compose configuration, so here’s mine:

appdaemon:
    image: acockburn/appdaemon:latest
    volumes:
      - /mnt/docker-data/home-assistant:/conf
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - homeassistant
    restart: always
    networks:
      - internal

I’ve linked the AppDaemon container to an internal network, on which I’ve also placed my HomeAssistant instance. That way AppDaemon can talk to HASS pretty easily.
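For completeness, the internal network referenced above is just a user-defined network declared at the top level of the same compose file; a minimal sketch:

networks:
  internal:
    driver: bridge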

You’ll note that I’m not passing any environment variables as per the documentation. This is because my configuration is passed only via the appdaemon.yaml file, since it allows me to use secrets:

---
log:
  logfile: STDOUT
  errorfile: STDERR

appdaemon:
  threads: 10
  timezone: 'Pacific/Auckland'
  plugins:
    HASS:
      type: hass
      ha_url: "http://172.17.0.1:8123"
      token: !secret appdaemon_token

You’ll see here that I use the docker0 interface IP to connect to HASS. I tried using the internal hostname (which should be homeassistant on my setup), but it didn’t seem to work. I think this is due to the HASS container being configured with host networking.

Writing My First App

I wanted to test out the capabilities and ease of use of AppDaemon. So, I decided to convert one of my existing automations into app form. I chose my bathroom motion light automation, because it’s reasonably complex but simple enough to complete quickly.

I started out by copying the motion light example from the tutorial. Then I updated it to take configuration parameters for the motion sensor, light and off timeout:

import appdaemon.plugins.hass.hassapi as hass


class MotionLight(hass.Hass):

    def initialize(self):
        self.motion_sensor = self.args['motion_sensor']
        self.light = self.args['light']
        self.timeout = self.args['timeout']

        self.timer = None
        self.listen_state(self.motion_callback, self.motion_sensor, new="on")

    def set_timer(self):
        if self.timer is not None:
            self.cancel_timer(self.timer)
        self.timer = self.run_in(self.timeout_callback, self.timeout)

    def is_light_times(self):
        return self.now_is_between("sunset - 00:10:00", "sunrise + 00:10:00")

    def motion_callback(self, entity, attribute, old, new, kwargs):
        if self.is_light_times():
            self.turn_on(self.light)
            self.set_timer()

    def timeout_callback(self, kwargs):
        self.timer = None
        self.turn_off(self.light)

I’ve also added a couple of utility methods to manage the timer better and also to specify more complex logic to restrict when the light will come on. Encapsulating both of these in their own methods will allow re-use of them later on.

The timer logic of the example app is particularly problematic in the case of multiple motion events. In the original logic one timer will be set for each motion event. This leads to the light being turned off even if there is still motion in the room. It also caused some general flickering of the light between motion events and triggered callbacks. I mitigate this in the set_timer method here by first cancelling the timer if it is active before starting a new timer with the full timeout.

At this point, we have a fully functional and re-usable motion activated light. We can instantiate as many of these as we would like in our apps/apps.yaml file, like so:

motion_light_1:
  module: motion_lights
  class: MotionLight
  motion_sensor: binary_sensor.motion_sensor_1
  light: light.light_1
  timeout: 120

motion_light_2:
  module: motion_lights
  class: MotionLight
  motion_sensor: binary_sensor.motion_sensor_2
  light: light.light_2
  timeout: 60

...

Note that we haven’t yet recreated the functionality of my original automation. In that automation, the brightness was controlled by the door state. We’ll tackle this next.

Extending the App

Since our previous MotionLight app is just a Python object, we can take advantage of the object-oriented capabilities of the Python language to extend it with further functionality. Doing so allows us to maintain the original behaviour for some instances, whilst also customising it for more complex functionality.

Our subclassed light looks like this:

class BrightnessControlledMotionLight(MotionLight):

    def initialize(self):
        self.last_door = "Other"
        for door in self.args['bedroom_doors']:
            self.listen_state(self.bedroom_door_callback, door, old="off", new="on")
        for door in self.args['other_doors']:
            self.listen_state(self.other_door_callback, door, old="off", new="on")

        super().initialize()

    def bedroom_door_callback(self, entity, attribute, old, new, kwargs):
        self.last_door = "Bedroom"
        self.log("Last door is: {}".format(self.last_door))

    def other_door_callback(self, entity, attribute, old, new, kwargs):
        self.last_door = "Other"
        self.log("Last door is: {}".format(self.last_door))

    def motion_callback(self, entity, attribute, old, new, kwargs):
        if self.is_light_times():
            if self.get_state(entity=self.light) == "off":
                if self.now_is_between("07:00:00", "20:00:00") or self.last_door != "Bedroom":
                    self.turn_on(self.light, brightness_pct = 100)
                else:
                    self.turn_on(self.light, brightness_pct = 1)
            self.set_timer()

Here we can see that the initialize method loads only the new configuration parameters. The existing parameters from the parent class are loaded via the super call to the parent’s initialize method. The new configuration options are passed as lists, allowing us to specify several bedroom or other doors. In order to set the relevant callbacks, I loop over each list and register the same callback for every entry, since only the type of door matters; the specifics of each door are irrelevant.

Next we have the actual callback methods for when the doors open. These just set the internal variable last_door to the relevant value and log it for debugging purposes.

Most of the new logic comes in the motion_callback method. Here I have reused the is_light_times and set_timer methods from the parent class. The remainder of the logic first checks that the light is off and then recreates the operation of the template I used in my original automation. This sets the light to dim if the last door opened was to one of the bedrooms and bright otherwise. There are also some time based restrictions on this for times when I always want the light bright.

The configuration is pretty similar to the previous example, with the addition of the lists for the doors:

brightness_controlled_motion_light:
  module: motion_lights
  class: BrightnessControlledMotionLight
  motion_sensor: binary_sensor.motion_sensor
  light: light.motion_light
  timeout: 120
  bedroom_doors:
    - binary_sensor.bedroom1_door_contact
    - binary_sensor.bedroom2_door_contact
  other_doors:
    - binary_sensor.kitchen_hall_door_contact

Conclusion and Next Steps

The previous automation (or, more rightly, set of automations) totalled 78 lines. The Python code for the app is only 56 lines long, though there are another 11 lines of configuration required. By this measurement, the two seem similar in complexity. However, with the AppDaemon implementation we now have an easily re-usable implementation of two types of motion controlled light. Further instances can be called into being with only a few lines of simple configuration, whereas the YAML automation would need to be duplicated wholesale and tweaked to fit.

This power makes me keen to continue with AppDaemon. I’m also keen to integrate it with my CI pipeline, although I’m actually thinking of separating it out from my HASS configuration. With this I’d like to try out some more modern Python development tooling, since it’s been quite some time since I’ve had the opportunity to do any serious Python development.

I hope you’ve enjoyed reading this post. For anyone already using AppDaemon, this isn’t anything groundbreaking. However, for those who haven’t tried it or are on the fence, I’d highly recommend you give it a go. Please feel free to show off anything you’ve made in the feedback channels for this post!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.