
Keeping Baddies Away with Occupancy Simulation and Home Assistant

This post may contain affiliate links. Please see the disclaimer for more information.

Home security is one of the more interesting and useful applications of home automation technology. Of course, it’s a whole field and industry in itself, with all manner of products and services available. The first part of any security system should be good physical security (good doors, windows and locks). Once you get past that, it seems like you can go two ways: prevention or detection.

Most traditional systems are in the detection camp. They aim to detect an intruder and alert someone, either via the traditional loud noise or by other means. The presence of an alarm box or sign may provide a minimal amount of prevention capability, but not much. Camera systems, on the other hand, provide both pretty robust prevention and (increasingly) advanced detection capabilities.

The third option is to ramp up the prevention angle by making it look like someone is home. It’s this that I’m going to tackle here, with an occupancy simulation application for Home Assistant called OccuSim.

Note: I’m not a home security specialist and you should definitely consult a professional. This post is provided for informational purposes and should not be relied upon to provide a full security system. The author accepts no responsibility for any consequences of you relying on this information and/or not consulting someone who actually does this stuff for a living. YOU HAVE BEEN WARNED!

What is Occupancy Simulation?

Basically, it’s pretending you are home by switching things on and off as you would when you’re there. The reasoning behind it is that potential intruders are less likely to invade a house with people inside. I’m sure there are parts of the world where that doesn’t hold, but I’d guess it applies in enough places to be useful.

Methods of occupancy simulation range from the very low tech “light on a timer” to the high tech approach we’re going to take here, using Home Assistant. The key is to make the sequence as close to your normal usage pattern as possible, so that anyone potentially watching your house can’t tell the difference.

Introducing OccuSim

OccuSim is a tool for defining complex occupancy simulation rules, for use with Home Assistant. These rules are defined as sequences which proceed from beginning to end (with potentially some randomness applied). In this way you can program in a full day’s operation. There is also a fully random mode, which works better for non-scheduled events. Both modes can be combined to give a good representation of your normal home operation.

OccuSim is actually an AppDaemon app, so you will need AppDaemon installed as a dependency. I’m not going to cover that now. You can either see the link above or read my previous post on the subject.

Once you have a functional AppDaemon install you can install OccuSim by cloning the repository into the apps subdirectory of your AppDaemon configuration. Rather than just a pure clone, I prefer to add it as a submodule in my AppDaemon configuration repository. This makes it easy to reinstall if you move your configuration around:

cd apps
git submodule add https://github.com/acockburn/occusim.git occusim
git commit -m "Add OccuSim"

With this setup, updating OccuSim is a little more involved. This is because you also need to commit the submodule change:

cd apps/occusim
git pull origin
cd ../..
git add apps/occusim
git commit -m "Update OccuSim"

Initial Configuration

OccuSim is configured as an app in your apps.yaml file. There is some initial config before you get down to creating your simulation sequences:

occupancy_simulator:
  class: OccuSim
  module: occusim
  log: '1'
  notify: '1'
  enable: input_boolean.vacation_mode,on
  test: '0'
  dump_times: '1'
  reset_time: '02:00:00'

Here I add a new app to my setup, called occupancy_simulator. We set the Python class to load as OccuSim from the module occusim, since we installed it in its own subdirectory. I set OccuSim to log its activity and to notify when a sequence step is activated. This will use the default notify.notify service in HASS, so you’d better have that set up to go to the right place. I haven’t found a way to change the notifier that it uses.

The enable setting takes an input_boolean entity and a state in which OccuSim should be active. Here I use my vacation mode toggle. I actuate mine manually, but it’s perfectly possible to set this from a HASS automation if you so desire.
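
For example, an automation along these lines could flip the toggle automatically once everyone has been away for a day. This is just an illustrative sketch: the presence group and the timing are placeholders, not from my actual config.

automation:
  - alias: "Enable vacation mode when everyone is away"
    trigger:
      - platform: state
        entity_id: group.family       # hypothetical presence group
        to: 'not_home'
        for: '24:00:00'
    action:
      - service: input_boolean.turn_on
        entity_id: input_boolean.vacation_mode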

The last few settings are some basic housekeeping. Test mode is disabled to ensure that OccuSim does its thing (though this can be useful when debugging your sequences). I also set the dump_times option to true so that I can see the times of the steps in the logs. I then set the time when I want to re-calculate the steps for the upcoming day. In my case this is set to 2am.

Setting up Sequences

A simple sequence might be the following:

step_evening_name: Evening
step_evening_start: 'sunset'
step_evening_on_1: scene.main_lights

This sequence simply turns on my main lights scene at sunset. The configuration variables take the form step_<id>_<variable> where <id> is a custom identifier that needs to be common across all variables for the step. Multiple entities can be turned on or off by appending numbers to the variables in question, e.g. step_evening_on_1, step_evening_on_2, etc.
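
For example, a single step could switch several things on and something else off at the same time. The extra entity names here are just placeholders to show the numbering:

step_evening_name: Evening
step_evening_start: 'sunset'
step_evening_on_1: scene.main_lights
step_evening_on_2: light.hallway      # hypothetical second entity
step_evening_off_1: light.porch       # hypothetical entity to switch off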

You can add an offset to the times for sunset or sunrise, such as sunset - 00:20:00 to trigger 20 minutes before sunset. You can of course also specify an absolute time in the form HH:MM:SS.

More Advanced Sequences

So far that’s great, but we could have just turned on a light at sunset with a basic HASS automation rule. The real power of OccuSim comes in randomising the sequence times (within bounds) and creating multi-step sequences.

Expanding on our previous example:

step_evening_name: Evening
step_evening_start: 'sunset - 00:40:00'
step_evening_end: 'sunset - 00:10:00'
step_evening_on_1: scene.main_lights

This will turn on the lights at a random time between 40 minutes before sunset and ten minutes before sunset.

Let’s now create a multi-step sequence:

step_movie1_name: Movie Scene
step_movie1_start: '20:00:00'
step_movie1_end: '20:30:00'
step_movie1_on_1: scene.movie

step_movie2_name: Movie Scene Pause
step_movie2_relative: Movie Scene
step_movie2_start_offset: '00:35:00'
step_movie2_end_offset: '00:45:00'
step_movie2_on_1: script.downlights_bright

step_movie3_name: Movie Scene Play
step_movie3_relative: Movie Scene Pause
step_movie3_start_offset: '00:03:00'
step_movie3_end_offset: '00:06:00'
step_movie3_on_1: scene.movie

Here I start a sequence that will execute my movie scene sometime between 8pm and 8.30pm. I then specify a second step which will emulate us pausing the movie and putting the kitchen lights on. This happens sometime between 35 and 45 minutes later and is relative to the previous step. This means that whatever time the previous step is executed, the second step will always come 35-45 minutes after it. The third step in the sequence is another relative one. This will execute 3-6 minutes after the previous one and emulates us starting up the movie again.

Random Mode

As mentioned earlier, you can also create totally random events. For example you might use the following to simulate overnight bathroom trips:

random_bathroom_name: Overnight Bathroom
random_bathroom_start: Bedtime
random_bathroom_end: Morning
random_bathroom_minduration: 00:02:00
random_bathroom_maxduration: 00:05:00
random_bathroom_number: 2
random_bathroom_on_1: light.bathroom
random_bathroom_off_1: light.bathroom

step_bedtime_name: Bedtime
step_bedtime_start: '21:30:00'
step_bedtime_end: '22:30:00'

step_morning_name: Morning
step_morning_start: 'sunrise + 00:20:00'

This configuration creates two randomised events lasting 2-5 minutes, which turn on and off the bathroom light. These are defined as starting and ending relative to other steps in the sequence. I’ve defined the Bedtime and Morning steps in my example to illustrate this. These steps can of course contain actions of their own just like the steps above. The full configuration basically says “sometime starting between 9.30pm and 10.30pm and lasting until 20 minutes after sunrise there will be two instances of the bathroom light coming on for 2-5 minutes”. Pretty cool!

Conclusion

OccuSim is a pretty powerful tool, which allows you to create complex occupancy simulation rules with Home Assistant. I’ve only really just begun to map out my own simulations. You can find these on GitLab. Some of the examples here are based upon my rules, but the real times have been changed. I’m going to continue expanding upon this in the coming weeks. Hopefully I’ll end up with a pretty accurate simulation.

I’m also exploring further home security options and will hopefully be expanding my existing ZoneMinder setup this year. Stay tuned to the blog for more info when that happens.

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Continuous Integration for Home Assistant, ESPHome and AppDaemon

This post may contain affiliate links. Please see the disclaimer for more information.

Recently I set up continuous integration and deployment for my Home Assistant configuration. This setup has been nothing short of awesome! It’s liberated me from worrying about editing my configuration – all I do is git push and relax. Either HASS will notify me when it restarts or I’ll get an email from Gitlab telling me the pipeline failed.

I wanted to take this configuration further and expand it to other parts of my Home Automation infrastructure. In this post I’ll cover extending it to deploy my HA stack with Docker, to build and deploy to ESPHome devices, and to unit test and deploy my AppDaemon apps.

Let’s get on with it!

Automating Docker Deployment

I’d originally held off doing this because I wasn’t looking forward to building custom Docker images in Gitlab CI. However, I managed to complete the original pipeline without having to add any extra dependencies to the HASS containers (such as git, which I thought might be required). This makes the job of deploying my HA stack much easier, especially as I already had it mostly scripted. The first step was to add my update.sh script to my repo and tweak it to suit:

#!/bin/bash
set -e

cd /mnt/docker-data/home-assistant || exit
docker-compose -p ha pull
docker-compose -p ha down
docker-compose -p ha up -d --remove-orphans
docker system prune -fa
docker volume prune -f
exit

This is a pretty simple modification to my previous script. The main additions are that I use the -p argument to set the project name used by docker-compose. By default this is taken from the directory name, but I wanted it to match the name of my previous project even though the directory has changed from ha to home-assistant. The other main modification is that I’ve added the --remove-orphans argument to clean up any lingering containers. This is useful if I remove a container from the docker-compose.yml file. In addition I’ve removed the apt commands and cleaned up the script a bit so that it passes my shellcheck job.
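
As an aside, the shellcheck job itself is nothing special. A job roughly like the following sketch does the trick; the image and stage name here are assumptions rather than my exact config:

shellcheck:
  stage: preflight
  image:
    name: koalaman/shellcheck-alpine:latest
    entrypoint: [""]
  script:
    # lint the deployment script; any warning fails the job
    - shellcheck update.sh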

The next step was simply to add the docker-compose.yml file to the repo. Then I continued by editing the CI configuration.

Updated Home Assistant CI Jobs

I first split up my previous deployment job into two jobs. The first of these is the main deployment job which pulls the new configuration. The second restarts HASS. The restart job goes in a new pipeline stage and will only be run when the docker-compose.yml or update.sh files haven’t changed:

deploy:
  stage: deploy
  image:
    name: alpine:latest
    entrypoint: [""]
  environment:
    name: home-assistant
  before_script:
    - apk --no-cache add openssh-client
    - echo "$DEPLOYMENT_SSH_KEY" > id_rsa
    - chmod 600 id_rsa
  script:
    - ssh -i id_rsa -o "StrictHostKeyChecking=no" $DEPLOYMENT_SSH_LOGIN "cd /mnt/docker-data/home-assistant && git fetch && git checkout $CI_COMMIT_SHA && git submodule sync --recursive && git submodule update --init --recursive"
  after_script:
    - rm id_rsa
  only:
    refs:
      - master
  tags:
    - hass

restart-hass:
  stage: postflight
  image:
    name: alpine:latest
    entrypoint: [""]
  environment:
    name: home-assistant
  before_script:
    - apk --no-cache add curl
  script:
    - "curl -X POST -H \"Authorization: Bearer $DEPLOYMENT_HASS_TOKEN\" -H \"Content-Type: application/json\" $DEPLOYMENT_HASS_URL/api/services/homeassistant/restart"
  only:
    refs:
      - master
  except:
    changes:
      - docker-compose.yml
      - update.sh
  tags:
    - hass

I then added another job (again in another pipeline stage) which performs our Docker deployment. This will be run only when either the docker-compose.yml or update.sh files changes:

docker-deploy:
  stage: docker-deploy
  image:
    name: alpine:latest
    entrypoint: [""]
  environment:
    name: home-assistant
  before_script:
    - apk --no-cache add openssh-client
    - echo "$DEPLOYMENT_SSH_KEY" > id_rsa
    - chmod 600 id_rsa
  script:
    - ssh -i id_rsa -o "StrictHostKeyChecking=no" $DEPLOYMENT_SSH_LOGIN "cd /mnt/docker-data/home-assistant && ./update.sh"
  after_script:
    - rm id_rsa
  only:
    refs:
      - master
    changes:
      - docker-compose.yml
      - update.sh
  tags:
    - hass
A full pipeline run with a deployment of the Docker containers running in the final stage.
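
Since the jobs now span several stages, the stages list at the top of .gitlab-ci.yml needs to declare them in order. Roughly, it ends up like this; the preflight stage holds the existing lint jobs from my earlier post and its name here is an assumption:

stages:
  - preflight      # lint/check jobs from the original pipeline (name assumed)
  - deploy
  - postflight
  - docker-deploy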

With that in place I can now redeploy my HA stack by modifying either of those files, committing to git and pushing. In order to facilitate HASS updates with this workflow, I changed the tag of the HASS Docker image to the explicit version number. That way I can simply update the version number and redeploy for each new release.
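
The relevant service definition in docker-compose.yml then looks something like this sketch (the tag shown is just an example version; bumping it and pushing triggers the redeployment):

homeassistant:
    image: homeassistant/home-assistant:0.96.5   # explicit tag rather than latest
    restart: always
    network_mode: host
    # volumes, devices, etc. omitted for brevity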

Continuous Integration for ESPHome

Inspired by the previous configs I have seen for checking ESPHome files, I wanted to implement the same checks. However, I wanted to go further and have a full continuous deployment setup which would build the relevant firmware when its configuration was changed and send an OTA update to the corresponding device. As it turned out this was relatively easy.

I started out by importing my ESPHome configs into Git, which I hadn’t previously done. You can find the resulting repository on Gitlab. For the CI configuration I first copied over the markdownlint and yamllint jobs from my Home Assistant CI configuration.

I then borrowed the ESPHome config check jobs from Frenck’s configuration. These check against both the current release of ESPHome and the next beta release. The beta release job is allowed to fail and is designed only to provide a heads up for potential future issues.
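
I won’t reproduce those jobs verbatim here, but the general shape is something like the sketch below. The job names, stage and single device file are placeholders (the real jobs check every config in the repository, and in my case they also need the secrets available, which I’ve omitted here):

esphome-config:
  stage: preflight
  image:
    name: esphome/esphome:latest
    entrypoint: [""]
  script:
    - esphome my_device.yaml config
  tags:
    - hass

esphome-config-beta:
  stage: preflight
  image:
    name: esphome/esphome:beta
    entrypoint: [""]
  script:
    - esphome my_device.yaml config
  # failures against the beta release are a heads up only, not a blocker
  allow_failure: true
  tags:
    - hass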

Then I came to implement the build and deployment job. Traditionally these would be performed in separate steps, but since ESPHome can do this in a single step with its run subcommand I decided to do it the easy way. This also removes the requirement to manage build artifacts between steps. I created the following template job to manage this:

# Generic deployment template
.esphome-deploy: &esphome-deploy
  stage: deploy
  variables:
    PYTHONPATH: "/usr/src/app:$PYTHONPATH"
  image:
    name: esphome/esphome:latest
    entrypoint: [""]
  before_script:
    - apt update && apt install -y git-crypt openssl
    - |
      openssl enc -aes-256-cbc -pbkdf2 -d -in git-crypt.key.enc -out \
          git-crypt.key -k $OPENSSL_PASSPHRASE
    - git-crypt unlock git-crypt.key
    - esphome dummy.yaml version
  after_script:
    - rm -f git-crypt.key
  retry: 2
  tags:
    - hass

Most of the complexity here is in unlocking the git-crypt repository so that we can read the encrypted secrets file. I opted to store the git-crypt key in the repository, encrypted with openssl. The passphrase used for openssl is in turn stored in a Gitlab variable, in this case $OPENSSL_PASSPHRASE. Once the decryption of the key is complete, we can unlock the repo and get on with things. We remove the key after we are done in the after_script step.

Per-Device Jobs

Using the template configuration, I then created a job for each device I want to deploy to. These jobs are executed only when the corresponding YAML file (or secrets.yaml) is changed. This ensures that I only update devices that I need to on each run. The general form of these jobs is:

my_device:
  <<: *esphome-deploy
  script:
    - esphome my_device.yaml run --no-logs
  only:
    refs:
      - master
    changes:
      - my_device.yaml
      - secrets.yaml

Of course you need to replace my_device with the name of your device file.

A run of the ESPHome pipeline with deployments to two devices

With these jobs in place I have a full end-to-end pipeline for ESPHome, which lints and checks my configuration before deploying it only to devices which need updating. Nice! You can check out the full pipeline configuration on Gitlab. I no longer need to run the ESPHome dashboard, so I’ve removed it from my server.

Continuous Integration for AppDaemon

I mentioned previously that I wanted to split out my AppDaemon apps and configuration into a separate repo from my HASS config. I did this as a prerequisite step of this setup and you can again find the new repo on Gitlab.

The inspiration for this configuration came mostly from @bachya on the HASS forum, whose post in reply to my earlier setup provided most of the details. Thanks for sharing!

I started out by copying across the now ubiquitous markdownlint and yamllint jobs. I then added jobs for pylint, mypy, flake8 and black:

pylint:
  <<: *preflight
  image:
    name: python:3
    entrypoint: [""]
  before_script:
    - pip install pylint
    - pylint --version
    - mv fake_secrets.yaml secrets.yaml
  script:
    - pylint --rcfile pylintrc apps/

mypy:
  <<: *preflight
  image:
    name: python:3
    entrypoint: [""]
  before_script:
    - pip install mypy
    - mypy --version
    - mv fake_secrets.yaml secrets.yaml
  script:
    - mypy --ignore-missing-imports apps/

flake8:
  <<: *preflight
  image:
    name: python:3
    entrypoint: [""]
  before_script:
    - pip install flake8
    - flake8 --version
    - mv fake_secrets.yaml secrets.yaml
  script:
    - flake8 --exclude=apps/occusim --max-line-length=88 apps/

black:
  <<: *preflight
  image:
    name: python:3
    entrypoint: [""]
  before_script:
    - pip install black
    - black --version
    - mv fake_secrets.yaml secrets.yaml
  script:
    - black --exclude=apps/occusim --check --fast apps/

Although this ends up being very verbose, I decided to implement these all as separate jobs so that I get individual pass/fail states for each. I’m also pretty sure the mypy job doesn’t do anything right now, because I’m not using any type hints in my Python code. However, the job is there for when I start adding those.
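
The *preflight anchor these jobs extend is a shared template defined earlier in the pipeline file. It isn’t shown above, but a minimal sketch of what it needs to provide would be something like this (the stage name is an assumption):

# shared template for the lint/static analysis jobs
.preflight: &preflight
  stage: preflight
  tags:
    - hass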

Unit Testing AppDaemon

Another thing that @bachya introduced me to was Appdaemontestframework. This provides a pytest based framework for unit testing your AppDaemon apps. Although I’m still working on the unit tests for my (so far pretty minimal) AD setup, I did manage to get the framework up and running, which was a little tricky. I had some issues with setting up the initial configuration for the app, but I managed to work it out eventually.

The unit testing CI job is pretty simple:

# Unit test jobs
unit-tests:
  stage: test
  image:
    name: acockburn/appdaemon:latest
    entrypoint: [""]
  before_script:
    - pip install -r apps/test/test_requirements.txt
    - py.test --version
    - mv fake_secrets.yaml secrets.yaml
  script:
    - py.test
  tags:
    - hass

All we do here is install the requirements that I need for the tests and then call py.test. Easy!

The deployment job for AppDaemon was also trivial, since it is pretty much a copy of the HASS one. Since AD detects changes to your apps automatically, there’s no need to restart. For more details you can check out the full CI pipeline on Gitlab.
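
Roughly speaking, it’s the HASS deploy job from earlier with the target directory changed. A sketch along these lines (the path is illustrative, not necessarily what my server uses):

deploy:
  stage: deploy
  image:
    name: alpine:latest
    entrypoint: [""]
  before_script:
    - apk --no-cache add openssh-client
    - echo "$DEPLOYMENT_SSH_KEY" > id_rsa
    - chmod 600 id_rsa
  script:
    # pull the new apps/config onto the server; AD picks up the changes itself
    - ssh -i id_rsa -o "StrictHostKeyChecking=no" $DEPLOYMENT_SSH_LOGIN "cd /mnt/docker-data/appdaemon && git fetch && git checkout $CI_COMMIT_SHA"
  after_script:
    - rm id_rsa
  only:
    refs:
      - master
  tags:
    - hass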

A run of the AppDaemon pipeline – lots of preflight checks here!

Conclusion

Phew, that was a lot of work, but it was all the logical follow on from work I’d done before or that others had done. I now have a full set of CI pipelines for the three main components of my home automation setup. I’m really happy with each of them, but especially the ESPHome pipeline. As an embedded engineer in my day job I find it really cool that I can update a YAML file locally, commit/push it and then my CI takes over and ends up flashing a physical device! That this is even possible is a testament to all the pieces of software used.

Next Steps

I’m keen to keep going with CI as a means of automating my operations. I think my next target will be sprucing up my Ansible configurations and running them automatically from CI. Stay tuned for that in the hopefully near future!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Getting Started with AppDaemon for Home Assistant

This post may contain affiliate links. Please see the disclaimer for more information.

Continuing on from last week’s post, I was also recently persuaded to give AppDaemon for Home Assistant another try. I had previously installed AppDaemon so that I could use the excellent OccuSim app, but I never got as far as writing any apps of my own and I hadn’t reinstalled it in my latest HASS migration. This post is going to detail my first steps in getting started with AppDaemon more seriously.

I should probably start with a run down of what AppDaemon is for anyone that doesn’t know. The AppDaemon website provides a high level description:

AppDaemon is a loosely coupled, multithreaded, sandboxed python execution environment for writing automation apps for home automation projects, and any environment that requires a robust event driven architecture.

In plain English, that basically means it provides an environment for writing home automation rules in Python. The supported home automation platforms are Home Assistant and plain MQTT. For HASS, this forms an alternative to both the built in YAML automation functionality and 3rd party systems such as Node-RED. Since AppDaemon is Python based, it also opens up the entirety of the Python ecosystem for use in your automations.

AppDaemon also provides dashboarding functionality (known as HADashboard). I’ve decided not to use this for now because I currently have no use for it. I also think the dashboards look a little dated next to the shiny new HASS Lovelace UI.

Installation

I installed AppDaemon via Docker by following the tutorial. The install went pretty much as expected. However, I had to clean up the config files from my old install before proceeding. The documentation doesn’t provide an example docker-compose configuration, so here’s mine:

appdaemon:
    image: acockburn/appdaemon:latest
    volumes:
      - /mnt/docker-data/home-assistant:/conf
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - homeassistant
    restart: always
    networks:
      - internal

I’ve linked the AppDaemon container to an internal network, on which I’ve also placed my HomeAssistant instance. That way AppDaemon can talk to HASS pretty easily.

You’ll note that I’m not passing any environment variables as per the documentation. This is because my configuration is passed only via the appdaemon.yaml file, since it allows me to use secrets:

---
log:
  logfile: STDOUT
  errorfile: STDERR

appdaemon:
  threads: 10
  timezone: 'Pacific/Auckland'
  plugins:
    HASS:
      type: hass
      ha_url: "http://172.17.0.1:8123"
      token: !secret appdaemon_token

You’ll see here that I use the docker0 interface IP to connect to HASS. I tried using the internal hostname (which should be homeassistant on my setup), but it didn’t seem to work. I think this is due to the HASS container being configured with host networking.
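
The !secret reference in the config above is resolved from a secrets.yaml file sitting alongside appdaemon.yaml, in much the same way as HASS secrets work. Mine boils down to a single entry; the value here is obviously a placeholder:

# secrets.yaml - kept out of version control (or encrypted with git-crypt)
appdaemon_token: "REPLACE_WITH_A_LONG_LIVED_ACCESS_TOKEN"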

Writing My First App

I wanted to test out the capabilities and ease of use of AppDaemon, so I decided to convert one of my existing automations into app form. I chose my bathroom motion light automation, because it’s reasonably complex but simple enough to complete quickly.

I started out by copying the motion light example from the tutorial. Then I updated it to take configuration parameters for the motion sensor, light and off timeout:

class MotionLight(hass.Hass):

    def initialize(self):
        self.motion_sensor = self.args['motion_sensor']
        self.light = self.args['light']
        self.timeout = self.args['timeout']

        self.timer = None
        self.listen_state(self.motion_callback, self.motion_sensor, new = "on")

    def set_timer(self):
        if self.timer is not None:
            self.cancel_timer(self.timer)
        self.timer = self.run_in(self.timeout_callback, self.timeout)

    def is_light_times(self):
        return self.now_is_between("sunset - 00:10:00", "sunrise + 00:10:00")

    def motion_callback(self, entity, attribute, old, new, kwargs):
        if self.is_light_times():
            self.turn_on(self.light)
            self.set_timer()

    def timeout_callback(self, kwargs):
        self.timer = None
        self.turn_off(self.light)

I’ve also added a couple of utility methods: one to manage the timer better and one to hold the more complex logic restricting when the light will come on. Putting both of these in their own methods allows them to be re-used later on.

The timer logic of the example app is particularly problematic in the case of multiple motion events. In the original logic one timer will be set for each motion event. This leads to the light being turned off even if there is still motion in the room. It also caused some general flickering of the light between motion events and triggered callbacks. I mitigate this in the set_timer method here by first cancelling the timer if it is active before starting a new timer with the full timeout.

At this point, we have a fully functional and re-usable motion activated light. We can instantiate as many of these as we would like in our apps/apps.yaml file, like so:

motion_light_1:
  module: motion_lights
  class: MotionLight
  motion_sensor: binary_sensor.motion_sensor_1
  light: light.light_1
  timeout: 120

motion_light_2:
  module: motion_lights
  class: MotionLight
  motion_sensor: binary_sensor.motion_sensor_2
  light: light.light_2
  timeout: 60

...

Note that we haven’t yet recreated the functionality of my original automation. In that automation, the brightness was controlled by the door state. We’ll tackle this next.

Extending the App

Since our previous MotionLight app is just a Python object, we can take advantage of the object-oriented capabilities of the Python language to extend it with further functionality. Doing so allows us to maintain the original behaviour for some instances, whilst customising others for more complex functionality.

Our subclassed light looks like this:

class BrightnessControlledMotionLight(MotionLight):

    def initialize(self):
        self.last_door = "Other"
        for door in self.args['bedroom_doors']:
            self.listen_state(self.bedroom_door_callback, door, old = "off", new = "on")
        for door in self.args['other_doors']:
            self.listen_state(self.other_door_callback, door, old = "off", new = "on")

        super().initialize()

    def bedroom_door_callback(self, entity, attribute, old, new, kwargs):
        self.last_door = "Bedroom"
        self.log("Last door is: {}".format(self.last_door))

    def other_door_callback(self, entity, attribute, old, new, kwargs):
        self.last_door = "Other"
        self.log("Last door is: {}".format(self.last_door))

    def motion_callback(self, entity, attribute, old, new, kwargs):
        if self.is_light_times():
            if self.get_state(entity=self.light) == "off":
                if self.now_is_between("07:00:00", "20:00:00") or self.last_door != "Bedroom":
                    self.turn_on(self.light, brightness_pct = 100)
                else:
                    self.turn_on(self.light, brightness_pct = 1)
            self.set_timer()

Here we can see that the initialize method loads only the new configuration parameters. The existing parameters from the parent class are loaded via the super call to the parent’s initialize method. The new configuration options are passed as lists, allowing us to specify several bedroom or other doors. To set the relevant callbacks I loop over each list and register the callback for every entry. The callback is the same for every entry in a given list, since only the type of door matters, not which specific door it is.

Next we have the actual callback methods for when the doors open. These just set the internal variable last_door to the relevant value and log it for debugging purposes.

Most of the new logic comes in the motion_callback method. Here I have reused the is_light_times and set_timer methods from the parent class. The remainder of the logic first checks that the light is off and then recreates the operation of the template I used in my original automation. This sets the light to dim if the last door opened was to one of the bedrooms and bright otherwise. There are also some time based restrictions on this for times when I always want the light bright.

The configuration is pretty similar to the previous example, with the addition of the lists for the doors:

brightness_controlled_motion_light:
  module: motion_lights
  class: BrightnessControlledMotionLight
  motion_sensor: binary_sensor.motion_sensor
  light: light.motion_light
  timeout: 120
  bedroom_doors:
    - binary_sensor.bedroom1_door_contact
    - binary_sensor.bedroom2_door_contact
  other_doors:
    - binary_sensor.kitchen_hall_door_contact

Conclusion and Next Steps

The previous automation (or more rightly, set of automations) totalled 78 lines. The Python code for the app is only 56 lines long, though there are another 11 lines of configuration required. By this measurement the two are similar in complexity. However, with the AppDaemon implementation we now have an easily re-usable implementation for two types of motion controlled lights. Further instances can be called into being with only a few lines of simple configuration, whereas the YAML automation would need to be duplicated wholesale and tweaked to fit.

This power makes me keen to continue with AppDaemon. I’m also keen to integrate it with my CI pipeline, although I’m actually thinking of separating it out from my HASS configuration. With this I’d like to try out some more modern Python development tooling, since it’s been quite some time since I’ve had the opportunity to do any serious Python development.

I hope you’ve enjoyed reading this post. For anyone already using AppDaemon, this isn’t anything groundbreaking. However, for those who haven’t tried it or are on the fence, I’d highly recommend you give it a go. Please feel free to show off anything you’ve made in the feedback channels for this post!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.

Monthly Update: February 2017

Hello again. It’s been a busy (and short) month so I don’t have much to update on. Most of my work this month has gone into refactoring my Home Assistant configuration into something which is publicly sharable. This has mostly involved splitting the configuration into more logical chunks than the few monolithic files I had previously, and extracting secrets into a file protected by git-crypt. I’ve also been updating and improving aspects of my config as I go, particularly the automations. I’m not quite ready to share this since I still have a couple of things to clean up and also need to actually deploy and test the new configuration. Hopefully it will be posted on Gitlab during March, with an accompanying blog post here.

I’ve also been working on another Home Assistant related task, which was to get AppDaemon working. This was specifically so I could run OccuSim, which provides occupancy simulation (turning things on and off when you’re not there) for Home Assistant. It didn’t take me long to get this up and running, but my first live test of OccuSim didn’t work, due to me not removing the test command properly. Now that I’ve fixed that issue, it works great.

I think that’s pretty much it for now. Hopefully there will be more to share next month.