
Getting Started with AppDaemon for Home Assistant

This post may contain affiliate links. Please see the disclaimer for more information.

Continuing on from last week’s post, I was also recently persuaded to try out AppDaemon for Home Assistant again. I had previously tried AppDaemon so that I could use the excellent OccuSim app, but I never got as far as writing any apps of my own and hadn’t reinstalled it in my latest HASS migration. This post details my first steps in getting started with AppDaemon more seriously.

I should probably start with a rundown of what AppDaemon is, for anyone who doesn’t know. The AppDaemon website provides a high-level description:

AppDaemon is a loosely coupled, multithreaded, sandboxed python execution environment for writing automation apps for home automation projects, and any environment that requires a robust event driven architecture.

In plain English, that basically means it provides an environment for writing home automation rules in Python. The supported home automation platforms are Home Assistant and plain MQTT. For HASS, this forms an alternative to both the built-in YAML automation functionality and third-party systems such as Node-RED. Since AppDaemon is Python based, it also opens up the entire Python ecosystem for use in your automations.

AppDaemon also provides dashboarding functionality (known as HADashboard). I’ve decided not to use this for now because I currently have no use for it. I also think the dashboards look a little dated next to the shiny new HASS Lovelace UI.

Installation

I installed AppDaemon via Docker by following the tutorial. The install went pretty much as expected. However, I had to clean up the config files from my old install before proceeding. The documentation doesn’t provide an example docker-compose configuration, so here’s mine:

appdaemon:
    image: acockburn/appdaemon:latest
    volumes:
      - /mnt/docker-data/home-assistant:/conf
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - homeassistant
    restart: always
    networks:
      - internal

I’ve linked the AppDaemon container to an internal network, on which I’ve also placed my HomeAssistant instance. That way AppDaemon can talk to HASS pretty easily.

You’ll note that I’m not passing any environment variables as per the documentation. This is because my configuration is passed only via the appdaemon.yaml file, since it allows me to use secrets:

---
log:
  logfile: STDOUT
  errorfile: STDERR

appdaemon:
  threads: 10
  timezone: 'Pacific/Auckland'
  plugins:
    HASS:
      type: hass
      ha_url: "http://172.17.0.1:8123"
      token: !secret appdaemon_token

You’ll see here that I use the docker0 interface IP to connect to HASS. I tried using the internal hostname (which should be homeassistant on my setup), but it didn’t seem to work. I think this is due to the HASS container being configured with host networking.

Writing My First App

I wanted to test out the capabilities and ease of use of AppDaemon, so I decided to convert one of my existing automations into app form. I chose my bathroom motion light automation, because it’s reasonably complex but simple enough to complete quickly.

I started out by copying the motion light example from the tutorial. Then I updated it to take configuration parameters for the motion sensor, light and off timeout:

import appdaemon.plugins.hass.hassapi as hass


class MotionLight(hass.Hass):

    def initialize(self):
        # Configuration parameters from apps.yaml
        self.motion_sensor = self.args['motion_sensor']
        self.light = self.args['light']
        self.timeout = self.args['timeout']

        self.timer = None
        self.listen_state(self.motion_callback, self.motion_sensor, new="on")

    def set_timer(self):
        # Cancel any running timer so repeated motion restarts the full timeout
        if self.timer is not None:
            self.cancel_timer(self.timer)
        self.timer = self.run_in(self.timeout_callback, self.timeout)

    def is_light_times(self):
        return self.now_is_between("sunset - 00:10:00", "sunrise + 00:10:00")

    def motion_callback(self, entity, attribute, old, new, kwargs):
        if self.is_light_times():
            self.turn_on(self.light)
            self.set_timer()

    def timeout_callback(self, kwargs):
        self.timer = None
        self.turn_off(self.light)

I’ve also added a couple of utility methods: one to manage the timer better and one to hold the more complex logic restricting when the light will come on. Encapsulating both of these in their own methods allows them to be re-used later on.

The timer logic of the example app is particularly problematic in the case of multiple motion events. In the original logic, one timer is set for each motion event, which leads to the light being turned off even if there is still motion in the room. It also causes some flickering of the light between motion events and triggered callbacks. I mitigate this in the set_timer method by first cancelling the timer if it is active, before starting a new timer with the full timeout.

At this point, we have a fully functional and re-usable motion activated light. We can instantiate as many of these as we would like in our apps/apps.yaml file, like so:

motion_light_1:
  module: motion_lights
  class: MotionLight
  motion_sensor: binary_sensor.motion_sensor_1
  light: light.light_1
  timeout: 120

motion_light_2:
  module: motion_lights
  class: MotionLight
  motion_sensor: binary_sensor.motion_sensor_2
  light: light.light_2
  timeout: 60

...

Note that we haven’t yet recreated the functionality of my original automation. In that automation, the brightness was controlled by the door state. We’ll tackle this next.

Extending the App

Since our previous MotionLight app is just a Python object, we can take advantage of the object-oriented capabilities of the Python language to extend it with further functionality. Doing so allows us to maintain the original behaviour for some instances, whilst customising others with more complex functionality.

Our subclassed light looks like this:

class BrightnessControlledMotionLight(MotionLight):

    def initialize(self):
        self.last_door = "Other"
        for door in self.args['bedroom_doors']:
            self.listen_state(self.bedroom_door_callback, door, old="off", new="on")
        for door in self.args['other_doors']:
            self.listen_state(self.other_door_callback, door, old="off", new="on")

        super().initialize()

    def bedroom_door_callback(self, entity, attribute, old, new, kwargs):
        self.last_door = "Bedroom"
        self.log("Last door is: {}".format(self.last_door))

    def other_door_callback(self, entity, attribute, old, new, kwargs):
        self.last_door = "Other"
        self.log("Last door is: {}".format(self.last_door))

    def motion_callback(self, entity, attribute, old, new, kwargs):
        if self.is_light_times():
            if self.get_state(entity=self.light) == "off":
                if self.now_is_between("07:00:00", "20:00:00") or self.last_door != "Bedroom":
                    self.turn_on(self.light, brightness_pct=100)
                else:
                    self.turn_on(self.light, brightness_pct=1)
            self.set_timer()

Here we can see that the initialize method loads only the new configuration parameters; the existing parameters from the parent class are loaded by calling the parent’s initialize method via super. The new configuration options are passed as lists, allowing us to specify several bedroom or other doors. To set the relevant callbacks I loop over each list; the callback is the same for every entry in a list, since only the type of door matters, not which specific door was opened.

Next we have the actual callback methods for when the doors open. These just set the internal variable last_door to the relevant value and log it for debugging purposes.

Most of the new logic comes in the motion_callback method. Here I have reused the is_light_times and set_timer methods from the parent class. The remainder of the logic first checks that the light is off and then recreates the operation of the template I used in my original automation. This sets the light to dim if the last door opened was to one of the bedrooms and bright otherwise. There are also some time based restrictions on this for times when I always want the light bright.

The configuration is pretty similar to the previous example, with the addition of the lists for the doors:

brightness_controlled_motion_light:
  module: motion_lights
  class: BrightnessControlledMotionLight
  motion_sensor: binary_sensor.motion_sensor
  light: light.motion_light
  timeout: 120
  bedroom_doors:
    - binary_sensor.bedroom1_door_contact
    - binary_sensor.bedroom2_door_contact
  other_doors:
    - binary_sensor.kitchen_hall_door_contact

Conclusion and Next Steps

The previous automation (or more rightly, set of automations) totalled 78 lines. The Python code for the app is only 56 lines long, plus another 11 lines of configuration, so by that measure the two are similar in complexity. However, the AppDaemon implementation gives us an easily re-usable implementation of two types of motion-controlled light. Further instances can be called into being with only a few lines of simple configuration, whereas the YAML automation would need to be duplicated wholesale and tweaked to fit.

This power makes me keen to continue with AppDaemon. I’m also keen to integrate it with my CI pipeline, although I’m actually thinking of separating it out from my HASS configuration. With this I’d like to try out some more modern Python development tooling, since it’s been quite some time since I’ve had the opportunity to do any serious Python development.

I hope you’ve enjoyed reading this post. For anyone already using AppDaemon, this isn’t anything groundbreaking. However, for those who haven’t tried it or are on the fence, I’d highly recommend you give it a go. Please feel free to show off anything you’ve made in the feedback channels for this post!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Automating My Dumb Audio System

This post may contain affiliate links. Please see the disclaimer for more information.

Recently I’ve been on somewhat of a mission to improve the integration between my various media devices and my home automation system. One part which has until now been untouched is the main living room audio system, which is still somewhat dumb. I’m not all the way to automating this yet, but I’m making progress. In this post I’m going to detail my progress so far, the issues I ran into and how I’m planning to improve the integration in future. I’ll also partially review the different components involved.

The Audio System

As detailed previously, our living room audio is provided by a Polk Signa S2 soundbar. I really like this soundbar and it was a massive step up from the TV audio we had previously. The system was very easy to set up: pretty much plug in the HDMI ARC connection, turn it on and go. It came pre-paired with its wireless sub, so that just worked. I’ve had a few instances (maybe five or so in the last few months) where the main unit was not able to connect to the sub on startup, which it communicates by flashing an LED. Turning the unit off and on again allows it to retry, which in my experience always works.

The audio quality is great to my ear, but I don’t consider myself an expert in audio stuff. It’s enough to fill our living space with sound and shake the walls if you turn the bass up!

The soundbar in all its glory

Where this device falls down is the lack of easy integration with other devices. This is of course pretty normal in this space, but it doesn’t mean we have to accept it!

Clever, Dumb Speakers

On the surface the soundbar behaves somewhat intelligently. It will turn on when the TV comes on. Presumably this is done via HDMI CEC, because it will also turn on when the TV is turned on via CEC, so it isn’t just intercepting the IR commands from the TV remote. However, all my attempts to control the unit via CEC from the Raspberry Pi connected to the TV failed. Weirdly, the soundbar also won’t switch off when the TV is turned off via CEC, whereas it does with the remote. The soundbar also integrates the volume between the TV and itself and allows adjustment via either the TV or its own remote.

This is all nice for the “normal” case of just using the TV. We can just sit down and watch without having to find the extra remote and everything just works. Where it falls down is anything outside of this basic use case. The soundbar has basic preset modes for different applications, “movie”, “music” and “night mode”. It also has 3 levels of what it refers to as voice adjust, which amplifies the frequencies contained in human speech to make dialogue clearer (it actually works pretty well). None of these settings are available unless using the device’s own remote.

The soundbar remote; most of the functions are only available via this remote

IR Control

As I couldn’t control the device via CEC, I was pretty much resigned to having to build an IR remote control device. The soundbar also has bluetooth, but it’s only for audio streaming. There doesn’t appear to be any control capability and it’s turned off when the device is in standby. I’ll come back to the bluetooth later, as I do have some ideas around it for future work.

After failing to find the time to build an IR blaster (hardware is hard), I decided to buy a Broadlink device. Specifically the Broadlink RM Mini 3, since they were fairly cheap and supported in Home Assistant.

The Broadlink RM Mini 3, sitting happily in its place on the bookcase

When the RM Mini arrived I was pleased with how it looked and easily found a place for it on a bookshelf where it could see the TV and soundbar. The line of sight reaches across the whole of our living/dining area, so I was hoping the range would be sufficient. As it turned out I was right; there have been no problems so far.

OMG, That App

Setting up the Broadlink was an exercise in frustration, mainly because the IHC app is truly awful. No… that doesn’t cover it: the app is a train wreck on board a sinking ship that’s just been hit by a meteor. For example, on the sign up page, it gives you 60 seconds to both enter the validation code it sent you by email and type your new (and hopefully secure) password. Unfortunately, I don’t have a screenshot of this, since I don’t want to reinstall that natural disaster of an app to get one!

Who really thinks that one minute is enough to sign into their email (or even just switch to your mail app), grab and enter the code and generate and enter a reasonable password?! That’s even assuming the email comes through in that time. Repeat after me: Email is not a synchronous communication medium! It was not designed for real time communication, messages can be delayed for any reason. I use grey-listing on my server, which would have made it impossible for me to sign up had I not been able to temporarily disable it.

Always Remember: The Cloud is Just Someone Else’s Computer

The app will also fail to find the device if you are on separate networks and gives you no way just to enter its IP address. This meant I had to put my phone onto my IoT network, on which most outgoing traffic is blocked. I then had to punch holes in the firewall for the device and my phone, because the setup process requires Internet access. The one for the device may not be required, but it definitely didn’t work if my phone couldn’t get out.

A device like this really has no business needing Internet access in the first place. It could also be configured purely over the local network and even without an app if they just put an AP+captive portal configuration page on it. Sigh.

Anyway, I got it onto the network. Eventually. However, things didn’t get better from there.

Issues in Home Assistant

I set up the device in HASS thinking that the hard part was over and that it would be plain sailing from now on. That proved to be misguided. Upon running the broadlink.learn service via the dev-tools, nothing happened. When I checked the log via the info tab I got the message Failed to connect to device from the broadlink component.

Upon examining the code, I could see that this happens if the device fails to authenticate properly. The auth call comes from the underlying library, python-broadlink. I checked out the latest version of this (0.11.1) and tried the CLI tool, which also didn’t work. The tool would just time out without getting any data back, and the little white LED on the device didn’t light up.

Fixing it… but not really

I checked out and installed the previous version (0.10) and tried the same thing and it worked! I was able to learn IR codes and send them back to control devices from the CLI. The next step was to work out what commit broke the library. I did this by git bisecting between the good version (0.10) and the bad version (0.11.1). The resulting commit was 38a40c5, where the approach to encrypting the payloads changed. By analysing the change set, I was able to come up with a patch for which I’ve submitted a PR.

I then decided to try this out by installing my version of the library inside my HASS Docker container (temporarily) to see if it resolved the issue. Weirdly, I ran into the same issue! After some debugging to make sure it was picking up the right Python module, I did some packet captures with tcpdump. I could see that the authentication packet payload was different to that of the working library outside of HASS. At this point I’m a bit stumped. I’ve submitted an issue to HASS, but in the meantime I decided to come up with a workaround.

Unix to the Rescue

Since I now had a working CLI tool, I could get the Broadlink working with HASS via the shell_command integration. I started by creating a new Python virtual environment inside my HASS container and installed my version of the python-broadlink library:

docker exec -it ha_homeassistant_1 bash
cd /config
python -m venv ./broadlink-env
source broadlink-env/bin/activate
apt update
apt install -y git
pip install -U git+https://github.com/webworxshop/python-broadlink.git@fix-pyaes
which python # save the output from this command
exit

It’s necessary to do this inside the container because the command will be called from within the container by HASS, so we want to make sure the venv gives us a compatible Python environment. The virtualenv will be persisted to the Home Assistant config volume.

The next step was to copy the Broadlink CLI tool into my shell_commands directory and update the shebang on the first line with the output from the which command above:

#!/config/broadlink-env/bin/python

I then wrote a simple wrapper script to save me from having to specify all the options in my HASS config files later (saved to shell_commands/broadlink_cli_wrapper.sh):

#!/bin/bash

# Send the IR code passed as the first argument using the Broadlink CLI tool
/config/shell_commands/broadlink_cli --type 0x2737 --host IP --mac MAC --send "$1"

This just sends whatever comes in as argument 1 to the script using the broadlink_cli tool. Make sure to fill in the IP of your device and its MAC address (lowercase, bytes reversed).

HASS Configuration

Now we can move on to configuring this in HASS. I used the learning functionality of the CLI tool to grab the hex codes for each of the buttons on my soundbar remote. I then added them all to my HASS config (in a new package file):

shell_command:
  soundbar_power_toggle: /config/shell_commands/broadlink_cli_wrapper.sh 2600600000012192131213121212133614111311143513371336121213121311133712121312123713111312131212121312131113121311133712371336133613361435143513361300056a0001264914000c3f0001254a15000c3d0001284712000d050000000000000000
  soundbar_source_tv: /config/shell_commands/broadlink_cli_wrapper.sh 260060000001219313111312131113361411131213361237133614111311131212371311131214351435143514351436131114111212131213111411141112121336143514361336130005670001274813000c3d0001254914000c3c0001264814000d050000000000000000
  soundbar_source_aux: /config/shell_commands/broadlink_cli_wrapper.sh 2600600000012192141113111312133613121311133613371237131113121311133713111312133613111312133613361312131113121312123712371311131213361336133614351300056b0001264912000c400001264913000c400001264813000d050000000000000000
  soundbar_source_bluetooth: /config/shell_commands/broadlink_cli_wrapper.sh 2600500000012192141112121312133614111212133614351436131113121311143514111411123713111435141113121237121213121311133614111435143514111336143514351300056a0001254914000d050000000000000000
  soundbar_volume_up: /config/shell_commands/broadlink_cli_wrapper.sh 2600600000012093141112121312123713111312133613361336131213121212133613121311133712121336143513371237131113121312123713111312131113121336133613361400056a0001264913000c410001254915000c3e0001264913000d050000000000000000
  soundbar_volume_down: /config/shell_commands/broadlink_cli_wrapper.sh 260058000001229213111312121213361312121213371237123712121312131213361212131212371336133613361336143612121312121213121212141113121212133613361337130005680001274913000c3d0001274912000d05
  soundbar_bass_up: /config/shell_commands/broadlink_cli_wrapper.sh 260058000001229214111212131212371312131113361336143612121312131113361411131213361212141112371336133613121237131113371237121213121212133712121336140005660001264a13000c3c0001274814000d05
  soundbar_bass_down: /config/shell_commands/broadlink_cli_wrapper.sh 260050000001209313111411131213361311131213361336143513121311141114351411121214351435131214351336143514111435141112131236141113121212143514111336130005670001274814000d050000000000000000
  soundbar_mute_toggle: /config/shell_commands/broadlink_cli_wrapper.sh 260050000001209313121311141112371311131214341535143513121312121214351312131113371212131212121312121214361311141112371237133614351336141113361435130005690001264914000d050000000000000000
  soundbar_effect_movie: /config/shell_commands/broadlink_cli_wrapper.sh 26004800000122921212141113111337121213121237133613361411121213121336131212121336133613371237121214351337133612121312131113121336131212121411133613000d05
  soundbar_effect_night: /config/shell_commands/broadlink_cli_wrapper.sh 260058000001219214111212131213361410131213361336143514111411121213361411131113371336131114111336143513361336141114111336123712121312131212121435140005690001274814000c400001244914000d05
  soundbar_effect_music: /config/shell_commands/broadlink_cli_wrapper.sh 2600580000012093131113121311133712121312123712371336141112121312133613111411133613121311131213361336133613371311133614351435141113121311141112371300056a0001254913000c3f0001254913000d05
  soundbar_voice_adjust_1: /config/shell_commands/broadlink_cli_wrapper.sh 26004800000121921411131213111435141113111337133612371311131213121237131114111336131213361237133614351435143513121336141113111312121213121311133712000d05
  soundbar_voice_adjust_2: /config/shell_commands/broadlink_cli_wrapper.sh 2600500000012193131114111311133713111312133613361336141113111312133613121311133613361436133612371336143513361411131113121411131113121311141113361300056a0001264914000d050000000000000000
  soundbar_voice_adjust_3: /config/shell_commands/broadlink_cli_wrapper.sh 2600500000012192131213121212133613121311133712371336131113121411123713111312133613121212131213111411131113121333173513361336133712371336133614111200056a0001264912000d050000000000000000

Upon restarting your HASS instance this gives you a shell_command service for each button on the remote. I spent a little while testing all the commands via the dev-tools to make sure I had them all right.

Replacing the Remote

As a first step to fully automating the soundbar, I wanted to replace the remote with a card in my HASS UI. Then I don’t have to find the remote!

I started by creating several input_select entities for the source, effect and voice adjust options of the soundbar:

input_select:
  soundbar_source:
    name: Soundbar Source Select
    options:
      - TV
      - Aux
      - Bluetooth
    initial: TV
    icon: mdi:video-input-hdmi

  soundbar_effect:
    name: Soundbar Effect
    options:
      - Movie
      - Night
      - Music
    initial: Movie
    icon: mdi:movie

  soundbar_voice_adjust:
    name: Soundbar Voice Adjust
    options:
      - Level 1
      - Level 2
      - Level 3
    initial: Level 1
    icon: mdi:voice

I then mapped the options to the correct service calls using automations:

automation:
  - alias: Set soundbar source
    trigger:
      platform: state
      entity_id: input_select.soundbar_source
    action:
      service_template: >
        shell_command.{% if trigger.to_state.state == 'TV' %}soundbar_source_tv{% elif trigger.to_state.state == 'Aux' %}soundbar_source_aux{% else %}soundbar_source_bluetooth{% endif %}

  - alias: Set soundbar effect
    trigger:
      platform: state
      entity_id: input_select.soundbar_effect
    action:
      service_template: >
        shell_command.{% if trigger.to_state.state == 'Movie' %}soundbar_effect_movie{% elif trigger.to_state.state == 'Night' %}soundbar_effect_night{% else %}soundbar_effect_music{% endif %}

  - alias: Set soundbar voice adjust
    trigger:
      platform: state
      entity_id: input_select.soundbar_voice_adjust
    action:
      service_template: >
        shell_command.{% if trigger.to_state.state == 'Level 1' %}soundbar_voice_adjust_1{% elif trigger.to_state.state == 'Level 2' %}soundbar_voice_adjust_2{% else %}soundbar_voice_adjust_3{% endif %}

Here, I make good use of the service_template option and the trigger object to map the services. The main downside of this approach is that the automation doesn’t trigger if you re-select the currently selected option in the UI. Since we have no idea what the actual state of the soundbar is, it would be useful to be able to do this.

I did something similar for the power state and mute state, this time using input_boolean:

input_boolean:
  soundbar_power:
    name: Soundbar Power
    initial: off
    icon: mdi:flash

  soundbar_mute:
    name: Soundbar Mute
    initial: off
    icon: mdi:volume-mute

automation:
  - alias: Set soundbar power
    trigger:
      platform: state
      entity_id: input_boolean.soundbar_power
    action:
      service: shell_command.soundbar_power_toggle

  - alias: Set soundbar mute
    trigger:
      platform: state
      entity_id: input_boolean.soundbar_mute
    action:
      service: shell_command.soundbar_mute_toggle

This isn’t ideal, especially for the power state: since there is no state feedback it will often be out of sync. If I can get some feedback on the power state (see below) I intend to convert this to a template switch.

Creating Presets

I created a couple of scripts containing the preferred settings for both TV watching and music. These just apply the settings in sequence, with some delays to allow the soundbar to react (it’s a bit slow):

script:
  soundbar_preset_movie:
    alias: Soundbar Preset Movie
    sequence:
      - service: input_boolean.turn_on
        entity_id: input_boolean.soundbar_power
      - delay:
          seconds: 15
      - service: input_select.select_option
        data:
          entity_id: input_select.soundbar_source
          option: TV
      - delay:
          seconds: 1
      - service: input_select.select_option
        data:
          entity_id: input_select.soundbar_effect
          option: Movie
      - delay:
          seconds: 1
      - service: input_select.select_option
        data:
          entity_id: input_select.soundbar_voice_adjust
          option: Level 2

  soundbar_preset_music:
    alias: Soundbar Preset Music
    sequence:
      - service: input_boolean.turn_on
        entity_id: input_boolean.soundbar_power
      - delay:
          seconds: 15
      - service: input_select.select_option
        data:
          entity_id: input_select.soundbar_source
          option: Aux
      - delay:
          seconds: 1
      - service: input_select.select_option
        data:
          entity_id: input_select.soundbar_effect
          option: Music

I think I’ll comment out the power on step and following delay until I get the power feedback working. Currently if the state is inverted in HASS the script will not do the right thing. It’s annoying to have to make sure the states line up before triggering the script.

Lovelace UI

I created a card in Lovelace to allow me to control all of this. I don’t show the power control here, because I have another card which controls power for all my media devices.

My Soundbar Card in Lovelace

Here’s the YAML:

entities:
  - action_name: Execute
    icon: 'mdi:movie'
    name: Movie Preset
    service: script.turn_on
    service_data:
      entity_id: script.soundbar_preset_movie
    type: call-service
  - action_name: Execute
    icon: 'mdi:music'
    name: Music Preset
    service: script.turn_on
    service_data:
      entity_id: script.soundbar_preset_music
    type: call-service
  - entity: input_boolean.soundbar_mute
    name: Mute
  - entity: input_select.soundbar_source
    name: Source Select
  - entity: input_select.soundbar_effect
    name: Effect
  - entity: input_select.soundbar_voice_adjust
    name: Voice Adjust
show_header_toggle: false
title: Soundbar
type: entities

That gives me pretty much full control over the soundbar from within HASS. I haven’t integrated the volume and bass controls yet, mainly due to the lack of level feedback. The volume is controllable from the TV remote and we don’t adjust the bass much. As such these aren’t really necessary right now.

Conclusion

Wow, that turned out to be a long post. It actually turned out to be a much larger project than I expected, given all the issues I ran into. If I’d known how much trouble I was going to have with the RM Mini, I probably would have opted to build my own IR blaster with an ESP module. That said, now that it’s working, it’s great. The device also looks pretty good, which is a plus for something which must be located prominently.

I obviously want to get this working with the native HASS integration. Hopefully the integration will be fixed soon, but my workaround gives me a functional device in the meantime.

Further Work

I’d also like to fix the main defect in the current setup, which is the power feedback issue. This stands in the way of further automation, since currently you need to check the power state closely. I have a couple of possibilities for this. The first option is to use a power monitoring smart switch to detect when the soundbar is powered up. Whether this will be successful will depend on the power usage of the soundbar and the accuracy of the power monitoring. This also requires me to invest in further hardware.

The second option is to try and detect the soundbar via its bluetooth interface. My current plan is to do this via l2ping. I haven’t had much time to look into this yet, but if I could get it working that would be great. I already have a Raspberry Pi 3 in close proximity to the soundbar so that can do the detection. This can then be fused with the IR commands using a template switch in HASS.
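
As a rough illustration of what I have in mind, something like the following could run on the Pi. This is entirely untested; the MAC address, broker name, topic and polling interval are just placeholders, and it assumes l2ping (from bluez) and the paho-mqtt library are available:

#!/usr/bin/env python3
# Sketch only: poll the soundbar via l2ping and publish the result over MQTT.
# The MAC address, broker, topic and interval below are placeholders.
import subprocess
import time

import paho.mqtt.client as mqtt

SOUNDBAR_MAC = "00:11:22:33:44:55"   # placeholder
TOPIC = "media/soundbar/power"       # placeholder
INTERVAL = 30                        # seconds between checks

def soundbar_reachable():
    # l2ping exits 0 if the device answers, non-zero otherwise
    result = subprocess.run(["l2ping", "-c", "1", "-t", "5", SOUNDBAR_MAC],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def main():
    client = mqtt.Client()
    client.connect("broker.local")   # placeholder broker
    client.loop_start()
    while True:
        state = "ON" if soundbar_reachable() else "OFF"
        client.publish(TOPIC, state, retain=True)
        time.sleep(INTERVAL)

if __name__ == "__main__":
    main()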

If you made it this far, thanks for reading and getting through my rambling thoughts! I’ll be following up this post with further updates on fixing the problems detailed here, so please follow me if you’d like to see how I solve them.

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.

Micropython Room Sensor: Part 1 – The Initial Prototype

This post may contain affiliate links. Please see the disclaimer for more information.

This post is part of a series on this project. Here is the series so far:


Having recently moved into a new house, I find myself with a profusion of Smart Lights. These are by no means everywhere yet, but in more locations than I have previously had. The main problem is that I currently lack a way to drive these intelligently. Right now I have a few automations that drive them based on sunset/sunrise times and media state through Home Assistant. This is mainly due to having almost no sensors deployed in the new house currently – something I’m aiming to address in this project with my Micropython Room Sensor.

I’m planning to build and deploy a set of ESP8266-based sensors across the house, with at least one in each room. The base hardware for this is an ESP8266 connected to a PIR motion sensor and a DHT22 temperature and humidity sensor (so nothing groundbreaking). This should give me temperature/humidity data for the whole house as well as motion events that can be used as a starting point to drive the light automation.

Why ESP8266?

I’ve gone through several options for the sensor platform on the way to settling on the ESP8266. I’ve tried out MySensors and even gone as far as building a battery powered prototype complete with PCB. Unfortunately I stuffed up the PCB design so that the radio didn’t work and never got around to working out the problem. Thinking further on this I find the software platform limiting. For example, any kind of OTA firmware update is difficult.

The main disadvantage of the ESP8266 is the power usage. However, given that I now own this house and that these are permanent sensors this becomes less of a problem. I’m planning on powering them via a 12v supply run from the main server cabinet through the roof space to each sensor. There is more than enough power from a 12v 2A supply to provide power to all the sensors.
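
As a rough sanity check on that claim (using assumed figures rather than measurements), the numbers work out something like this:

# Back-of-the-envelope power budget - all figures are assumptions, not measurements
power_per_node_w = 5.0 * 0.1      # assume ~100mA average at 5V per sensor node
supply_w = 12.0 * 2.0             # 12V 2A supply
efficiency = 0.85                 # assume reasonably efficient regulation

max_nodes = (supply_w * efficiency) / power_per_node_w
print(round(max_nodes))           # roughly 40 nodes - far more than one per room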

Also, it’s cheap and easy to work with!

Why Micropython?

I had originally looked at either writing some custom firmware using the Arduino ESP8266 core or using ESPEasy. I’ve tried out ESPEasy on my first prototype and it does what I need, but as an Embedded Software Engineer by day I felt I needed to be more adventurous.

I love Python as my go-to language whenever the platform doesn’t dictate something else (in the case of embedded this almost always ends up being C or C++). Recently I’ve been getting more engaged with the Python ecosystem via a couple of excellent podcasts (Talk Python and Python Bytes), so this was a good time to get back into writing some Python at home. I had played around with earlier versions of Micropython on both the ESP8266 and the ESP32, so I had a good idea of its capabilities and knew it could do what I needed.

Current Status

As of right now, I have one prototype sensor running using a ‘bare’ ESP-12 module with adapter board, plus the sensors soldered on to some vero-board (photos below). Using the ‘bare’ ESP module added quite a bit of complexity since you are responsible for pulling the various lines required to program/run the chip up or down. It’s not difficult, but it’s more difficult than it should be. For that reason, after I’ve exhausted my one remaining ‘bare’ module/adapter I will be switching to Wemos D1 Mini modules.

In terms of the hardware, I encountered a couple of issues building the first prototype. The ESP8266 adapter board I used has a regulator to take 5V down to the 3V3 required for the ESP. However, I wanted to use 12V to reduce the voltage drop through the wires in the ceiling space. This meant I needed another regulator to go from 12V down to 5V.

A Mistake in Regulation

At first I used a spare linear regulator I had in my parts box for this. As it turns out this was a mistake. Whilst it powered the ESP and sensors just fine, the whole thing got very hot (even with an extra heat sink). This also affected the reading from the temperature sensor. The solution was to order some cheap buck converter modules online. These did the trick without getting the slightest bit warm and are much more efficient.

Flash Issues

The second issue is a weird one. Whilst re-programming the ESP from ESPEasy to Micropython, I noticed that after running the erase flash command the current drawn by the board would get very high (~400mA at 12V!). This would result in the ESP getting very hot. Seemingly this didn’t damage it, because I did it several times and it always continued to work. In addition to this, the Micropython firmware wouldn’t boot, instead just spewing incomprehensible (at any of the common baud rates) garbage to the UART. As it turns out, the chip was not being erased correctly; running an erase_region command across the full extent of the flash did the trick:

esptool.py --port /dev/ttyUSB0 erase_region 0 4194304

If anyone knows what the problem with these modules is, please get in touch in the comments. I couldn’t find any reference to this problem online and the (probably cloned) Wemos modules I have don’t have the same problem.

Anyway, here are some photos of the finished prototype:

Fully assembled prototype with PIR sensor connected.

Front of prototype showing ESP8266 and adapter board

Back of prototype, the wires in the bottom right corner are where it was re-worked after changing the power supply

The Micropython Room Sensor Software

The software is fairly basic, but has a few nice features. I use the boot.py script to load the configuration from the config.json file on the internal filesystem. I then connect to the wifi following the example given in the Micropython documentation.
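
A minimal boot.py along these lines might look something like the following (the config key names here are illustrative rather than the exact ones I use):

# Sketch only: load config.json and bring up wifi. Key names are illustrative.
import json
import network

with open('config.json') as f:
    config = json.load(f)

def do_connect():
    sta_if = network.WLAN(network.STA_IF)
    if not sta_if.isconnected():
        sta_if.active(True)
        sta_if.connect(config['wifi_ssid'], config['wifi_password'])
        while not sta_if.isconnected():
            pass

do_connect()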

After this, main.py is executed, which connects to the broker and initialises the sensors. This is where things get a bit more interesting. I’ve written a minimal implementation of the Home Assistant MQTT Discovery spec, which so far supports binary sensors and standard sensors. This allows the sensors to come up in HASS without any configuration on the HASS server (except enabling MQTT discovery, which is a one-time operation).
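
At its core, MQTT discovery just means publishing a retained JSON config message to a well-known topic for each sensor. A stripped down sketch of the idea looks something like this (the helper name and exact payload fields are illustrative; see the HASS MQTT discovery docs for the full spec):

# Sketch only: announce a sensor to HASS via MQTT discovery using umqtt.simple.
import json
from umqtt.simple import MQTTClient

def announce_sensor(client, node_id, name, device_class, unit):
    # Publish a retained discovery message so HASS creates the entity automatically
    topic = "homeassistant/sensor/{}_{}/config".format(node_id, name)
    payload = {
        "name": "{} {}".format(node_id, name),
        "state_topic": "{}/{}".format(node_id, name),
        "device_class": device_class,
        "unit_of_measurement": unit,
    }
    client.publish(topic.encode(), json.dumps(payload).encode(), retain=True)

client = MQTTClient("room_sensor_1", "broker.local")  # placeholder client id and broker
client.connect()
announce_sensor(client, "room_sensor_1", "temperature", "temperature", "°C")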

Reading from the sensors themselves is fairly standard. I use a pin change interrupt for the motion sensor and the standard dht module for the temperature/humidity. The use of uasyncio allows the DHT sensor to be polled at regular intervals. The code for the whole program is available via GitLab (also mirrored on GitHub). You can see a screenshot of the sensors in HASS below:

Home Assistant state card showing the automatically added entities
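
For a flavour of the sensor reading side, here is a stripped down sketch of the approach (pin numbers, the callback body and the polling interval are illustrative, not necessarily what the real code uses):

# Sketch only: PIR via pin interrupt, DHT22 polled with uasyncio.
import dht
import machine
import uasyncio as asyncio

pir = machine.Pin(5, machine.Pin.IN)    # assumed PIR input pin
sensor = dht.DHT22(machine.Pin(4))      # assumed DHT22 data pin

def motion_handler(pin):
    # Real code would publish this over MQTT rather than printing
    print("motion:", "on" if pin.value() else "off")

pir.irq(trigger=machine.Pin.IRQ_RISING | machine.Pin.IRQ_FALLING,
        handler=motion_handler)

async def poll_dht(interval=30):
    while True:
        sensor.measure()
        print("temperature:", sensor.temperature())
        print("humidity:", sensor.humidity())
        await asyncio.sleep(interval)

loop = asyncio.get_event_loop()
loop.create_task(poll_dht())
loop.run_forever()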

How Did Micropython Perform?

In comparison to other embedded platforms/languages, writing a Micropython application is awesome. The power of Python makes you extremely productive. I wrote the whole software in a couple of hours, including the MQTT discovery implementation, using a Wemos board with no sensors connected as my development platform.

When I uploaded this to the prototype board it hit a couple of exceptions in the sensor code, which previously hadn’t been exercised. The key thing here is that it hit exceptions! The resulting stack traces of course tell you where and why the exception occurred. What could have been a long and arduous debugging session turned into a few minutes of tweaking. The availability of uasyncio is also a really nice feature and allows direct knowledge transfer from CPython.

Future Work

There is still some work to do in order to finish the project, mostly on the hardware side. Obviously I need to make enough sensors to cover the house, which is just an ongoing assembly task. There is also the in-ceiling wiring, assembly into cases and mounting. I’ll be sure to post an update on this as I make progress. There are also a couple of software improvements I would like to make.

Firstly, I would like to add a meta sensor which makes use of the LWT feature of MQTT to report the status of the unit, while still utilising HASS MQTT discovery. I’m also planning to spin out the current MQTT discovery implementation into its own Python module, ideally published to PyPI. The only other software task is to write a script to automate the deployment of the Micropython firmware, application and device-specific configuration. This should be extremely useful when programming multiple units.

That’s all for now. I’ll be posting further parts in this series as the project progresses.

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.

Monthly Update: January 2017

So, having (re-)discovered that writing blog posts takes an inordinate amount of time, I’ve not been updating this blog as often as I intended. I found that getting out two or three longish technical posts per week would eat up most of my free time. As such I’ve decided to focus on completing projects and will attempt to write them up as part of the completion process.

Another non-New Year’s resolution I’ve made is to just release more of the stuff I do to the world. This is more than just an effort at dumping stuff over the fence: I want to document things so that they are useful to others. Hopefully this will mean more projects showing up on my GitLab account. It will also include publishing any contributions I make to other projects.

As part of this I’m undertaking to write a monthly update here, detailing what I’ve managed to accomplish during the month. I’m aiming to publish these in the last few days of each month and this is the first. So without further ado…

The two projects I’ve mainly focused on this month have been:

  1. Contributing back the Kankun SP3 wifi switch component I made for Home Assistant. I’ve been running this component for ages on my own instance, but have never contributed it back. This took me quite some time, since the Home Assistant developers have a heavy focus on code quality and documentation (a good thing). All in all the experience I’ve had contributing that one small component was a good one and I’ll definitely be contributing more when I have time. I’m happy to say my changes were accepted and are in the 0.36 release. You can find the documentation for the Kankun SP3 component here.
  2. Another Home Assistant related project is the Home Assistant Mycroft Skill I’ve been working on. I’ve now released this as version 1.0.0 (in so far as pushing a git tag constitutes a release). The skill is now capable of turning on and off various entities within HASS and works quite well. I decided to implement fuzzy string matching for entity friendly names, since when I was testing turning my kettle on and off, Mycroft would always think I said ‘cattle’. Using the Python fuzzywuzzy module this was easy: basically I look through all the available entities and select the one with the largest score as returned by fuzzywuzzy (which is based on Levenshtein distance), as in the sketch below. I’m pretty happy with the result, which you can find here.
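
For illustration, the matching boils down to something like this (the entity names here are made up):

# Sketch only: fuzzy matching a spoken name against entity friendly names.
from fuzzywuzzy import process

friendly_names = ["Kettle", "Kitchen Light", "Living Room Lamp"]

spoken = "cattle"   # what Mycroft heard
best_match, score = process.extractOne(spoken, friendly_names)
print(best_match, score)   # "Kettle" still wins despite the mis-hearing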

That’s all for now, see you next month (or before if I feel like writing in the meantime).

Monitor Dynamic DNS Status with Nagios

For anyone running services on their home network a Dynamic DNS setup is a must have. But what happens when your Dynamic DNS client fails to update one day, when you’re going on a trip and you end up locked out of your network? If you’re running Nagios as your monitoring solution then you can easily detect this situation. This post will show you how and provide a Nagios plugin for doing just this.

The basic idea is to compare the DNS result for your local network FQDN with your external IP address. To retrieve our external address we use a third-party service, which, being outside our network, can see our external IP. In my case I use ifconfig.co, which conveniently can return its result as JSON for easy consumption by any number of tools. DNS lookup of our FQDN is provided by the Python socket.gethostbyname function. This gives us two addresses which, if everything is working, will be identical. If our Dynamic DNS client is having issues, the addresses will be different.

Anyway, on to the code (we’re going to need the Python requests module, so install it with pip install requests):

#!/usr/bin/env python3

import requests
import socket
import sys
import argparse

STATUS_OK = 0
STATUS_WARNING = 1
STATUS_CRITICAL = 2
STATUS_UNKNOWN = 3

def format_output(status, msg):
    if status == STATUS_OK:
        print("OK - {}".format(msg))
    elif status == STATUS_WARNING:
        print("WARNING - {}".format(msg))
    elif status == STATUS_CRITICAL:
        print("CRITICAL - {}".format(msg))
    elif status == STATUS_UNKNOWN:
        print("UNKNOWN - {}".format(msg))
    sys.exit(status)

def main():
    parser = argparse.ArgumentParser(description="Nagios plugin to check that external IP matches DNS IP")
    parser.add_argument('-H', '--hostname', metavar='HOST', required=True,
        help="DNS hostname to look up")
    args = parser.parse_args()

    req = requests.get("https://ifconfig.co/json")
    if req.status_code != 200:
        format_output(STATUS_UNKNOWN, "Unable to determine external IP, status code = {}".format(req.status_code))
        
    ext_ip = req.json()['ip']
    dns_ip = socket.gethostbyname(args.hostname)

    if ext_ip != dns_ip:
        format_output(STATUS_CRITICAL, "DNS IP ({}) does not match external IP ({})".format(dns_ip, ext_ip))
    else:
        format_output(STATUS_OK, "DNS IP correct. External IP is {}".format(ext_ip))

if __name__ == "__main__":
    main()

This is a fairly basic Nagios plugin that implements the approach described above. The only slightly tricky thing is output formatting and return code conventions, which must be exactly correct for Nagios to interpret the results of your plugin. This convention is documented in the Nagios plugin API documentation (I love this approach as an example of Unixy design).

To use this with Nagios, put the plugin in the Nagios plugins directory (/usr/local/nagios/libexec/ in my case) and make it executable (chmod +x). Then you need to update your config to add a new command in your objects/commands.cfg file:

# 'check_dynamic_ip' command definition
define command {
       command_name    check_dynamic_ip
       command_line    $USER1$/check_dynamic_ip.py -H "$ARG1$"
}

You will also need a corresponding service check in your server.cfg file:

define service{
        use                             generic-service
        host_name                       hostname
        service_description             Dynamic IP Status
        check_command                   check_dynamic_ip!my.domain.com
}

Then simply restart Nagios (sudo systemctl restart nagios.service) and you’re done.

Now you can enjoy knowing when you’re going to be locked out of your network 😉