
Smart Outdoor Solar Powered Lighting with the ESP32

This post may contain affiliate links. Please see the disclaimer for more information.

The primary form of heating in our home is a wood burning stove. You’re probably thinking that this isn’t very ‘smart’ and you’d be right. It’s the least smart part of my smarthome, requiring daily lighting and monitoring during the winter months. What it is, though, is eco-friendly and cheap to run. Also, nothing quite beats a wood fire for warmth and hygge. The point of all this is that we have a wood shed (two actually) at the back of our house. During dark winter nights, trips to these have been fraught with danger and mystery due to a lack of lighting. That is, until now: this winter my completely over-engineered, ESP32 driven, solar powered, smart LED lights will light the way!

Of course, I could just use a head torch for my after dark trips to the wood shed. Or buy any number of (doubtless cheap and nasty) solar powered lights from the local DIY store. However, that didn’t seem very fun and wouldn’t integrate with Home Assistant and the rest of my smarthome.

Components

For these lights I decided to use a WS2812B LED strip driven by an ESP32. The solar power system would utilise an off the shelf solar controller and lead acid battery, along with a 5W solar panel which I acquired cheaply from work.

Full Parts List

solar powered esp32 panel specs
The specs of the solar panel

G-code for the Ender 3 is also available for the 3D printed parts. If you have another printer, you’ll need to slice the STL files yourself. I had a few issues with some of the small brackets adhering to the bed, but I was able to get a high enough yield thanks to the OctoPrint-Cancelobject plugin.

led brackets
The LED brackets
battery bracket
The battery bracket

The charge controller is rather over-spec’d for the LEDs I’m actually running (which pull less than 2A at the 12V battery voltage, at full brightness). This is because I originally thought I would use the full 5m of LED strip that I ordered. However, when it arrived and I saw how bright the LEDs are, I downgraded to just 2m. I had also ordered a beefier power supply for this purpose. The one I ended up using was left over from my failed tablet salvaging efforts. This means I have 3m of LED strip and a decent power supply left over for a future project.
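
As a quick sanity check on that current figure, using the commonly quoted worst case of around 60mA per WS2812B pixel at full white (an estimate rather than a measurement from my strip):

60 pixels (2m at 30 pixels/m) × 60mA ≈ 3.6A at 5V ≈ 18W
18W ÷ 12V = 1.5A on the 12V side, plus DC-DC conversion losses

That lands comfortably under the 2A figure above, so the numbers hang together.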

Putting it Together

I spent the better part of a day putting together the components and soldering all the cabling. I then spent at least another day mounting things on the shed and fixing issues. All the joins between cables are double covered with heatshrink tubing. The first layer insulates the cable from any neighbouring cables. The second layer is for waterproofing and has roof sealant injected into it. The same is done on the joins between the cable and the LED strip. Hopefully this will stand up to the intense New Zealand rain. Seriously, you have not seen rain until you’ve seen NZ rain! Luckily most of the components are mounted out of the worst of the weather. The cables enter the box through the IP68 cable gland which should be weather tight.

solar powered esp32 mounting
Mounting the electronics in the case
mounting battery
The battery is held securely with that bracket
led mounting bracket
Mounting the first LED bracket
led strip mounted
The LED strip mounted to the shed

When I tested the LEDs after soldering the lead-out wires to them, I found that the colours were off and that transitions and effects were flickery. This could have been due to either a power issue or an issue on the data line. Measuring the 5V line showed minimal voltage drop there. Since the LEDs were fine before adding the lead-out, I went with the data issue.

solar powered esp32 sacrificial pixel
The sacrificial pixel assembly before mounting in the case

The issue turned out to be due to voltage drop on the 3.3V data signal from the ESP32 along the lead-out cable. The data line is quite sensitive to voltage drop here because the LEDs are supposed to receive a 5V data signal. They work with 3.3V, but not much lower. In order to solve this I added a single extra pixel at the microcontroller end to boost the signal voltage from 3.3V to 5V. This solves the issue because each pixel in the chain regenerates the data signal at its own supply voltage.

ESPHome Code

The ESPHome code to drive the lights is fairly simple. First we start with the standard setup:

---
esphome:
  name: shed_lights
  platform: ESP32
  board: esp32doit-devkit-v1

wifi:
  ssid: !secret wifi_ssid2
  password: !secret wifi_passwd2
  use_address: !secret shed_lights_ip

mqtt:
  broker: !secret mqtt_broker
  username: !secret mqtt_user
  password: !secret mqtt_passwd

# Enable logging
logger:

ota:
  password: !secret ota_passwd

Here we define the board type, set up the wifi and MQTT connections, enable logging and set up OTA updates. If you’re wondering why I’m using MQTT rather than the ESPHome API, it’s for no other reason than I like MQTT!

power_supply:
  - id: 'led_power'
    pin:
      number: GPIO25
      inverted: true

Next I set up a power supply component. This is used, along with the single channel relay, to automatically power the LED supply up and down. This saves a bit of power and also ensures that no power is flowing in the cables outside the box for most of the time, which may help in the event of a leaky connection. To make this work, the ESP32 doesn’t run from the same 5V supply as the LEDs; instead it’s powered from one of the USB ports on the solar controller.

The LED strip configuration is then pretty standard:

light:
  - platform: fastled_clockless
    chipset: WS2812B
    pin: GPIO23
    num_leds: 61
    rgb_order: GRB
    name: "Shed Lights"
    effects:
      - addressable_rainbow:
      - addressable_color_wipe:
      - addressable_scan:
      - addressable_twinkle:
      - addressable_random_twinkle:
      - addressable_fireworks:
      - addressable_flicker:
    power_supply: 'led_power'

Note that there are 61 pixels here: that’s 2m of strip at 30 pixels per metre, plus the one sacrificial voltage-boosting pixel. The addition of the effects is a bit of a gimmick, since I’m mostly interested in white light for this application. I only bought the RGB LEDs because the saving on a white-only strip wasn’t big enough to justify missing out on the colours.

solar powered esp32 finished
The finished electronics mounted to the shed
solar powered esp32 panel
The solar panel is mounted to the fence behind the shed
solar powered esp32 relative positions
The electronics box and panel on the end of the shed

Voltage Sensing and Health Monitoring

After putting all this together and mounting it on the shed I decided that I’d like to have some form of monitoring for the voltages from the battery and solar panel. The solar controller obviously monitors these but there is no way to get this data out.

In the end I soldered up a couple of voltage divider circuits and added these to the setup in the box. I also added a DHT22 sensor for temperature and humidity sensing inside the box.

voltage monitoring schematic
Schematic of the voltage dividers used for the power monitoring circuits
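
The multiply factors in the configuration that follows fall straight out of the standard voltage divider relation, where R1 is the top resistor and R2 the bottom one that the ADC measures across:

Vadc = Vin × R2 / (R1 + R2), so multiply = Vin / Vadc = (R1 + R2) / R2

As an illustration (not necessarily the exact values on my board), R1 = 33kΩ and R2 = 10kΩ gives (33 + 10) / 10 = 4.3, in the right ballpark for the 4.24 used on the battery channel.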

The ESPHome configuration for these follows. I also added a binary status sensor and a WiFi signal sensor to allow me to monitor the system remotely.

binary_sensor:
  - platform: status
    name: "Shed Lights Status"

sensor:
  - platform: wifi_signal
    name: "Shed WiFi Signal"
    update_interval: 180s
  - platform: dht
    pin: GPIO15
    model: AM2302
    temperature:
      name: "Shed Battery Box Temperature"
    humidity:
      name: "Shed Battery Box Humidity"
    update_interval: 180s
  - platform: adc
    pin: GPIO39
    name: "Shed Battery Voltage"
    icon: "mdi:car-battery"
    attenuation: "11db"
    filters:
      - multiply: 4.24
      - sliding_window_moving_average:
          window_size: 12
          send_every: 12
    update_interval: 15s
  - platform: adc
    pin: GPIO36
    name: "Shed Solar Panel Voltage"
    icon: "mdi:solar-panel"
    attenuation: "11db"
    filters:
      - multiply: 5.73
      - filter_out: 0.00
      - sliding_window_moving_average:
          window_size: 12
          send_every: 12
    update_interval: 15s

The voltage readings I am getting from the two voltage sensors are a little weird. The battery voltage is higher than I would expect and the solar panel voltage is lower. I double checked the multiplication factors against the raw ADC readings and the resistors used, and the readings make sense. Initially I thought this could be due to the temperature inside the box (~50°C when it’s in full sun!), but now I’m not so sure, since the readings are also off at lower temperatures. It could be the behaviour of the charge controller; the ESP32’s ADC is also known to be fairly non-linear, especially towards the extremes of its range, which may account for part of the offset. I’ll continue to monitor it over different charge states.

shed lights hass card
The card in Home Assistant showing the system status and lighting controls

The Finished Product

Now the moment you’ve all been waiting for – gratuitous photos of the LEDs in fancy colours!

shed lights white
In white, the intended operation mode
shed lights red
Red is insanely bright!
shed lights green
Green
shed lights blue
Blue
shed lights purple
Purple/pink, the favourite of some people in the household
shed lights rainbow
Finally the rainbow effect, the other effects don’t come out so well in a still photo

Conclusion

This is actually my first time using WS2812B LED strip and I have to say I’m really impressed. You can be sure there will be other LED lighting projects coming in future now that I’ve dipped my toes in!

I’m really pleased with the final product. The LEDs look awesome and provide ample light for their task. The ESPHome base has so far been rock solid in terms of stability, which is what I’ve come to expect from using it in other projects.

The initial intention was to make these lights motion activated. However, I couldn’t find a motion sensor suitable for outdoor use. I’d also have to locate the sensor at the other end of the sheds from the battery box, which would mean a whole load of wiring. As such, I’ve decided to build my own wireless outdoor motion sensor and connect it to my MySensors network. I’ll then trigger the LEDs via an automation in Home Assistant. I’ll post an update on this when I have it running.

This has been a really fun and interesting project. As always, please let me know what you think in the feedback channels and feel free to share your own LED lighting projects.

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Continuous Integration for Home Assistant, ESPHome and AppDaemon

This post may contain affiliate links. Please see the disclaimer for more information.

Recently I set up continuous integration and deployment from my Home Assistant configuration. This setup has been nothing short of awesome! It’s liberated me from worrying about editing my configuration – all I do is git push and relax. Either HASS will notify me when it restarts or I’ll get an email from Gitlab telling me the pipeline failed.

I wanted to take this configuration further and expand it to other parts of my home automation infrastructure. In this post I’ll cover expanding it to perform deployments of my HA stack with Docker, building and deploying to ESPHome devices, and unit testing and deploying my AppDaemon apps.

Let’s get on with it!

Automating Docker Deployment

I’d originally held off doing this because I wasn’t looking forward to building custom Docker images in Gitlab CI. However, I managed to complete the original pipeline without having to add any extra dependencies to the HASS containers (such as git which I thought may be required). This makes the job of deploying my HA stack much easier, especially as I already had it mostly scripted. The first step was to add my update.sh script to my repo and tweak it to suit:

#!/bin/bash
set -e

cd /mnt/docker-data/home-assistant || exit
docker-compose -p ha pull
docker-compose -p ha down
docker-compose -p ha up -d --remove-orphans
docker system prune -fa
docker volume prune -f
exit

This is a pretty simple modification to my previous script. The main additions are that I use the -p argument to set the project name used by docker-compose. By default this is taken from the directory name, but I wanted it to match the name of my previous project even though the directory has changed from ha to home-assistant. The other main modification is that I’ve added the --remove-orphans argument to clean up any lingering containers. This is useful if I remove a container from the docker-compose.yml file. In addition I’ve removed the apt commands and cleaned up the script a bit so that it passes my shellcheck job.

The next step was simply to add the docker-compose.yml file to the repo. Then I continued by editing the CI configuration.

Updated Home Assistant CI Jobs

I first split up my previous deployment job into two jobs. The first of these is the main deployment job which pulls the new configuration. The second restarts HASS. The restart job goes in a new pipeline stage and will only be run when the docker-compose.yml or update.sh files haven’t changed:

deploy:
  stage: deploy
  image:
    name: alpine:latest
    entrypoint: [""]
  environment:
    name: home-assistant
  before_script:
    - apk --no-cache add openssh-client
    - echo "$DEPLOYMENT_SSH_KEY" > id_rsa
    - chmod 600 id_rsa
  script:
    - ssh -i id_rsa -o "StrictHostKeyChecking=no" $DEPLOYMENT_SSH_LOGIN "cd /mnt/docker-data/home-assistant && git fetch && git checkout $CI_COMMIT_SHA && git submodule sync --recursive && git submodule update --init --recursive"
  after_script:
    - rm id_rsa
  only:
    refs:
      - master
  tags:
    - hass

restart-hass:
  stage: postflight
  image:
    name: alpine:latest
    entrypoint: [""]
  environment:
    name: home-assistant
  before_script:
    - apk --no-cache add curl
  script:
    - "curl -X POST -H \"Authorization: Bearer $DEPLOYMENT_HASS_TOKEN\" -H \"Content-Type: application/json\" $DEPLOYMENT_HASS_URL/api/services/homeassistant/restart"
  only:
    refs:
      - master
  except:
    changes:
      - docker-compose.yml
      - update.sh
  tags:
    - hass

I then added another job (again in another pipeline stage) which performs our Docker deployment. This will be run only when either the docker-compose.yml or update.sh files changes:

docker-deploy:
  stage: docker-deploy
  image:
    name: alpine:latest
    entrypoint: [""]
  environment:
    name: home-assistant
  before_script:
    - apk --no-cache add openssh-client
    - echo "$DEPLOYMENT_SSH_KEY" > id_rsa
    - chmod 600 id_rsa
  script:
    - ssh -i id_rsa -o "StrictHostKeyChecking=no" $DEPLOYMENT_SSH_LOGIN "cd /mnt/docker-data/home-assistant && ./update.sh"
  after_script:
    - rm id_rsa
  only:
    refs:
      - master
    changes:
      - docker-compose.yml
      - update.sh
  tags:
    - hass
continuous integration home assistant
A full pipeline run with a deployment of the Docker containers running in the final stage.

With that in place I can now redeploy my HA stack by modifying either of those files, committing to git and pushing. In order to facilitate HASS updates with this workflow, I changed the tag of the HASS Docker image to the explicit version number. That way I can simply update the version number and redeploy for each new release.
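
In practice that just means the image line in docker-compose.yml goes from the implicit latest tag to an explicit one, along these lines (version number purely for illustration):

homeassistant:
  image: homeassistant/home-assistant:0.96.5

Bumping that number, committing and pushing then kicks off the docker-deploy job above.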

Continuous Integration for ESPHome

Inspired by the previous configs I have seen for checking ESPHome files, I wanted to implement the same checks. However, I wanted to go further and have a full continuous deployment setup which would build the relevant firmware when its configuration was changed and send an OTA update to the corresponding device. As it turned out this was relatively easy.

I started out by importing my ESPHome configs into Git, which I hadn’t previously done. You can find the resulting repository on Gitlab. For the CI configuration I first copied over the markdownlint and yamllint jobs from my Home Assistant CI configuration.

I then borrowed the ESPHome config check jobs from Frenck’s configuration. These check against both the current release of ESPHome and the next beta release. The beta release job is allowed to fail and is designed only to provide a heads up for potential future issues.

Then I came to implement the build and deployment job. Traditionally these would be performed in separate steps, but since ESPHome can do this in a single step with its run subcommand, I decided to do it the easy way. This also removes the requirement to manage build artifacts between steps. I created the following template job to manage this:

# Generic deployment template
.esphome-deploy: &esphome-deploy
  stage: deploy
  variables:
    PYTHONPATH: "/usr/src/app:$PYTHONPATH"
  image:
    name: esphome/esphome:latest
    entrypoint: [""]
  before_script:
    - apt update && apt install -y git-crypt openssl
    - |
      openssl enc -aes-256-cbc -pbkdf2 -d -in git-crypt.key.enc -out \
          git-crypt.key -k $OPENSSL_PASSPHRASE
    - git-crypt unlock git-crypt.key
    - esphome dummy.yaml version
  after_script:
    - rm -f git-crypt.key
  retry: 2
  tags:
    - hass

Most of the complexity here is in unlocking the git-crypt repository so that we can read the encrypted secrets file. I opted to store the git-crypt key in the repository, encrypted with openssl. The passphrase used for openssl is in turn stored in a Gitlab variable, in this case $OPENSSL_PASSPHRASE. Once the decryption of the key is complete, we can unlock the repo and get on with things. We remove the key after we are done in the after_script step.

Per-Device Jobs

Using the template configuration, I then created a job for each device I want to deploy to. These jobs are executed only when the corresponding YAML file (or secrets.yaml) is changed. This ensures that I only update devices that I need to on each run. The general form of these jobs is:

my_device:
  <<: *esphome-deploy
  script:
    - esphome my_device.yaml run --no-logs
  only:
    refs:
      - master
    changes:
      - my_device.yaml
      - secrets.yaml

Of course you need to replace my_device with the name of your device file.
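
As a concrete example, the job for the shed lights from my solar lighting project would look something like this (assuming their config lives in shed_lights.yaml):

shed_lights:
  <<: *esphome-deploy
  script:
    - esphome shed_lights.yaml run --no-logs
  only:
    refs:
      - master
    changes:
      - shed_lights.yaml
      - secrets.yaml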

continuous integration home assistant
A run of the ESPHome pipeline with deployments to two devices

With these jobs in place I have a full end-to-end pipeline for ESPHome, which lints and checks my configuration before deploying it only to the devices which need updating. Nice! You can check out the full pipeline configuration on Gitlab. I no longer have any need to run the ESPHome dashboard, so I’ve removed it from my server.

Continuous Integration for AppDaemon

I mentioned previously that I wanted to split out my AppDaemon apps and configuration into a separate repo from my HASS config. I did this as a prerequisite step of this setup and you can again find the new repo on Gitlab.

The inspiration for this configuration came mostly from @bachya on the HASS forum, whose post in reply to my earlier setup provided most of the details. Thanks for sharing!

I started out by copying across the now ubiquitous markdownlint and yamllint jobs. I then added jobs for pylint, mypy, flake8 and black:

pylint:
  <<: *preflight
  image:
    name: python:3
    entrypoint: [""]
  before_script:
    - pip install pylint
    - pylint --version
    - mv fake_secrets.yaml secrets.yaml
  script:
    - pylint --rcfile pylintrc apps/

mypy:
  <<: *preflight
  image:
    name: python:3
    entrypoint: [""]
  before_script:
    - pip install mypy
    - mypy --version
    - mv fake_secrets.yaml secrets.yaml
  script:
    - mypy --ignore-missing-imports apps/

flake8:
  <<: *preflight
  image:
    name: python:3
    entrypoint: [""]
  before_script:
    - pip install flake8
    - flake8 --version
    - mv fake_secrets.yaml secrets.yaml
  script:
    - flake8 --exclude=apps/occusim --max-line-length=88 apps/

black:
  <<: *preflight
  image:
    name: python:3
    entrypoint: [""]
  before_script:
    - pip install black
    - black --version
    - mv fake_secrets.yaml secrets.yaml
  script:
    - black --exclude=apps/occusim --check --fast apps/

Although this ends up being very verbose, I decided to implement these all as separate jobs so that I get individual pass/fail states for each. I’m also pretty sure the mypy job doesn’t do anything right now, because I’m not using any type hints in my Python code. However, the job is there for when I start adding those.

Unit Testing AppDaemon

Another thing that @bachya introduced me to was Appdaemontestframework. This provides a pytest based framework for unit testing your AppDaemon apps. Although I’m still working on the unit tests for my so far pretty minimal AD setup I did manage to get the framework up and running, which was a little tricky. I had some issues with setting up the initial configuration for the app, but I managed to work it out eventually.

The unit testing CI job is pretty simple:

# Unit test jobs
unit-tests:
  stage: test
  image:
    name: acockburn/appdaemon:latest
    entrypoint: [""]
  before_script:
    - pip install -r apps/test/test_requirements.txt
    - py.test --version
    - mv fake_secrets.yaml secrets.yaml
  script:
    - py.test
  tags:
    - hass

All we do here is install the requirements that I need for the tests and then call py.test. Easy!

The deployment job for AppDaemon was also trivial, since it is pretty much a copy of the HASS one. Since AD detects changes to your apps automatically, there’s no need to restart. For more details you can check out the full CI pipeline on Gitlab.
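
For reference, a minimal sketch of that job might look like the following, assuming the same $DEPLOYMENT_SSH_KEY and $DEPLOYMENT_SSH_LOGIN variables as the HASS jobs and a checkout at /mnt/docker-data/appdaemon (an illustrative path, not necessarily the real one):

deploy:
  stage: deploy
  image:
    name: alpine:latest
    entrypoint: [""]
  before_script:
    - apk --no-cache add openssh-client
    - echo "$DEPLOYMENT_SSH_KEY" > id_rsa
    - chmod 600 id_rsa
  script:
    - ssh -i id_rsa -o "StrictHostKeyChecking=no" $DEPLOYMENT_SSH_LOGIN "cd /mnt/docker-data/appdaemon && git fetch && git checkout $CI_COMMIT_SHA"
  after_script:
    - rm id_rsa
  only:
    refs:
      - master
  tags:
    - hass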

continuous integration home assistant
A run of the AppDaemon pipeline – lots of preflight checks here!

Conclusion

Phew, that was a lot of work, but it was all the logical follow on from work I’d done before or that others had done. I now have a full set of CI pipelines for the three main components of my home automation setup. I’m really happy with each of them, but especially the ESPHome pipeline. As an embedded engineer in my day job I find it really cool that I can update a YAML file locally, commit/push it and then my CI takes over and ends up flashing a physical device! That this is even possible is a testament to all the pieces of software used.

Next Steps

I’m keen to keep going with CI as a means of automating my operations. I think my next target will be sprucing up my Ansible configurations and running them automatically from CI. Stay tuned for that in the hopefully near future!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


My Road To Docker – Part 2: My Home Automation Stack

This post may contain affiliate links. Please see the disclaimer for more information.

This post is part of a series on this project. Here is the series so far:


In my first post of this series, I outlined my plan to convert my infrastructure over to a layered setup. This would consist of virtual machines (in various VLANs), with most of the services running in Docker. This post details the second stage of my road to Docker, although really it was the first stage, since I’m writing these out of order! I actually converted my home automation systems over to Docker before tackling the web stack.

The motivation behind upgrading the home automation system first was to do it at the same time as I did a large update to Home Assistant, since I’d been holding back on updating. The main reason for this was the switch to Lovelace as the default UI, which I was dreading. As it turned out, I waited long enough for the awesome HASS developers to make all my problems go away (or at least the Lovelace related ones).

System Summary

I’ve written about my home automation setup before, but here is a brief recap of what I’m running (only the server side stuff):

- Home Assistant
- Node-RED
- Mosquitto (MQTT broker)
- Zigbee2MQTT

I had also been running InfluxDB and Grafana. However, something broke in my setup and I hadn’t got around to fixing it. I therefore decided to cut my losses with that and not reinstall them (for now).

Finding Docker Images

Luckily for me, the four main components of my system all have official/recommended Docker images available. This was useful as I’m always pretty reticent to use some questionably maintained image from the Docker Hub, mainly due to the lack of security updates. I also wanted to avoid building custom Docker images for now, until I work out a decent update strategy.

In addition to the four services above I wanted to run the ESPHome dashboard in order to manage my devices better. I had previously just been using the command line tool to build and upload to them. This also has an official Docker image.

Road to Docker Part 2
Looks like I have a few devices to update!

I also ended up running a MaryTTS container to replace the PicoTTS setup I had been using for my voice announcements, due to the lack of PicoTTS inside the HASS Docker image. It was recommended that I use the sliversniper/marytts image. Looking at this image, it hasn’t been updated in three years, which rather reinforces my point about random images from Docker Hub. Luckily, this isn’t an externally facing application, so it isn’t too critical from a security standpoint. However, I think I’ll look into updating it at some point.

Stacking Containers, again…

I set up a new clean VM inside my home automation VLAN. I was a little hesitant about doing this, since it means everything in that VLAN (most of which is blocked from the internet) can see the full HA server. However, my main worry with not doing so was the mDNS used by ESPHome. If I can get Avahi/mDNS working across VLANs at some point, I will move it. I still have a big network re-organisation to do, so hopefully it will get done then.

The full docker-compose.yml file for my new stack is given below:

version: '3'

services:
  mosquitto:
    image: eclipse-mosquitto
    restart: always
    ports:
      - 1883:1883
      - 8883:8883
      - 9001:9001
    volumes:
      - /mnt/docker-data/mosquitto/config:/mosquitto/config
      - /mnt/docker-data/mosquitto/data:/mosquitto/data
      - /mnt/docker-data/mosquitto/logs:/mosquitto/logs

  zigbee2mqtt:
    image: koenkk/zigbee2mqtt
    volumes:
      - /mnt/docker-data/zigbee2mqtt:/app/data
    devices:
      - /dev/ttyACM0:/dev/ttyACM0
    depends_on:
      - mosquitto
    restart: always
    network_mode: host

  homeassistant:
    image: homeassistant/home-assistant
    volumes:
      - /mnt/docker-data/home-assistant:/config
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - mosquitto
    restart: always
    network_mode: host

  nodered:
    image: nodered/node-red-docker:v8
    ports:
      - 1880:1880
    volumes:
      - /mnt/docker-data/node-red:/data
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - mosquitto
      - homeassistant
    restart: always

  esphome:
    image: esphome/esphome
    volumes:
      - /mnt/docker-data/esphome:/config
    restart: always
    network_mode: host

  marytts:
    image: sliversniper/marytts
    ports:
      - 59125:59125
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /mnt/docker-data/marytts/lib/voice-dfki-poppy-hsmm-5.2.jar:/marytts/marytts-5.1.2/lib/voice-dfki-poppy-hsmm-5.2.jar:ro
    restart: always

There’s nothing particularly earth shattering here. The main point of interest is that I mount the voice file for my preferred MaryTTS voice inside the container. Actually finding the voices is a little interesting. The official way to download them is a GUI tool that won’t run inside the container. I eventually found the XML file which lists all the available voices and extracted the URL of the one I wanted (the online demo helps to decide).

The only other parts worth noting are that I mount all my volumes under /mnt/docker-data, which is an NFS share onto the ZFS array of the virtual machine host. This then gets rolled into my normal backups. I also didn’t bother with a reverse proxy for any of this, since I already have one for HASS sitting in my DMZ (yet to be Dockerised). The other services just get accessed via the machine hostname and port since they are only used internally.

Sometimes Waiting Pays

I didn’t run into any issues particular to setting this up in Docker. At this point I think it’s a pretty well trodden path and this setup is pretty much standard. I did however run into several issues with upgrading to the latest Home Assistant.

First, let’s tackle the elephant in the room – Lovelace. I was really worried going into this that it was going to be a huge amount of work. My mind was somewhat put at rest by seeing the UI editor in action via misperry’s video. When I actually came to it, the migration process automatically re-created my existing UI pretty much perfectly. Lesson learned: there is something to be said for waiting for mature software, rather than jumping on the new shiny thing immediately!

Lovelace itself is awesome! The ease of configuration has made me actually focus on making my HASS UI nicer rather than just the bare minimum I could get away with that I had previously. In the screenshot below, you can see my new “Outdoors” panel. This contains weather information, outdoor related sensor readings and a couple of local webcam views.

Road to Docker Part 2
I probably should have taken this screenshot during the day

Remaining Issues

Most of my remaining issues were due to the Home Assistant “Great Migration”. This resulted in a load of entity IDs changing in various components, meaning I had to update my configuration to match the new names. It took a little while to troubleshoot, because if a changed name is used in an automation, the automation has to actually fire to produce an error. In many cases it simply won’t fire at all, if the changed name is used in the automation’s triggers.

The final major issue I encountered was with my frankly awesome vacuuming robot, which appeared to stop reacting to service calls in HASS. The underlying issue appears to be the Botvac D3 returning a different error message from the D7 that the library was tested with. So far this hasn’t been fixed, but I’m currently using the suggested category: 2 workaround and that’s working fine. I think I’ll have a look into fixing that issue and submit a PR when I get time.

Managing Updates

Managing updates to Docker images has always been a bit of an issue for me. In the past I’ve used Watchtower with some success. However, due to the potential for breaking changes, I want to manage HASS updates more carefully. It was suggested to me to just use a bash script which I can run periodically to do this. This isn’t something that had occurred to me before, probably because it’s so simple! Here’s the script I’m using:

#!/bin/bash
set -e

cd /mnt/docker-data/stacks/ha
sudo apt update
sudo apt upgrade -y
sudo apt autoremove -y
sudo apt clean
docker-compose pull
docker-compose down
docker-compose up -d
docker system prune -fa
docker volume prune -f
exit

This works beautifully and allows me to easily keep up with releases of HASS and the other components, once I’ve verified that it’s reasonably safe to update.

Conclusion and Next Steps

Overall I’m pretty happy with how this move has turned out. Once the initial teething issues were all worked out the system has been very stable. I’m appreciating the extra utility of the ESPHome dashboard, which makes it very convenient to update my devices. It’s also great to be back on the latest version of Home Assistant.

In terms of next steps, I would like to give InfluxDB and Grafana another try. My main issue here has always been building the dashboards. It seems to be pretty tricky to get something both good looking and useful in Grafana. I also haven’t seen any pre-built dashboards for use with data from Home Assistant. Perhaps this is because they are so peculiar to individual setups.

I also have an LXD container running ZoneMinder. I’d like to re-deploy this as a Docker container on the same VM. Previously, I’ve not had much luck running ZoneMinder in Docker. I’ll have to see if the situation has improved when I tackle this migration.

I’m not actively working on any further migrations of other services to Docker at the moment, so there will probably be a break in this series for now. However, given my current success I’ll definitely be continuing on with this migration. I just want to work on some other projects for a while!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Room Sensor Project: Part 2 – Infrastructure and Mounting

This post may contain affiliate links. Please see the disclaimer for more information.

This post is part of a series on this project. Here is the series so far:


Looking back, it seems to have been a ridiculously long time since my last room sensor post. It’s well over a year, but it doesn’t seem that long ago. This project has been majorly delayed by a few issues and generally hasn’t been top of my todo list. However, I’m now at the point where I have the prototype sensor installed and working. Actually, it’s been working for several months, but I hadn’t got around to writing it up! In this post I’ll mostly be detailing the infrastructure used to get power to the sensor. There will also be some discussion on the case and mounting as well as a few words on the software.

Finding a Case

It would have been really nice just to 3D print a case and mounting bracket for the sensors. Unfortunately, I don’t (yet) have a 3D printer and it was cheaper to buy cases than to get a 3D printing service to print them. I settled on a 100x60x25mm case and ordered 15 of them. Once they arrived I was able to fit all the electronics inside and cut a slot in the bottom for the DHT22 sensor. A dremel-like tool would have helped a lot here, but I managed to do it manually and it looks OK. I actually reversed the case so that the lid became the rear as this looks a little nicer and helped with the mounting of the components.

I also fitted a light sensor based on an LDR voltage divider circuit to the front of the case. Unfortunately I had issues with the ADC pin on the bare ESP8266 module I used for the prototype. It’s odd because I’ve had this working with the Wemos D1 modules before (which have their own voltage divider on the input also). In the end I didn’t manage to get this working and have resolved to replace the prototype board with a Wemos based one when I do the sensors for the rest of the house.

I drilled a hole in the front of the case for the PIR sensor and mounted the diffuser over the hole. The PIR board itself was hot glued to the reverse of this. This works really well – in fact if anything the sensor is a little too sensitive. I need to tweak the pots a little to dial this down.

You can see pictures from during the assembly as well as the fully assembled case below:

Main room sensor board mounted to case lid.
Main room sensor board mounted to case lid.
PIR diffuser mounted to case (front view)
PIR diffuser mounted to case (front view)
PIR diffuser mounted to case (top view)
PIR diffuser mounted to case (top view)
PIR sensor mounted to inside of case.
PIR sensor mounted to inside of case.
Side view of room sensor board showing power connections
Side view of room sensor board showing power connections
Fully assembled case (upside down)
Fully assembled case (upside down)
Fully assembled case (side view)
Fully assembled case (side view)
Fully assembled case (standing on the DHT sensor)
Fully assembled case (standing on the DHT sensor)

Power Setup and Wiring

In order to get power to the sensors, I had decided on running 12V lines through the roof space of the house. These would then come down through the ceilings in the corner of each room for the sensors. The cables would be fed from a central distribution board mounted near the loft hatch in the ceiling. Since the power requirements of the 12 sensors I wish to eventually install are minimal, a single 2A supply is enough to power the whole lot with some room to spare. Some pictures of the distribution board (before and after installation) are shown below:

Room sensor power distribution board (before installation)
Unwired room sensor power distribution board (before installation)
Room sensor power distribution board (post installation)
Room sensor power distribution board (post installation)

I had initially wanted to mount the power supply down in the ‘server rack’ and run the power up through an (existing) hole in the wall to the loft. However, after an abortive attempt at running a cable through the wall (in which I was foiled by pesky insulation and there was much swearing), I eventually came to the conclusion that mains power was needed in the roof space.

Time passes…

It took a while to really commit to and allocate funds to this option. Eventually the electrician came and installed four shiny new power points in the roof space just next to the loft hatch. Four power points is obviously overkill for this project. However, in the intervening time some other projects had come up for which the remaining plugs would be useful.

Once the power points were in place, I ran the cabling for the 12V line to the room in which the sensor was to be installed and connected it all up. As if by magic, power flowed and the sensor sprang into life (barring various frustrating issues with loose connections)!

I had quite an interesting time working out how to mount the sensor in the corner of the room. I initially stuck it up with 3M double sided sticky pads, but ended up taking it down and re-mounting it several times, until I ran out of pads. Eventually I opted for good old fashioned blu-tack as a temporary solution! I’m intending to replace this with a 3D printed bracket which will fit the oddly shaped space between the case and the wall. This will allow the sensor to be attached much more permanently. However, for now the blu-tack does the job and proves the concept.

Installed room sensor
Installed room sensor

Relay Power Control

I mentioned above that I had installed several extra power sockets in the roof space for other projects. One of those projects required putting a Raspberry Pi in the roof space. That project is now complete and I’m hoping to document it soon. For this I pressed into duty my old Raspberry Pi Model B (the one with 512Mb of RAM). Although old, this hardware is sufficient for running a small Node-RED instance as well as performing its other intended duty.

This gave me a nice way to control the power supply to the room sensor distribution board, and hence all the sensors. That means that if a sensor goes offline, I can cycle the power remotely without having to climb up into the roof space. To do this I inserted a relay into the 12V power line between the power supply and the distribution board.

The Raspberry Pi with the relay assembly
The Raspberry Pi with the relay assembly

I wired the power through the normally closed contacts of the relay, so that the relay must be switched on to kill the power. The state is then inverted in Node-RED; this way the switch in Home Assistant shows as on most of the time. I only had 5V relays sitting in my parts box, so I soldered up a quick transistor circuit on a breadboard to allow me to drive the relay from the 3.3V logic of the Pi. Building this is left as an exercise for the reader, since I forgot to document what I built!

Driving the Relay in Node-RED

To drive the relay I use a variation of my MQTT discovery switch in Node-RED. This is implemented via the following flow:

The room sensor relay control flow
The room sensor relay control flow

The JSON for this is shown below (copy it and import into Node-RED):

[{"id":"e683c60f.71219","type":"tab","label":"Room Sensors","disabled":false,"info":""},{"id":"49f1f109.608fa","type":"rpi-gpio out","z":"e683c60f.71219","name":"Relay 2","pin":"13","set":true,"level":"0","freq":"","out":"out","x":760,"y":60,"wires":[]},{"id":"80843251.fcdf58","type":"mqtt in","z":"e683c60f.71219","name":"Room Sensors Command","topic":"homeassistant/switch/room_sensors/cmd","qos":"2","broker":"65d3656f.217c14","x":170,"y":80,"wires":[["afc07304.5d321"]]},{"id":"afc07304.5d321","type":"switch","z":"e683c60f.71219","name":"On or Off?","property":"payload","propertyType":"msg","rules":[{"t":"eq","v":"ON","vt":"str"},{"t":"eq","v":"OFF","vt":"str"}],"checkall":"false","repair":false,"outputs":2,"x":430,"y":80,"wires":[["a2b8666b.9e744","6c973ed5.7607f8"],["76f0cbf9.50415c","6c973ed5.7607f8"]]},{"id":"a2b8666b.9e744","type":"change","z":"e683c60f.71219","name":"ON","rules":[{"t":"set","p":"payload","pt":"msg","to":"0","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":610,"y":60,"wires":[["49f1f109.608fa"]]},{"id":"76f0cbf9.50415c","type":"change","z":"e683c60f.71219","name":"OFF","rules":[{"t":"set","p":"payload","pt":"msg","to":"1","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":610,"y":100,"wires":[["49f1f109.608fa"]]},{"id":"d673854.bac6478","type":"mqtt out","z":"e683c60f.71219","name":"Send Messages","topic":"","qos":"2","retain":"true","broker":"65d3656f.217c14","x":750,"y":240,"wires":[]},{"id":"95bdf49b.5a011","type":"function","z":"e683c60f.71219","name":"Format config messages","func":"var config = {\n    payload: {\n        name: \"Room Sensors\",\n        command_topic: \"homeassistant/switch/room_sensors/cmd\",\n    },\n    topic: \"homeassistant/switch/room_sensors/config\"\n};\nreturn config;","outputs":1,"noerr":0,"x":430,"y":240,"wires":[["d673854.bac6478"]]},{"id":"c45bb370.032b48","type":"inject","z":"e683c60f.71219","name":"@Startup","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":true,"onceDelay":"3","x":150,"y":240,"wires":[["95bdf49b.5a011"]]},{"id":"6c973ed5.7607f8","type":"function","z":"e683c60f.71219","name":"Set topic","func":"msg.topic = \"homeassistant/switch/room_sensors/state\";\nreturn msg;","outputs":1,"noerr":0,"x":620,"y":160,"wires":[["d673854.bac6478"]]},{"id":"65d3656f.217c14","type":"mqtt-broker","z":"","name":"Home Broker","broker":"mybroker.example.com","port":"1883","clientid":"","usetls":false,"compatmode":true,"keepalive":"60","cleansession":true,"birthTopic":"","birthQos":"0","birthPayload":"","closeTopic":"","closeQos":"0","closePayload":"","willTopic":"","willQos":"0","willPayload":""}]

Over the following months of usage, I noted several (infrequent) instances where the sensor stopped responding. In these cases it needed a manual (though remote controlled) power cycle. In order to automate this and so reduce downtime, I wrote the following Home Assistant automation:

  - alias: Auto-reset Room Sensors
    trigger:
      - platform: state
        entity_id: binary_sensor.prototype_sensor_status
        to: "off"
        for: "00:10:00"
    action:
      - service: switch.turn_off
        entity_id: switch.room_sensors
      - delay:
          seconds: 30
      - service: switch.turn_on
        entity_id: switch.room_sensors
      - service: notify.notify
        data_template:
          title: "Room Sensors Reset"
          message: "Device '{{ trigger.to_state.name }}' was offline for 10 minutes, room sensors reset."

Basically, this will trigger after the sensor has been offline for 10 minutes. Once triggered, it will turn the sensor off and on again (with a 30 second delay in between). It will also send a notification to inform me that this has happened. I think this has been triggered twice so far, and as a result the sensor hasn’t been unavailable for any length of time.

Software Changes

Since installing the prototype sensor, I haven’t actually been running my Micropython Room Sensor software on it. Instead I’ve been trying out ESPHome on this and another project, since it’s been getting a lot of attention in the HASS community recently. I specifically wanted to see if ESPHome was an easier/maintenance free option for these types of projects.

My takeaway from this is that ESPHome is really nice and very easy if you don’t want to do anything complicated. If all you have are a few sensors or actuators that you want to connect, it’s great! In fact it’s almost perfect for this kind of project. You can even do some moderately complicated data conversions and on-device automation using the lambda syntax (see the sketch below). For this reason I’d put it in the same basket as the likes of ESPeasy, although it has some advantages in comparison to other systems, especially if you are already running Home Assistant. Kudos to Otto Winter for coming up with such a great piece of software!
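
As a small illustration of the lambda syntax, here’s the kind of thing you can do in a sensor filter. The snippet is purely illustrative (a hypothetical ADC-based light level on an ESP8266), not something from my config:

sensor:
  - platform: adc
    pin: A0
    name: "Light Level"
    filters:
      # invert and rescale the raw 0-1V reading to a rough percentage
      - lambda: return (1.0 - x) * 100.0;
    update_interval: 15s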

However, it does get more difficult when you want to do more complicated things. I ran into some of these issues in my other project, which I’ll detail when I eventually write it up. Even so, for now ESPHome gets my wholehearted recommendation.

ESPHome Configuration

I especially like that, since the configuration for ESPHome devices is just YAML, it’s really easy to store in git. I haven’t got a cleaned up git repo for my projects ready to publish. However, since the configuration for this project is so simple, I can post the whole thing here:

esphome:
  name: room_sensor_prototype
  platform: ESP8266
  board: esp12e

wifi:
  ssid: 'my-wifi'
  password: 'supersecret'

mqtt:
  broker: 'mqtt.example.com'
  username: 'test'
  password: 'supersupersecret'

# Enable logging
logger:

ota:
  password: 'massivelysupersecret'

sensor:
  - platform: dht
    pin: 12
    model: AM2302
    temperature:
      name: "Test Temperature"
    humidity:
      name: "Test Humidity"
    update_interval: 15s

binary_sensor:
  - platform: status
    name: "Prototype Sensor Status"
  - platform: gpio
    pin: 2
    name: "Test Motion"
    device_class: motion

You’ll notice that I’m still using the MQTT transport rather than the native API component with the Home Assistant ESPHome integration. This is mainly because I built this before the native API was released and I didn’t need to update it. I understand there are some advantages to using the native API, so I probably will try it at some point, especially if I want to try an esp-cam project.
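
If I do switch over, it should just be a case of replacing the mqtt: block above with the native API component, something like the following (untested in my setup):

api:
  password: 'anotherbigsecret'

Home Assistant then talks to the device directly via the ESPHome integration, rather than going through the MQTT broker.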

What’s Next

So far, I’m really happy with the performance of the sensor. I’ve been using it in a few automations, which I’m intending to detail in a further post. The next thing to do is to build further sensors for the remainder of the house. In order to make this less error prone, I’ve decided to design an adapter PCB in KiCad for the Wemos D1 Mini clones I’ve been using in other projects. This will get me away from fiddling with bare ESP modules and hopefully mean that the light sensor will work.

As mentioned above, I also want to design a 3D printed bracket to fit the oddly shaped space in the corner behind the sensor. This will have to wait until I get a 3D printer, which will hopefully happen later this year.

Aside from that the only other job will be deploying the new sensors once they are built. This will mean running all the remaining power cables through the roof space, so lots of crawling around up there (yay! /s).

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.