
Continuous Integration/Deployment for Home Assistant with Gitlab CI

This post may contain affiliate links. Please see the disclaimer for more information.

One of the best things about writing this blog is the interactions I have with other people. Of course it’s always nice to get feedback, whether it’s positive or (constructively) negative. It’s even better to see what similar projects other people are undertaking. Sometimes comments even start me off in a different direction than I had been taking.

This project was inspired by one such conversation. This started out rather gruff but actually ended up being really positive and motivated me to go further with an approach that I’d mostly dropped. The conversation in question was in relation to my recent “Seven Home Assistant Tips and Best Practices” post. Specifically it was around testing and deploying your Home Assistant config with Continuous Integration (via Gitlab CI).

I’m already familiar with Continuous Integration and Continuous Deployment through work. I had developed a minimal setup for validating my HASS config. However, I’d previously given up on it due to the slowness of Gitlab’s shared CI runners. What follows is my new attempt. Thanks to /u/automate_the_things and all the other commenters on that thread for the inspiration/persuasion to do this!

Setting Up a Local Runner

Ideally, I’d like to fully self host my own Gitlab instance. However, the recommended RAM requirements are between 4 and 8GB, which is a little ridiculous. Perhaps when I upgrade my server I’ll be able to spare enough RAM for this. For now I’m just running a local runner and connecting it to the cloud version of Gitlab.

I decided to deploy the runner as a Docker container and to execute my jobs within their own containers too. This fits with my recent Docker shenanigans and I’m familiar with this setup, having deployed it for work. CI systems are one area where I’ve always felt that using containers makes sense, even before my recent Docker adventures. Each build running in its own container removes a lot of the complexity in managing build environments for different projects. It also means that your build machines only require Docker.

I set up my runner in a new VM on my main server. Then I pretty much just followed the official instructions to install the runner. However, I did convert the docker run command into a minimal docker-compose.yml file for ease of reproduction:
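Something along these lines (a sketch; the config volume path is an assumption, not my exact file):

```yaml
version: "3"

services:
  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    restart: always
    volumes:
      # persist the runner's configuration between restarts
      - ./config:/etc/gitlab-runner
      # allow the runner to spawn job containers on the host's Docker daemon
      - /var/run/docker.sock:/var/run/docker.sock
```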

Once that was done, I finished by getting the runner registered and connected to my project.
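For anyone following along, registration is just a matter of running the register command inside the container and answering the prompts (the tag and default image below are assumptions):

```sh
docker-compose exec gitlab-runner gitlab-runner register
# Prompts for: your Gitlab URL, the registration token (from
# Settings->CI/CD->Runners), a description, tags (e.g. 'hass')
# and the executor ('docker') with a default image (e.g. alpine:latest)
```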

Once your runner is active it should show up in your Gitlab runners page (accessible via Settings->CI/CD->Runners)

Build Pipeline Configuration

In my research for this project, I came across Frenck’s Gitlab CI configuration, which is really awesome (thanks @Frenck). I’ve based mine heavily on his with some tweaks for my configuration and environment. The finished pipeline runs the following jobs:

  • shellcheck – this job performs various checks on any shell scripts in the repository
  • yamllint – this job performs a full lint check of all my YAML files. Since I’d never run this before, it threw up loads of errors. I started fixing a few of these, but eventually marked the job with allow_failure: true to allow the pipeline to continue even with these errors. I’ll work on fixing these issues bit by bit over the next few weeks. I also exclude some files which are encrypted with git-crypt.
  • jsonlint – pretty much the same as yamllint, but for JSON files. Any files that are encrypted are excluded from this check.
  • markdownlint – similar to the previous jobs, but for checking markdown files (such as the README.md file)
  • ha-latest – checks the HASS configuration against the current release of Home Assistant
  • ha-rc – runs a HASS configuration check against the next release candidate for Home Assistant
  • ha-dev – checks the HASS configuration against the development version of Home Assistant. Both this job and ha-rc are configured to allow failure. They are just intended to give me advance warning of any breaking configuration that may prevent HASS from starting up in a future release.
  • deploy – deploys the configuration to my HASS server. I’ll discuss this in more detail below.
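To give a flavour of the configuration, here’s a minimal sketch of a couple of these jobs (the images and exact commands here are assumptions; the finished configuration linked below is authoritative):

```yaml
stages:
  - preflight
  - test

yamllint:
  stage: preflight
  image: sdesbure/yamllint     # assumed lint image
  script:
    - yamllint .
  allow_failure: true          # tolerate the existing lint errors for now

ha-latest:
  stage: test
  image: homeassistant/home-assistant:latest
  script:
    # run Home Assistant's built-in configuration check against this repo
    - hass -c . --script check_config
```
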
My full pipeline (note the yellow status of the failing yamllint job)

You can find the finished configuration in my hass-config repository.

Deployment Approaches

There are several ways I could have done the deployment, which may suit different scenarios:

  • We could use the Home Assistant Gitlab CI sensor to poll the pipeline status. We would then trigger an automation to pull down the configuration when the pipeline passes. I haven’t tried this approach. However, it is potentially useful if your HASS server and Gitlab runner are on different networks and your HASS server is not publicly available.
  • We could use a pipeline webhook from Gitlab to a webhook handler in HASS. This would trigger an automation similar to that above. Again, I haven’t tried this. It would be useful if your Home Assistant instance and Gitlab CI runner are on different networks. However, it would require your HASS instance to be publicly available.
  • Similar to the approach above, you could trigger the HASS webhook handler from a job in your Gitlab runner directly with curl (rather than using the built-in webhooks). This has the advantage over the previous two approaches that it gives you an explicit deploy stage in your pipeline. This in turn gives you the ability to track deployments via environments. It’s also potentially a lot simpler, since it would only be triggered if the previous stages in the pipeline passed. You also wouldn’t have to parse the JSON payload.
  • The approach I have taken is to deploy directly from the runner container via SSH. This is because my runner and HASS machines are running in the same network and so I can easily SSH between them without having to open any ports through my firewall. It also centralises all the deployment logic in the CI configuration, without any HASS automations needed.

My Deployment Job

As per my CI configuration, the deployment job is as follows:
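Here’s a sketch of that job, reconstructed from the description that follows ($HASS_AUTH_TOKEN and $HASS_BASE_URL are assumed names for the variables holding the API token and URL):

```yaml
deploy:
  stage: deploy
  image: alpine:latest
  environment:
    name: home-assistant
  before_script:
    # install the SSH client and curl, then write out the deploy key
    - apk add --no-cache openssh-client curl
    - echo "$DEPLOYMENT_SSH_KEY" > id_rsa
    - chmod 600 id_rsa
  script:
    # update the config on the HASS server to this exact commit
    - >-
      ssh -i id_rsa -o StrictHostKeyChecking=no "$DEPLOYMENT_SSH_LOGIN"
      "cd /mnt/docker-data/home-assistant && git fetch && git checkout $CI_COMMIT_SHA"
    # restart Home Assistant via its REST API
    - >-
      curl -X POST -H "Authorization: Bearer $HASS_AUTH_TOKEN"
      "$HASS_BASE_URL/api/services/homeassistant/restart"
  after_script:
    - rm -f id_rsa
  only:
    - master
  tags:
    - hass
```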

As you can see, this job runs in a plain Alpine Linux container and deploys to the home-assistant environment. This allows me to track what versions were deployed and when from the Gitlab UI.

The before_script portion installs a couple of dependencies which we need later and pulls the (password-less) SSH key we need for logging into the HASS server from the project variables. This is stored in the $DEPLOYMENT_SSH_KEY variable in the Gitlab configuration. The resulting file must have its permissions set to 600 to allow the SSH client to use it.

Moving on to the script portion, the first step performs the actual deployment of the repository to the server via SSH, using the SSH key that we wrote out above. The public portion of this key is installed on the HASS server for the ci user. We also disable strict host key checking to prevent the SSH client prompting us to accept the fingerprint.

The Gitlab CI variables page (accessible via Settings->CI/CD->Variables)

The SSH command connects to the server specified in $DEPLOYMENT_SSH_LOGIN, which is again set in the Gitlab variables configuration. This has the form ci@<hass host IP>. It should be noted here that the Alpine container defaults to using Google’s DNS. This means that resolving internal hostnames for your network will fail. I’m using the IP addresses for now to get around this.

Remote Control Commands

The SSH command sends a sequence of commands to be run on the HASS server. These commands are as follows:

  • cd /mnt/docker-data/home-assistant – change directory to the configuration directory on the server
  • git fetch – fetch all the new stuff via git
  • git checkout $CI_COMMIT_SHA – checkout the exact commit that we are running a pipeline for via one of Gitlab’s built in variables

This arrangement of commands allows me to control exactly what gets deployed to the server for each pipeline run. In this way we won’t accidentally deploy the wrong version if new code is pushed to the repository whilst our pipeline is running.

In order for the git fetch command to work, another password-less SSH key is required, this time for the ci user on the HASS system. The public portion of this is installed as a deploy key for the project in Gitlab. I suppose it would be equally valid to pull changes via HTTPS (for public repos), but since the remote on my repository was already set up to use SSH I decided to continue using it.

Restarting Home Assistant from Gitlab CI

The second command in our script section is to restart Home Assistant after the configuration has been updated. Here we use CURL to call the homeassistant.restart service as per the API docs. The Home Assistant authentication token and URL are stored in Gitlab CI variables again.

Finally, we enter the after_script section, which will be executed even in the case that one of the above commands fails. Here we simply delete the id_rsa SSH key file.

I’ve restricted my deploy job to run only on pushes to the master branch. This allows me to use other branches in the repo as I please without having them deployed to the server accidentally. I’ve also used a tag of hass to prevent running on runners not intended for this job. Since I only have one runner right now this isn’t a concern, but having it in place makes things easier if/when I add more runners.

Conclusion

I’m really pleased with how this CI pipeline has turned out. However, I’m still a little concerned at how long all these steps take. The latest pipeline took 6 minutes and 42 seconds to run (plus the time it takes HASS to restart). This isn’t much if it’s just a fire-and-forget operation. It is, however, a long time if I’m trying to iterate on my configuration. I’m going to play around with the runner configuration to see if I can get this down further. I also want to investigate options for testing on my local machine. In my previous attempt at this, my builds could sit for longer than that waiting for one of Gitlab’s shared runners, so I’ve at least made progress in the speed department.

Next Steps

In terms of further improvements, I’d like better notifications of the pipeline progress, as well as a notification when HASS completes its restart. I will implement these with Gotify. Right now I only get an email from Gitlab if the build fails. I’m also going to integrate the pipeline status into Home Assistant with the previously mentioned Gitlab CI sensor. I’m even tempted to turn one of my smart lights into a build light to alert me to problems!

I also want to take my use of CI in my infrastructure further. My next target will be building some modified Docker images for a couple of my services in CI. Deployment of the relevant Docker stacks is another thing I’d like to try out. I’ve not had a chance to play with building containers via Gitlab CI before, so that will be interesting.

I hope that you’ve enjoyed this post and that it’s inspired you to give something similar a go. Thanks to this pipeline I’ll never have to worry about HASS being unable to start due to a broken configuration again. I’m open to further improvements to this process. Please feel free to share any you may have via the feedback channels. Thanks for reading!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Automating My Dumb Audio System

This post may contain affiliate links. Please see the disclaimer for more information.

Recently I’ve been on somewhat of a mission to improve the integration between my various media devices and my home automation system. One part which has until now been untouched is the main living room audio system, which is still somewhat dumb. I’m not all the way to automating this yet, but I’m making progress. In this post I’m going to detail my progress so far, the issues I ran into and how I’m planning to improve the integration in future. I’ll also partially review the different components involved.

The Audio System

As detailed previously, our living room audio is provided by a Polk Signa S2 soundbar. I really like this soundbar and it was a massive step up from the TV audio we had previously. The system was very easy to set up: pretty much plug in the HDMI ARC connection, turn it on and go. It came pre-paired with its wireless sub, so that just worked. I’ve had a few instances (maybe five or so in the last few months) where the main unit was not able to connect to the sub on startup, which it communicates by flashing an LED. Turning the unit off and on again allows it to retry, which in my experience always works.

The audio quality is great to my ear, but I don’t consider myself an expert in audio stuff. It’s enough to fill our living space with sound and shake the walls if you turn the bass up!

The soundbar in all its glory

Where this device falls down is the lack of easy integration with other devices. This is of course pretty normal in this space, but it doesn’t mean we have to accept it!

Clever, Dumb Speakers

On the surface the soundbar behaves somewhat intelligently. It will turn on when the TV comes on. Presumably this is done via HDMI CEC, because it will also turn on when the TV is turned on via CEC. This means it isn’t just intercepting the IR commands from the TV remote. However, all my attempts to control the unit via CEC from the Raspberry Pi connected to the TV failed. Weirdly, the soundbar also won’t switch off when the TV is turned off via CEC, whereas it does with the remote. The soundbar also integrates the volume between the TV and itself and allows adjustment via either the TV or its own remote.

This is all nice for the “normal” case of just using the TV. We can just sit down and watch without having to find the extra remote and everything just works. Where it falls down is anything outside of this basic use case. The soundbar has basic preset modes for different applications, “movie”, “music” and “night mode”. It also has 3 levels of what it refers to as voice adjust, which amplifies the frequencies contained in human speech to make dialogue clearer (it actually works pretty well). None of these settings are available unless using the device’s own remote.

The soundbar remote, most of the functions are only available via this remote

IR Control

As I couldn’t control the device via CEC, I was pretty much resigned to having to build an IR remote control device. The soundbar also has bluetooth, but it’s only for audio streaming. There doesn’t appear to be any control capability and it’s turned off when the device is in standby. I’ll come back to the bluetooth later, as I do have some ideas around it for future work.

After failing to find the time to build an IR blaster (hardware is hard), I decided to buy a Broadlink device. Specifically the Broadlink RM Mini 3, since they were fairly cheap and supported in Home Assistant.

The Broadlink RM Mini 3, sitting happily in its place on the bookcase

When the RM Mini arrived I was pleased with how it looked and easily found a place for it on a bookshelf where it could see the TV and soundbar. The line of sight reaches across the whole of our living/dining area, so I was hoping the range would be sufficient to reach. As it would turn out I was right, there have been no problems so far.

OMG, That App

Setting up the Broadlink was an exercise in frustration, mainly because the IHC app is truly awful. No… that doesn’t cover it: the app is a train wreck on board a sinking ship that’s just been hit by a meteor. For example, on the sign up page, it gives you 60 seconds to both enter the validation code it sent you by email and type your new (and hopefully secure) password. Unfortunately, I don’t have a screenshot of this, since I don’t want to reinstall that natural disaster of an app to get one!

Who really thinks that one minute is enough to sign into their email (or even just switch to your mail app), grab and enter the code and generate and enter a reasonable password?! That’s even assuming the email comes through in that time. Repeat after me: Email is not a synchronous communication medium! It was not designed for real time communication, messages can be delayed for any reason. I use grey-listing on my server, which would have made it impossible for me to sign up had I not been able to temporarily disable it.

Always Remember: The Cloud is Just Someone Else’s Computer

The app will also fail to find the device if you are on separate networks and gives you no way just to enter its IP address. This meant I had to put my phone onto my IoT network, on which most outgoing traffic is blocked. I then had to punch holes in the firewall for the device and my phone, because the setup process requires Internet access. The one for the device may not be required, but it definitely didn’t work if my phone couldn’t get out.

A device like this really has no business needing Internet access in the first place. It could also be configured purely over the local network and even without an app if they just put an AP+captive portal configuration page on it. Sigh.

Anyway, I got it onto the network. Eventually. However, things didn’t get better from there.

Issues in Home Assistant

I set up the device in HASS thinking that the hard part was over and that it would be plain sailing from now on. That proved to be misguided. Upon running the broadlink.learn service via the dev-tools, nothing happened. When I checked the log via the info tab I got the message Failed to connect to device from the broadlink component.

Upon examining the code, I could see that this happens if the device fails to authenticate properly. The auth call comes from the underlying library, python-broadlink. I checked out the latest version of this (0.11.1) and tried the CLI tool, which also didn’t work. The tool would just time out without getting any data back. Also, the little white LED on the device didn’t light up.

Fixing it… but not really

I checked out and installed the previous version (0.10), tried the same thing, and it worked! I was able to learn IR codes and send them back to control devices from the CLI. The next step was to work out which commit broke the library. I did this by git bisecting between the good version (0.10) and the bad version (0.11.1). The resulting commit was 38a40c5, where the approach to encrypting the payloads changed. By analysing the change set, I was able to come up with a patch, which I’ve submitted as a PR.

I then decided to try this out by (temporarily) installing my version of the library inside my HASS Docker container to see if it resolved the issue. Weirdly, I ran into the same issue! After some debugging to make sure it was picking up the right Python module, I did some packet captures with tcpdump. I could see that the authentication packet payload was different from that of the working library outside of HASS. At this point I’m a bit stumped. I’ve submitted an issue to HASS, but in the meantime I decided to come up with a workaround.

Unix to the Rescue

Since I now had a working CLI tool, I could get the Broadlink working with HASS via the shell_command integration. I started by creating a new Python virtual environment from inside my HASS container and installing my version of the python-broadlink library:
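The commands were along these lines (the container name, config path and fork URL are all assumptions; substitute your own):

```sh
# open a shell inside the running HASS container
docker exec -it home-assistant /bin/bash
# create the venv on the config volume so it survives container rebuilds
python3 -m venv /config/shell_commands/venv
. /config/shell_commands/venv/bin/activate
# install the patched fork of python-broadlink (hypothetical URL)
pip install git+https://github.com/<your-fork>/python-broadlink.git
# note the interpreter path for the shebang edit below
which python3
```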

It’s necessary to do this inside the container because the command will be called from within the container by HASS, so we want to make sure that venv gives us a compatible python environment. The virtualenv will be persisted to the home assistant config volume.

The next step was to copy the Broadlink CLI tool into my shell_commands directory and update the shebang on the first line with the output of the which command above:
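Something like the following (paths as assumed above):

```sh
cp python-broadlink/cli/broadlink_cli /config/shell_commands/
# then edit the first line of broadlink_cli to point at the venv, e.g.:
#   #!/config/shell_commands/venv/bin/python3
```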

I then wrote a simple wrapper script to save me from having to specify all the options in my HASS config files later (saved to shell_commands/broadlink_cli_wrapper.sh):
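Here’s a reconstruction of that script; the device type code for the RM Mini and the placeholder address values are assumptions:

```sh
#!/bin/sh
# broadlink_cli_wrapper.sh - send the IR packet given as argument 1
/config/shell_commands/broadlink_cli \
    --type 0x2737 \
    --host <device IP> \
    --mac <lowercase, byte-reversed MAC> \
    --send "$1"
```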

This just sends whatever comes in as argument 1 to the script with the broadlink_cli tool. Make sure to fill in the IP of your device and its MAC address (lowercase, bytes reversed).

HASS Configuration

Now we can move on to configuring this in HASS. I used the learning functionality of the CLI tool to grab the hex codes for each of the buttons on my soundbar remote. I then added them all to my HASS config (in a new package file):
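The package file looks something like this (the IR codes are truncated placeholders and the service names are my own convention):

```yaml
shell_command:
  soundbar_power: /config/shell_commands/broadlink_cli_wrapper.sh "2600500000012291..."
  soundbar_mute: /config/shell_commands/broadlink_cli_wrapper.sh "2600500000012392..."
  soundbar_movie: /config/shell_commands/broadlink_cli_wrapper.sh "2600500000012493..."
  # ...and so on, one entry per button on the remote
```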

Upon restarting your HASS instance this gives you a shell_command service for each button on the remote. I spent a little while testing all the commands via the dev-tools to make sure I had them all right.

Replacing the Remote

As a first step to fully automating the soundbar, I wanted to replace the remote with a card in my HASS UI. Then I don’t have to find the remote!

I started by creating several input_select entities for the source, effect and voice adjust options of the soundbar:
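A sketch of these (option names taken from the soundbar’s presets; the exact lists are assumptions):

```yaml
input_select:
  soundbar_source:
    name: Soundbar Source
    options:
      - TV
      - Bluetooth
      - AUX
  soundbar_effect:
    name: Soundbar Effect
    options:
      - Movie
      - Music
      - Night Mode
  soundbar_voice_adjust:
    name: Soundbar Voice Adjust
    options:
      - "1"
      - "2"
      - "3"
```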

I then mapped the options to the correct service calls using automations:
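A minimal sketch of one of these automations, assuming the shell_command and input_select names from above:

```yaml
automation:
  - alias: Set soundbar effect
    trigger:
      - platform: state
        entity_id: input_select.soundbar_effect
    action:
      # pick the shell_command service matching the newly selected option
      - service_template: >-
          {% if trigger.to_state.state == 'Movie' %}
            shell_command.soundbar_movie
          {% elif trigger.to_state.state == 'Music' %}
            shell_command.soundbar_music
          {% else %}
            shell_command.soundbar_night_mode
          {% endif %}
```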

Here, I make good use of the service_template option and the trigger object to map the services. The main downside of this approach is that the automation doesn’t trigger if you re-select the currently selected option in the UI. Since we have no idea what the actual state of the soundbar is, it would be useful to be able to do this.

I did something similar for the power state and mute state, this time using input_boolean:
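Again a sketch, assuming the power IR code is a toggle (as most remotes are):

```yaml
input_boolean:
  soundbar_power:
    name: Soundbar Power
  soundbar_mute:
    name: Soundbar Mute

automation:
  - alias: Toggle soundbar power
    trigger:
      - platform: state
        entity_id: input_boolean.soundbar_power
    action:
      # the same IR code toggles the device on and off
      - service: shell_command.soundbar_power
```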

This isn’t ideal, especially for the power state: since there is no state feedback, it will often be out of sync. If I can get some feedback on the power state (see below) I intend to convert this to a template switch.

Creating Presets

I created a couple of scripts containing the preferred settings for both TV watching and music. These just set the settings in sequence, with some delays to allow the soundbar to react (it’s a bit slow):
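A sketch of the TV preset (the exact delays and service names are assumptions):

```yaml
script:
  soundbar_tv_preset:
    sequence:
      - service: shell_command.soundbar_power
      - delay: "00:00:05"   # give the soundbar time to wake up
      - service: shell_command.soundbar_movie
      - delay: "00:00:02"
      - service: shell_command.soundbar_voice_adjust_2
```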

I think I’ll comment out the power on step and following delay until I get the power feedback working. Currently if the state is inverted in HASS the script will not do the right thing. It’s annoying to have to make sure the states line up before triggering the script.

Lovelace UI

I created a card in Lovelace to allow me to control all of this. I don’t show the power control here, because I have another card which controls power for all my media devices.

My Soundbar Card in Lovelace

Here’s the YAML:
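A sketch of the card configuration (entity IDs as assumed above):

```yaml
type: entities
title: Soundbar
entities:
  - input_select.soundbar_source
  - input_select.soundbar_effect
  - input_select.soundbar_voice_adjust
  - input_boolean.soundbar_mute
```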

That gives me pretty much full control over the soundbar from within HASS. I haven’t integrated the volume and bass controls yet, mainly due to the lack of level feedback. The volume is controllable from the TV remote and we don’t adjust the bass much. As such these aren’t really necessary right now.

Conclusion

Wow, that turned out to be a long post. It was also a much larger project than I expected, given all the issues I ran into. If I’d known how much trouble I was going to have with the RM Mini, I probably would have opted to build my own IR blaster with an ESP module. That said, now that it’s working, it’s great. The device also looks pretty good, which is a plus for something which must be located prominently.

I obviously want to get this working with the native HASS integration. Hopefully the integration will be fixed soon, but my workaround gives me a functional device in the meantime.

Further Work

I’d also like to fix the main defect in the current setup, which is the power feedback issue. This stands in the way of further automation, since currently you need to check the power state closely. I have a couple of possibilities for this. The first option is to use a power monitoring smart switch to detect when the soundbar is powered up. Whether this will be successful will depend on the power usage of the soundbar and the accuracy of the power monitoring. This also requires me to invest in further hardware.

The second option is to try and detect the soundbar via its bluetooth interface. My current plan is to do this via l2ping. I haven’t had much time to look into this yet, but if I could get it working that would be great. I already have a Raspberry Pi 3 in close proximity to the soundbar so that can do the detection. This can then be fused with the IR commands using a template switch in HASS.

If you made it this far, thanks for reading and getting through my rambling thoughts! I’ll be following up this post with further updates on fixing the problems detailed here, so please follow me if you’d like to see how I solve them.

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Seven Home Assistant Tips and Best Practices

This post may contain affiliate links. Please see the disclaimer for more information.

I’ve been really busy with other things this week, so I haven’t had much time to work on computing projects. So, I thought I’d write a more informational post including some tips and best practices for Home Assistant configuration. I’m also starting to think more about tidying up my own configuration, so some of these are todo list items for me!

Home Assistant is rapidly moving towards being less reliant on textual configuration, with more and more configuration being available via the GUI. However, Paulus has stated that the YAML configuration will remain an option. For the time being, there is still quite a bit that absolutely needs to be configured via YAML. These tips are mostly around organising your configuration well in order to manage its complexity as it grows. Let’s get into it…

1. Use the secrets.yaml file (for everything)

I’ve put this one first because I believe it’s the single most important thing you can do which will allow you to manage your configuration. For anyone that’s not familiar with this, the secrets.yaml file allows you to remove all the sensitive data from your configuration files and store it in a simple key value file:
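For illustration, an entry or two might look like this (values invented):

```yaml
# secrets.yaml
wifi_password: correcthorsebatterystaple
home_latitude: 50.0000
home_longitude: -1.0000
```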

When you want to use a secret in your config, just use !secret followed by the key name:
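For example, in configuration.yaml:

```yaml
homeassistant:
  latitude: !secret home_latitude
  longitude: !secret home_longitude
```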

This should be used wherever you have any marginally sensitive data. Things like passwords and API keys are a given. However, host names, IP addresses, names of family members, times/dates of events and locations should also be hidden.

Separating these secrets out from your main configuration allows you to more easily share your configuration without leaking personal data. It also allows you to back up to locations where the data may become public. Your home automation configuration is a very personal thing! You can leak significant information about the operation of your home and family members.

It seems to be a common misconception that this makes your system somehow more secure. It doesn’t! There is no protection (such as encryption) applied to the secrets file on the disk of your HASS system. What this does do is help prevent accidental data leakage. It won’t protect your secrets in the case that someone breaks into your HASS server and gets access to your raw config files. However, in that case you already have major problems.

EXTRA TIP: You can also use secrets in ESPHome, using the same syntax.

2. Store your configuration in Git

This one follows directly on from the use of the secrets file and is really enabled by it. Once you have removed all the sensitive data from your configuration, it becomes really easy to share it.

One of the best ways to share your configuration is to create a Git repository of it and publish it online via GitHub or GitLab. This forms a handy offsite backup. You also get the benefit of being able to track the history of your configuration and easily revert changes if you need to. Git was designed for managing software and your HASS configuration is software.

This is something I actually need to get much better at. Although I’ve had my configuration published for some time, I tend to let it get out of date with the local changes I make. This negates a lot of the benefits and makes it a pain to update it in one big blob. If I kept it updated better this would be more incremental and easier.

I’m not going to go into a detailed run through of how to get your config into Git. The Home Assistant documentation has a wonderful page to get you started. I go even further and use git-crypt to encrypt my sensitive files, like secrets.yaml so that even they can be included (without anyone being able to read them).

3. Group related items with packages

There are several ways to structure your configuration. Lots of people start out using a single large configuration.yaml file. Later on you may decide that this is unwieldy and decide to split it up.

In this case your configuration gets split by what kind of configuration it is. For example, automations go in one file (or directory), sensors in another, scripts in another and so on. The problem is that you end up with related configuration being all over the place, scattered throughout different files or directories.

Packages aim to solve this problem, by allowing related configuration to live together regardless of type. This means that you can put the configuration for an integration and any logic relating to it together. For example, here is a snippet from my vacuum package, in which I have both the vacuum integration config and the automations relating to the vacuum:
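A cut-down sketch in that spirit (the platform and entity names here are assumptions, not my actual config):

```yaml
# packages/vacuum.yaml
vacuum:
  - platform: xiaomi_miio
    host: !secret vacuum_host
    token: !secret vacuum_token

automation:
  - alias: Vacuum when everyone leaves
    trigger:
      - platform: state
        entity_id: group.family
        to: 'not_home'
    action:
      - service: vacuum.start
        entity_id: vacuum.xiaomi_vacuum_cleaner
```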

Shortly after starting with Home Assistant I started splitting up my configuration into directories based on configuration type. This served me well for a while, but over the last year or so I’ve been moving (slowly) towards using packages. Now all my new configuration goes into packages, but I still have a load of the older stuff to move over.

4. Use multiple triggers with and conditions

In my TV power saving automation from my last post I used multiple triggers with an and condition that checked the state of those triggers again. This is a generally useful pattern for anything where you want something to happen as soon as all relevant entities are in a given state.

For example, to run an automation when everyone goes out:
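Something like the following (entity names are placeholders; conditions in HASS are ANDed by default, so the action only runs once both trackers are away):

```yaml
automation:
  - alias: Everyone has left
    trigger:
      - platform: state
        entity_id: device_tracker.person1
        to: 'not_home'
      - platform: state
        entity_id: device_tracker.person2
        to: 'not_home'
    condition:
      - condition: state
        entity_id: device_tracker.person1
        state: 'not_home'
      - condition: state
        entity_id: device_tracker.person2
        state: 'not_home'
    action:
      - service: light.turn_off
        entity_id: all
```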

Of course, I could just use a group for that automation. This pattern gets really useful where the states you want to check differ between entities – as in the TV example.

5. Use hass-cli for quick config checking and restart

When editing my config I prefer to use vim over SSH to the HASS server. I’ll usually also be using the Home Assistant GUI to debug and test things via the dev-tools. Switching pages in the UI every time I want to restart is a bit of a pain. This is especially annoying with the recent UI changes which put the config check and restart buttons underneath the (for me greyed out) settings for location, name, timezone, etc.

As such a quick way to check my config and restart is very useful. To this end I have this in my ~/.bashrc file on my Home Assistant server:
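The snippet is along these lines (a reconstruction based on the description below):

```bash
# check the config (the result shows up as a persistent notification
# in HASS), then restart
hass-restart() {
    hass-cli service call homeassistant.check_config
    hass-cli service call homeassistant.restart
}
```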

This relies on having hass-cli installed. The command first calls the homeassistant.check_config service and then calls the homeassistant.restart service. It would be better if hass-cli could return 0 or 1 depending on the result of the configuration check, so as not to call the next step on failure. However, it seems like HASS also checks the config before restarting. Having the config check there is still useful to alert you to an invalid config.

HASS-CLI will alert you to an invalid config via a persistent notification

With this in place, all I need to do is temporarily pause my vim session (with Ctrl-Z), type hass-restart and wait for it to restart.

6. Use the trigger object for more general automations

When an automation has multiple triggers it is possible to customise its behaviour based on what entity triggered it. This is done via the trigger object, which is passed into the actions of the automation for use in templates.

Here is an example from my battery notifications:
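A shortened sketch (entity and notifier names invented for illustration):

```yaml
automation:
  - alias: Battery warning
    trigger:
      - platform: numeric_state
        entity_id:
          - sensor.phone_battery_level
          - sensor.motion_sensor_battery
        below: 20
    action:
      - service: notify.notify
        data_template:
          title: Battery Low
          # the trigger object tells us which entity fired the automation
          message: >-
            {{ trigger.to_state.attributes.friendly_name }} is down to
            {{ trigger.to_state.state }}%.
```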

As you can see, this automation is triggered on one of several entity state changes (I shortened the list for reproduction here). The resulting notification, however, is customised to the particular entity which triggered the automation.

7. Use entity ID lists in complex template sensors

Home Assistant has automatic state tracking logic used for determining when to update the state of a template sensor or binary sensor. However some templates, particularly those containing for loops, are too tricky for it to work out. The workaround is to specify the entity IDs to track manually via the entity_id list.

Even if your template works in the dev-tools it may not work in a sensor without specifying the entity IDs

For example:
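Here’s a contrived sketch in that spirit – counting lights that are on with a for loop, with the tracked entities listed explicitly (entity names invented):

```yaml
sensor:
  - platform: template
    sensors:
      lights_on_count:
        friendly_name: Lights On
        # the for loop defeats automatic state tracking, so list
        # the entities to watch manually
        entity_id:
          - light.lounge
          - light.kitchen
          - light.bedroom
        value_template: >-
          {% set count = namespace(value=0) %}
          {% for e in ['light.lounge', 'light.kitchen', 'light.bedroom'] %}
            {% if states(e) == 'on' %}
              {% set count.value = count.value + 1 %}
            {% endif %}
          {% endfor %}
          {{ count.value }}
```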

I’m not sure why the sensor above would be useful (it’s a contrived example). I haven’t actually had the need to use a template like this so far, but it’s good to know how to make it work just in case I need it.

Conclusion

These are just seven items that I could think of, but I hope that you will find them useful. I’m sure there are probably many more tips and best practices for Home Assistant configuration. Feel free to share yours in the comments and other feedback channels. If I get enough suggestions I may do a crowdsourced follow up to this article!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Quick Project: Making my Smart TV Less Dumb

This post may contain affiliate links. Please see the disclaimer for more information.

The word “smart” has become so loaded in these days of smart-this and smart-that. The question is what qualifies as “smart”? Surely the answer is some form of Internet connectivity? However, I’m not sure that goes far enough. I like to see some degree of interoperability with other systems before I call a device smart.

For me the other system in question is Home Assistant. I’m not going to call a device truly smart unless there is a way to make it integrate with HASS. Of course, HASS has a lot of integrations which should make it easy right?

Not so for my “smart” TV (a 2017 model Sony Bravia) which fits neatly into a chasm between Sony’s previous Bravia offerings and their Android TV line. As such the integration in HASS doesn’t work with it, because the service it relies on is not available on the TV and it’s not running the truly smart Android TV OS.

Sony just seem to have slapped Netflix, YouTube and a couple of other apps on the previous TV software, disabled the network control service (presumably because it didn’t work with those apps) and sold these as Smart. Seems pretty stupid to me. It’s a shame, because otherwise it’s a really nice TV. It’s high time I made this Smart TV less dumb.

Getting Smart

I’ve written previously about how I’ve integrated power switching for the TV into HASS via HDMI CEC and Node-RED. Well, recently I’ve made a couple of improvements to this which have increased its utility and made my Smart TV slightly less dumb.

The first of these was to add a ping sensor to report on the power state of the TV. It seems like the TV has pretty rudimentary (but in my opinion good) power management, since it responds to pings when on and doesn’t when in standby.

Here’s the YAML for that sensor:
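A sketch of the sensor (the host secret is a placeholder):

```yaml
binary_sensor:
  - platform: ping
    name: TV
    host: !secret tv_host
    count: 1            # a single ping per cycle to keep traffic low
    scan_interval: 15   # ping every 15 seconds
```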

Basically this is just pinging once every 15s. I kept the repetitions this low in order to ping more frequently and still keep the network traffic low. If a ping is lost I’m not too worried since the next one happens pretty quickly. Also the TV is on a wired Ethernet connection, so packet loss is very low (as compared to wifi).

I’ve fused this with my existing power switch in order to overcome some reporting weirdness from the CEC switch. It seems to me that the TV doesn’t always report its power state, especially when turned off via the remote.

Here is the template switch that integrates the two:
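A sketch of that switch; the name of the underlying CEC-controlled switch entity is an assumption:

```yaml
switch:
  - platform: template
    switches:
      tv:
        friendly_name: TV
        # the ping sensor provides reliable state feedback
        value_template: "{{ is_state('binary_sensor.tv', 'on') }}"
        turn_on:
          service: switch.turn_on
          entity_id: switch.tv_cec
        turn_off:
          service: switch.turn_off
          entity_id: switch.tv_cec
```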

This switch becomes the primary means for controlling and displaying the TV power state in HASS. Now it’s really reliable, with the caveat that it may not update the state immediately, due to the ping interval. This is something which I can live with because it will now show the correct state most of the time. Previously it showed the incorrect state most of the time!

Detecting Netflix

As I was port scanning the TV (what? you mean it’s just me that scans every device that comes into my house?), I noticed that port 9080 was open. Some investigation proved that this port is opened by the Netflix app on the TV and, lo and behold, if I exit Netflix the port is closed.

This gives us a nice way to detect if Netflix is running on the TV. This is useful, because despite my protestations people in my household seem to want to use the native app rather than Kodi (instability of the Kodi plugin is one reason for this). This is unfortunate in that I can’t see the state in HASS and therefore have had to disable my automation to turn off the TV when nothing is playing, lest it switch off five minutes into a Netflix stream.

I used the command line binary sensor to detect the open port (thanks to VDRainer on the HASS community forums for helping me with an issue here):
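A sketch of the sensor (fill in your TV’s IP address, as noted below):

```yaml
binary_sensor:
  - platform: command_line
    name: Netflix
    # probe port 9080 and emit the default ON/OFF payloads
    command: >-
      nmap -p 9080 192.168.1.x | grep -q "9080/tcp open"
      && echo ON || echo OFF
```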

Here we run an nmap command which will detect the open port, grep for the answer and output ON or OFF depending on the state. By default this will be run every minute, which is good enough for me. Unfortunately, I haven’t found a way to use secrets with the command, so you’ll need to insert the IP of the TV manually if you want to use this.

The Netflix Sensor

This doesn’t give us any information on whether or what Netflix is playing, just that the app is open. That information is probably available via this port, since it seems to be running some kind of web service for remote control. Someone would need to do the relevant investigation to work out what the endpoints are though.

Putting It All Together

Now I can update my TV power off automation:
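A sketch, with the entity IDs assumed from the earlier config:

```yaml
automation:
  - alias: TV auto power off
    trigger:
      - platform: state
        entity_id: media_player.kodi
        to: 'idle'
        for: "00:05:00"
      - platform: state
        entity_id: binary_sensor.netflix
        to: 'off'
        for: "00:05:00"
    condition:
      - condition: state
        entity_id: media_player.kodi
        state: 'idle'
      - condition: state
        entity_id: binary_sensor.netflix
        state: 'off'
      # the YouTube app can't be detected, but it's rarely used after
      # 8pm, so only run this automation late in the evening
      - condition: time
        after: '20:00:00'
    action:
      - service: switch.turn_off
        entity_id: switch.tv
```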

Here I’m triggering the automation if either the local Kodi player or the Netflix sensor are idle/off for 5 minutes. I also check that both of these are in the desired state in the conditions. The extra time condition is to account for the use of the TV YouTube app, which I can’t detect. It only tends to be used before 8pm, so I restrict the time that this automation runs just in case.

Conclusion

That actually turned out to be quite a long post for a ‘quick’ project! I’m pretty happy with this approach and it’s allowed me to bring back an automation which has been disabled for ages. Hopefully this can start saving some power again, since people still leave the TV on. Now if only I could detect that pesky YouTube app! Then I really will have made my smart TV less dumb, but not yet truly smart (at least as far as I’m concerned).

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


My Road to Docker: Sorting Out SMTP

This post may contain affiliate links. Please see the disclaimer for more information.

This post is part of a series on this project. Here is the series so far:


Having said in my last ‘Road to Docker’ post that I didn’t have any current plans for another post, something came up which warrants a write-up here. I managed to solve the problem in question in a Dockerised fashion and I’m quite pleased with the solution. Let’s get into it…

The Backstory

After converting my web stack over to Docker recently, I had installed the WP Mail SMTP plugin in WordPress to handle mail from the site. This was required since WordPress could no longer send via a local mail setup. I configured the plugin to send via my existing mail server. This worked well for a while – then I encountered a speed bump on my road to Docker!

For some reason (I think due to an update of the WordPress container), I just stopped receiving email from the site. Upon investigation, it seemed that the TLS connection could not be established correctly. I got the following debug log when testing mail via WordPress:

Looking into the logs on the mail server, I found the following corresponding error:

I tried googling the error, but after trying a few of the suggested fixes, I gave up and decided to solve the problem a different way.

Fixing it, Take 1

My first attempt involved installing Postfix on the host in a smarthost configuration to my main server. This is the same setup as I use on most of my servers for system mail from cron, etc (via a custom Ansible role).

After getting the mail system running and able to send mail from the host, I tried to configure it in WordPress. However, I was unable to connect to the host machine from the container via the Docker host IP address. I investigated this and found that, because I was using a private network, the IP address was different from the standard Docker interface address.

Trying this address didn’t work either. Perhaps this is a security feature in Docker, or perhaps I was doing it wrong. Either way it pushed me on to a better solution.

Fixing it, Take 2

My next plan involved putting Postfix into a container. This would be put on the same private network as the WordPress container to allow access. I needed to keep the smarthost configuration to talk to the main mailserver. A quick search turned up a suitable image in the form of boky/postfix. This image is intended exactly for this purpose and I was able to set it up without too much trouble.

To spin this up I added the following to my previous docker-compose.yml file:
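The addition was something like this (the environment variable names come from the boky/postfix documentation; values live in env.sh as per the previous post):

```yaml
  mail:
    image: boky/postfix
    restart: always
    environment:
      # relay everything to the main mail server
      - RELAYHOST=${RELAYHOST}
      - RELAYHOST_USERNAME=${RELAYHOST_USERNAME}
      - RELAYHOST_PASSWORD=${RELAYHOST_PASSWORD}
      - ALLOWED_SENDER_DOMAINS=${ALLOWED_SENDER_DOMAINS}
    ports:
      # expose to the host for the native system mail, but only locally
      - "127.0.0.1:587:587"
```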

Pretty simple! As per the previous post I put all the secrets into an env.sh file to keep them separate from the stack.

The mail forwarder is available both on the internal Docker network and on the host system to replace the native mail forwarding setup. Setting this up with WordPress ended up being trivial (see screenshot below). However, I required some further configuration to make the mail on the host system work.

We just use the container name as the hostname and 587 as the port! Authentication and TLS aren’t required for the local connection.

MSMTP Setup

In order to redirect the system mail from the host via the Dockerised mail forwarder, I had to set up MSMTP. This is pretty effectively documented elsewhere, so I won’t go into details. The only differences in this setup are that we don’t require authentication or TLS to the mail forwarder because it’s only available locally. The mail forwarder itself is already handling authentication and TLS to the main mail server.

For reference, here is the msmtprc file I ended up with:
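It’s along these lines (the from address is a placeholder):

```
defaults
# no auth or TLS needed - the forwarder handles both upstream
auth off
tls off
syslog on

account local
host 127.0.0.1
port 587
from host@example.com

account default : local
```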

Conclusion

Here we’ve seen how to quickly deploy a mail forwarding server for your Dockerised applications. We’ve also configured the host system to use it, so we don’t need to run two forwarders.

I feel that this is the best solution to this problem (although I still don’t quite know what the original problem was!). It feels nicer than my first approach of running the mail forwarder natively on the host. It has also resulted in one more Dockerised application!

I’m intending to convert my other Docker servers over to this approach in the near future. For the system mail part, I still need to create an Ansible role to push out the MSMTP configuration. Actually, the whole question of Ansible/host configuration and how it fits with Dockerised services is still something I need to work out. If anyone has any ideas feel free to share in the comments.

As I said in the last post, I don’t have any more ‘Road to Docker’ posts planned in the immediate future. However, the migration is ongoing so there will be more at some point!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.