
Quick Project: Lovelace Multi-Room Audio Controller

This post may contain affiliate links. Please see the disclaimer for more information.

Welcome to a new segment on my blog! Here I’m going to do quick write-ups of some of the little projects that I complete. These projects often come between the bigger ones that I usually write about. This is going to be a bit of an experiment, but I’m aiming to publish these in addition to my usual blogging schedule. However, they will be done as time permits, so they may not be as regular.

I wanted to start doing posts like this, since I realised I do quite a few small projects which never make it onto the blog. Primarily this is because I just forget about them once they’re complete. Let’s see how it goes and please let me know if you like these posts in the comments or via Twitter.

Today’s Project: a Lovelace Multi-Room Audio Controller

This post is about a bit of playing around I was doing in Lovelace (the new Home Assistant UI) over the weekend. I started out installing the Home Assistant Community Store (HACS) and came across the Mini Media Player card. This struck me as the perfect thing for building better controls for my multi-room audio system. Without further ado, here is the finished Lovelace multi-room audio controller in all its glory:

Lovelace Multi-room Audio Controls
I love the beautifully minimalist look of this

Basically, we have controls for the main Mopidy music player, followed by volume controls for the overall Snapcast group and for the individual Snapcast clients. This is pretty much a copy of the layout in the Snapcast Android client, which however lacks any way to control the player.

Get to the YAML!

Below is the YAML I used to create my Lovelace multi-room audio controller. It should be noted that this can be entered through the GUI editor by switching to the raw editor; you don’t need to be using YAML mode.

This consists of a vertical stack card containing several mini media player cards, with the group setting set to true so that they nest nicely into what appears visually to be a single control panel. I then just use the hide parameters to get rid of the controls I don’t need on each one and add a custom icon for each. Done!
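As a rough sketch of that structure (the entity IDs and icons below are examples standing in for my own Mopidy and Snapcast entities):

```yaml
type: vertical-stack
cards:
  - type: custom:mini-media-player
    entity: media_player.mopidy             # main Mopidy player controls
    group: true
    artwork: cover                          # attempt at showing cover art (see note below)
    hide:
      volume: true
      power: true
  - type: custom:mini-media-player
    entity: media_player.snapcast_group     # overall Snapcast group volume
    group: true
    icon: mdi:speaker-multiple
    hide:
      controls: true
      power: true
      info: true
  - type: custom:mini-media-player
    entity: media_player.snapcast_client_living_room
    group: true
    icon: mdi:sofa
    hide:
      controls: true
      power: true
      info: true
  - type: custom:mini-media-player
    entity: media_player.snapcast_client_patio
    group: true
    icon: mdi:flower
    hide:
      controls: true
      power: true
      info: true
```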

Conclusion

I really like this layout. The only improvement I can think of would be if the cover artwork for the currently playing track was displayed. You can see I’ve tried this in the YAML above, but it didn’t work for me. This might be more of a limitation of the MPD integration in HASS than of mini media player.

That’s it for now. Again, please let me know if you like this post format and I’ll keep doing them in between my other posts.

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


My Road To Docker – Part 2: My Home Automation Stack

This post may contain affiliate links. Please see the disclaimer for more information.

This post is part of a series on this project. Here is the series so far:


In my first post of this series, I outlined my plan to convert my infrastructure over to a layered setup. This would consist of virtual machines (in various VLANs), with most of the services running in Docker. This post details the second stage of my road to Docker, although really it was the first stage, since I’m writing these out of order! I actually converted my home automation systems over to Docker before tackling the web stack.

The motivation behind upgrading the home automation system first was to do it at the same time as I did a large update to Home Assistant, since I’d been holding back on updating. The main reason for this was the switch to Lovelace as the default UI, which I was dreading. As it turned out, I waited long enough for the awesome HASS developers to make all my problems go away (or at least the Lovelace related ones).

System Summary

I’ve written about my home automation setup before, but here is a brief recap of what I’m running (only the server side stuff):

I had also been running InfluxDB and Grafana. However, something broke in my setup and I hadn’t got around to fixing it. I therefore decided to cut my losses with that and not reinstall it (for now).

Finding Docker Images

Luckily for me, the four main components of my system all have official/recommended Docker images available. This was useful as I’m always pretty reticent to use some questionably maintained image from the Docker Hub, mainly due to the lack of security updates. I also wanted to avoid building custom Docker images for now, until I work out a decent update strategy.

In addition to the four services above I wanted to run the ESPHome dashboard in order to manage my devices better. I had previously just been using the command line tool to build and upload to them. This also has an official Docker image.

Road to Docker Part 2
Looks like I have a few devices to update!

I also ended up running a MaryTTS container to replace the PicoTTS setup I had been using for my voice announcements, due to the lack of PicoTTS inside the HASS Docker image. It was recommended that I use the silversniper/marytts image. Looking at this image, it hasn’t been updated in three years, which reinforces my point about random images from Docker Hub. Luckily, this isn’t an externally facing application, so it isn’t too critical from a security standpoint. However, I think I’ll look into updating it at some point.

Stacking Containers, again…

I set up a new clean VM inside my home automation VLAN. I was a little hesitant about doing this, since it means everything in that VLAN (most of which is blocked from the internet) can see the full HA server. However, my main worry about not doing it was the mDNS used by ESPHome. If I can get Avahi/mDNS working across VLANs at some point, I will move it. I still have a big network re-organisation to do, so hopefully it will get done then.

The full docker-compose.yml file for my new stack is given below:
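A cut-down sketch of it, showing just the services discussed in this post (the host paths, image tags and the voice file location inside the MaryTTS container are examples; other services in the stack are omitted here):

```yaml
version: '3'

services:
  homeassistant:
    image: homeassistant/home-assistant:latest
    volumes:
      - /mnt/docker-data/home-assistant:/config
      - /etc/localtime:/etc/localtime:ro
    network_mode: host            # simplest way to keep discovery and integrations working
    restart: unless-stopped

  esphome:
    image: esphome/esphome:latest
    volumes:
      - /mnt/docker-data/esphome:/config
    network_mode: host            # needed for mDNS discovery of the devices
    restart: unless-stopped

  marytts:
    image: silversniper/marytts
    volumes:
      # mount the downloaded voice file into the container (the path inside the image is an example)
      - /mnt/docker-data/marytts/voice-cmu-slt-hsmm-5.2.jar:/marytts/lib/voice-cmu-slt-hsmm-5.2.jar:ro
    ports:
      - "59125:59125"             # MaryTTS default HTTP port
    restart: unless-stopped
```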

There’s nothing particularly earth shattering here. The main point of interest is that I mount the voice file for my preferred MaryTTS voice inside the container. Actually finding the voices is a little interesting. The official way to download them is a GUI tool that won’t run inside the container. I eventually found the XML file which lists all the available voices and extracted the URL of the one I wanted (the online demo helps to decide).

The only other parts worth noting are that I mount all my volumes under /mnt/docker-data, which is an NFS share onto the ZFS array of the virtual machine host. This then gets rolled into my normal backups. I also didn’t bother with a reverse proxy for any of this, since I already have one for HASS sitting in my DMZ (yet to be Dockerised). The other services just get accessed via the machine hostname and port since they are only used internally.

Sometimes Waiting Pays

I didn’t run into any issues particular to setting this up in Docker. At this point I think it’s a pretty well trodden path and this setup is pretty much standard. I did however run into several issues with upgrading to the latest Home Assistant.

First, let’s tackle the elephant in the room – Lovelace. I was really worried going into this that it was going to be a huge amount of work. My mind was somewhat put at rest by seeing the UI editor in action via misperry’s video. When I actually came to it, the migration process automatically re-created my existing UI pretty much perfectly. Lesson learned: there is something to be said for waiting for mature software, rather than jumping on the new shiny thing immediately!

Lovelace itself is awesome! The ease of configuration has made me actually focus on making my HASS UI nicer, rather than settling for the bare minimum I could get away with, as I had previously. In the screenshot below, you can see my new “Outdoors” panel. This contains weather information, outdoor-related sensor readings and a couple of local webcam views.

Road to Docker Part 2
I probably should have taken this screenshot during the day

Remaining Issues

Most of my remaining issues were due to the Home Assistant “Great Migration”. This resulted in a load of entity IDs changing in various components, which obviously meant I had to update my configuration to change all the names. It took a little while to troubleshoot, because if a changed name is used in an automation, the automation has to actually fire to cause an error. In many cases it also just won’t fire, if the name is used in the automation’s triggers.

The final major issue I encountered was with my frankly awesome vacuuming robot, which appeared to stop reacting to service calls in HASS. The underlying issue appears to be the Botvac D3 returning a different error message than the D7 that the library was tested with. So far this hasn’t been fixed, but I’m currently using the suggested category: 2 workaround and that’s working fine. I think I’ll have a look into fixing that issue and submitting a PR when I get time.

Managing Updates

Managing updates to Docker images has always been a bit of an issue for me. In the past I’ve used Watchtower with some success. However, due to the potential for breaking changes I want to manage HASS updates more carefully. It was suggested to me to just use a bash script which I can run periodically to do this. This isn’t something that had occurred to me before, probably because it’s so simple! Here’s the script I’m using:
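In sketch form (the path to the compose file is an example), it boils down to something like this:

```bash
#!/bin/bash
set -e

# Location of the docker-compose.yml for the home automation stack (example path)
cd /mnt/docker-data/compose/home-automation

# Pull the latest version of every image referenced in the compose file
docker-compose pull

# Recreate any containers whose image has changed; everything else is left alone
docker-compose up -d

# Optionally tidy up the old, now-unused images
docker image prune -f
```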

This works beautifully and allows me to easily keep up with releases of HASS and the other components, once I’ve verified that it’s reasonably safe to update.

Conclusion and Next Steps

Overall I’m pretty happy with how this move has turned out. Once the initial teething issues were all worked out the system has been very stable. I’m appreciating the extra utility of the ESPHome dashboard, which makes it very convenient to update my devices. It’s also great to be back on the latest version of Home Assistant.

In terms of next steps, I would like to give InfluxDB and Grafana another try. My main issue here has always been building the dashboards. It seems to be pretty tricky to get something both good looking and useful in Grafana. I also haven’t seen any pre-built dashboards for use with data from Home Assistant. Perhaps this is because they are so peculiar to individual setups.

I also have an LXD container running ZoneMinder. I’d like to re-deploy this as a Docker container on the same VM. Previously, I’ve not had too much luck running ZoneMinder in Docker. I’ll have to see if the situation has improved when I tackle this migration.

I’m not actively working on any further migrations of other services to Docker at the moment, so there will probably be a break in this series for now. However, given my current success I’ll definitely be continuing on with this migration. I just want to work on some other projects for a while!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Multi-Room Audio System: Indoor and Outdoor Audio with Snapcast and Mopidy

This post may contain affiliate links. Please see the disclaimer for more information.

One of the projects I really wanted to do when moving into our new house was build a multi-room audio system. Traditional multi-room audio systems, such as Sonos, cost a massive amount for the functionality they provide. It looks like a cheaper alternative is now available using Chromecasts, but you are still at the mercy of what the manufacturer wants to do (like discontinuing the Chromecast Audio).

In this post I’m going to detail my multi-room audio setup, which plays perfectly synced audio across three sets of speakers, both indoors and outdoors. This system is 100% DIY and uses Free Software throughout. It’s also cheaper than even a single Sonos speaker.

System Overview

My system comprises a central server running Mopidy and Snapcast (the snapserver portion) and three audio players, each running the Snapcast client (snapclient).

Two of the clients are resident on the Raspberry Pi systems we use for Kodi on our TVs. One of these is located in the Living Room and connects into our soundbar. The other is located in the master bedroom and currently just uses the TV speakers.

The third client is located on a Raspberry Pi in our loft space, which is connected via an amplifier to speakers mounted outdoors by our patio.

The parts list for this setup is as follows:

The Fusion speakers listed above are outdoor/marine rated and certainly seem fine in the New Zealand climate (warm humid summers, wet cool winters). They are definitely not the best speakers in the world (the price reflects that). However, the quality is sufficient for my application of background/work music in an outdoor environment.

Overall, the total cost for the components ordered for this project was less than NZ$250. This comes in at less than the price of a single Sonos speaker. I’ve not included the Raspberry Pis in this, since I already had them and only one was specifically installed for this project.

Software Setup

The software setup is pretty standard for this kind of project – basically just Mopidy feeding audio to Snapcast. As such, I’m not going to give a full installation guide, since there are plenty of resources available. Take a look at the links below for full instructions (these are the resources I used when setting this up):

Multi-Room Audio UI
The Web UI Via Iris

In terms of client/remote control software, I’m using Iris as a web interface for Mopidy. On the Android side I’m using M.A.L.P. as well as the Snapcast app. M.A.L.P. seems to be a reasonable MPD client and supports multiple servers, which may come in useful in future. The main issue I have with it is that it frequently gets the album art wrong and there seems to be no way to override its choices (or use the correct album art from the server).

Of course, I also have both Mopidy and Snapcast integrated with Home Assistant!

Outdoor Speaker Hardware Setup

So far, so easy. Here is where I ran into issues. I mounted the speakers to the brick wall of our house just fine, but ran into problems running the cables up through the roof space to the amplifier. This was mainly due to one speaker being on the corner of the house where the roof is low. In this corner the steel supports for the roof were too close together for me to squeeze through. Also, the level of the soffit where the cable came in was lower than ceiling height, so the soffit formed a well around the outside of the house. All this made it nearly impossible to grab the cable.

Multi-Room Audio Speakers
Left Speaker (the one with the tricky cable)
Multi-Room Audio Speakers
Right Speaker

I fashioned a makeshift tool from an old mop handle and a reacher-grabber, with a line attached to the grabber’s handle so that I could actuate it from the end of the pole. I even went as far as installing the Android IP Webcam app on an old phone and mounting that on the far end. With this I could view the image on my phone and use the light on the camera end to see better. This helped, until the battery on the phone died! Eventually I managed to grab the cable by pushing the whole length of it up through the soffit. The resulting bundle was much easier to grab.

Phew, now we’re getting somewhere…

Overall, getting the speakers installed took most of a day, with several hours spent lying on my front in the (hot) loft space trying to grab the cable. The provided speaker cables also had to be lengthened with some extra speaker cable from my local DIY store. Luckily I knew this before I installed them and didn’t have to pull them back.

Multi-Room Audio Client
The loft RPi with relays, sitting on top of the amplifier
Multi-Room Audio Client
The LED on the amplifier is unnecessarily bright

The remainder of the install was pretty much plug and play. I connected one of the USB soundcards to the Raspberry Pi and connected its output via an audio cable to the amplifier. I spliced the relay into the 12V power line between the power supply and the amplifier to allow me to remotely control its power. Both the RPi and the amp are powered from the mains sockets I previously had installed in the loft.

Node-RED Relay Control

Multi-Room Audio Power Control
Power Control is done via Home Assistant

As with the relay power control for my room sensors, I used Node-RED to turn the relay on and off via MQTT. The flow uses my Home Assistant MQTT Discovery approach to be automatically added to HASS. There’s not much to say about this since it’s pretty much identical to the setup for the room sensors. Here’s the flow:

Multi-Room Audio Power Control
The relay power control flow

And here’s the corresponding JSON:

I also have a couple of automations which I use to mute/unmute the relevant Snapclient when the speakers are turned off. My completely unfounded hypothesis is that Snapcast should be intelligent enough to not send any data to muted clients, which should reduce unnecessary traffic on the network. I’ve not done any investigation to verify this however. In any case, here are the automations:
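These are simple state-triggered automations; the switch and media player entity IDs below are examples rather than my exact ones:

```yaml
- alias: Mute outdoor snapclient when speakers are off
  trigger:
    - platform: state
      entity_id: switch.outdoor_speakers
      to: 'off'
  action:
    - service: media_player.volume_mute
      data:
        entity_id: media_player.snapcast_client_loft
        is_volume_muted: true

- alias: Unmute outdoor snapclient when speakers are on
  trigger:
    - platform: state
      entity_id: switch.outdoor_speakers
      to: 'on'
  action:
    - service: media_player.volume_mute
      data:
        entity_id: media_player.snapcast_client_loft
        is_volume_muted: false
```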

Indoor Setup with Libreelec and Kodi

It wouldn’t be a multi-room audio setup without multiple clients! So, on to the indoor systems. These are running on my two Libreelec systems connected to the TVs. The first of these is the most interesting, since it connects to our Polk Signa S2 soundbar. I’m actually planning a review of this in the near future, but for now we’ll just say it sounds awesome. I didn’t include it in the hardware list above since I didn’t purchase it just for this project.

I connected to the soundbar using the second USB soundcard and an audio cable. This means I can play audio without having the TV on, just by setting the soundbar to its AUX input. The other system, in the master bedroom, just sends audio via the HDMI port to the TV.

Multi-Room Audio on Libreelec
Settings of the Libreelec Snapclient Plug-in

On the software side of these I used the excellent Libreelec Snapclient plug-in. Just install it from the official Libreelec repo and you’re good to go. In order for the auto-discovery to work, you should make sure that the Snapserver and Libreelec machine are on the same network. The only other issue I had is that sometimes the ‘list sound cards’ dialog in the plugin settings wouldn’t work. I found it easier to just list the devices on the command line with snapclient -l and put the relevant device number into the addon settings.

Conclusion and Next Steps

Overall, this system is pretty great. It’s served us well for outdoor audio all through the summer and has become our primary way of listening to our music collection. There are a few rough edges, like the issues with album art on Android.

The other main point of complaint is the profusion of volume controls. This could be a separate rant altogether, since everything has its own volume control for some reason. For this system I just don’t touch the volume in Mopidy and use the individual channel controls in Snapcast. It looks like there is now a plugin to provide better integration here, but I haven’t tried it yet.

The next steps for this system will be to re-build the server side as part of my ongoing migration to Docker+VMs. At this point I’d like to add a couple more groups to the Snapserver. One of these will be for audio streamed in over Bluetooth. This will allow us to stream audio directly from our phones to any of the speakers in the house.

The second group will be for TTS notifications from Home Assistant. I know I can move channels between groups via HASS automations to decide where the audio goes. The main stumbling block on this at the moment is how to get the audio from the HASS server to the media server, which will be separate VMs. If anyone has any ideas here, please let me know!

That’s it for now. Thanks for reading!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.



Home Assistant Automation in Depth: Fusing Sensors Together for Stateful Automations

This post may contain affiliate links. Please see the disclaimer for more information.

This post is part of my “Home Assistant Automation in Depth” series. Here is the series index:

One of the most powerful things about Home Automation is being able to undertake many actions simultaneously, based upon a single input. For example, almost every automation system has the concept of scenes which allow you to set the state of multiple devices.

Just as powerful, if not more so, is the ability to perform an action based on multiple inputs. We see this in the Home Assistant automation language, which introduces the concepts of triggers and conditions. Triggers are the initial events that the automation reacts to. Conditions are extra inputs which should be in the correct state for the automation to proceed.

With these simple concepts, we can easily create automation routines which operate only at certain times, or are dependent on the state of the sun or even the moon (if you happen to be a werewolf). However, how do we track the state of something that happened, given that its current state may have changed?

State and Events

As a brief aside, we’ll discuss the difference between the state of something and events which occur. I’ll discuss this in general computing terms as well as within HASS.

State is data which describes the current situation. For example if the temperature is 24°C, then the state of the temperature entity should have a numeric value of 24 and a unit of Celsius. In Home Assistant states come in two varieties, the primary state and attributes. The primary state is generally just referred to as the state of the entity. Attributes are more like ancillary data, or metadata, but can be just as useful as the main state.

Events are things that happen. As such they are transient and only exist at the exact point in time at which they occur. Home Assistant automations can be triggered when something changes state, when a particular time occurs, when a message is received, and in many other ways. Even an entity being in a particular state for a certain length of time can be considered an event. See the HASS triggers documentation for more details. It should be noted that all the items on which an automation can trigger are events in the general computing sense, not just those handled by the ‘event’ trigger.

A system is ‘event driven’ if it primarily reacts to events occurring in the environment rather than polling the state of the environment. In this way, Home Assistant automations can be seen as an event driven system, since they are primarily triggered by events.

Just get on with it!

Okay, okay.

Enough with the computer science lesson. How is this relevant to triggering an automation when multiple things happen? Well let me describe a situation:

You get up in the middle of the night, open the bedroom door and walk down the hall. You go into the bathroom, where the motion detector senses your presence and helpfully turns the light on. At 100% brightness. Temporarily blinded, you then proceed to do whatever it was that got you out of bed.

The light controlled by Home Assistant automation
A very bad picture of the light in question.

Now, I can guess what you are saying: why not just adjust the brightness based on the time of day? Well, we could, but what happens when we have multiple people in the house, some of whom are awake and some asleep?

[For those that are wondering, the light pictured above is a Mi-Light RGB-CCT downlight. I think that link is to the correct one, but I can’t be sure as I bought mine from LimitlessLED before they closed down. Being an NZ company they were able to provide the documentation for my electrician to install these. I had several of these installed when the house was built, but I actually wish I’d had the whole house done with them.]

Sensor Fusion

Hopefully, we have more than just the time of day and the motion event to go on. With the help of another sensor (or set of sensors) we can integrate the data together and not blind anyone. It should be noted that this isn’t really sensor fusion in the strict mathematical sense! However, the definition fits quite well and I like the name!

In my case the second source of data is my zigbee door sensors. One of these senses the state of the bedroom door and the other senses the state of the kitchen door to the main living space. The logic is simple: if the bedroom door was opened, the light comes on dim when motion is detected; if the kitchen door was opened, the light comes on bright.

Here’s where the difference between state and events becomes important. If I close the door behind me, the system cannot determine the correct door from the current state alone, because either neither door is open, or the wrong one is (if it was left open). It turns out the event of the door opening is the important part, not the actual state at the time of the motion event.

We need to convert the door opening event into a state which we can store somewhere else. Luckily, we can do this easily in Home Assistant using an input_select entity.

Finally, some YAML

Here, we create an input_select entity which will store the last opened door. Note, you can extend this to store more states. For my purposes I only need to differentiate between bedrooms and kitchen.
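Something along these lines (the entity name is an example):

```yaml
input_select:
  last_door:
    name: Last Door Opened
    options:
      - Bedroom
      - Kitchen
    initial: Bedroom
    icon: mdi:door
```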

An automation is used to catch the door open events and translate that to the correct state of the input_select:
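A sketch of that automation, with example entity IDs for the door contact sensors:

```yaml
- alias: Record last door opened
  trigger:
    # one state trigger per door contact sensor
    - platform: state
      entity_id: binary_sensor.bedroom_door
      to: 'on'
    - platform: state
      entity_id: binary_sensor.kitchen_door
      to: 'on'
  action:
    - service: input_select.select_option
      data_template:
        entity_id: input_select.last_door
        # compare the triggering entity against the kitchen door sensor
        option: >-
          {{ 'Kitchen' if trigger.entity_id == 'binary_sensor.kitchen_door' else 'Bedroom' }}
```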

I’ll walk through this automation step by step:

  • First we have our triggers, one for each door contact sensor we have. Obviously you can add as many of these as you want. For now I just have two.
  • There are no conditions, the automation just runs whenever triggered.
  • We call a single action which calls the select_option service of our input_select entity.
  • A simple template is used to compare the triggering entity ID (i.e. which contact sensor triggered the automation) with the entity ID of the kitchen door sensor. If it matches, we set the option to ‘Kitchen’; otherwise we set it to ‘Bedroom’. In this way, this automation will scale to multiple bedrooms, but can only have one kitchen. If you have multiple kitchens (!) or other relevant rooms, you’ll need a more complex template.

Stateful Home Assistant Automation

Next comes our automation to control the light. This automation is stateful, in that it executes differently depending on the state of the input_select we just set:
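A sketch of it is below; the motion sensor, light and timer entity IDs are examples, and the times and offsets match the walkthrough that follows:

```yaml
- alias: Bathroom motion light
  trigger:
    - platform: state
      entity_id: binary_sensor.bathroom_motion
      to: 'on'
  condition:
    - condition: or
      conditions:
        - condition: sun
          after: sunset
          after_offset: '-00:20:00'   # from 20 minutes before sunset
        - condition: sun
          before: sunrise
          before_offset: '00:20:00'   # until 20 minutes after sunrise
  action:
    # (re)start the timer used to turn the light off again later
    - service: timer.start
      entity_id: timer.bathroom_light
    # only continue if the light is currently off, so its brightness isn't changed later
    - condition: state
      entity_id: light.bathroom
      state: 'off'
    - service: light.turn_on
      data_template:
        entity_id: light.bathroom
        brightness_pct: >-
          {% if now().hour >= 20 or now().hour < 7 %}
            {{ 1 if is_state('input_select.last_door', 'Bedroom') else 100 }}
          {% else %}
            100
          {% endif %}
```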

Breaking it down

Again, I’ll go through the automation step by step:

  • First we have our trigger, in this case from our motion sensor.
  • Next we have a couple of conditions to only run the automation when it’s dark enough. In this case, from 20 minutes before sunset until 20 minutes after sunrise. We wrap these in an or block so that only one has to match.
  • Now we get to the actions. The first of these is to start a timer, which will be used later for turning off the light. For completeness, the YAML for the timer is shown just after this list.
  • The next action is actually another condition, which basically matches only if the light is already off. If this is not matched the action block will terminate here. This condition ensures that the brightness of the light does not get changed if the door state changes. Placing this after the timer start also ensures that the timer is restarted by motion events in the bathroom.
  • The final action is obviously to turn the light on. The interesting part is the template logic. This contains a nested if block, the first part of which checks against some pre-defined times where we only want the light bright (before 8pm and after 7am). This is mostly useful in winter, since the sunset and sunrise rules will prevent the automation from running before/after these times in summer.
  • The inner if statement is where we check our door state. Here we check whether the last door was to a bedroom. In that case we set the brightness to 1% (plenty bright enough for night time use on these lights). Otherwise we go to full brightness.
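For reference, the timer definition mentioned above looks something like this (the name and duration are examples):

```yaml
timer:
  bathroom_light:
    name: Bathroom Light Timer
    duration: '00:05:00'
```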

Finishing It Off

The final piece of the puzzle is turning off the light when the timer expires:
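A sketch, using the same example timer and light entities as above:

```yaml
- alias: Bathroom light off when timer expires
  trigger:
    # catch the timer.finished event directly via the event platform
    - platform: event
      event_type: timer.finished
      event_data:
        entity_id: timer.bathroom_light
  action:
    - service: light.turn_off
      entity_id: light.bathroom
```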

This is reasonably self explanatory. The only odd looking bit is in the trigger, where we must use the event platform to directly catch the timer.finished event, since the timer doesn’t have its own trigger type.

Conclusion

Phew! Hopefully you got this far and didn’t get lost in the weeds of the states vs events stuff!

The automations shown here have been running for several months pretty much flawlessly. I’ve given them some minor tweaks, such as introducing the outer if statement to check the hours, once winter came around and it became apparent that it was needed.

The usual complaint is that motion sensing lights tend to turn off when people are still in the room but not moving. Since the motion sensor in this case is so sensitive, we’ve not had much of an issue with this; it will trigger even on the slightest motion. Interestingly, it also seems not to false trigger, so I think the balance is just right.

Thanks for following along! I don’t have any more plans for another Home Assistant Automation in Depth article, but I’m sure there will be another installment in the future. I just need to write some more interesting automations.

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Self Hosted Push Notifications with Gotify and Home Assistant

This post may contain affiliate links. Please see the disclaimer for more information.

Notifications, that bane of modern existence! Most people only have to deal with getting too many. However, if you’re running a smarthome or any other kind of moderately complex computer setup you need to decide how you are going to send and receive them too. Many notification systems rely on “trusted” third parties (a.k.a Apple or Google) to handle the delivery of notifications through to the current communications device of choice – the smartphone. Of course, this breaks my fully self hosted ethos and is to be avoided. Luckily it’s now possible to achieve self hosted push notifications with Gotify.

The Backstory

Pretty much since starting my smarthome journey (and really even before), I’ve had trouble with notifications. For a long time email was the go-to solution – especially with my nicely self hosted email setup. Then I tried XMPP, then Rocket.Chat (I even wrote the Rocket.Chat notification platform for Home Assistant). There were probably a few more notification systems that I tried and definitely many more that I looked into. Nothing really stuck. Most were too complex to set up and maintain for the benefit that they provided. I mean, who really wants to run a whole chat server just for sending a few notifications?

Then Home Assistant implemented HTML5 notifications. These were cool, but not without downsides. For a start the full capabilities of the platform are only really supported by Chrome on Android. The notifications also go through – you guessed it – Google. However, the notification content is end-to-end encrypted between the HASS instance and the device. You can also do cool things like including images inline and having action buttons to press (actionable notifications).

If I’m honest, HTML5 notifications never really fit into the easy-to-set-up basket either. The setup process is quite involved and requires creating a project via Google Cloud Services and verifying that you own the domain in question. However, once I got them working they worked well for quite some time.

Why can’t you get this right, Google?

After a while things started to have problems. First, I would quite often get delayed notifications. Sometimes to the point where the notification would not come through until you picked up and unlocked the phone. This negates the point of the notification entirely! Why is it that my self hosted email server can push messages in real time to k9mail running on the same phone, but Google’s own system has issues? I thought GCM/FCM was made of magic that simultaneously allowed it to be more reliable than anything else and consume no battery!?! /s

The next nail in the coffin was Google’s incessant pestering and breaking of things. First they wanted me to set up payment details, even though they weren’t going to charge me (why?). Then they said they were shutting down the GCM APIs used by my (admittedly somewhat outdated) version of HASS on May 29th this year. I assume that I could have fixed that by updating HASS (which I have since done). However, by this point I’d had enough and shut down the whole thing.

I initially fell back to SMTP/email notifications from HASS, which I still had running for a few lower priority things. However, I was on the lookout for a replacement. I’d already heard of Gotify via /r/selfhosted, so I decided to give it a try. Since some of my other projects are starting to pay off and my smarthome is getting smarter, having a reliable notification system is becoming more pressing for me.

Deploying the Server

My ideal notification system would just use MQTT to push notifications to an app running on my phone. This wouldn’t require me to set up anything else but the app since I have everything else to support that. However, the designers of Gotify decided to use Websockets so an extra piece of server software is required.

Luckily, this software is written in Go (hence the project name). It also comes in a handy Docker container for easy deployment. Being written in Go makes it fast and means it consumes barely any resources.

One consideration when deploying this is that you probably want it to be somewhere externally accessible, so that your phone can connect to it when not on your wifi. I installed it on an already accessible host that runs a few other dockerised services. I followed the official instructions, but came up with this to add to my docker-compose file for that server:
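Roughly, the addition (dropped under the services key of that server’s existing compose file) looks like this; the host data path and published port are examples:

```yaml
  gotify:
    image: gotify/server
    volumes:
      - /mnt/docker-data/gotify:/app/data   # Gotify keeps its database and images here
    ports:
      - "8090:80"                           # the app listens on port 80 inside the container
    restart: unless-stopped
```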

Well, that was easy.

Further configuration can be accomplished via config file or environment variables. However, I found the default settings to be fine for me.

Further Setup

I also needed to set up my reverse proxy to route requests through and set up TLS via Let’s Encrypt. I’m not going to go through that here. There are instructions (for the Gotify part) for nginx and apache available. Also, if you’ve already set up TLS for HASS then you can follow the same process. The Gotify app will show large warnings if you don’t use TLS. However, it will allow it so you don’t need to do this if you are only doing a bit of local testing.

Self Hosted Push Notifications Server UI
Gotify has a nice web interface, for configuration and receiving/viewing notifications on the desktop

After that I installed the app from F-Droid and added an exception in the battery optimisation page of my Android phone settings. This is different on every phone, but you need to make sure Gotify is listed as “Not optimised”. If you don’t do this Android will kill the app during sleep and you won’t receive notifications. For those that are going to bang on about how this will give you horrible battery life, I haven’t noticed a difference. Admittedly, I was already running a few apps unoptimised, such as k9mail and OwnTracks.

Setting up an Application

Before we can send notifications we need to create an application on the Gotify server. Applications map to individual streams of notifications on the recipient devices. One minor issue is that (as of the time of writing) applications are user specific; there is no way to share an application between users. This isn’t such an issue for us, since we will need to set up individual notification platforms in HASS for each user anyway.

Self Hosted Push Notifications Application Setup
Our Application Screen

I set up my application as “Home Assistant” (surprise, surprise). I also uploaded the HASS logo which will be displayed in the notifications. Once the application is configured you will be given a secret token/key that can be used to send notifications via the REST API. You’ll need to copy this for use later.

Configuring Home Assistant

I played around for quite a while sending notifications with cURL as per the documentation, and also some more complex messages via the RESTED Firefox addon. However, I’m going to skip straight to how to integrate this with Home Assistant, since that’s probably why you’re here!

Gotify has a simple REST API for sending notifications. Therefore we can use the REST notification platform in HASS to integrate it without a custom component:
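A sketch of the configuration (the server URL and the secret name are placeholders for your own):

```yaml
- platform: rest
  name: gotify_1
  resource: https://gotify.example.com/message
  method: POST_JSON
  headers:
    X-Gotify-Key: !secret gotify_app_token   # the application token generated earlier
  message_param_name: message
  title_param_name: title
```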

This goes wherever you have your other notification platforms set up, for me this is in my notify.yaml file. After restarting HASS you should have the notify.gotify_1 service available. The reason for numbering it is that we will need more notification services to extend this to other users. You’ll need to update the resource key to the URL of your Gotify server and the X-Gotify-Key header value to the key you generated for your application earlier (which I recommend keeping in your secrets.yaml file, as I’m doing).

Sending Notifications

Sending self hosted push notifications
Sending a notification from Home Assistant

We should now be able to send notifications via the services developer tool in Home Assistant. It should be noted that you need to encode the data to send in JSON here for it to work. The data should contain ‘title’ and ‘message’ fields, exactly the same as any other notification platform. For example:
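With notify.gotify_1 selected as the service, something like this will do (the content is just an example):

```json
{
  "title": "Test notification",
  "message": "Hello from Home Assistant!"
}
```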

Once you hit the ‘CALL SERVICE’ button, you should immediately see a notification on your phone from Gotify:

Self hosted push notification received
Receiving our first notification

Again, that was easy (wasn’t it?).

Advanced Notifications

So far, we can send simple text notifications from Home Assistant via Gotify. However, Gotify also supports sending markdown formatted notifications, which opens up many more options.

To configure this, we edit our REST notification platform to the following:
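That is, the same platform entry as before with an extras block added (again, the URL and secret name are placeholders):

```yaml
- platform: rest
  name: gotify_1
  resource: https://gotify.example.com/message
  method: POST_JSON
  headers:
    X-Gotify-Key: !secret gotify_app_token
  message_param_name: message
  title_param_name: title
  data:
    extras:
      "client::display":
        contentType: text/markdown   # tells the Gotify client to render the message as markdown
```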

Don’t forget to update the resource and X-Gotify-Key values as before. This updated configuration adds some extra data as per the Gotify documentation to indicate that the message payload should be rendered as markdown.

So let’s send a markdown formatted message. In the services dev tool again, select your notification service and use the following data:
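For instance (the content is just an example):

```json
{
  "title": "Markdown test",
  "message": "Some **bold text**, some *italics* and a [link](https://example.com)"
}
```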

And you should get:

Self hosted push notifications with markdown
Yay markdown!

Neat.

Using that for Something Useful

Markdown formatting is all well and good, but not all that useful just for making silly (but well formatted) messages. We’d like to actually put it to good use.

I’ve thought about including links in various notifications which could be used to trigger a Home Assistant webhook. This could then perform some action, but I’m still not convinced on the usability of it and haven’t had a chance to try it out.

One very useful option is including an image in the notification. This is particularly interesting if this image could come from a camera in Home Assistant. As it turns out this is relatively easy thanks to some minor templating:
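Used from an automation or script, the action looks something like this; the base URL and camera entity are examples:

```yaml
action:
  - service: notify.gotify_1
    data_template:
      title: Beach Webcam
      message: >-
        ![Beach webcam](https://hass.example.com{{ state_attr('camera.beach_webcam', 'entity_picture') }})
```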

Here I’m including the latest image from a local beach webcam that I have set up in HASS as a generic IP camera. All we need to do is use the entity_picture attribute to get the path of the image on the HASS server and join it to the base URL to build our image source. The resulting message is shown below:

Self Hosted Push Notifications with images
Looks like a nice day down at the beach (even though it’s winter down here)

Obviously, this could be very useful for security alerts, etc.

Conclusion

Overall, I’m pretty impressed with Gotify. Although the project is still young, it works as advertised and I haven’t had any functional issues. There are a few rough edges, but no showstoppers. I’m looking forward to seeing the feature set improve over time. I’d particularly like to see actionable notifications, which would set it up as a full alternative to HTML5 notifications in HASS.

I’ve been able to integrate Gotify into HASS up to the level of its current feature set. This is thanks to the well thought out REST API and the flexibility of the REST notification platform in HASS. So far I haven’t needed a custom/official component. As the Gotify API becomes more featureful, it’s likely that a component will be needed in order to unlock its full potential. However, as it stands, the REST notification platform works just fine.

I’ve had no problems so far with delayed or missed notifications – which is better than Google can do! That in itself is an achievement the Gotify developers should be proud of.

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.