Home Assistant MQTT Discovery Sensors in Node-RED

Alternatively Titled: How I Made Home Assistant Aware of the Volcano Next Door

Mt. Taranaki

If this guy blows, we’re gonna have a bad day

As I’ve previously mentioned, I’m a big fan of the Home Assistant MQTT Discovery feature. I’ve also historically been a fan of Node-RED and have recently been getting back into it, mostly due to the uptick in interest in the platform in the HASS community. So, I decided to have a play around and come up with an implementation of an auto-discovered MQTT sensor in Node-RED. This post documents using this approach to pull some interesting data into Home Assistant.

Since moving to a different part of New Zealand last year I’ve wanted to implement a sensor in HASS which would monitor the state of the local volcano. Luckily, GeoNet provide a nice API for getting volcanic alert levels for all the volcanic fields in NZ. I was initially going to write a custom component for this (and at some point contribute it back), but being even shorter on time than usual at the moment I never quite got there. That was until I was playing around with Node-RED and had a brainwave.

The Flow

I’m going to cut straight to the chase and show a screenshot of the flow I came up with. I’ll then explain it below (the flow JSON can be found later in the post):

The full volcano data flow

The start of the flow is pretty basic – a simple inject node which injects a timestamp every 6 hours. The payload to this is irrelevant since it’s just used to kick off the flow. I didn’t want to hit the API endpoint too often since I’ve so far never seen the data change. If the mountain suddenly goes boom, I think I’ll have more pressing issues than whether my data is up to date.

Next, we have the HTTP Request node which goes out and performs a GET request to the URL given in the API documentation above. I enabled TLS support and opted to get the response data back as a parsed JSON object.

Filtering Data

Since the API returns data for all the volcanic fields in New Zealand, I needed to filter the data. The next node just selects the Taranaki/Egmont field that I am interested in. I used the following code in a function node to do this:

for (var i = 0; i < msg.payload.features.length; i++) {
    var feature = msg.payload.features[i];
    if (feature.properties.volcanoID == "taranakiegmont") {
        // Use the matching feature's properties as the new payload
        msg.payload = feature.properties;
        // Build the state topic for the MQTT publish further down the flow
        msg.topic = "homeassistant/sensor/volcano_" + msg.payload.volcanoID + "/state";
        break;
    }
}
return msg;

Basically this just iterates over all the features in the data, finds the one with the ID taranakiegmont and substitutes its properties in as the message payload. I also build the topic for the subsequent MQTT publish based on the volcano ID. Note the return msg; at the end – without it the function node sends nothing.

The output of this function branches to another function node on one branch and a delay node on the other. The delay node is there to make sure that the function node above it runs and sends its output before the original message passes to the MQTT publish node.

Building HASS Configuration

The top function node is responsible for building the required configuration payloads and topics for the three sensors this flow will create via Home Assistant MQTT Discovery – one sensor for each of the quantities in the data from the API. This is achieved with the following snippet of code:

// Build one discovery config message per sensor; all three share the same state topic
var config1 = {
    payload: {
        name: msg.payload.volcanoTitle + " Activity Level",
        state_topic: "homeassistant/sensor/volcano_" + msg.payload.volcanoID + "/state",
        value_template: "{{ value_json.level }}"
    },
    topic: "homeassistant/sensor/volcano_" + msg.payload.volcanoID + "_level/config"
};
var config2 = {
    payload: {
        name: msg.payload.volcanoTitle + " Activity Description",
        state_topic: "homeassistant/sensor/volcano_" + msg.payload.volcanoID + "/state",
        value_template: "{{ value_json.activity }}"
    },
    topic: "homeassistant/sensor/volcano_" + msg.payload.volcanoID + "_activity/config"
};
var config3 = {
    payload: {
        name: msg.payload.volcanoTitle + " Hazards",
        state_topic: "homeassistant/sensor/volcano_" + msg.payload.volcanoID + "/state",
        value_template: "{{ value_json.hazards }}"
    },
    topic: "homeassistant/sensor/volcano_" + msg.payload.volcanoID + "_hazards/config"
};
return [config1, config2, config3];

This builds three new message objects, with a payload and config topic for each. I use the ability of HASS to grab data from the payload of the main state publish by specifying the state topic, which I set to the topic built in the previous function node. A value template is also specified for each sensor, pretty much exactly as in the Home Assistant MQTT Discovery documentation.
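
For reference, the state payload published to the shared state topic is just the feature’s properties object from the API, which is what the value templates above pick apart. Sketched out (and trimmed to just the keys used here), it looks something like this – the ID and title are real, but the remaining values are placeholders rather than genuine API output:

{
    "volcanoID": "taranakiegmont",
    "volcanoTitle": "Taranaki/Egmont",
    "level": 0,
    "activity": "...",
    "hazards": "..."
}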

Output Via MQTT

All three outputs of this node are passed to the MQTT publish node, which publishes with QoS 2 and the retain flag set. This means that whenever Home Assistant comes up after a restart it will see the values in both the configuration and state topics for these sensors and re-create them automatically.

Attentive readers will also have noticed that I publish the configuration messages whenever I publish the state (every 6 hours). This doesn’t matter, as HASS will just ignore configuration messages for sensors which it has already discovered.
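
If you want to see what’s actually hitting the broker, subscribing to the discovery prefix with mosquitto_sub makes a quick sanity check (this assumes the broker is on localhost, as in my flow):

$ mosquitto_sub -h localhost -v -t "homeassistant/sensor/#"

The -v flag prints the topic alongside each payload, so you should see the three retained config messages as well as the state message.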

So, that’s it. With this in place the sensors should appear in Home Assistant:

Home Assistant Volcano Sensors

Note the reassuring zero for activity level!

The JSON:

As promised, here is the full JSON for the flow. To add this to your Node-RED instance copy it to your clipboard and go to Hamburger->Import->Clipboard in Node-RED and paste the JSON. You can select whether to import to the current flow or a new flow and then hit ‘import’ and you should see the nodes:

[{"id":"1f3ef70f.e7a6b9","type":"http request","z":"9b7b48a9.a28de8","name":"Get Volcano Data","method":"GET","ret":"obj","url":"https://api.geonet.org.nz/volcano/val","tls":"49e1f229.3ce5f4","x":330,"y":120,"wires":[["37e97f8a.c207e8"]]},{"id":"ce39e789.a300f","type":"inject","z":"9b7b48a9.a28de8","name":"Every 6 hours","topic":"","payload":"","payloadType":"date","repeat":"21600","crontab":"","once":false,"onceDelay":0.1,"x":140,"y":120,"wires":[["1f3ef70f.e7a6b9"]]},{"id":"37e97f8a.c207e8","type":"function","z":"9b7b48a9.a28de8","name":"Filter for Taranaki","func":"for(i in msg.payload.features) {\n    var feature = msg.payload.features[i];\n    if(feature.properties.volcanoID==\"taranakiegmont\") {\n        msg.payload = feature.properties;\n        msg.topic = \"homeassistant/sensor/volcano_\" + msg.payload.volcanoID + \"/state\";\n        break;\n    }\n}\nreturn msg;","outputs":1,"noerr":0,"x":530,"y":120,"wires":[["ea23e4a2.eaa928","1ca7b72e.202389"]]},{"id":"104d115c.305927","type":"mqtt out","z":"9b7b48a9.a28de8","name":"Send Messages","topic":"","qos":"2","retain":"true","broker":"d76a3146.667c3","x":1020,"y":120,"wires":[]},{"id":"ea23e4a2.eaa928","type":"function","z":"9b7b48a9.a28de8","name":"Format config messages","func":"var config1 = {\n    payload: {\n        name: msg.payload.volcanoTitle + \" Activity Level\",\n        state_topic: \"homeassistant/sensor/volcano_\" + msg.payload.volcanoID + \"/state\",\n        value_template: \"{{ value_json.level }}\"\n    },\n    topic: \"homeassistant/sensor/volcano_\" + msg.payload.volcanoID + \"_level/config\"\n};\nvar config2 = {\n    payload: {\n        name: msg.payload.volcanoTitle + \" Activity Description\",\n        state_topic: \"homeassistant/sensor/volcano_\" + msg.payload.volcanoID + \"/state\",\n        value_template: \"{{ value_json.activity }}\"\n    },\n    topic: \"homeassistant/sensor/volcano_\" + msg.payload.volcanoID + \"_activity/config\"\n};\nvar config3 = {\n    payload: {\n        name: msg.payload.volcanoTitle + \" Hazards\",\n        state_topic: \"homeassistant/sensor/volcano_\" + msg.payload.volcanoID + \"/state\",\n        value_template: \"{{ value_json.hazards }}\"\n    },\n    topic: \"homeassistant/sensor/volcano_\" + msg.payload.volcanoID + \"_hazards/config\"\n};\nreturn [config1, config2, config3];","outputs":3,"noerr":0,"x":780,"y":60,"wires":[["104d115c.305927"],["104d115c.305927"],["104d115c.305927"]]},{"id":"1ca7b72e.202389","type":"delay","z":"9b7b48a9.a28de8","name":"","pauseType":"delay","timeout":"1","timeoutUnits":"seconds","rate":"1","nbRateUnits":"1","rateUnits":"second","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":false,"x":760,"y":120,"wires":[["104d115c.305927"]]},{"id":"49e1f229.3ce5f4","type":"tls-config","z":"","name":"Standard","cert":"","key":"","ca":"","certname":"","keyname":"","caname":"","verifyservercert":true},{"id":"d76a3146.667c3","type":"mqtt-broker","z":"","name":"Home Broker","broker":"localhost","port":"1883","clientid":"","usetls":false,"compatmode":true,"keepalive":"60","cleansession":true,"willTopic":"","willQos":"0","willPayload":"","birthTopic":"","birthQos":"0","birthPayload":""}]

If you are importing this directly, you will need to configure your MQTT broker settings under the MQTT publish node before hitting ‘deploy’.

Wrap Up

That’s pretty much all there is to it. I hope this has demonstrated the concept of using Node-RED to create sensors in Home Assistant, without any changes to the HASS configuration. The flow presented is pretty simple but actually serves a useful purpose. Hopefully, you can come up with some uses of your own for this approach. Please feel free to share them in the comments below if you do, so that others may benefit from your ideas.

Thanks for reading. I’m working on a few more things with Node-RED so hopefully I’ll post about them soon. Bye!

Simple Automated Video Transcoding…

This post has been sitting in my drafts since 2012. I still use the system described below (occasionally) and it seemed a shame not to post the approach for others to use.

I had a ton of videos to transcode for storage on my home server/mythbox. My tool of choice has been HandBrake, specifically the CLI version. As I had a fair few videos to get through I wanted to set up an automated system for doing the transcoding. Here is the simple bash script I came up with:

#!/bin/bash
# bash (not plain sh) is required for the array syntax used below

HANDBRAKE_OPTS="-e x264 -q 20.0 -a 1 -E ac3 -B 160 -6 auto -R Auto,Auto -D 0.0 -f mkv --detelecine --decomb --loose-anamorphic -m -2 -T -x b-adapt=2:rc-lookahead=50"
MAILTO=me@webworxshop.com
MAILFROM=transcoder@webworxshop.com
BASEDIR=/mnt/media/transcoding
#BASEDIR=.
INDIR=$BASEDIR/queue
OUTDIR=$BASEDIR/mkvs
DONEDIR=$BASEDIR/done

if [ -e $BASEDIR/.lock ]
then
        echo "Lock file exists, exiting."
        exit 1
fi
touch $BASEDIR/.lock

echo $INDIR
for dir in $(find $INDIR/ -type d | grep VIDEO_TS); do

        # Split the path on "/" - with the BASEDIR above, element 4 is the title directory
        toks=(`echo $dir | tr "/" "\n"`)
        HandBrakeCLI -i $dir --main-feature -o $OUTDIR/${toks[4]}.mkv $HANDBRAKE_OPTS
        mv $INDIR/${toks[4]} $DONEDIR/${toks[4]}

        # send the email
        echo "To: $MAILTO" > msg.txt
        echo "From: $MAILFROM" >> msg.txt
        echo "Subject: Transcode Complete" >> msg.txt
        echo "" >> msg.txt
        echo "Transcoding ${toks[4]} completed sucessfully!" >> msg.txt
        cat msg.txt | msmtp $MAILTO
        rm msg.txt

        rm $BASEDIR/.lock

        exit 0;
done

# If we get here the queue was empty - remove the lock so future runs aren't blocked
rm $BASEDIR/.lock

The script requires that you set it up in a directory (/mnt/media/transcoding on my system) with three sub-directories (queue, mkvs and done). The script will transcode one video from the queue directory on each run. The idea is that the script should be run from cron several times a day, at times when your machine isn’t doing very much else (I run mine overnight and when I’m at work). When the script is done it will send you an email and move the source file into the done directory. The transcoded file will be dropped into the mkvs directory. Obviously, the script will just exit if there’s nothing in the queue. This means that once the system is set up, all you have to do is drop new videos into the queue directory and they will be transcoded automatically.
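
For completeness, a crontab entry along these lines will do the job (this assumes you’ve saved the script as transcode.sh inside the transcoding directory – the name and times are just a suggestion):

0 1,9 * * * /mnt/media/transcoding/transcode.sh

Because the script exits early if the lock file exists or the queue is empty, it’s safe to schedule runs fairly liberally.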

Let me know if you find this useful, or suggest improvements in the comments.

Monitor Dynamic DNS Status with Nagios

For anyone running services on their home network, a Dynamic DNS setup is a must-have. But what happens when your Dynamic DNS client fails to update one day, just as you’re going on a trip, and you end up locked out of your network? If you’re running Nagios as your monitoring solution then you can easily detect this situation. This post will show you how, and provide a Nagios plugin for doing just that.

The basic idea is to compare the DNS result for your local network FQDN with your external IP address. To retrieve our external address we use a third-party service which, being outside our network, can see our external IP. In my case I use ifconfig.co, which conveniently can return its result as JSON for easy consumption by any number of tools. DNS lookup of our FQDN is provided by the Python socket.gethostbyname function. This gives us two addresses which, if everything is working, will be identical. If our Dynamic DNS client is having issues, the addresses will be different.

Anyway, on to the code (we’re going to need the Python requests module, so install it with pip install requests):

#!/usr/bin/env python3

import requests
import socket
import sys
import argparse

STATUS_OK = 0
STATUS_WARNING = 1
STATUS_CRITICAL = 2
STATUS_UNKNOWN = 3

def format_output(status, msg):
    if status == STATUS_OK:
        print("OK - {}".format(msg))
    elif status == STATUS_WARNING:
        print("WARNING - {}".format(msg))
    elif status == STATUS_CRITICAL:
        print("CRITICAL - {}".format(msg))
    elif status == STATUS_UNKNOWN:
        print("UNKNOWN - {}".format(msg))
    sys.exit(status)

def main():
    parser = argparse.ArgumentParser(description="Nagios plugin to check that external IP matches DNS IP")
    parser.add_argument('-H', '--hostname', metavar='HOST', required=True,
        help="DNS hostname to look up")
    args = parser.parse_args()

    req = requests.get("https://ifconfig.co/json", timeout=10)
    if req.status_code != 200:
        format_output(STATUS_UNKNOWN, "Unable to determine external IP, status code = {}".format(req.status_code))

    ext_ip = req.json()['ip']
    # Treat DNS lookup failures as UNKNOWN rather than crashing with a traceback
    try:
        dns_ip = socket.gethostbyname(args.hostname)
    except socket.gaierror as e:
        format_output(STATUS_UNKNOWN, "DNS lookup failed: {}".format(e))

    if ext_ip != dns_ip:
        format_output(STATUS_CRITICAL, "DNS IP ({}) does not match external IP ({})".format(dns_ip, ext_ip))
    else:
        format_output(STATUS_OK, "DNS IP correct. External IP is {}".format(ext_ip))

if __name__ == "__main__":
    main()

This is a fairly basic Nagios plugin that implements the approach described above. The only slightly tricky thing is output formatting and return code conventions, which must be exactly correct for Nagios to interpret the results of your plugin. This convention is documented in the Nagios plugin API documentation (I love this approach as an example of Unixy design).
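
It’s worth testing the plugin by hand before wiring it into Nagios, checking both the output line and the exit status against the convention (the hostname and IP here are obviously placeholders):

$ ./check_dynamic_ip.py -H my.domain.com
OK - DNS IP correct. External IP is 203.0.113.42
$ echo $?
0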

To use this with Nagios, put the plugin in the Nagios plugins directory (/usr/local/nagios/libexec/ in my case) and make it executable (chmod +x). Then you need to update your config to add a new command in your objects/commands.cfg file:

# 'check_dynamic_ip' command definition
define command {
       command_name    check_dynamic_ip
       command_line    $USER1$/check_dynamic_ip.py -H "$ARG1$"
}

You will also need a corresponding service check in your server.cfg file:

define service {
        use                             generic-service
        host_name                       hostname
        service_description             Dynamic IP Status
        check_command                   check_dynamic_ip!my.domain.com
}

Then simply restart Nagios (sudo systemctl restart nagios.service) and you’re done.

Now you can enjoy knowing when you’re going to be locked out of your network 😉

Tiny MQTT Broker with OpenWRT

This post may contain affiliate links. Please see the disclaimer for more information.

So yet again I’ve been really lax at posting, but meh. I’ve still been working on various projects aimed at home automation – this post is a taster of where I’m going…

MQTT (for those that haven’t heard of it) is a real-time, lightweight, publish/subscribe protocol for telemetry-based applications (i.e. sensors). It’s been described as “RSS for the Internet of Things” (a rather poor description, in my opinion).

The central part of MQTT is the broker: clients connect to brokers in order to publish data and receive data on feeds to which they are subscribed. Multiple brokers can be fused together in a hierarchical structure, much like the mounting of filesystems in a Unix-like system.

I’ve been considering using MQTT as the communication medium in my planned home automation/sensor network projects. I wanted to set up a hierarchical system with different brokers for different areas of the house, but hadn’t settled on a hardware platform. Until now…

…enter the TP-Link MR3020 ‘travel router’, which is much like the TL-WR703N which I’ve seen used in several hardware hacks recently:

It’s a Tiny MQTT Broker!

I had to ask a friend in Hong Kong to send me a couple of these (they aren’t available in NZ) – thanks Tony! (UPDATE 2019: of course now you can get these shipped direct, something I didn’t know about in 2012.) Once I received them, installing OpenWRT was easy (basically just upload through the existing web UI, following the instructions on the wiki page I linked to above). I then configured the wireless adapter in station mode so that it would connect to my existing wireless network, and added a cheap 8GB flash drive to expand the available storage (the device only has 4MB of on-board flash, of which ~900KB is available after installing OpenWRT). I followed the OpenWRT USB storage howto for this and, to my relief, found that the on-board flash had enough space for the required drivers (phew!).

Once the hardware side of things was sorted, with the USB drive partitioned (1GB swap, 7GB /opt) and mounted on boot, I was able to install Mosquitto, the open source MQTT broker, with the following command:

$ opkg install mosquitto -d opt

The -d option allows the package manager to install to a different destination, in this case /opt. Destinations are configured in /etc/opkg.conf.
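
For anyone following along, a destination is just a name-to-path mapping in /etc/opkg.conf – after my edit the relevant lines looked something like this (the first two are the defaults; the opt line is the addition):

dest root /
dest ram /tmp
dest opt /opt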

It took a little bit of fiddling to get mosquitto to start at boot, mainly because of the custom install location. In the end I just edited the paths in /opt/etc/init.d/mosquitto to point to the correct locations (I changed the APP and CONF settings). I then symlinked the script to /etc/rc.d/S50mosquitto to start it at boot.
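
In shell terms, that last step is just:

$ ln -s /opt/etc/init.d/mosquitto /etc/rc.d/S50mosquitto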

That’s about as far as I’ve got, apart from doing a quick test with mosquitto_pub/mosquitto_sub to test everything works. I haven’t tried mounting the broker under the master broker running on my home server yet.
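
If you want to repeat that smoke test yourself, it’s just a subscriber in one terminal and a publish from another (replace <broker-ip> with the address of the router; the topic name is arbitrary):

$ mosquitto_sub -h <broker-ip> -t test/topic
$ mosquitto_pub -h <broker-ip> -t test/topic -m "hello"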

The next job is to investigate the serial port on the device in order to attach an Arduino clone which I soldered up a while ago. That will be a story for another day, hopefully in the not-too-distant future!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.

Installing and Configuring Arch Linux: Part 1

OTHERWISE ENTITLED: Rob tries to install Arch Linux some of the time, but really spends most of the time drinking beer.

Before I start: NO, UNLIKE EVERY OTHER ARTICLE PUBLISHED ON THE WEB TODAY, THIS IS NOT A JOKE, K?!?

I’ve been looking for a new distro recently. I do this from time to time, principally because I get bored of what I’m currently running. Last time I settled on Crunchbang. This time I wanted to go more advanced, so I started researching Arch Linux.

For those that don’t know, Arch Linux describes itself as:

…a lightweight and flexible Linux® distribution that tries to Keep It Simple.

I’d heard about Arch in the past from several sources and had heard that you basically have to install and configure everything yourself, but that the package manager (awesomely named Pacman!) manages software without having to compile from source (unless you want to!).

The following series of posts will be a record of my experiences installing and configuring Arch on my home desktop machine. This isn’t intended to be an exhaustive installation guide, more just a record of where I tripped up in order to aid those who come next. If you are searching for an installation guide, try the excellent article on the Arch Wiki.

I’ve separated the post out into days. Note: it didn’t actually take me a full day for each part; I work during the day and only really had a couple of hours each evening to spend on this.

Day 1: Backing Up

Before installing I wanted to make sure I didn’t trash my existing Ubuntu system and all my personal data, as I still need to do all the stuff I usually do with my machine. So I made a backup.

I’m not really going to go into how. Suffice to say I used LVM snapshots and rsync; I might write about this in a future post.

This took a while, as I have quite a lot of data. I thought it best to have a beer in the meantime, so I did.

Day 2: Making Space, Starting the Installation and Various Adventures with LVM

The next thing to do was to resize my existing LVM partition containing Ubuntu so that I had space for Arch. This stumped me at first, as none of the partition tools I tried (GParted and cfdisk) could resize an LVM partition, but I eventually worked out how to do it.

First, on my running Ubuntu system I resized the physical volume with:

$ pvresize --setphysicalvolumesize 500G /dev/sda1

This shrank the space used by LVM down to 500GB (from about 1000GB on my machine).
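
It’s worth sanity-checking the result before touching the partition table:

$ pvs

This lists each physical volume along with its size and free space, so you can confirm the PV really is down to 500GB before shrinking the partition underneath it.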

I then rebooted into the Arch live CD (64-bit edition in my case), and ran:

$ fdisk /dev/sda

What you have to do next is slightly alarming: you actually have to delete the partition and recreate it at the new size. This works, without destroying your data, because fdisk only manipulates the partition table on the disk; it doesn’t do any formatting of partitions, etc.

I did this through fdisk so that the partition was 501GB (making it a little bigger than the PV just to make sure). I then rebooted back into Ubuntu and ran:

$ pvresize /dev/sda1

This allows the physical volume to grow back to fill all the space in the partition. It probably isn’t necessary, but I wanted to be safe.

Next, I proceeded to the installation. For some reason the Arch boot CD was really slow to boot and gave me loads of read errors; I think this might have something to do with my drive, as I’ve been experiencing the same with other disks. Eventually it booted and dropped me at the default prompt.

From there I basically followed the installation guide for setting up the package source (CD) and the date and time.

I then set about partitioning the disk. The Arch installer uses cfdisk, which is fine. I just added two partitions: a small (255MB) one for my /boot partition and a large LVM one for the rest of the system (I like LVM and wanted to use it again on Arch).

This was fine, but I had some problems setting up LVM through the installer, even though the user guide seems to think it can do it. Every time I tried, it would just fail when creating the volume group. Weird.

I gave up for the evening and (you guessed it) went for a beer!

Day 3: Successful Installation

The next day I thought I’d try googling for LVM on Arch. Luckily, by the time I got in to work, @duffkitty on identi.ca had seen one of my posts complaining about the problems and had given me a link to the LVM article on the Arch Wiki.

This advocates setting up the whole LVM stack manually (and guides you through it) and then just selecting the partitions to use in the installer. It also flags some important things to look out for when configuring the system. Following these instructions worked like a charm: I was able to format everything correctly and install the base system.
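
For the curious, the manual setup boils down to the standard LVM commands – something like the following sketch, though the device and volume names here are purely illustrative (the wiki article has the real detail):

$ pvcreate /dev/sda5
$ vgcreate arch /dev/sda5
$ lvcreate -L 15G -n root arch
$ lvcreate -l 100%FREE -n home arch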

I then moved on to configuring the system, following the install guide and taking into account the instructions in the LVM article. Everything went pretty much fine here and I eventually got to installing the bootloader. Here I replaced the Ubuntu Grub version with the one installed by Arch. This left me having to add an entry for Ubuntu, which wasn’t difficult: I just copied the Arch one and changed the partition and file names.

Then it was time to ‘type reboot and pray’ as the Arch installation guide puts it.

So I did.

When I rebooted the bootloader came up with the Arch and Ubuntu entries. I selected Ubuntu just to check everything was OK.

It didn’t work.

Panicking and Swearing Ensued.

I rebooted and selected Arch.

That worked (thankfully).

When it had booted, I logged in and opened up the Grub config file again. It turned out I had mis-typed the name of the Ubuntu initrd file; that was easily fixed. Rebooting got me safely back to Ubuntu.

So now I have a functioning dual boot between my original Ubuntu install and a very basic Arch install. I think I might need some software there!

But first… beer.

So What’s Next???

Well, firstly I need to get my network connection up and running, as I didn’t do that during the install. It’s a Wi-Fi connection with WPA, so that’s going to be fun. Then I can start installing software. I’ll probably follow the Beginners’ Guide on the Wiki (from Part 3). I was also recommended Yaourt by @duffkitty, so I’ll give that a try.

I’ll be continuing to play with Arch over the next few days and reporting my progress in follow-up posts here. I’ll also be denting as I go along; you can follow all of this via my #archrob hashtag.

There’ll probably be beer too.

We’ll see how it goes, but eventually I hope to have a system I can use full time.

Bye for now! Happy Easter!