How my Chromecast breaking was the best thing that ever happened to my TV

As detailed in my recent self-hosting update, since moving into the new house I’ve been using a Raspberry Pi running OSMC and Kodi as my frontend for TV recordings and for media streamed locally from Emby. We’ve supplemented this with a Chromecast to give us access to Netflix and a local streaming service, Lightbox. This has worked well and integrates reasonably with Home Assistant, so I can automatically dim the lights, etc. when we are watching TV in the evening.

That was until the Chromecast started behaving oddly.

It started as occasionally corrupted audio when starting a new stream (basically, the audio would sound like everyone had been breathing helium). Each time this occurred it was remedied by rebooting the Chromecast, at first by cycling the power and then, as the problem persisted, via a Python script wired to a button in Home Assistant. This went on for a month or so before things got worse.

The next problem was the Chromecast just flat out refusing to load anything from Lightbox. I spent an evening debugging this, only to have the thing fall off the network and refuse to come back. It must have recovered itself automatically, because the next day it was back and working fine. Then a couple of days later it started to have similar issues again, only now with various HDMI picture issues as well (the display not being detected, or the video stained pink).

Clearly it was on its way out (suspiciously, it was just over a year old, which puts it just out of warranty). Having had enough, I unplugged the thing and started to look for other options. Having paid $109 for it to last only a year, I wasn’t happy to buy another Chromecast (I had bought the Chromecast Ultra, but only because it was the only model with built-in Ethernet).

Aside: The insidious Chromecast ecosystem

As someone who generally prefers FOSS options wherever possible and has no love of DRM, I’ve always had issues with the Chromecast. That said, as someone who wants to obtain the media I watch via legal means, I appreciate that it allowed me to do that. I also liked the ability to control it from my phone, as well as to play/pause/stop streams via the TV remote, and the aforementioned integration with Home Assistant. Basically, I saw it as a necessary evil.

What I didn’t appreciate is what it does to your phone. Before you have even set the thing up you have to install the proprietary Google Home app (why it can’t have a web interface for configuration I don’t know), and every streaming app that supports it is proprietary (even the Emby one). This left me with a gaggle of proprietary apps on a phone that is otherwise mostly populated with Open Source apps from F-Droid, and it severely limited my ability to go GApps free, which is something I’ve wanted to do for quite some time.

So if I could find another option that didn’t rely on my phone, I could get rid of all these horrible apps (some of which even require Magisk to be installed in order to persuade them that I don’t have root access).

Meanwhile, back at the plot…

Faced with replacing the Chromecast, I had two options. The first was to plug the not-so-smart TV back into the network and use the built-in apps. This was sub-optimal: it didn’t integrate with Home Assistant, it couldn’t be controlled from my phone, and the Lightbox app on that TV had broken at least once (in fact, I don’t even know if it works now, since I went back to using the Chromecast instead).

The second option was to get Kodi to do it all (I guess there was really a third option which was to go out and find some other streaming device, but I really didn’t want to waste my money again).

Kodi To The Rescue!

To cut a long story short, I managed to get everything working with Kodi over the course of a Sunday afternoon. I already knew there was a Netflix addon (requiring Kodi 18), which I’d been meaning to try, but I didn’t know of a Lightbox addon. A quick search turned up Matt Huisman’s Lightbox Addon, which works great (I’d already used his TVNZ OnDemand and 3NOW addons).

Getting Netflix working turned out to be a bit of a pain, since I had to upgrade to the Kodi 18 Alpha release. I followed these instructions, which didn’t work to start with, but that turned out to be down to a corrupted SD card (weirdly the card was fine in normal use but didn’t like installing new packages). After grabbing a new card and restoring from a backup image I was able to update successfully and install the Netflix addon, which works flawlessly.

The Good

Overall, I’m really happy with this setup. The alpha version of Kodi is surprisingly stable (on par with the release version in my experience so far, but YMMV), notwithstanding a couple of bugs which I’ll come to shortly. Netflix and Lightbox work pretty much flawlessly and I’m appreciating the newly unified and simplified media system. I’ve already started implementing further integrations with the home automation, which will be the subject of another post.

The Not So Good

I mentioned above that there are a couple of bugs, but I actually suspect that both issues I’m seeing are down to a common cause. Both are related to TV recordings from Tvheadend: audio sync issues and raised RPi temperatures on HD recordings, and dropped frames on SD recordings. I still need to get around to updating to the latest nightly release and gathering the debug logs required to submit a proper bug report, but since the TV is a ‘production’ system I haven’t got to this yet. This is the kind of issue that, whilst annoying, isn’t a show stopper, and one that I would fully expect to be fixed by the final release of Kodi 18.

The only other not-so-good point is that, whilst the Netflix and Lightbox plugins are excellent, navigating through the menus is a little slow. I’m putting this down to the need to fetch the listings over the network every time, and probably even to scrape the respective websites. I would assume that neither site provides a proper API, given how hostile streaming services generally are to third party integrations. Perhaps this could be mitigated either in Kodi or in the addons by caching the data for a period of time, since it doesn’t change that often. This definitely isn’t a criticism of either of these addons; I’m impressed that they work as well as they do given their adverse environment. Luckily, playback in both is flawless.

Conclusion

Again, I’m really happy with this setup. It’s finally given me an almost 100% Open Source media setup (less the binary blobs required for Widevine DRM) which doesn’t compromise on functionality and sources all its content via legal means. Kodi gets an undeservedly bad reputation in the media for being a platform which enables piracy, something the project developers have quite rightly distanced themselves from. Having more addons for legitimate services will help to give the platform a better name (of course, if the media industry would wake up and just offer DRM-free media at a reasonable cost [i.e. not the same price as a physical copy], that would be even better, but I don’t see that happening any time soon). The most annoying thing about this bad reputation, for me, is that every how-to guide for Kodi advertises VPNs (of varying levels of dodginess) at you, as if the copyright police are going to bang down your door for using Kodi with your own media or a legitimate streaming service.

I’m finding this setup to actually be more fully featured than the previous one. This is obvious when you think about it: with everything running through Kodi, all my media benefits from the huge amount of work that has gone into that project over the years, whereas with the siloed approach taken by the individual streaming services, each is doomed to reproduce features that may exist in competing services or that have been features of established media players since the beginning of time. One really nice feature is that Kodi makes the full metadata of media playing via addons available through its various APIs, which means that remote control apps can see it, and that it gets pushed through to Home Assistant. This was hit and miss with the Chromecast (Lightbox would provide some metadata, Netflix would provide none).

Basically, this setup is what the smart TV was meant to be, before the interests that compete with producing the best technical solution got their hands on it.

I mentioned earlier that I’m already planning a follow-up post to this one, which will detail some of the work I’ve been doing to integrate my media setup with Home Assistant. Please stay tuned for that in the coming weeks. Until then, bye.


HDMI CEC for Home Assistant with Node-RED

I set out on a Sunday morning thinking this was going to be a quick project, and, not having decided on a blog topic for this week, it seemed like the ideal candidate. I was wrong about it being a quick project, hopefully not about it being a reasonable subject for a blog post.

This post is brought to you by issue #12846 in Home Assistant (and the letter ‘C’). That is to say, one of my automations was broken by this issue, which has been sitting open on GitHub since the beginning of March with no progress. I don’t want this to sound like the usual “user of Open Source application complains about free stuff”, because I’m not actually complaining. I understand that software breaks and that sometimes there aren’t the resources available to fix it. The solution to this is to get more developers paid to work on Free and Open Source Software (but that’s entirely a discussion for outside of this post).

Actually, this post is here to offer a solution (or at least a temporary one) to the issue, outside of Home Assistant, since I couldn’t fix it myself (I took a look at the code in question but couldn’t work it out; it needs to be done by someone more familiar with the Home Assistant core).

My solution is to use Node-RED along with the HDMI CEC nodes to create an auto-discovered MQTT switch with which I can turn on and off my TV. So, let’s get into the flow…

The Flow

HDMI CEC Flow

The HDMI CEC Switch Flow

This flow runs on an instance of Node-RED running on my OSMC based Raspberry Pi sitting behind my TV (for those keeping up at home, this makes two NR instances on my network, so far). Currently this is the only flow running on this instance, but I’m considering what else I can run now that I have Node-RED available there. I installed Node-RED on OSMC using the official install/upgrade script. I had fully expected installing Node-RED under OSMC to be a major pain, but it turned out to just amount to running that command.
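For reference, the command in question is the Node-RED project’s Raspbian/Debian install script, which amounts to something like the following (check the Node-RED documentation for the current URL before piping anything into bash):

    bash <(curl -sL https://raw.githubusercontent.com/node-red/linux-installers/master/deb/update-nodejs-and-nodered)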

After the install had finished, I created a user for Node-RED, since I like it to run under its own user, and updated the systemd unit file accordingly. I then installed the CEC nodes linked above from the palette manager. Here I ran into a minor bump in the road: the CEC nodes couldn’t execute the cec-client program. As it turned out, that binary lives in a non-standard location on OSMC, so I added the following to the systemd unit file to set this up:
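That is, something along these lines in the [Service] section (on my OSMC install cec-client lives under /usr/osmc/bin; your path may differ):

    [Service]
    Environment="PATH=/usr/osmc/bin:/usr/local/bin:/usr/bin:/bin"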

I also needed to add my new Node-RED user to the video group to allow access to the CEC device:
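Which is just the following (substituting whatever username you created for Node-RED):

    sudo usermod -aG video nodered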

Where I really got stuck was playing around with the example flow for the CEC nodes. It wasn’t that it didn’t work as advertised; it was that it broke the CEC command passthrough to Kodi running on the same machine, rendering my TV remote useless within Kodi. Many hours, much futile searching and much playing with cec-client later, I still wasn’t any closer to a solution. I knew it must be possible, because somehow the pycec script I was using previously is able to send and receive CEC packets without interfering with Kodi.

The breakthrough was dropping both a CEC-In and a CEC-Out node into my flow and only grabbing a few CEC packet types in the filter of the input node. I say ‘breakthrough’ – this works most of the time, but it throws a few errors and warnings on start up. I found it to be most reliable when I immediately restarted Kodi after deploying it – this also helps Kodi to regain its CEC connection if necessary.

So How Does It Work?

Oh, yeah. I was going to talk about the flow, but I kinda got sidetracked there.

Well, it’s pretty simple: there are two sequences in the flow. One handles the switch state and the MQTT discovery configuration (bottom), and the other handles the incoming commands over MQTT and sends the corresponding CEC commands (top).

Let’s start with the bottom one:

This sequence has two input paths. The bottom of these executes on start up (or at deploy time) and sends the Home Assistant MQTT Discovery configuration, using the same technique I used in my volcano sensors. The start up message also passes through a 3 second delay before reaching an exec node, which restarts Kodi. I added the following to my sudoers file (via visudo) to allow this:
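The entry is a single line along these lines (on OSMC, Kodi runs under the mediacenter service; adjust the user and service names for your system):

    nodered ALL=(ALL) NOPASSWD: /bin/systemctl restart mediacenter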

The top input path receives incoming CEC messages of the type REPORT_POWER_STATUS. In my setup this only receives power messages from the TV, but you may receive messages from other devices on the bus; in that case you can add a check on the source address of the packet in the following function node (clue: the TV is usually address 0).

The message passes through a function node, which converts the power status to the switch status expected by HASS and also sets the MQTT topic:
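I haven’t reproduced my exact function here, but the logic amounts to something like this (the shape of the incoming message, in particular payload.args, is an assumption about what the CEC-In node emits, and the topic is just an example):

    // Convert the CEC power status (0 = on, 1 = standby, per the CEC spec)
    // into the ON/OFF state payload expected by the HASS MQTT switch.
    var status = msg.payload.args[0];             // assumed location of the status byte
    msg.payload = (status === 0) ? "ON" : "OFF";
    msg.topic = "homeassistant/switch/tv/state";  // example state topic
    return msg;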

Both input paths are connected to a common MQTT output node to send their respective messages (config and state) out to Home Assistant.

The top sequence simply subscribes to the command topic from HASS and determines whether the command was on or off. The JSON payload for the CEC command is then set in the respective branch; this JSON is taken directly from the example flow linked above. Then we pass this out to the CEC adapter, and we’re done. When the device acts upon the CEC command it should send its new power state back through, which updates the state of our switch. The state will also be updated if you turn on the TV by other means, e.g. the remote.

Pure JSON

This JSON was made in clean, green New Zealand from 100% natural ingredients (electrons):

Bonus: Home Assistant Automation Rules

Here are the Home Assistant automations that I’m using with this. Basically I’m turning off the TV five minutes after either Kodi or the Chromecast stops playing, unless it started again in the meantime:
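I haven’t reproduced my exact rules here, but they look broadly like this (the media player and switch entity names are from my setup, so treat them as placeholders):

    # Start the countdown when either player stops playing
    - alias: Start TV off timer when playback stops
      trigger:
        - platform: state
          entity_id: media_player.kodi, media_player.chromecast
          from: 'playing'
      action:
        - service: timer.start
          entity_id: timer.tv_off

    # Cancel the countdown if playback resumes in the meantime
    - alias: Cancel TV off timer when playback resumes
      trigger:
        - platform: state
          entity_id: media_player.kodi, media_player.chromecast
          to: 'playing'
      action:
        - service: timer.cancel
          entity_id: timer.tv_off

    # Actually turn the TV off when the timer expires
    - alias: Turn off TV when timer finishes
      trigger:
        - platform: event
          event_type: timer.finished
          event_data:
            entity_id: timer.tv_off
      action:
        - service: switch.turn_off
          entity_id: switch.tv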

This uses a timer, which is defined as:
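The timer itself just needs a duration matching the five minutes mentioned above:

    timer:
      tv_off:
        duration: '00:05:00'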

Done. Now we can be lazy/forgetful about leaving the TV on and still not waste power. Mission accomplished.

Conclusion

Hopefully, someone will find time to fix the bug above. I’m probably going to stick with this approach regardless, because I had some other issues running pyCEC on top of OSMC, mainly because they don’t build the libcec bindings for Python 3 by default. I had some custom patches to do this, but they would break (in one way or another) on every update. Hopefully, this solution should be more robust. Also, the MQTT connection used in this solution runs over TLS (rather than the unencrypted TCP of the pyCEC network mode), so there is a little security win there. Plus, as I already mentioned, I now have a Node-RED instance on a Pi in my living room.


Self Hosting Update

Since my first post on my self hosting setup, things have changed quite a bit. I thought I’d take the time to write up a few of those changes, having recently become much more interested in how I can improve my setup further (stimulated at least in part by seeing some awesome setups while browsing /r/homelab). There are photos of the new setup at the bottom of this post.

So What’s Changed?

Well, the first thing was that I moved house. This was a protracted move, with 4 months spent living at my parents’ place before moving into our new home. Due to space and other constraints I didn’t want to run the servers while living with them, so I settled for playing around with a couple of Raspberry Pis in the meantime. One of these was a new Pi 3, bought specifically for the purpose of becoming a Kodi box, which it does quite nicely thanks to OSMC. The other was a Pi 2 which just had a testing setup of Home Assistant on it for me to play around with.

Since moving into the new house, I’ve been building my setup back up, and I think I’ve now surpassed the level I was at previously. Since everything had been offline for 4 months, I decided to make a clean break of things. After a backup, I formatted the system drive of the main server and installed Ubuntu Server, with a view to running my services in LXD containers. This was made possible by the aforementioned Pi 3 becoming the main TV frontend, along with a Chromecast for Netflix duties. This meant the server could go fully headless for the first time and be relocated to the garage, where it can be attached to a noisy UPS.

Currently, I’m running several containers on the server. These include:

  • A Home Assistant/Mosquitto/Node-RED container
  • A music server container running Mopidy+Snapcast for (eventually) multi-room audio
  • A Tvheadend container to replace MythTV (not that I was unhappy with it, I just thought I’d try something new)
  • An Emby container for serving other media to Kodi (in future I’d like to add a second RPi/Kodi instance)
  • A CheckMK container to replace the previous, built-from-source Nagios server
  • A couple of others for early stage testing of new projects
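For anyone who hasn’t used LXD, standing up one of these containers is pleasantly quick. A typical session looks something like this (the container name is arbitrary):

    # Create and start an Ubuntu 18.04 container to host a service
    lxc launch ubuntu:18.04 homeassistant

    # Get a shell in the container to install and configure the service
    lxc exec homeassistant -- /bin/bash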

New Firewall

In addition to separating the main server from the media frontend, I also invested in a new firewall box before moving into the new house. This was primarily because the new house has a fibre connection, which made the USB Ethernet device on the old netbook I was using a bottleneck on Internet speed. I picked up one of those dual-Ethernet, Haswell-based mini-computers from AliExpress. This was originally running pfSense natively on the hardware, but in order to get a little more out of the new hardware I’ve since swapped this out for a Proxmox host which runs pfSense in a VM (more on this in a future post). This runs really nicely, and I’ve noticed that the case doesn’t get anywhere near as hot as it did running pfSense natively (though that could just be confirmation bias on my part, since the average air temperature has dropped somewhat as we head towards winter).

I’m also running another VM on this system, which hosts a testing install of Nextcloud. I haven’t transferred this to ‘production’ yet, mainly due to lack of time to get back to it. I’m pretty happy with it and will probably re-deploy it into an LXC container (Proxmox uses straight LXC, not LXD) in order to reduce the memory footprint (I should have gone for more RAM in that box!). The main win of the Proxmox install has been the ease with which I can do the complex networking required for the virtualised firewall and my VLAN setup. This is mainly down to the integration with Open vSwitch, which I like a lot.

A Proper Switch

Having had the foresight to install Ethernet throughout our new home, I’ve needed to invest in a proper switch since we moved in. For a while I made do with piggybacking together my two wireless access points, which provided 5 ports each. With this arrangement I was able to cover the basics of my network, but I wasn’t able to make every Ethernet jack in the house live and had no room for expansion.

I recently bought a TP-Link TL-SG1024DE 24 port switch which, whilst not the best switch in the world, is pretty good value for money and will serve my needs for the foreseeable future. Configuration of the VLANs is a little clunky compared to the OpenWRT configuration interface I was using previously, but everything works once it’s all configured. The great thing is that I’ve been able to connect every port in the house, as well as all my other gear, and still have a ton of ports left over. The only feature I’m missing on this switch is SNMP for monitoring, but I’m reasonably confident of being able to scrape the web interface at some point.

The Future

Based on my positive experience with Proxmox, I’ll probably migrate the main server to it at some point in the future. I’ve really enjoyed using LXD on Ubuntu, but Proxmox just seems better suited to my needs. The one feature I will miss from Ubuntu Server as a host is kernel livepatching, which is really cool. The main thing holding me back at the moment is having to migrate all the existing LXD containers to LXC, as there doesn’t seem to be a clean way to do this. This means the migration will have to wait until I can get all the services deployable via Ansible, which I’m working on.

Photos

As promised, here are the photos. I’m using some standard garage shelving as a rack stand-in, which works pretty well as I don’t have any rack mount gear except the new switch:



Home Assistant MQTT Discovery Sensors in Node-RED

Alternatively Titled: How I Made Home Assistant Aware of the Volcano Next Door

Mt. Taranaki

If this guy blows, we’re gonna have a bad day

As I’ve previously mentioned, I’m a big fan of Home Assistant’s MQTT Discovery feature. I’ve also historically been a fan of Node-RED and have recently been getting back into it, not least due to the uptick in interest in the platform within the HASS community. So, I decided to have a play around, come up with an implementation of an auto-discovered MQTT sensor in Node-RED, and use it to pull some interesting data into Home Assistant.

Since moving to a different part of New Zealand last year, I’ve wanted to implement a sensor in HASS which would monitor the state of the local volcano. Luckily, GeoNet provide a nice API for getting volcanic alert levels for all the volcanic fields in NZ. I was initially going to write a custom component to do this (and at some point contribute it back), but being generally even shorter on time than usual at the moment, I never quite got there. That was until I was playing around with Node-RED and had a brainwave.

The Flow

I’m going to cut straight to the chase and show a screenshot of the flow I came up with and explain it below (the flow JSON can be found later in the post):

The full volcano data flow

The start of the flow is pretty basic: a simple inject node which injects a timestamp (the payload is irrelevant) every 6 hours to kick off the flow. I didn’t want to hit the API endpoint too often, since I’ve so far never seen the data change, and if the mountain suddenly goes boom I think I’ll have more pressing issues than whether my data is up to date.

Next, we have the HTTP Request node which goes out and performs a GET request to the URL given in the API documentation above. I enabled TLS support and opted to get the response data back as a parsed JSON object. Since the API returns data for all the volcanic fields in New Zealand, the next node just filters for the Taranaki/Egmont field that I am interested in, using the following code in a function node:
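The gist of the function node is something like this (the GeoNet response is GeoJSON, so the property names on each feature are assumptions based on that format):

    // Find the Taranaki/Egmont feature and make its data the payload,
    // building the state topic from the volcano ID as we go.
    for (var i = 0; i < msg.payload.features.length; i++) {
        var feature = msg.payload.features[i];
        if (feature.properties.volcanoID === "taranakiegmont") {
            msg.payload = feature.properties;
            msg.topic = "volcano/" + feature.properties.volcanoID;
            return msg;
        }
    }
    return null;  // drop the message if the field wasn't found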

Basically, this just iterates over all the features in the data, finds the one with the ID taranakiegmont and substitutes its data in as the message payload. I also build the topic for the subsequent MQTT publish based on the volcano ID.

The output of this function branches to another function node on one branch and a delay node on the other. The delay node is there to make sure that the function node above runs and sends its output before the original message passes to the MQTT publish node. The top function node is responsible for building the required configuration payloads and topics for the three sensors this will create in Home Assistant (one for each of the quantities in the data from the API). This is achieved with the following snippet of code:
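Again, this is a sketch of that snippet rather than the original (the sensor names and the value templates are assumptions; the discovery topic layout follows the HASS MQTT Discovery documentation):

    // Build one discovery config message per sensor, each sent
    // on its own output (the node is configured with three outputs).
    var stateTopic = msg.topic;

    function config(id, name) {
        return {
            topic: "homeassistant/sensor/volcano_taranakiegmont_" + id + "/config",
            payload: JSON.stringify({
                name: name,
                state_topic: stateTopic,
                value_template: "{{ value_json." + id + " }}"
            })
        };
    }

    return [
        config("level", "Volcano Alert Level"),
        config("activity", "Volcano Activity"),
        config("hazards", "Volcano Hazards")
    ];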

This does the same thing for three new message objects, building a payload and topic for each. I use the ability of HASS to grab data from the payload of the main publish by specifying the state topic as the topic I built in the previous function node, plus a value template for each, pretty much exactly as in the HASS documentation. All three outputs of this node are passed to the MQTT publish node, which publishes with QoS 2 and the retain flag set. This means that whenever Home Assistant comes up after a restart, it will see the values in both the configuration and state topics for these sensors and re-create them automatically. Attentive readers will also have noticed that I publish the configuration messages whenever I publish the state (every 6 hours). This doesn’t matter, as HASS will just ignore configuration messages for sensors it has already discovered.

So, that’s it. With this in place the sensors should appear in Home Assistant:

Home Assistant Volcano Sensors

Note the reassuring zero for activity level!

The JSON:

As promised, here is the full JSON for the flow. To add this to your Node-RED instance copy it to your clipboard and go to Hamburger->Import->Clipboard in Node-RED and paste the JSON. You can select whether to import to the current flow or a new flow and then hit ‘import’ and you should see the nodes:

If you are importing this directly, you will need to configure your MQTT broker settings under the MQTT publish node before hitting ‘deploy’.

Wrap Up

That’s pretty much all there is to it. I hope this has demonstrated the concept of using Node-RED to create sensors in Home Assistant, without any changes to the HASS configuration. The flow presented is pretty simple but actually serves a useful purpose. Hopefully, you can come up with some uses of your own for this approach. Please feel free to share them in the comments below if you do, so that others may benefit from your ideas.

Thanks for reading. I’m working on a few more things with Node-RED so hopefully I’ll post about them soon. Bye!

Micropython Based Room Sensors: Part 1

Having recently moved into a new house, I find myself with a profusion of smart lights (not everywhere yet, but in more locations than I have had previously). The main problem is that I currently lack a way to drive these intelligently. Right now I have a few automations in Home Assistant that drive them based on sunset/sunrise times and media state. This is mainly due to having almost no sensors deployed in the new house currently, something I’m aiming to address with this project.

I’m planning to build and deploy a set of ESP8266 based sensors across the house, with at least one in each room. The base hardware for this is an ESP8266 connected to a PIR motion sensor and a DHT22 temperature and humidity sensor (so nothing groundbreaking). This should give me temperature/humidity data for the whole house, as well as motion events that can be used as a starting point to drive the light automation.

Why ESP8266?

I’ve gone through several options for the sensor platform on the way to settling on the ESP8266. I tried out MySensors and even went as far as building a battery powered prototype, complete with PCB. Unfortunately, I stuffed up the PCB design so that the radio didn’t work, and I never got around to working out the problem. Thinking further on this, I find the software platform limiting; for example, any kind of OTA firmware update is difficult.

The main disadvantage of the ESP8266 is its power usage, but given that I now own this house and that these are permanent sensors, I’m planning on powering them via a 12V supply run from the main server cabinet through the roof space to each sensor. A 12V 2A supply provides more than enough power for all the sensors.

Also, it’s cheap and easy to work with!

Why Micropython?

I had originally looked at either writing some custom firmware using the Arduino ESP8266 core or using ESPEasy. I’ve tried out ESPEasy on my first prototype and it does what I need, but as an Embedded Software Engineer by day I felt I needed to be more adventurous.

I love Python, and it’s my go-to language whenever the platform doesn’t dictate something else (in the case of embedded, this almost always ends up being C or C++). Recently I’ve been getting more engaged with the Python ecosystem via a couple of excellent podcasts (Talk Python and Python Bytes), so this was a good time to get back into writing some Python at home. Having played around with earlier versions of Micropython on both the ESP8266 and the ESP32, I had a good idea of its capabilities and knew it could do what I needed.

Current Status

As of right now, I have one prototype sensor running, using a ‘bare’ ESP-12 module with adapter board plus the sensors, all soldered on to some veroboard (photos below). Using the ‘bare’ ESP module added quite a bit of complexity, since you are responsible for pulling the various lines required to program/run the chip up or down. It’s not difficult, but it’s more difficult than it should be. For that reason, after I’ve exhausted my one remaining ‘bare’ module/adapter, I will be switching to Wemos D1 Mini modules.

In terms of the hardware, I encountered a couple of issues building the first prototype. Whilst the ESP8266 adapter board I used has a regulator to take 5V down to the 3.3V required by the ESP, I wanted to use 12V to reduce the voltage drop through the wires in the ceiling space. This meant I needed another regulator to go from 12V down to 5V. At first I used a spare linear regulator from my parts box for this. As it turns out, this was a mistake: whilst it powered the ESP and sensors just fine, the whole thing got very hot (even with an extra heat sink), which also affected the readings from the temperature sensor. The solution was to order some cheap buck converter modules online, which did the trick without getting the slightest bit warm and are much more efficient.

The second issue is a weird one. Whilst re-programming the ESP from ESPEasy to Micropython, I noticed that after running the erase flash command the current going to the board would get very high (~400mA at 12V!), which would result in the ESP getting very hot. Seemingly this didn’t damage it, because I did it several times and it always continued to work. In addition, the Micropython firmware wouldn’t boot, instead just spewing incomprehensible garbage (at any of the common baud rates) to the UART. As it turns out, the chip was not being erased correctly; running an erase across the full extent of the flash did the trick:
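That is, something like the following with esptool (the size here assumes the 4MB flash of the ESP-12 modules I’m using; adjust for your part):

    # Explicitly erase the full 4MB of flash, rather than relying on erase_flash
    esptool.py --port /dev/ttyUSB0 erase_region 0x0 0x400000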

If anyone knows what the problem with these modules is, please get in touch in the comments. I couldn’t find any reference to this problem online and the (probably cloned) Wemos modules I have don’t have the same problem.

Anyway, here are some photos of the finished prototype:

Fully assembled prototype

Fully assembled prototype with PIR sensor connected.

Front of prototype

Front of prototype showing ESP8266 and adapter board

Back of prototype

Back of prototype, the wires in the bottom right corner are where it was re-worked after changing the power supply

The Software

The software is fairly basic, but has a few nice features. I use the boot.py script to load the configuration from the config.json file on the internal filesystem, then I connect to the wifi following the example given in the Micropython documentation. After this, main.py is executed, which connects to the broker and initialises the sensors. This is where things get a bit more interesting: I’ve written a minimal implementation of the Home Assistant MQTT Discovery spec, which so far supports binary sensors and standard sensors. This allows the sensors to come up in HASS without any configuration on the HASS server (except enabling MQTT discovery, which is a one time operation). Reading from the sensors themselves is fairly standard, using a pin change interrupt for the motion sensor and the standard dht module for the temperature/humidity. The use of uasyncio allows the DHT sensor to be polled at regular intervals. The code for the whole program is available via GitLab (also mirrored on GitHub). You can see a screenshot of the sensors in HASS below:

Home Assistant state card

Home Assistant state card showing the automatically added entities
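For a flavour of the code, here is a cut-down sketch of what boot.py does (the config key names are assumptions; the real thing is in the repository linked above):

    # boot.py: load device configuration and bring up the wifi connection
    import json
    import network

    with open("config.json") as f:
        config = json.load(f)

    sta = network.WLAN(network.STA_IF)
    sta.active(True)
    if not sta.isconnected():
        sta.connect(config["wifi_ssid"], config["wifi_password"])
        while not sta.isconnected():
            pass  # wait for the connection to come up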

In comparison to other embedded platforms/languages, writing a Micropython application is awesome; the power of Python makes you extremely productive. I wrote the whole program in a couple of hours, including the MQTT discovery implementation, using a Wemos board with no sensors connected as my development platform. When I uploaded this to the prototype board it hit a couple of exceptions in the sensor code, which previously hadn’t been exercised. The key thing here is that it hit exceptions! The resulting stack traces tell you where and why the exception occurred, so what could have been a long and arduous debugging session turned into a few minutes of tweaking. The availability of uasyncio is also a really nice feature and allows direct knowledge transfer from CPython.

Future Work

There is still some work to do in order to finish the project, mostly on the hardware side. Obviously, I need to make enough sensors to cover the house, which is just an ongoing assembly task. There is also the in-ceiling wiring, assembly into cases and mounting. I’ll be sure to post an update as I make progress. There are also a couple of software improvements I would like to make. Firstly, I would like to add a meta sensor which makes use of the LWT feature of MQTT to report the status of the unit, whilst still utilising HASS MQTT discovery. I’m also planning to spin out the current MQTT discovery implementation into its own Python module, which will be published to PyPI. The only other software task is to write a script to automate the deployment of the Micropython firmware, application and device specific configuration, which will be useful when programming multiple units.

That’s all for now. I’ll be posting further parts in this series as the project progresses, so subscribe to the feed, join the mailing list (above) or follow me on Twitter to keep up.