
Quick Project: Follow-up to my Home Assistant Rubbish Collection Panel

This post may contain affiliate links. Please see the disclaimer for more information.

Last month, I wrote a quick post about the Home Assistant Rubbish Collection panel I made for the Lovelace UI. Well, it looks like amaximus was inspired to create his own custom card to do a similar thing. [NOTE: the author of this card hasn’t contacted me directly (I came across it via HACS), I’m only claiming to be the inspiration based on the relative dates].

This card has a couple of cool capabilities that my previous panel didn’t have. Specifically, you can set colour-coded icons for your different bins, and the icon will change style and colour (to red) when the bin is due to go out within the next day. You can also hide the card completely if a bin is not due to go out within X days.

[Image: Cards for all four of my sensors, with colour-coded icons]
[Image: If the collection is the next day, the icons change to red]

Installation and Setup

I installed the card by adding a git submodule to my configuration repository in the www/plugins directory, but you can also install directly from HACS. I’m switching over to adding all my custom components and cards as submodules in order to make my config more easily deployable.

After installation, you need to add the path to the garbage-collection-card.js file in the resources section of your Lovelace UI config:
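Something like the following should do the trick (a minimal sketch; the exact url depends on where you put the file – my submodule lives under www/plugins, which Home Assistant serves at /local/plugins):

```yaml
resources:
  # Path assumes the card lives under www/plugins (served at /local/plugins)
  - url: /local/plugins/garbage-collection-card/garbage-collection-card.js
    type: module
```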

Once that’s done you can add cards to the UI. I just put mine in a vertical stack to group them together:

…and that’s it! If you want to hide a card for bins that don’t need to go out soon use hide_before: x (where x is the number of days). I’ll probably use this to hide bins that don’t need to go out in the current week, but I wanted to show all the cards in the screenshots 😉

Conclusion

I think this is a great improvement on my previous panel, so I’m going to stick with it. Thanks to the author and the other contributors for taking the time to make it!

It’s kinda cool to think that this blog may have inspired someone else to go out and write some code! If you are inspired to make and share something as a result of one of my posts, please get in contact! Your work will most likely get featured in a future blog post.

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Virtualised pfSense on Proxmox with Open vSwitch

This post may contain affiliate links. Please see the disclaimer for more information.

In my recent post about my networking setup I mentioned that my firewall is a virtualised pfSense system running on a Proxmox host. In the comments to that post I was also asked if I was making use of Open vSwitch. Since the answer is yes, I do use Open vSwitch in my pfSense/Proxmox setup, I thought I’d write it up for those who are interested.

I’ve actually been meaning to write this up for a long time. I’ve had this setup running since shortly after we moved into this house. On the one hand this means that the setup is pretty battle tested. All of the inter-VLAN and Internet bound traffic on my network runs through this and it’s been running pretty flawlessly for nearly two years.

On the other hand, given the length of time that has elapsed between setting this up and writing this post, this will be more like an archaeological exploration than documentation! I’m unlikely to remember every detail or the issues I encountered along the way. As such, this post will pretty much document the state of the setup as I can extract it from the running system! Basically, you should only use this post as a rough guide and go away and do your own research. I’ll apologise for this incompleteness in advance. If you try this, please let me know of anything I’ve missed and I’ll update the post with extra details.

How’s this going to work?

The basic premise of this whole thing is a Proxmox host with two physical NICs. One of these is the LAN port, on which the host will have its internal IP. The second is the WAN port, which is assigned directly to the pfSense VM. In my case this is complicated by the networking setup required by our Fibre connections here in NZ. These require a connection to the Fibre ONT on VLAN10, over which a PPPoE session to the ISP is established.

Since the WAN interface is directly assigned to the VM, this is all handled internally to pfSense. This means that the host machine is not exposed to the external network. [OK, for the purists among you, this isn’t strictly true. The host will be exposed at lower levels of the network stack to allow it to forward packets through to the VM. However, since it doesn’t have an IP address on that interface it won’t be accessible from the Internet. I’m sure someone out there will tell me why this is all kinds of horrible.]

On the LAN side we create an Open vSwitch switch and add the LAN interface as a VLAN trunk on it. Another (virtual) trunk interface goes into the pfSense VM and becomes its LAN interface. This is analogous to just having another physical switch between the host and the VM. The purpose of this extra complexity is that it allows us to connect other VMs on the host into the vSwitch. These can be on multiple different VLANs if required.

Hopefully the diagram below makes this somewhat clearer:

[Image: Sometimes a picture really is worth a thousand words]

The Proxmox Host

The Proxmox host itself is a Dual Ethernet Haswell based mini-computer from AliExpress. I’ve been really happy with this as a platform, aside from the fact that I would have spec’d it with more than 4GB of RAM if I’d been intending to run Proxmox initially. I also added an extra 120GB SSD drive on the internal SATA port for VM storage.

I started out with this host running pfSense natively, which also worked fine. One thing I did find is that when I switched over to Proxmox (Linux based) from pfSense (FreeBSD based) it ran much cooler. I guess that’s just down to the Linux kernel’s better hardware support.

This host is still running Proxmox 5.4 since I haven’t had time to upgrade it to 6.0 yet. This system is pretty much as close to “production” as it gets for me, since the Internet is used all the time!

Proxmox Network Setup

Proxmox enumerates the two NICs as ens1 (LAN) and enp1s0 (WAN). With the WAN port, I created a simple Linux Bridge vmbr1 to allow it to be added to the pfSense VM.

On the LAN side, I created an “OVS Bridge” port and added an “OVS IntPort” named admin, which will be the primary interface to the host machine. This interface gets a static IP and sits on the VLAN that we want the host to be on.
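For reference, the resulting stanzas in /etc/network/interfaces look roughly like the sketch below. The addresses and the VLAN tag are placeholders for whatever your admin network uses, so treat this as a guide rather than something to paste in verbatim:

```
# WAN: plain Linux bridge, passed straight through to the pfSense VM
auto vmbr1
iface vmbr1 inet manual
        bridge_ports enp1s0
        bridge_stp off
        bridge_fd 0

# LAN: Open vSwitch bridge carrying the VLAN trunk on ens1
allow-ovs vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports ens1 admin

# Internal port giving the host its IP on the admin VLAN
allow-vmbr0 admin
iface admin inet static
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.1
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=2
```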

[Image: The network setup in Proxmox]

I have to give kudos here to the Proxmox developers. They’ve made the Open vSwitch setup here pretty much trivial! For what I would consider advanced functionality it’s just as easy as configuring any other network.

It’s also worth noting what happens when you apply this configuration. By design, Proxmox doesn’t apply any networking changes until you reboot. This is pretty useful to prevent yourself getting locked out. If you are connected directly on the LAN interface (with a static IP) you should make sure that everything is correct before rebooting. After the reboot, reconfigure your local interface to the VLAN you chose in the setup and a static IP. You should then be able to access the Proxmox web interface again.

Setting Up pfSense

The pfSense installation was fairly standard. The only change I ended up making was to change the default CPU type to enable AES-NI instructions. This took a little bit of experimentation and looking up the capabilities of various processors, but I finally settled on the “Westmere” processor.

[Image: I selected a “Westmere” processor in order to make AES-NI instructions available to the VM]

After setting this architecture in the VM settings, rebooting pfSense shows both the correct CPU architecture and that AES-NI is available. It seems that this is probably less important than it was when I set up the system, since Netgate have now decided that AES-NI will not be required for pfSense 2.5.0.

[Image: AES-NI successfully enabled]

One other thing: you should disable hardware checksum offloading to work with the virtio drivers, as per the official documentation. Until you do, the network will be very sluggish.

Once the pfSense installation was complete I restored from a backup of my previous setup. This made the task of setting up my interfaces significantly easier. However, I’ll go through the networking aspects anyway for those who may be setting up a new system.

pfSense Networking

Luckily for us the pfSense tool to assign interfaces allows us to also set up the VLANs. This is useful to set up a minimal configuration to get you access to the web interface. Basically you want to set up the VLAN for your main LAN segment. Then you can set up the pfSense LAN interface on this VLAN with a static IP. If you’re using a fibre connection similar to mine you can skip the WAN setup for now. Once the “Assign Interfaces” wizard is complete you should have access to the Web Configurator.

The next step was to set up my WAN connection. I first added a VLAN with tag 10 on the vtnet0 device, which is the device that corresponds to the physical WAN bridge as enumerated by pfSense. I added a corresponding interface for this and then added a PPPoE interface using the details provided by my ISP. This is then assigned to the WAN interface via the “Interface Assignments” page.

In terms of setting up the local networks, you can pretty much set up whatever VLANs you would like at this point. Take a look at my previous post for inspiration.

Conclusion

As stated earlier, I’ve found this setup to be very stable in production and it’s even made my hardware run cooler. Having my firewall virtualised has also had several other benefits for me. Firstly, I can backup and snapshot the firewall VM at will. I no longer need to worry about an update or bad configuration hosing my firewall. I just snapshot before doing anything major and roll back if anything goes wrong.

The second major benefit is that I can run extra VMs and containers on the host machine, which I couldn’t when it was a dedicated firewall. I’ve used this to implement my small DMZ for Internet facing services. This has the added benefit that DMZ traffic only transits the vSwitch internal to the host and doesn’t have to be shuttled back and forth over the physical network infrastructure. This is much faster, since the virtualised interfaces should (in theory) be 10Gbps. However, this is somewhat irrelevant when the upstream Fibre connection is only 100Mbps.

As always, I’m keen to receive feedback and constructive criticism of this setup. Please feel free to get in contact via the feedback channels.

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Continuous Integration for Home Assistant, ESPHome and AppDaemon

This post may contain affiliate links. Please see the disclaimer for more information.

Recently I set up continuous integration and deployment from my Home Assistant configuration. This setup has been nothing short of awesome! It’s liberated me from worrying about editing my configuration – all I do is git push and relax. Either HASS will notify me when it restarts or I’ll get an email from Gitlab telling me the pipeline failed.

I wanted to take this configuration further and expand it to other parts of my Home Automation infrastructure. In this post I’ll cover expanding it to perform deployments of my HA stack with Docker, building and deploying to ESPHome devices and unit testing and deploying my AppDaemon apps.

Let’s get on with it!

Automating Docker Deployment

I’d originally held off doing this because I wasn’t looking forward to building custom Docker images in Gitlab CI. However, I managed to complete the original pipeline without having to add any extra dependencies to the HASS containers (such as git which I thought may be required). This makes the job of deploying my HA stack much easier, especially as I already had it mostly scripted. The first step was to add my update.sh script to my repo and tweak it to suit:

This is a pretty simple modification to my previous script. The main additions are that I use the -p argument to set the project name used by docker-compose. By default this is taken from the directory name, but I wanted it to match the name of my previous project even though the directory has changed from ha to home-assistant. The other main modification is that I’ve added the --remove-orphans argument to clean up any lingering containers. This is useful if I remove a container from the docker-compose.yml file. In addition I’ve removed the apt commands and cleaned up the script a bit so that it passes my shellcheck job.

The next step was simply to add the docker-compose.yml file to the repo. Then I continued by editing the CI configuration.

Updated Home Assistant CI Jobs

I first split up my previous deployment job into two jobs. The first of these is the main deployment job which pulls the new configuration. The second restarts HASS. The restart job goes in a new pipeline stage and will only be run when the docker-compose.yml or update.sh files haven’t changed:
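In outline, the two jobs look something like this sketch (the job names and the restart command are illustrative; how the deployment reaches the HASS host depends on your runner setup):

```yaml
deploy:
  stage: deploy
  script:
    # Pull the new configuration onto the HASS host
    - git pull

restart_hass:
  stage: restart
  script:
    - docker restart home-assistant
  except:
    changes:
      - docker-compose.yml
      - update.sh
```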

I then added another job (again in another pipeline stage) which performs our Docker deployment. This will be run only when either the docker-compose.yml or update.sh file changes:
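A sketch of this job (again, the job and stage names are illustrative):

```yaml
docker_deploy:
  stage: docker-deploy
  script:
    - ./update.sh
  only:
    changes:
      - docker-compose.yml
      - update.sh
```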

[Image: A full pipeline run with a deployment of the Docker containers running in the final stage]

With that in place I can now redeploy my HA stack by modifying either of those files, committing to git and pushing. In order to facilitate HASS updates with this workflow, I changed the tag of the HASS Docker image to the explicit version number. That way I can simply update the version number and redeploy for each new release.

Continuous Integration for ESPHome

Inspired by the previous configs I have seen for checking ESPHome files, I wanted to implement the same checks. However, I wanted to go further and have a full continuous deployment setup which would build the relevant firmware when its configuration was changed and send an OTA update to the corresponding device. As it turned out this was relatively easy.

I started out by importing my ESPHome configs into Git, which I hadn’t previously done. You can find the resulting repository on Gitlab. For the CI configuration I first copied over the markdownlint and yamllint jobs from my Home Assistant CI configuration.

I then borrowed the ESPHome config check jobs from Frenck’s configuration. These check against both the current release of ESPHome and the next beta release. The beta release job is allowed to fail and is designed only to provide a heads up for potential future issues.

Then I came to implement the build and deployment job. Traditionally these would be performed in separate steps, but since ESPHome can do this in a single step with its run subcommand I decided to do it the easy way. This also removes the requirement to manage build artifacts between steps. I created the following template job to manage this:
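A sketch of the template (the image, cipher and key filename here are my assumptions, and whatever image you use needs esphome, git-crypt and openssl available):

```yaml
.esphome-deploy: &esphome-deploy
  stage: deploy
  image: esphome/esphome
  before_script:
    # Decrypt the git-crypt key using the passphrase stored in the
    # $OPENSSL_PASSPHRASE Gitlab CI variable...
    - openssl aes-256-cbc -d -in git-crypt.key.enc -out /tmp/git-crypt.key -k "$OPENSSL_PASSPHRASE"
    # ...then unlock the repo so the encrypted secrets.yaml can be read
    - git-crypt unlock /tmp/git-crypt.key
  after_script:
    # Don't leave the decrypted key lying around on the runner
    - rm -f /tmp/git-crypt.key
```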

Most of the complexity here is in unlocking the git-crypt repository so that we can read the encrypted secrets file. I opted to store the git-crypt key in the repository, encrypted with openssl. The passphrase used for openssl is in turn stored in a Gitlab variable, in this case $OPENSSL_PASSPHRASE. Once the decryption of the key is complete, we can unlock the repo and get on with things. We remove the key after we are done in the after_script step.

Per-Device Jobs

Using the template configuration, I then created a job for each device I want to deploy to. These jobs are executed only when the corresponding YAML file (or secrets.yaml) is changed. This ensures that I only update devices that I need to on each run. The general form of these jobs is:
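Something like this sketch, extending the template above (--no-logs skips tailing the device logs after the OTA update):

```yaml
my_device:
  <<: *esphome-deploy
  script:
    # Compile the firmware and push it to the device over-the-air
    - esphome my_device.yaml run --no-logs
  only:
    changes:
      - my_device.yaml
      - secrets.yaml
```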

Of course you need to replace my_device with the name of your device file.

[Image: A run of the ESPHome pipeline with deployments to two devices]

With these jobs in place I have a full end-to-end pipeline for ESPHome, which lints and checks my configuration before deploying it only to devices which need updating. Nice! You can check out the full pipeline configuration on Gitlab. I now no longer have need to run the ESPHome dashboard, so I’ve removed it from my server.

Continuous Integration for AppDaemon

I mentioned previously that I wanted to split out my AppDaemon apps and configuration into a separate repo from my HASS config. I did this as a prerequisite step of this setup and you can again find the new repo on Gitlab.

The inspiration for this configuration came mostly from @bachya on the HASS forum, whose post in reply to my earlier setup provided most of the details. Thanks for sharing!

I started out by copying across the now ubiquitous markdownlint and yamllint jobs. I then added jobs for pylint, mypy, flake8 and black:
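These jobs all follow the same pattern: install the tool, then run it over the apps directory (apps/ being the usual AppDaemon layout; the stage name and Python image are my choices here). A sketch:

```yaml
pylint:
  stage: preflight
  image: python:3.7
  script:
    - pip install pylint
    - pylint apps/

mypy:
  stage: preflight
  image: python:3.7
  script:
    - pip install mypy
    - mypy --ignore-missing-imports apps/

flake8:
  stage: preflight
  image: python:3.7
  script:
    - pip install flake8
    - flake8 apps/

black:
  stage: preflight
  image: python:3.7
  script:
    - pip install black
    - black --check apps/
```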

Although this ends up being very verbose, I decided to implement these all as separate jobs so that I get individual pass/fail states for each. I’m also pretty sure the mypy job doesn’t do anything right now, because I’m not using any type hints in my Python code. However, the job is there for when I start adding those.

Unit Testing AppDaemon

Another thing that @bachya introduced me to was Appdaemontestframework. This provides a pytest based framework for unit testing your AppDaemon apps. Although I’m still working on the unit tests for my so far pretty minimal AD setup I did manage to get the framework up and running, which was a little tricky. I had some issues with setting up the initial configuration for the app, but I managed to work it out eventually.

The unit testing CI job is pretty simple:
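Something along these lines (the requirements file name is a placeholder for whatever holds your test dependencies):

```yaml
unit_tests:
  stage: test
  image: python:3.7
  script:
    - pip install -r requirements-test.txt
    - py.test
```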

All we do here is install the requirements that I need for the tests and then call py.test. Easy!

The deployment job for AppDaemon was also trivial, since it is pretty much a copy of the HASS one. Since AD detects changes to your apps automatically, there’s no need to restart. For more details you can check out the full CI pipeline on Gitlab.

[Image: A run of the AppDaemon pipeline – lots of preflight checks here!]

Conclusion

Phew, that was a lot of work, but it was all the logical follow on from work I’d done before or that others had done. I now have a full set of CI pipelines for the three main components of my home automation setup. I’m really happy with each of them, but especially the ESPHome pipeline. As an embedded engineer in my day job I find it really cool that I can update a YAML file locally, commit/push it and then my CI takes over and ends up flashing a physical device! That this is even possible is a testament to all the pieces of software used.

Next Steps

I’m keen to keep going with CI as a means of automating my operations. I think my next target will be sprucing up my Ansible configurations and running them automatically from CI. Stay tuned for that in the hopefully near future!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Building a Smarthome Network with Open Source Software

This post may contain affiliate links. Please see the disclaimer for more information.

I’ve recently been doing some network restructuring and clean up in order to better separate devices on my network and to remove a bit of the cruft that builds up over time. In the process, I realised I haven’t written about how my smarthome network is structured. My network setup is somewhat different to lots of other setups I’ve seen documented. This is because I’m mostly using Open Source software to drive my firewall and wireless access points.

This article will introduce the technologies I’m using on my network and give you an overview of the structure. It’s not intended to be a full how-to, but I’ll try to include links which will guide you through any parts I don’t cover in detail. Let’s get into it…

What’s special about a Smarthome Network?

The typical smarthome today includes a myriad of devices, probably from a variety of different manufacturers. Obviously, there will be your smarthome devices – smart bulbs, switches, hubs/gateways, vacuums, IP cameras, etc. However, there are all the standard devices too – smartphones, tablets, laptops, desktops, etc. Additionally, it’s likely that there will be some devices specifically used for media consumption.

We can see that there are obviously different classes of devices in the lists above. These differ both in function and in the level of trust that you give to them. It’s this attribute of trust that separates a smarthome network from a standard home network for me.

Consider this: are there any devices on your home network that you maybe don’t trust all that much? What about that light bulb over there? When did it last get a security update? Do you really want that sitting on the same network as the devices you trust with your personal communications, important documents and web searches?

Using existing technologies, we can partition our network in such a way as to separate our trusted and untrusted devices from one another. We don’t need to stop there! We can separate devices that require Internet access from those that don’t or along any other lines that we want. The technologies in question are well trusted and have been around for years: VLANs and inter-network firewalling.

Open Source Components

I’ve been operating a partitioned network setup for some time, pretty much since I got seriously into Home Automation technologies. Recently I’ve seen renewed interest in this is the community. The recent series from Rob at The Hook Up on YouTube was particularly good (part 1, part 2, part 3). A large part of the community appears to be using the Unifi line of products from Ubiquity. I’m not questioning the quality or performance of these products – I’ve never tried them. However, I prefer to use Open Source alternatives where possible.

[Image: You shall not pass!]

The primary components of my smarthome network are running Open Source software. The first of these is my firewall, running pfSense. The second of these are my wireless access points. For these I use a couple of consumer grade TP-Link wireless routers, running OpenWRT. Since these are running solely as access points and Ethernet switches they don’t suffer performance issues from the consumer hardware. I also don’t put excessive strain on my wireless, preferring to use Ethernet wherever possible.

[Image: The road goes ever on…]

Unfortunately, there doesn’t appear to be an Open Source OS available for readily available Ethernet switches. As an alternative I’m using a couple of TP-Link switches (the TL-SG1024DE and TL-SG105E). These are great switches for the price. Crucially they come with the VLAN capability that we need for our partitioned smarthome network.

Network Partitioning With VLANs

For those unfamiliar with VLANs, they feel like magic when you discover them! A VLAN is a virtual network, which runs over the top of your physical network cables. Several of these networks can be run over the same connection. This means we can run several networks on the same physical hardware. VLANs work by tagging each Ethernet frame with a network identifier so that the receiving hardware knows which network to send it on to. This usually requires hardware support in your switching hardware, although software support is available in most operating systems.

I subdivide my network into several VLANs, based around both trust and functionality:

  • The main LAN network, this houses the trusted client devices (laptops, smartphones, etc). This network has access to most of the others for maintenance purposes.
  • The IOT network, which houses smart devices which absolutely require Internet access. At the moment this is just my Smart TV, Neato Botvac and a Chromecast Ultra. The Neato is the only device that uses the associated WLAN. This network is firewalled from all the others but is allowed to access the Internet.
  • The NOT (Network of Things – name stolen from The Hook Up video series, linked above). This houses smart devices which are locally controlled, such as my ESPHome devices. Some of the devices on this network (such as my Yeelight bulbs, Milight Gateway and Broadlink RM Mini) want to get out to the Internet but are blocked by the firewall. For ease of use I put the VM hosting my Home Automation services on this network and make an exception for it in the firewall. I’m hoping to change this eventually. This network is blocked both from the other local networks and the Internet, aside from a few exceptions.
  • The Media network, this houses all the devices and servers which stream media around my house. This includes both my Kodi systems, an older HDHomerun and the Emby, tvheadend and Mopidy/Snapcast servers. For now the RPi driving my outdoor speakers is living in the NOT network. This is because it’s on wifi and I didn’t want to create another AP for it. Eventually that will be migrated over, once I run a cable into the roof for it. Having the media devices in a separate VLAN should allow me to do some traffic shaping in the future to prioritise their traffic (if needed, I don’t have any problems right now). This network isn’t blocked from any of the others and has Internet access.
  • The DMZ, this hosts any services which are available from outside my network. It’s blocked from all local networks, but has Internet access.
  • The Guest network, which is tied directly to a specific WLAN only for guest devices. This allows me to provide Internet access to guests without giving them access to anything else. Blocked from all the local networks, but obviously has Internet access.
  • The Infrastructure network, this one is new and I’m still migrating devices over to it. The idea is that it will house all the network infrastructure devices, including switches, access points and the physical host servers. Everything in here will have a static IP address. Right now the firewall is open. I will probably lock it down so that admin can only be performed from trusted devices.
  • The Servers network, this one is also new and so far empty. Eventually it will contain all the internal server VMs (i.e. non-DMZ, media or HA). The idea is that I can use firewall rules to control access to these from other parts of the network. Most definitely a work in progress.
  • The Work network, which houses my company workstation and any other devices used for work, since I work from home. This is blocked from the other networks, except for a few ports. It obviously has Internet access.

Phew… that’s quite a few(!).

VLANs With pfSense

VLANs with pfSense are fairly easy to configure. It’s basically a two step process:

  1. Create the VLAN in Interfaces->Assignments->VLANs
  2. Add a new interface which uses the VLAN in Interfaces->Assignments
[Image: The pfSense VLANs page]
[Image: The pfSense interfaces page, this maps VLANs to interfaces used in firewall rules]

The pfSense documentation gives a better overview of this.

Once your VLANs and interfaces are available you should be able to configure the firewall rules to control traffic between them. You’re probably going to want to block access to the local networks from your secure VLANs and potentially allow Internet access. If you want to also deny Internet access just deny access from that network to all destinations.

[Image: The firewall rules for my IOT network]

OpenWRT: VLANs and Multiple APs

The main reason for using OpenWRT on my wireless APs is to unlock capabilities of the underlying hardware that aren’t available in the stock firmware. Specifically, this will allow you to create multiple wireless access points and assign them to different VLANs.

[Image: Multiple WLAN Access Points in OpenWRT]

The basic process here is to create a bridge interface. This can be used to group your VLAN and the wireless network together. Adding the VLANs themselves is pretty trivial. There is even a nice GUI editor which shows each port and the corresponding VLANs.
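By way of illustration, here’s roughly what that looks like in UCI config on a pre-DSA OpenWRT build (the VLAN ID, SSID and key are placeholders, and your radio and port names will differ):

```
# /etc/config/network (excerpt)
config interface 'iot'
        option type 'bridge'
        option ifname 'eth0.20'   # VLAN 20 tagged on the trunk port
        option proto 'none'       # no IP on the AP; DHCP comes from the firewall

# /etc/config/wireless (excerpt)
config wifi-iface
        option device 'radio0'
        option mode 'ap'
        option ssid 'IOT'
        option network 'iot'      # attach this AP to the bridge above
        option encryption 'psk2'
        option key 'changeme'
```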

[Image: The OpenWRT VLAN assignments interface is the best I’ve found so far]

The main gotcha of this setup is to make sure that the dnsmasq service is disabled in System->Startup to prevent it interfering with the DNS and DHCP from the firewall. You can also check the “Ignore Interface” DHCP setting in the interface config for each interface.

You can also delete the default created WAN interface to allow you to use the fifth switch port on the router as another port on the network. This should also disable NATing between the ports, which you don’t need with this setup. Basically, once you are finished tweaking all the settings the device will only act as an access point (for multiple wireless networks) and managed switch.

Special Considerations

There aren’t too many issues that you should run into with this setup, once you get all the pieces into place. There were a few minor things which tripped me up however (two of these are specific to the switch I’m using):

  • My TL-SG1024DE switch is a little funny about what VLAN its management interface will be available on. It seems like it’s VLAN 1 or nothing. Additionally it adds VLAN 1 as untagged to every port. I eventually just made VLAN 1 into my Infrastructure network to work around this.
  • The TL-SG1024DE switch also requires you to set the PVID setting on each port, even if the port will never receive any untagged frames. I just set it to whatever the primary VLAN of the port is.
  • I held off moving the Chromecast into the IOT network for a long time, since I was worried about the discovery not working across subnets. As it turns out, I shouldn’t have worried. All you have to do is install and enable the Avahi package in pfSense and you’re good to go. This video helped me out, though the settings seem to have been simplified since then (see the screenshot below for my settings).
[Image: Avahi settings in pfSense]

Core Network and Topology

I’m not going to cover how I setup the main switch on my network, since the purpose of this article is to focus on the Open Source components. Your configuration will also vary depending on what switch you have.

I will say a few words on the physical topology of my smarthome network. My firewall and main switch are located in my ‘rack’. The firewall machine has two network interfaces. One is used as the WAN port and connected to my fibre ONT. The other is the main VLAN trunk to the switch. In my case my pfSense install is actually installed as a virtual machine hosted by Proxmox which runs on the host machine. This is somewhat irrelevant to this discussion and probably deserves a post all of its own.

From the main switch, connections go out to the various rooms in the house. In three of these I have additional switches where I need the extra ports. These are the two OpenWRT devices and the TL-SG105E switch. This layout also gives nice wireless coverage around the house. The uplinks to each of these are configured as VLAN trunks for whichever networks are required at the other end.

Conclusions

Hopefully this post has shown you that it’s possible to create a fully featured and secure smarthome network using Open Source components. I’ve probably glossed over tons of the details of my setup in the process of writing this article. If I put them all in, it would probably be three times as long! If you have any questions, please ask them in the feedback channels. I’m also not a professional network engineer, so feel free to provide improvements and constructive criticism!

My network is a constant work in progress. However, once this latest round of tidying up is complete I think I’ll be in good shape for quite a while. Hopefully this network will happily support all the future projects I have planned for my smarthome!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.


Self-Hosted GPS Tracking with Traccar and Home Assistant

This post may contain affiliate links. Please see the disclaimer for more information.

One of the outstanding issues I’ve had with my Home Automation system in recent months has been to fix my presence detection. I was using a combination of Owntracks and SNMP to query my pfSense firewall for devices on the network.

This worked well until my Owntracks setup broke due to internal network changes, specifically separating my HA/MQTT server from the reverse proxy. In turn, this meant that the MQTT server wasn’t able to renew its Let’s Encrypt certificate.

The solution to this would have been to use DNS validation to get the certificate issued. However, my DNS provider (Namecheap) don’t allow API access unless you have a large enough account with them.

Eventually I will migrate my DNS to a supported provider and get internal TLS working. However, it’s quite a bit of work to migrate over. Also, DNS is pretty critical to the operation of, well, everything – so I want to take my time and get it right. In the meantime I was looking into other options for GPS based presence detection.

GPS Based Presence Options

Aside from using Owntracks in MQTT mode, it also supports HTTP mode (which is now actually the recommended mode). This is directly supported in HASS. I hadn’t switched over to it because I wanted to try out the Owntracks recorder for logging position data over time. Sending data to both HASS and the recorder would be difficult via HTTP. However, it’s exactly the use case that MQTT is made for!

While I was dithering around not doing very much about this problem, another alternative cropped up: Traccar. Traccar is a self-hosted GPS tracking system which supports a multitude of different devices and has mobile apps available for both major OSes. The main plus point for me is that it is a stand-alone server which I could host in my DMZ. There is also a Home Assistant integration for Traccar.

Installation

It seems to be rather badly documented, but there is an official Docker image for Traccar on Docker Hub. When reading that documentation I initially thought I had to build the image myself. Don’t! Just use the official one (unless you have a good reason not to).

I followed along the instructions in the Readme, until it came to the final docker run command, where I wanted to put it into docker-compose. Here’s what I came up with:
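Something along these lines (the host-side paths are placeholders for wherever you keep your Traccar files):

```yaml
version: "3"
services:
  traccar:
    image: traccar/traccar
    container_name: traccar
    restart: unless-stopped
    ports:
      - "8082:8082"                # web UI
      - "5000-5150:5000-5150"      # device protocol ports
      - "5000-5150:5000-5150/udp"
    volumes:
      - ./logs:/opt/traccar/logs:rw
      # Persist the default H2 database files on the host
      - ./data:/opt/traccar/data:rw
      - ./traccar.xml:/opt/traccar/conf/traccar.xml:ro
```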

This is pretty much the command in the docs translated over, except for the data volume line. This mounts the data volume needed for persisting the default H2 database files to the host. I’m not sure why this isn’t mentioned in the docs. Perhaps they expect that you will use MySQL for the database, but I didn’t want to do that for my initial test setup.

[Image: The Traccar Web UI]

After running the docker-compose up -d command I had Traccar working on port 8082 of my server and was able to log in as the default user (admin and password admin!). The first thing I did after logging in was change that password and disable user registration, before exposing my instance via the reverse proxy.

[Image: Deselect “Registration” in the Server Settings dialog (Right Hand Gear Icon->Server)]

I found that the server memory usage was reasonably high, which seems to be due to the memory options passed to the Java VM in the dockerfile. There doesn’t seem to be any way to change this except for building a custom image.

Reverse Proxy Setup

I’m intending to migrate the reverse proxy on my home network to Traefik at some point after my previous success with it. However, for now it’s still running good old Nginx. I couldn’t find an example Nginx config for Traccar, so I copied my HASS one and modified it to suit:

Adding Devices

Once your server is up and running, adding devices is easy. Just click the plus icon next to devices and give your device a name. For the ID field, the Android app will generate a six-digit identifier, which you can copy over. This ID is the only security token used to update the position, so you may like to use something more secure. I recommend generating a pseudo-random string with pwgen and using that.
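For example:

```bash
# Generate one 20 character fully-random string to use as the device ID
pwgen -s 20 1
```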

Setup on the app side is pretty trivial too. Just enter the full URL to the server e.g. https://traccar.example.com and make sure the device identifier matches the one you entered on the server. Toggling the service status will then start sending location updates. The Android app seems to put an annoying persistent notification in the notification pull down. For some reason this doesn’t go into the ongoing section (which would make it less annoying). I just hid the notifications from the Traccar app via the relevant Android setting and the app still sends location updates.

As an aside, you can easily send location updates from the command line with curl. For example, I made the screenshot above by artificially positioning a device at New Plymouth’s Wind Wand:
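A sketch of such a command, using Traccar’s OsmAnd-compatible HTTP protocol (which listens on port 5055 by default; the hostname, device ID and exact coordinates here are placeholders):

```bash
curl "http://traccar.example.com:5055/?id=123456&lat=-39.0556&lon=174.0710&timestamp=$(date +%s)"
```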

Make sure to update the hostname and device ID to match your setup and update the other fields to reflect the position you want to log.

Integrating Traccar with Home Assistant

The integration with HASS also proved to be relatively easy and works well. There were a couple of things which tripped me up, which I’ll come to shortly. First we need an entry for Traccar in our Home Assistant config:
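Mine looks something like this (the username is whatever read-only Traccar user you create, as discussed below):

```yaml
device_tracker:
  - platform: traccar
    host: traccar.example.com
    port: 443
    ssl: true
    username: homeassistant@example.com
    password: !secret traccar_password
```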

The port and ssl variables are required for the above setup with the reverse proxy, aside from that it’s reasonably obvious.

After adding that to my config file, I expected my devices just to show up in HASS. They didn’t. This turned out to be due to two problems. The first of these was that I had created a user in Traccar specifically for Home Assistant (which you can give read-only permissions, nicely). However, this user didn’t have access to my devices. The solution is to grant access via the ‘Devices’ panel of the user management panel (the icon looks like two little picture frames).

The second issue tripped me up for a bit longer. Even after I set the permissions correctly, I still couldn’t see my devices in the HASS UI. It turns out they were being added to the known_devices.yaml on the server, but not being enabled by default. It wasn’t until I logged into the server and checked this file that I noticed this. This is more problematic if you are editing your config locally and deploying it via git since the change obviously won’t be made to your local copy.

In the end I added the entry in my local copy and deployed it to the server:
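The change itself is just flipping track to true for each device (the device slug below is a placeholder for whatever HASS generated):

```yaml
my_phone:
  name: My Phone
  icon:
  picture:
  track: true
```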

Once that was done for each device, the Traccar device_tracker entities appeared in Home Assistant just fine.

Battery Usage Issues

Now comes the big potential problem: the power usage of the Traccar Android app. The first day I had this installed my phone was down to 15% by 9-10pm. However, the results have been less worrying over the last couple of days. I’m not sure why it was so bad for the first day, although there were a few points that were different from my subsequent usage:

  • I was using the version from F-Droid. After noticing the battery issue I switched to the version from the play store. I don’t know if that version uses some proprietary Google Services API, or if they are identical. However, it is a possible reason for the difference.
  • I had changed a couple of the default settings – I lengthened the frequency setting to 15 minutes and changed the distance setting to 50 meters. I’m not sure what effect/interactions these have since I can’t find any documentation about them. I’ve since gone back to the default settings.
  • I moved around quite a bit that first day (some driving around town and a long walk) with less moving in the subsequent two days. If this is really the reason for the battery usage then that would be disappointing. Obviously not moving is not a solution!

This is all based on three days of usage, so I’m going to continue monitoring the situation and potentially testing other variations in settings. I’ve asked about the battery usage on the Traccar forum, but have so far had no answer. One worrying issue is that the Android battery usage panel shows the app as keeping the device awake for long periods.

[Image: Power usage from the Traccar app]

There is an alternative to the Traccar app, since OwnTracks can also be used to send updates to a Traccar server. However, at this point you may as well just use the HASS integration directly, unless you want the extra features of the Traccar server.

Conclusion

Traccar seems like a pretty solid piece of software, though due to the power issue I’m not completely sold on it yet. The location updates are certainly more frequent than OwnTracks was (at least over MQTT). The server side UI is nice and responsive and there are quite a few capabilities that I haven’t explored yet. For these reasons I’m going to stick with it for now and continue testing.

Next Steps

Obviously I need to solve the battery issues in order to go much further. If the usage proves acceptable after further testing I’m also going to see if I can dial down the memory usage of the server by editing the Dockerfile.

I also have quite a bit of work to do on the HASS presence detection front. I’d specifically like to try a combination of Bayesian Sensors and the “Not so Binary” approach.

Hopefully I can come up with something which is both responsive to changes and also robust against false readings. I’ll be sure to write a post about that when I do. However, that’s it for now. Thanks for reading!

If you liked this post and want to see more, please consider subscribing to the mailing list (below) or the RSS feed. You can also follow me on Twitter. If you want to show your appreciation, feel free to buy me a coffee.