Simple Automated Video Transcoding…

This post has been sitting in my drafts since 2012. I still use the system described below (occasionally) and it seemed a shame not to post the approach for others to use.

I had a ton of videos to transcode for storage on my home server/mythbox. My tool of choice has been HandBrake, specifically the CLI version. As I had a fair few videos to get through, I wanted to set up an automated system for doing the transcoding. Here is the simple bash script I came up with:

#!/bin/bash
# bash (not plain sh) is required for the array syntax below

HANDBRAKE_OPTS="-e x264 -q 20.0 -a 1 -E ac3 -B 160 -6 auto -R Auto,Auto -D 0.0 -f mkv --detelecine --decomb --loose-anamorphic -m -2 -T -x b-adapt=2:rc-lookahead=50"
MAILTO=me@webworxshop.com
MAILFROM=transcoder@webworxshop.com
BASEDIR=/mnt/media/transcoding
#BASEDIR=.
INDIR=$BASEDIR/queue
OUTDIR=$BASEDIR/mkvs
DONEDIR=$BASEDIR/done

if [ -e "$BASEDIR/.lock" ]
then
        echo "Lock file exists, exiting."
        exit 1
fi
touch "$BASEDIR/.lock"

# find DVD rips in the queue (each one is a <name>/VIDEO_TS directory)
for dir in $(find "$INDIR/" -type d | grep VIDEO_TS); do

        # split the path on '/'; with the paths above, element 4 is the rip's
        # directory name (adjust the index if you change the depth of BASEDIR)
        toks=($(echo "$dir" | tr "/" "\n"))
        HandBrakeCLI -i "$dir" --main-feature -o "$OUTDIR/${toks[4]}.mkv" $HANDBRAKE_OPTS
        mv "$INDIR/${toks[4]}" "$DONEDIR/${toks[4]}"

        # send the completion email
        echo "To: $MAILTO" > msg.txt
        echo "From: $MAILFROM" >> msg.txt
        echo "Subject: Transcode Complete" >> msg.txt
        echo "" >> msg.txt
        echo "Transcoding ${toks[4]} completed successfully!" >> msg.txt
        msmtp "$MAILTO" < msg.txt
        rm msg.txt

        rm "$BASEDIR/.lock"

        # only process one video per run
        exit 0
done

# the queue was empty; remove the lock so future runs aren't blocked
rm "$BASEDIR/.lock"

The script requires that you set it up in a directory (/mnt/media/transcoding on my system) with three sub-directories (queue, mkvs and done). The script will transcode one video from the queue directory on each run. The idea is that the script should be run from cron several times a day, at times when your machine isn’t doing much else (I run mine overnight and while I’m at work). When a transcode finishes the script will send you an email and move the source directory into the done directory. The transcoded file will be dropped into the mkvs directory. Obviously, the script will just exit if there’s nothing in the queue. This means that once the system is set up, all you have to do is drop new videos into the queue directory and they will be transcoded automatically.
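For example, a crontab entry along the following lines (the path to the script is an assumption; the email step also assumes msmtp is already configured) will kick off one transcode every night at 1am:

# m h dom mon dow  command
0 1 * * * /mnt/media/transcoding/transcode.sh

Scheduling a few entries like this at quiet times of day lets the queue drain steadily without the machine grinding during normal use.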

Let me know if you find this useful, or suggest improvements in the comments.

Monitor Dynamic DNS Status with Nagios

For anyone running services on their home network, a Dynamic DNS setup is a must-have. But what happens when your Dynamic DNS client fails to update one day, just as you’re going on a trip, and you end up locked out of your network? If you’re running Nagios as your monitoring solution then you can easily detect this situation. This post will show you how, and provide a Nagios plugin for doing just this.

The basic idea is to compare the DNS result for your local network FQDN with your external IP address. To retrieve our external address we use a 3rd party service which, being outside our network, can see our external IP. In my case I use ifconfig.co, which conveniently can return its result as JSON for easy consumption by any number of tools. DNS lookup of our FQDN is provided by the Python socket.gethostbyname function. This gives us two addresses which, if everything is working, will be identical. If our Dynamic DNS client is having issues, the addresses will be different.
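You can try both halves of this comparison from the shell before touching any code; something like the following (assuming jq is installed, with my.domain.com standing in for your FQDN and 203.0.113.42 as an example address):

$ curl -s https://ifconfig.co/json | jq -r .ip
203.0.113.42
$ dig +short my.domain.com
203.0.113.42

If the two addresses differ, your Dynamic DNS client has a problem.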

Anyway, on to the code (we’re going to need the Python requests module, so install it with pip install requests):

#!/usr/bin/env python3

import requests
import socket
import sys
import argparse

STATUS_OK = 0
STATUS_WARNING = 1
STATUS_CRITICAL = 2
STATUS_UNKNOWN = 3

def format_output(status, msg):
    if status == STATUS_OK:
        print("OK - {}".format(msg))
    elif status == STATUS_WARNING:
        print("WARNING - {}".format(msg))
    elif status == STATUS_CRITICAL:
        print("CRITICAL - {}".format(msg))
    elif status == STATUS_UNKNOWN:
        print("UNKNOWN - {}".format(msg))
    sys.exit(status)

def main():
    parser = argparse.ArgumentParser(description="Nagios plugin to check that external IP matches DNS IP")
    parser.add_argument('-H', '--hostname', metavar='HOST', required=True,
        help="DNS hostname to look up")
    args = parser.parse_args()

    req = requests.get("https://ifconfig.co/json", timeout=30)
    if req.status_code != 200:
        format_output(STATUS_UNKNOWN, "Unable to determine external IP, status code = {}".format(req.status_code))

    ext_ip = req.json()['ip']
    try:
        dns_ip = socket.gethostbyname(args.hostname)
    except socket.gaierror as e:
        format_output(STATUS_UNKNOWN, "DNS lookup failed: {}".format(e))

    if ext_ip != dns_ip:
        format_output(STATUS_CRITICAL, "DNS IP ({}) does not match external IP ({})".format(dns_ip, ext_ip))
    else:
        format_output(STATUS_OK, "DNS IP correct. External IP is {}".format(ext_ip))

if __name__ == "__main__":
    main()

This is a fairly basic Nagios plugin that implements the approach described above. The only slightly tricky part is the output formatting and return code convention, which must be exactly right for Nagios to interpret the results of your plugin. The convention is documented in the Nagios plugin API documentation (I love this approach as an example of Unixy design).

To use this with Nagios, put the plugin in the Nagios plugins directory (/usr/local/nagios/libexec/ in my case) and make it executable (chmod +x). Then you need to update your config to add a new command in your objects/commands.cfg file:

# 'check_dynamic_ip' command definition
define command {
       command_name    check_dynamic_ip
       command_line    $USER1$/check_dynamic_ip.py -H "$ARG1$"
}

You will also need a corresponding service check in your server.cfg file:

define service {
        use                             generic-service
        host_name                       hostname
        service_description             Dynamic IP Status
        check_command                   check_dynamic_ip!my.domain.com
}

Then simply restart Nagios (sudo systemctl restart nagios.service) and you’re done.
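It’s also worth running the plugin by hand first to check everything is wired up correctly; the output and exit code (per the conventions above) should look something like this, with an example address shown:

$ /usr/local/nagios/libexec/check_dynamic_ip.py -H my.domain.com
OK - DNS IP correct. External IP is 203.0.113.42
$ echo $?
0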

Now you can enjoy knowing when you’re going to be locked out of your network 😉

My Self-Hosted Life


For those that know me, I’ve made no secret of the fact that I believe you are better off doing something yourself than outsourcing the task to someone else, especially in areas you are interested in or have some expertise in. For me this has particular value in the case of my computing. As a result, I have taken the decision to self-host as many of my online services as possible, rather than relying on the cloud (since that’s just someone else’s computer). I’ve been working on this for years (actually the whole time this blog has been dark, and before) and at this stage I’m mostly there: almost all of my digital life is provided by Open Source software, running under my control.

This post will detail what I’m using and how it all fits together. I’m not going to go into technical specifics since otherwise this post would be huge; perhaps I’ll focus on some of that in future posts (feel free to make requests in the comments). Also, please note that my setup is by no means finished and probably never will be. It’s an ongoing project, and installing and maintaining this stuff has become pretty much my main hobby.

In the Cloud

I’m going to start right here, with this blog, since that was where the whole thing really started. This blog existed well before my undertaking to self-host. In the early days it lived on a shared hosting plan provided by Dreamhost. The site has always run WordPress; although I’ve toyed with the idea of moving to a static site over the years, I’ve just never quite managed it. In 2011 I moved the site to a shiny new VPS provided by Linode, where it has lived ever since. There is also a Piwik install for tracking website stats (which I’ve blogged about before).

The main motivation behind the VPS was to install and configure my own mail server setup, something which I ranted about shortly after. This setup has been serving myself and various family members well since then, with really very little maintenance on my part (almost everything is automated).

There have been various other uses for the VPS over time, many of which haven’t stuck. Probably the most successful has been an installation of TT-RSS, which started life on my home server and at some point moved to the VPS for convenience of access. I’ve also dabbled with various chat applications, mainly XMPP based, but they’ve never really been that useful due to the network effect of no-one else using them! At this stage email has become my primary form of communication.

You might say that this is a bit of a cop out, since this all runs on a virtual machine, which itself runs on someone else’s computer. I would agree; however, it’s a nice middle ground between going all out with your own servers and running everything in the cloud. To me, the fact that the VPS is in the cloud is obscured by the ability to control every detail of its running software. It’s also pretty nice for services which I want to be reliable, since Linode almost never skips a beat.

At Home

So the VPS is one thing and is really used for critical services or stuff that needs to be accessible to the wider Internet (like this site), but the real magic happens on my home servers (yes, there is more than one). My main server (now on its second hardware iteration) started life as a MythTV system and still does a great job in this respect. Many other services have been added over time, such as an MQTT broker (mosquitto), git server (gitolite+gitweb), a calendar/contacts server (Radicale) and file synchronisation (Syncthing). At some point I also switched out the MythTV frontend and replaced it with XBMC (now Kodi).

In the last couple of years I’ve been moving further down the home automation route, rather than just sensing and logging via MQTT. I’ve finally settled on Home Assistant as my automation controller and UI, along with an instance of Node-RED to do some miscellaneous processing. This all runs on the main server, with a Raspberry Pi 2 in the garage functioning as what I like to call ‘the gateway’ (it has a couple of radios and some sensors connected and runs another instance of Node-RED to shuttle this data to MQTT). In addition I have my home CCTV set up using a couple of webcams and MotionEye. One of the cameras is located remotely and connected to another Raspberry Pi (this time an old model B) and streams back to the main server with mjpg-streamer.

I also run a pfSense-based firewall to protect my network and provide remote VPN access. This runs on an old netbook with an extra USB Ethernet adapter. The internal network is partitioned using VLANs to provide a separate firewalled subnet for the home automation gear, some of which is cheap Chinese stuff which needs to be forcibly prevented from talking to the cloud. The networking gear consists of two TP-Link routers flashed with OpenWRT, which provides nice VLAN support. These have been configured to provide just switching and wireless access points, delegating all the firewalling, DNS and DHCP duties to the firewall.

Within the last year or so I’ve been working on streamlining the management of all of this. The principal focus has been monitoring all the services I’ve got running. For this I’ve settled on Nagios, which I run in a separate VM hosted on the main home server. Although complex to set up, Nagios is brilliant and I can’t speak highly enough of it; it saves me so much time just by letting me know what is going on on my network. Email notifications from Nagios of course go via my own mail server! I’ve also played around with collectd, InfluxDB and Grafana for performance graphing, although I’ve yet to deploy this to everything.

Conclusion and The Future

So that was a (probably non-exhaustive) list of my self-hosting activities. I’m sure I’ve forgotten many things, and of course there is a huge amount of supporting software that I haven’t mentioned. As I said, I’m now at the stage where this meets almost all my computing needs, although there are a few areas where I want to improve.

The main thing is automating and persisting my configuration, since I’m still mostly doing things manually. For this I’ve settled on a combination of Ansible and Docker. I’ve played extensively with both but haven’t really made much progress with deploying them for much more than testing purposes.

I’m also constantly evaluating new software to fill gaps in my ecosystem. I’m currently looking at Rocket.Chat and Hubot to provide a chat-based interface for remote administration, but don’t have a usable system yet. I’m also toying with the idea of a GitLab server to replace the gitolite+gitweb system and to utilise its CI in my automation strategy, but I’ve heard it requires a fair bit in the way of resources (incidentally, gitlab.com is really the only 3rd party service I heavily use).

That I am able to do this at all is a testament to the power of Free and Open Source software and cheap commodity hardware. I find it pretty awesome to think that almost every interaction I have online utilises my own infrastructure and that it works tirelessly for me 24/7.

I’m only just getting started documenting my setup here; for instance, this post hasn’t touched on any of the client applications I use on my phone and desktop machines. I’m also going to do some more technical posts on various aspects as time goes on, so please stay tuned (or even subscribe to the RSS feed or mailing list!).


Reviving this Blog

It’s been almost four years since I updated this blog, and in that time I’ve been busy with life and family. I’ve still been working on blog-worthy stuff, but all my spare time has been taken up with actual projects rather than writing about them. Most likely this has just been a matter of priorities: I could have made time to blog, but wasn’t interested enough to do so. I’ve always kept the site running and its software updated, and I’ve watched the daily hits go down from several hundred to single digits.

Recently I’ve been thinking that I would like to get back into it. It’s taken me a little while to set aside the time, but this post is the start of a new and (hopefully) sustained run of writing. How long this continues will really depend on the response I get; if I see people reading and responding to what I’m producing then I will feel justified in setting aside time for it.

I’m making this a little bit of a fresh start, and with that in mind I’ve done some work on the site. All the old content will remain in place, since it still gets a few hits. However, I’ve updated the theme and added the option for readers to subscribe by email, since this seems pretty popular nowadays (you can still subscribe by RSS, and that’s never going away). Also, the site is now accessible only over HTTPS, thanks to Let’s Encrypt.

Content-wise, I’ve tried to get a head start and currently have two further posts written and ready for publishing. These will be posted later this week and I’ll try to keep up the momentum. In terms of topics, there will be lots about self-hosting your own cloud services, some embedded projects, and general software and Linux material. I have a list of posts I want to write and I’m open to suggestions in the comments.

Let’s see how this goes…

Tiny MQTT Broker with OpenWRT


So yet again I’ve been really lax at posting, but meh. I’ve still been working on various projects aimed at home automation – this post is a taster of where I’m going…

MQTT (for those that haven’t heard of it) is a real-time, lightweight publish/subscribe protocol for telemetry-based applications (i.e. sensors). It’s been described as “RSS for the Internet of Things” (a rather poor description, in my opinion).

The central part of MQTT is the broker: clients connect to brokers in order to publish data and receive data on feeds to which they are subscribed. Multiple brokers can be fused together in a hierarchical structure, much like the mounting of filesystems in a Unix-like system.
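To make that concrete: in Mosquitto (the broker I end up using below), this fusing is configured as a bridge. A minimal sketch of a mosquitto.conf stanza on a child broker might look like this (the address and topic pattern are placeholders):

# bridge this broker to a parent/master broker
connection to-master
address 192.168.1.10:1883
# share everything under house/ in both directions, at QoS 0
topic house/# both 0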

I’ve been considering using MQTT as the communication medium in my planned home automation/sensor network projects. I wanted to set up a hierarchical system with different brokers for different areas of the house, but hadn’t settled on a hardware platform. Until now…

…enter the TP-Link MR3020 ‘travel router’, which is much like the TL-WR703N which I’ve seen used in several hardware hacks recently:

It’s a Tiny MQTT Broker!

I had to ask a friend in Hong Kong to send me a couple of these (they aren’t available in NZ) – thanks Tony! (UPDATE 2019: of course now you can get these shipped direct, something I didn’t know about in 2012.) Once I received them, installing OpenWRT was easy (basically just upload the image through the existing web UI, following the instructions on the wiki page linked above). I then configured the wireless adapter in station mode so that it would connect to my existing wireless network, and added a cheap 8GB flash drive to expand the available storage (the device only has 4MB of on-board flash, of which ~900KB is available after installing OpenWRT). I followed the OpenWRT USB storage howto for this and to my relief found that the on-board flash had enough space for the required drivers (phew!).
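For reference, the manual version of the storage setup looks roughly like this (assuming the flash drive appears as /dev/sda and the partition layout described below; the howto covers making the mounts permanent via /etc/config/fstab):

# format and enable the swap partition
$ mkswap /dev/sda1 && swapon /dev/sda1
# format the data partition and mount it at /opt
$ mkfs.ext4 /dev/sda2
$ mkdir -p /opt && mount /dev/sda2 /opt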

Once the hardware-type stuff was sorted, with the USB drive partitioned (1GB swap, 7GB /opt) and mounted on boot, I was able to install Mosquitto, the Open Source MQTT broker, with the following command:

$ opkg install mosquitto -d opt

The -d option allows the package manager to install to a different destination, in this case /opt. Destinations are configured in /etc/opkg.conf.
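For example, the opt destination used above corresponds to a line like this in /etc/opkg.conf:

dest opt /opt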

It took a little bit of fiddling to get Mosquitto to start at boot, mainly because of the custom install location. In the end I just edited the paths in /opt/etc/init.d/mosquitto to point to the correct locations (I changed the APP and CONF settings), then symlinked the script to /etc/rc.d/S50mosquitto so it runs at startup.
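That last step is just:

$ ln -s /opt/etc/init.d/mosquitto /etc/rc.d/S50mosquitto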

That’s about as far as I’ve got, apart from a quick test with mosquitto_pub/mosquitto_sub to check that everything works. I haven’t tried mounting this broker under the master broker running on my home server yet.
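The smoke test is just two terminals, a subscriber and a publisher (the broker’s address and the topic are placeholders):

$ mosquitto_sub -h 192.168.1.50 -t test/topic

$ mosquitto_pub -h 192.168.1.50 -t test/topic -m "hello"

If the message appears in the subscriber’s terminal, the broker is doing its job.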

The next job is to investigate the serial port on the device in order to attach an Arduino clone which I soldered up a while ago. That will be a story for another day, hopefully in the not-too-distant future!
