Posts for year 2020

You should probably know about LetsEncrypt DNS challenge validation

Everyone knows the basic way to renew a LetsEncrypt cert: open port 80 and let LetsEncrypt connect to your server. But what if you don't want to open up your network, or you limit access to a handful of IP addresses? Well, you can use DNS challenge validation instead: no web server and no port wrangling required.

For example, I use the certbot-dns-cloudflare plugin for my work intranet, allowing it to remain VPN-only.
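As a rough sketch of what that looks like (the domain and credentials path here are placeholders, not my actual setup), the Cloudflare plugin just needs a credentials file containing an API token with DNS edit permissions:

```shell
# ~/.secrets/cloudflare.ini (chmod 600) should contain:
#   dns_cloudflare_api_token = YOUR-CLOUDFLARE-API-TOKEN

# Issue/renew via a DNS TXT record instead of port 80
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d intranet.example.com
```

Renewals then work the same way: `certbot renew` reuses the plugin settings, so the host never needs to accept inbound connections.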

Another great option is acme.sh, as it supports a massive list of DNS providers, including the ever-popular duckdns, out of the box.
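For instance, with duckdns the whole dance is roughly this (the token and domain are placeholders; acme.sh reads the token from the `DuckDNS_Token` environment variable):

```shell
# acme.sh stores the token after the first run
export DuckDNS_Token="your-duckdns-token"

# Issue a cert using duckdns's DNS API for the challenge
acme.sh --issue --dns dns_duckdns -d example.duckdns.org
```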

Given that in the past the most fragile part of my LetsEncrypt setup was keeping port 80 accessible to LetsEncrypt, I personally use this method even when my network is accessible from the wider internet.


Splitting a Facebook event calendar

Node-RED is a flow-based programming tool, originally developed by IBM's Emerging Technology Services team and now a part of the JS Foundation. My friends make liberal use of Facebook events; unfortunately, I find the events interface impossible to navigate. Luckily they do have an .ics feed available; unluckily, events you haven't accepted are mixed in with events you have.

So I made a simple flow that splits accepted events from tentative events. This way I can subscribe to each .ics in my Nextcloud instance and give tentative events a different color. I find the calendar much more usable this way.

I used the node-red-contrib-httpauth node. The main bit is contained in a function node, mainly because I couldn't figure out how to sanely do this with the split node.

Use is simple:

  1. Update the http auth node with your preferred user:password
  2. Add your Facebook event calendar URL to the http request node
  3. Subscribe to user:password@your-domain/node-red/facebook/accepted or user:password@your-domain/node-red/facebook/tentative

[{"id":"30abd373.bd5524","type":"http in","z":"ce798f74.64a9d8","name":"","url":"/facebook/:request","method":"get","upload":false,"swaggerDoc":"","x":160,"y":300,"wires":[["77e1feab.382658"]]},{"id":"705a7847.bc5d","type":"debug","z":"ce798f74.64a9d8","name":"","active":false,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","x":430,"y":400,"wires":[]},{"id":"77e1feab.382658","type":"node-red-contrib-httpauth","z":"ce798f74.64a9d8","name":"","file":"","cred":"","authType":"Basic","realm":"","username":"","password":"","hashed":false,"x":240,"y":340,"wires":[["116bd38e.4b7dcc","705a7847.bc5d"]]},{"id":"e1172439.6bf4b8","type":"comment","z":"ce798f74.64a9d8","name":"Secure with Basic Auth","info":"","x":200,"y":380,"wires":[]},{"id":"116bd38e.4b7dcc","type":"switch","z":"ce798f74.64a9d8","name":"","property":"req.params.request","propertyType":"msg","rules":[{"t":"eq","v":"accepted","vt":"str"},{"t":"eq","v":"tentative","vt":"str"},{"t":"else"}],"checkall":"true","repair":false,"outputs":3,"x":430,"y":300,"wires":[["eaff2195.a68b38"],["eaff2195.a68b38"],["83c3be58.0fc9a8"]]},{"id":"4d794bb0.d8ef74","type":"function","z":"ce798f74.64a9d8","name":"Split Calendar","func":"msg.payload = msg.payload.toString('utf8');\nmsg.payload = msg.payload.replace(\"END:VCALENDAR\", \"\");\nmsg.payload = msg.payload.split(/(?=BEGIN:VEVENT)/g);\nmsg.calendar = msg.payload[0];\n\nmsg.payload.forEach(function(part, index){\n   if (part.includes(\"PARTSTAT:ACCEPTED\") && (msg.req.params.request == \"accepted\")){\n       msg.calendar += part;\n   } else if (part.includes(\"PARTSTAT:TENTATIVE\") && (msg.req.params.request == \"tentative\") ){\n       msg.calendar += part;\n   }\n});\n\nmsg.calendar += \"END:VCALENDAR\";\nreturn msg;","outputs":2,"noerr":0,"x":1000,"y":300,"wires":[["6b67410a.701ab"],[]]},{"id":"13bdf419.e491ec","type":"http request","z":"ce798f74.64a9d8","name":"","method":"GET","ret":"bin","paytoqs":false,"url":"","tls":"","proxy":"","authType":"","x":830,"y":300,"wires":[["4d794bb0.d8ef74"]]},{"id":"ab19b6e9.4b1ed8","type":"comment","z":"ce798f74.64a9d8","name":"Facebook event calendar URL","info":"","x":890,"y":340,"wires":[]},{"id":"83c3be58.0fc9a8","type":"http response","z":"ce798f74.64a9d8","name":"404","statusCode":"404","headers":{},"x":430,"y":360,"wires":[]},{"id":"6b67410a.701ab","type":"change","z":"ce798f74.64a9d8","name":"","rules":[{"t":"move","p":"calendar","pt":"msg","to":"payload","tot":"msg"},{"t":"set","p":"headers['content-type']","pt":"msg","to":"text/calendar","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":1180,"y":300,"wires":[["d1e9086f.645b78"]]},{"id":"d1e9086f.645b78","type":"http response","z":"ce798f74.64a9d8","name":"Return","statusCode":"200","headers":{},"x":1330,"y":300,"wires":[]},{"id":"eaff2195.a68b38","type":"change","z":"ce798f74.64a9d8","name":"Set Browser User Agent","rules":[{"t":"set","p":"headers.User-Agent","pt":"msg","to":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:73.0) Gecko/20100101 Firefox/73.0","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":630,"y":300,"wires":[["13bdf419.e491ec"]]}]
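Once the flow is deployed, you can sanity-check the endpoint from the command line. The domain and path prefix here are placeholders (the prefix depends on your Node-RED httpNodeRoot setting):

```shell
# -u supplies the basic-auth credentials configured in the httpauth node
curl -u user:password \
  https://example.tld/node-red/facebook/accepted \
  -o accepted.ics

# The first line should read BEGIN:VCALENDAR if the split worked
head -n 1 accepted.ics
```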

Docker Volume Backups

DOCKER

Backups are always an issue. I plan to switch to ZFS for its snapshotting and remote sync features; until then I have taken the useful volume-backup and hacked at it until it works with rdiff-backup.

Build the container

First you have to clone the repo and build the container

git clone https://github.com/dugite-code/volume-backup.git
cd volume-backup
docker image build -t vbackup:1.0 .

Backup

Now you can run the container, mounting the [volume-name] at /volume and your [backup-dir] at /backup:

docker run -v [volume-name]:/volume -v [backup-dir]:/backup --rm vbackup:1.0 backup

Additional rdiff-backup options can be passed via the -o switch and a quoted option, for example -o "--exclude ignore.me"

Restore

To restore you must supply some form of option, e.g. -o "-r 10D" to restore the backup from 10 days ago:

docker run -v [volume-name]:/volume -v [backup-dir]:/backup --rm vbackup:1.0 restore -o "-r 10D"

Trimming old files

With incremental backups it's important to occasionally trim old files that just don't exist anymore. Like restore, you must provide some form of option.

Prior to doing a backup I will run this command to remove increments older than the 20 most recent backups:

docker run -v [backup-dir]:/backup --rm vbackup:1.0 remove -v -o "--remove-older-than 20B"
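The trim-then-backup routine above can be wrapped in a small script suitable for cron. The volume name and backup path here are placeholders for your own values:

```shell
#!/bin/sh
# Nightly routine: trim old increments, then take a fresh backup.
set -e

VOLUME="my-data"                    # docker volume to back up
BACKUP_DIR="/srv/backups/my-data"   # host directory holding the rdiff-backup repo

# Remove increments older than the 20 most recent backups
docker run -v "$BACKUP_DIR":/backup --rm vbackup:1.0 \
    remove -o "--remove-older-than 20B"

# Take tonight's backup
docker run -v "$VOLUME":/volume -v "$BACKUP_DIR":/backup --rm vbackup:1.0 \
    backup
```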

I hope you found this helpful. It's not a very clean script; I had to hack apart the reference script quite a bit in order to get it all working, but it serves its purpose quite well.


I am a Docker Convert

DOCKER

I've changed my mind quite a bit when it comes to docker. I used to be a big believer in virtual machines, I still am, but for individual "applications" Docker makes a fair bit of sense.

Reasons I use Docker

Simplicity

Docker is the simplest way to replicate a developer's environment on your own computer. No more dealing with differing distros' varying update cycles and conflicting packages causing edge-case issues, because everything is in its own little box. Nice and predictable.

This saves you time setting things up because at least all the components are included. Configuration is still a pain on some projects, but at least you're not missing any metaphorical screws.

The biggest example of this for me was my mailserver. I used Modoboa, a great, simple mailserver package. The issues were things breaking from system package updates, and updating the package itself was damned complicated. I learnt a lot from these breakages, so much so that when I switched to Docker I chose docker-mailserver, an image that has no web GUI for configuration.

Updates, while problematic to monitor in Docker, are now a simple, painless affair.

Lightweight

Unlike a virtual machine you don't need to replicate everything in a container. This makes it easier to have more services that conflict with each other running side by side. I used to have one dedicated NUC for my mailserver and another for all my other services. I've now condensed it all onto the single NUC with better overall performance thanks to docker.

Portability

One of the biggest advantages of Docker is portability. Take your raw data and docker-compose files, throw them onto a completely separate machine, and within a few minutes you are up and running again. For virtual machines this would take significant work and, in my experience, often fails.
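As a sketch under assumptions (host names and paths are placeholders, and the services use bind mounts under ./data), a migration really can be this short:

```shell
# Copy the compose file and raw bind-mounted data to the new host
rsync -a docker-compose.yml data newhost:/srv/myservice/

# On the new host: pull images and bring everything up in the background
ssh newhost 'cd /srv/myservice && docker-compose up -d'
```

Named volumes need an extra export/import step, which is where the volume-backup approach from the previous post comes in handy.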

The Issues I have with docker

The pre-built images

The Alpine image root issue last year (CVE-2019-5021, where the base image used to build a large number of Docker images shipped with a blank root password) made it obvious that you need an actively maintained update cycle.

If the project you are using doesn't provide a docker image, or even a Dockerfile, you will often find pre-built images on Docker Hub. The big question you need to ask is whether you can trust these images. Check the source repository and decide if it would make more sense to build the image yourself.

Keeping pre-built images up to date

One of the biggest issues people have with docker is the lack of update tracking. Thankfully this can be overcome using the Watchtower image.

I set Watchtower to monitor-only mode because automatic updates are sometimes a terrible idea:

watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
    environment:
      - WATCHTOWER_POLL_INTERVAL=86400 #Poll every 24 hours
      - WATCHTOWER_MONITOR_ONLY=true
      - WATCHTOWER_NOTIFICATIONS=gotify
      - WATCHTOWER_NOTIFICATION_GOTIFY_URL=https://example.tld/gotify/
      - WATCHTOWER_NOTIFICATION_GOTIFY_TOKEN=###########

Importantly, for locally built images add the disable label to their docker-compose files, or you will constantly get notifications saying (info): Unable to update container /examplecontainer. Proceeding to next.

  labels:
    - com.centurylinklabs.watchtower.enable=false
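When Watchtower notifies me that a container is stale, applying the update by hand is then just a couple of commands (the service name here is a placeholder):

```shell
cd /srv/myservice                       # directory holding docker-compose.yml
docker-compose pull examplecontainer    # fetch the new image
docker-compose up -d examplecontainer   # recreate only that container
docker image prune -f                   # optionally clean up old layers
```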