Posts for year 2020

The Depressing Age of the Walled Garden

Sigh. I remember being excited for Android. The age of a popular Linux device was upon us! I had moved to Reddit and was seeing new and interesting things and opinions every day; the Internet and tech were vibrant! I no longer feel this way.

Android became heavily dependent on Google

Due to the quirks of mobile SoCs, updating was... complicated. Add to that OEM customizations and you get a recipe for Google to swoop in and "fix" the issue by making key components, which used to be part of AOSP, proprietary. Where once we had the assurance of open development, we now have even more black boxes of code being downloaded onto our mobile devices.

Android got DRM

Widevine DRM snuck onto our phones. Now you can have a perfectly legitimate bit of general purpose hardware where some services don't work as well as the hardware supports (looking at you, Netflix) simply because the market share of that device wasn't great enough for them to "certify" its use.

The Walled Gardens of the Internet

At some point the Internet seemed to shrink. Once there were new (admittedly sometimes terrible) sites to explore every day. Now it feels like everyone is siloed into Facebook, Twitter and Reddit. Reddit introduced anti-user "optimizations" to their mobile website in the idiotic attempt to push users into their app. In 2020 a website no longer wants to be a website built on open technologies; they want it locked down like Twitter and Facebook, in an application.

IOT locked in the cloud

I was promised a smart home when I was a child. Now we have insecure IOT devices that are, for all intents and purposes, owned by someone else. The software can be killed at random, as Google did for some Nest devices, and the open APIs can be locked down with little to no notice. Sure, we have the hacky open ecosystem around the ESP8266, and the ESP32 is great for DIY projects. However, the ESP32 introduces flash encryption and secure boot, meaning that even in this wonderful open hardware hacking space, the future is a locked down dystopia.

It's all a war against General Purpose computing

Now you see things like Apple's M1 ARM based chip replacing x86 chips in their computers. They claim it's about user experience and performance, but there is no denying that moving from x86 to ARM gives them the same locked down hardware control they enjoy on the iPhone and iPad. From the articles I have read we will also see them slowly port the same software controls over as well. Will we soon see the death of alternative browser engines on their computers? Unless they get hit by a good anti-trust charge or two, I wouldn't be surprised if it happens in just a couple of years.

Now I read that Microsoft's Pluton hardware is coming to our general purpose x86 CPUs: cryptographic technology originally employed in the Xbox for DRM.


Why I still host my emails myself

Electronic mail (email or e-mail) is a method of exchanging messages (mail) between people using electronic devices.

Recently I have seen a fair amount of talk about email in general and self-hosting email (Email is not broken, Email is broken and Why I no longer host my emails myself). I started self-hosting email using Kolab almost 7 years ago now and I would never go back to the bad old days of hosted email.

Using a well respected email stack/package like mailinabox, modoboa, mailcow or docker-mailserver is easy. They come with all the bells and whistles, leaving very little low level stuff you need to configure yourself.

Some of the Advantages

Sieve Filters

Why so many providers refuse to provide something as simple and powerful as Sieve filters I'll never know. See Wikipedia for an overview, but basically they are the filters you have in Outlook and Thunderbird, except they run on the server and with more options.
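
For a rough idea of what a server-side rule looks like, here is a minimal sketch assuming Dovecot with the Pigeonhole Sieve plugin; the script path and the list/folder names are just examples.

# Minimal sketch: a user-level Sieve script for Dovecot + Pigeonhole.
# The path and the list/folder names are illustrative only.
cat > ~/.dovecot.sieve <<'EOF'
require ["fileinto"];

# File mailing-list traffic into its own folder on the server,
# before any mail client ever sees it.
if header :contains "list-id" "debian-user.lists.debian.org" {
    fileinto "Lists/Debian";
}
EOF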

Backups

Many will tell you this is a disadvantage of self-hosting your own emails. I vehemently disagree: being able to back up your own emails without having to deal with the IMAP protocol saves you countless headaches. Yes, you have to do backups for your install, but you also get a backup for when you inevitably delete that one important email from 5 years ago.
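
For example, since a self-hosted mail store is just files on disk (Maildir), a backup can be as simple as pointing rdiff-backup at it. A rough sketch, assuming the maildirs live under /var/vmail and the backups go to /mnt/backups (both paths are assumptions):

# Incremental backup of the whole mail store
sudo rdiff-backup /var/vmail /mnt/backups/vmail

# List the increments available if you ever need to go back in time
sudo rdiff-backup --list-increments /mnt/backups/vmail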

Infinite email addresses

People pay for this service, and praise Google for allowing the . in email addresses, which lets you classify incoming emails based on the address used. Some nice examples are [email protected] and [email protected]. If you have multiple accounts you can also do . addresses like [email protected] or [email protected].

No storage limits

1 gig email boxes? Silly in this day and age. If I run out of space I just buy a new hard drive. Hosted email gets prohibitively expensive fast because the cloud is not meant for bulk storage.

Some of the Disadvantages

Spam

Honestly this is a solved problem for the most part; the big issue most people have is that they don't train the filter (a quick training example follows the list below). I more or less use: Dovecot: Anti spam With Sieve

  • With infinite email addresses and Sieve filters you can easily move untrusted emails into specific folders
  • Spamassassin when properly trained catches a large number of spam emails
  • Using Postscreen for greylisting along with the great Postwhite script, I found I could cut drive-by spam emails by a massive amount whilst still receiving the majority of important emails quickly.
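
Training is mostly a matter of feeding SpamAssassin the mail you have already sorted. A rough sketch, assuming SpamAssassin's Bayes filter and maildirs under /var/vmail (the paths are assumptions, point it at your own Junk and Inbox folders):

# Teach the Bayes filter from mail already sorted into the Junk folder
sudo sa-learn --spam /var/vmail/example.tld/user/.Junk/cur

# And from known-good mail so it also learns what ham looks like
sudo sa-learn --ham /var/vmail/example.tld/user/cur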

Deliverability

The main arguments against self-hosting email are deliverability and set-up time. Deliverability is a pain thanks to Google and Microsoft ignoring standards and generally being bad digital neighbors.

The simplest solution is to go with a partially self-hosted setup: receive all your emails on your own server but send your emails through an SMTP relay such as Mailgun or Amazon SES.
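
With Postfix this is only a couple of settings. A minimal sketch assuming Mailgun over submission port 587; the hostname and the credentials file contents are placeholders:

# Relay all outgoing mail through the provider's submission port
sudo postconf -e 'relayhost = [smtp.mailgun.org]:587'
sudo postconf -e 'smtp_sasl_auth_enable = yes'
sudo postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
sudo postconf -e 'smtp_sasl_security_options = noanonymous'
sudo postconf -e 'smtp_tls_security_level = encrypt'

# /etc/postfix/sasl_passwd holds one line: [smtp.mailgun.org]:587 user:password
sudo postmap /etc/postfix/sasl_passwd && sudo systemctl reload postfix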

Or you can follow best practices and, after warming up your IP address with a good volume of email, send them yourself (a few quick verification commands follow the checklist):

  • Don't send from a residential IP
  • rDNS (PTR)
  • DKIM - dkim.org
  • SPF - spfwizard.net
  • Valid SSL (i.e. Letsencrypt)
  • Add a MTA-STS record - Tutorial.
  • DMARC - dmarc.org
  • Send emails in dual format, "Plain and Rich (html) text", when possible (Thunderbird can do this) - Google is picky about this one.
  • Avoid formatted links (e.g. a "google" link where the text hides the URL); instead use unformatted links like https://google.com
  • Sign up for Google postmaster tools
  • Sign up for Microsoft's SNDS
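
Most of the items above can be sanity-checked from the command line before you start sending. A rough sketch; the domain, DKIM selector and IP address are placeholders:

# rDNS / PTR for your sending IP
dig +short -x 203.0.113.25

# SPF and DMARC records
dig +short TXT example.tld
dig +short TXT _dmarc.example.tld

# DKIM public key (the selector depends on how your signer is configured)
dig +short TXT mail._domainkey.example.tld

# MTA-STS policy record
dig +short TXT _mta-sts.example.tld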

TOTP with sudo (Google Auth)

I was reading the posts over on lobste.rs and saw this post: Is sudo almost useless?. Typically I see sudo as a safety belt to protect you from doing something stupid with administrator privileges rather than a security shield. But that doesn't mean it can't be both.

As with SSH, outlined in my previous post TOTP with SSH (Google Auth), you can certainly boost sudo's usefulness, security-wise, by throwing 2FA via google-authenticator-libpam on top of it.

Install google-authenticator-libpam

On debian/ubuntu:

    sudo apt update && sudo apt install libpam-google-authenticator

Set-up your secret keys

We now need to create the secret key. This should not be kept in the user's home folder; after all, what is the point of 2FA if the user we are authenticating can just read the secret files? In my case I keep them under the root directory.

Replace the variable ${USER} if/when you create a key for a user other than the active one.

sudo google-authenticator -s /root/.sudo_totp/${USER}/.google_authenticator
sudo chmod 600 -R /root/.sudo_totp/

You will see a QR code/secret key that you can scan with a TOTP app like andOTP, Authy or Google Authenticator; in my case I added it to my YubiKey. There are also emergency scratch codes that you should record somewhere safe.
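
If you also want the secret on a YubiKey, ykman can store it alongside your other OATH credentials. A rough sketch; the account name is arbitrary and the secret is whatever google-authenticator printed (older ykman versions use ykman oath add instead of ykman oath accounts add):

# Store the TOTP secret on the key (require a touch to generate codes)
ykman oath accounts add --touch sudo-root BASE32SECRETFROMABOVE

# Generate a code from the key when sudo asks for one
ykman oath accounts code sudo-root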

Enable in PAM

You now need to let PAM know it should be checking the codes. There are two ways to do this: Mandatory, or Only if a secret key exists. I have it as Mandatory; any user using sudo MUST have a secret key.

In /etc/pam.d/sudo add the following configuration lines to the end of the file.

# Use Google Auth -- Mandatory
auth required pam_google_authenticator.so secret=/root/.sudo_totp/${USER}/.google_authenticator user=root

# Use Google Auth -- Only if secret key exists
# auth required pam_google_authenticator.so secret=/root/.sudo_totp/${USER}/.google_authenticator user=root nullok

Bonus do this for su as well

You can do the same thing for su as well, however the user variable will obviously be root rather than the user attempting to elevate their privileges.

Set up the key as before, just for the root user:

sudo google-authenticator -s /root/.google_authenticator
sudo chmod 600 -R /root/.google_authenticator

In /etc/pam.d/su add the following configuration lines to the end of the file.

# Use Google Auth -- Mandatory
auth required pam_google_authenticator.so secret=/root/.google_authenticator user=root

Node-red - Phonetrack HomeAssistant Bridge

Node-RED is a flow-based programming tool, originally developed by IBM’s Emerging Technology Services team and now a part of the JS Foundation.

I threw together a quick way to bridge GPS data from the Android app GPS Logger between HomeAssistant and the Nextcloud app Phonetrack.

Note: This flow returns the highest HTTP response code thrown by either service, which can result in GPS Logger submitting multiple times if there are any issues with either Nextcloud or HomeAssistant.

Extra Nodes used

  • node-red-contrib-httpauth

Configuration

Node-Red

In HomeAssistant, follow the instructions found here to obtain the GPS Logger webhook URL and add it to the config node.

In Nextcloud, after creating a tracking session, click the link icon, fetch the link labelled GpsLogger GET and POST link : and add it to the config node.

Edit the http auth Node with your desired credentials.

GPS Logger

Go to Logging details -> Log to custom URL -> URL and add your Node-Red url: https://example.tld/node-red/gps_logger?latitude=%LAT&longitude=%LON&device=[Your_Device_Name_Here]&accuracy=%ACC&battery=%BATT&speed=%SPD&direction=%DIR&altitude=%ALT&provider=%PROV&activity=%ACT&timestamp=%TIMESTAMP - Note: edit the url and the [Your_Device_Name_Here]

Go to Logging details -> Log to custom URL -> Basic Authentication and add the username and password you set in the http auth node.

[{"id":"f5b2b1b.1f8895","type":"http in","z":"4a1f60d7.aaf398","name":"GPS Logger endpoint","url":"/gps_logger","method":"post","upload":false,"swaggerDoc":"","x":180,"y":180,"wires":[["d18bbbaf.bd8138"]]},{"id":"c8badb69.d6dc78","type":"debug","z":"4a1f60d7.aaf398","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","x":1210,"y":240,"wires":[]},{"id":"2b58f523.f2713a","type":"http request","z":"4a1f60d7.aaf398","name":"Home Assistant","method":"POST","ret":"txt","paytoqs":false,"url":"","tls":"","proxy":"","authType":"","x":660,"y":260,"wires":[["b894f1e0.b35c28"]]},{"id":"7b5c470a.a86d7","type":"change","z":"4a1f60d7.aaf398","name":"Build HomeAssistant Query","rules":[{"t":"set","p":"headers","pt":"msg","to":"{}","tot":"json"},{"t":"set","p":"headers.content-type","pt":"msg","to":"application/x-www-form-urlencoded","tot":"str"},{"t":"set","p":"url","pt":"msg","to":"homeassistant","tot":"flow"},{"t":"set","p":"payload","pt":"msg","to":"\"latitude=\" & req.query.latitude & \"&longitude=\" & req.query.longitude & \"&device=\" & req.query.device & \"&accuracy=\" & req.query.accuracy & \"&battery=\" & req.query.battery & \"&speed=\" & req.query.speed & \"&direction=\" & req.query.direction & \"&altitude=\" & req.query.altitude & \"&provider=\" & req.query.provider  & \"&activity=\" & req.query.activity","tot":"jsonata"}],"action":"","property":"","from":"","to":"","reg":false,"x":440,"y":260,"wires":[["2b58f523.f2713a"]]},{"id":"ee593f24.f06ca8","type":"change","z":"4a1f60d7.aaf398","name":"Set Phonetrack URL","rules":[{"t":"set","p":"url","pt":"msg","to":"$flowContext(\"phonetrack\") & req.query.device & \"?lat=\" & req.query.latitude & \"&lon=\" & req.query.longitude & \"&acc=\" & req.query.accuracy & \"&speed=\" & req.query.speed & \"&bearing=\" & req.query.direction & \"&timestamp=\" & req.query.timestamp & \"&battery=\" & req.query.battery","tot":"jsonata"}],"action":"","property":"","from":"","to":"","reg":false,"x":420,"y":220,"wires":[["f9487911.1c6ed8"]]},{"id":"f9487911.1c6ed8","type":"http request","z":"4a1f60d7.aaf398","name":"PhoneTrack","method":"POST","ret":"txt","paytoqs":false,"url":"","tls":"","proxy":"","authType":"","x":650,"y":220,"wires":[["b894f1e0.b35c28"]]},{"id":"1b9f05f9.a0fb4a","type":"http response","z":"4a1f60d7.aaf398","name":"","statusCode":"","headers":{},"x":1210,"y":280,"wires":[]},{"id":"d9a79cbe.4e6668","type":"config","z":"4a1f60d7.aaf398","name":"URLS","properties":[{"p":"phonetrack","pt":"flow","to":"https://example.tld/apps/phonetrack/log/gpslogger/__phonetrackid__/","tot":"str"},{"p":"homeassistant","pt":"flow","to":"https://home.example.tld/api/webhook/__webhookkey__","tot":"str"}],"active":true,"x":150,"y":140,"wires":[]},{"id":"b894f1e0.b35c28","type":"join","z":"4a1f60d7.aaf398","name":"","mode":"custom","build":"array","property":"statusCode","propertyType":"msg","key":"url","joiner":"\\n","joinerType":"str","accumulate":false,"timeout":"","count":"2","reduceRight":false,"reduceExp":"","reduceInit":"","reduceInitType":"","reduceFixup":"","x":850,"y":240,"wires":[["fc27dbec.975ca8"]]},{"id":"fc27dbec.975ca8","type":"change","z":"4a1f60d7.aaf398","name":"","rules":[{"t":"set","p":"statusCode","pt":"msg","to":"$max(statusCode.$number())","tot":"jsonata"},{"t":"set","p":"payload","pt":"msg","to":"","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":1000,"y":240,"wires":[["35d1687f.3e79d"]]},{"id":"51a8ee1e.cb9c38","type":"comment","z":"4a1f60d7.aaf398","name":"Set URLS 
for HomeAssistant and PhoneTrack","info":"","x":250,"y":100,"wires":[]},{"id":"35d1687f.3e79d","type":"switch","z":"4a1f60d7.aaf398","name":"","property":"statusCode","propertyType":"msg","rules":[{"t":"gt","v":"200","vt":"str"},{"t":"else"}],"checkall":"true","repair":false,"outputs":2,"x":1030,"y":280,"wires":[["c8badb69.d6dc78","1b9f05f9.a0fb4a"],["1b9f05f9.a0fb4a"]]},{"id":"d18bbbaf.bd8138","type":"node-red-contrib-httpauth","z":"4a1f60d7.aaf398","name":"","file":"","cred":"","authType":"Basic","realm":"","username":"","password":"","hashed":false,"x":220,"y":220,"wires":[["ee593f24.f06ca8","7b5c470a.a86d7"]]}]

You should probably know about LetsEncrypt DNS challenge validation

Everyone knows the basic way to renew a LetsEncrypt cert: open port 80 and let LetsEncrypt connect to your server. But what if you don't want to open your network, or you limit access to a handful of IP addresses? Well, you can just use DNS challenge validation: no need for web servers and no need for port wrangling.

For example, I use certbot-dns-cloudflare for my work intranet, allowing it to remain VPN only.
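
For reference, a run with the Cloudflare plugin looks roughly like this; the credentials path and domain are placeholders, and the ini file just contains a dns_cloudflare_api_token line:

sudo apt install python3-certbot-dns-cloudflare

# Issue a cert using only a DNS TXT record, no open ports required
sudo certbot certonly --dns-cloudflare \
    --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
    -d intranet.example.tld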

Another great option is acme.sh, as it supports a massive list of DNS providers, including the ever popular duckdns, out of the box.
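
The duckdns case with acme.sh is similarly short. A sketch assuming your DuckDNS token is exported in the environment:

export DuckDNS_Token="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
acme.sh --issue --dns dns_duckdns -d example.duckdns.org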

Given that in the past the most fragile part of my LetsEncrypt setup was making sure port 80 was accessible to LetsEncrypt, I personally use this method even when the network is accessible from the wider internet.


Splitting a Facebook event calendar

Node-RED is a flow-based programming tool, originally developed by IBM’s Emerging Technology Services team and now a part of the JS Foundation.

My friends make liberal use of Facebook events; unfortunately I find the events interface impossible to navigate. Luckily they do have a .ics available, unluckily events you haven't accepted are mixed in with events you have.

So I made a simple flow that splits accepted events from tentative events. This way I can subscribe to the .ics in my Nextcloud instance and give tentative events a different color. I find the calendar much more usable this way.

I used the node-red-contrib-httpauth node. The main bit is contained in a function node, mainly because I couldn't figure out how to sanely do this with the split node.

Use is simple:

  1. Update the http auth node with your preferred user:password
  2. Add your facebook event calendar url to the http request node
  3. Subscribe to user:[email protected]/node-red/facebook/accepted or user:[email protected]/node-red/facebook/tentative

[{"id":"30abd373.bd5524","type":"http in","z":"ce798f74.64a9d8","name":"","url":"/facebook/:request","method":"get","upload":false,"swaggerDoc":"","x":160,"y":300,"wires":[["77e1feab.382658"]]},{"id":"705a7847.bc5d","type":"debug","z":"ce798f74.64a9d8","name":"","active":false,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","x":430,"y":400,"wires":[]},{"id":"77e1feab.382658","type":"node-red-contrib-httpauth","z":"ce798f74.64a9d8","name":"","file":"","cred":"","authType":"Basic","realm":"","username":"","password":"","hashed":false,"x":240,"y":340,"wires":[["116bd38e.4b7dcc","705a7847.bc5d"]]},{"id":"e1172439.6bf4b8","type":"comment","z":"ce798f74.64a9d8","name":"Secure with Basic Auth","info":"","x":200,"y":380,"wires":[]},{"id":"116bd38e.4b7dcc","type":"switch","z":"ce798f74.64a9d8","name":"","property":"req.params.request","propertyType":"msg","rules":[{"t":"eq","v":"accepted","vt":"str"},{"t":"eq","v":"tentative","vt":"str"},{"t":"else"}],"checkall":"true","repair":false,"outputs":3,"x":430,"y":300,"wires":[["eaff2195.a68b38"],["eaff2195.a68b38"],["83c3be58.0fc9a8"]]},{"id":"4d794bb0.d8ef74","type":"function","z":"ce798f74.64a9d8","name":"Split Calendar","func":"msg.payload = msg.payload.toString('utf8');\nmsg.payload = msg.payload.replace(\"END:VCALENDAR\", \"\");\nmsg.payload = msg.payload.split(/(?=BEGIN:VEVENT)/g);\nmsg.calendar = msg.payload[0];\n\nmsg.payload.forEach(function(part, index){\n   if (part.includes(\"PARTSTAT:ACCEPTED\") && (msg.req.params.request == \"accepted\")){\n       msg.calendar += part;\n   } else if (part.includes(\"PARTSTAT:TENTATIVE\") && (msg.req.params.request == \"tentative\") ){\n       msg.calendar += part;\n   }\n});\n\nmsg.calendar += \"END:VCALENDAR\";\nreturn msg;","outputs":2,"noerr":0,"x":1000,"y":300,"wires":[["6b67410a.701ab"],[]]},{"id":"13bdf419.e491ec","type":"http request","z":"ce798f74.64a9d8","name":"","method":"GET","ret":"bin","paytoqs":false,"url":"","tls":"","proxy":"","authType":"","x":830,"y":300,"wires":[["4d794bb0.d8ef74"]]},{"id":"ab19b6e9.4b1ed8","type":"comment","z":"ce798f74.64a9d8","name":"Facebook event calendar URL","info":"","x":890,"y":340,"wires":[]},{"id":"83c3be58.0fc9a8","type":"http response","z":"ce798f74.64a9d8","name":"404","statusCode":"404","headers":{},"x":430,"y":360,"wires":[]},{"id":"6b67410a.701ab","type":"change","z":"ce798f74.64a9d8","name":"","rules":[{"t":"move","p":"calendar","pt":"msg","to":"payload","tot":"msg"},{"t":"set","p":"headers['content-type']","pt":"msg","to":"text/calendar","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":1180,"y":300,"wires":[["d1e9086f.645b78"]]},{"id":"d1e9086f.645b78","type":"http response","z":"ce798f74.64a9d8","name":"Return","statusCode":"200","headers":{},"x":1330,"y":300,"wires":[]},{"id":"eaff2195.a68b38","type":"change","z":"ce798f74.64a9d8","name":"Set Browser User Agent","rules":[{"t":"set","p":"headers.User-Agent","pt":"msg","to":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:73.0) Gecko/20100101 Firefox/73.0","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":630,"y":300,"wires":[["13bdf419.e491ec"]]}]

Docker Volume Backups

Backups are always an issue. I plan to switch to ZFS for the snapshotting and remote sync features; until then I have taken the useful volume-backup and broken it until it works with rdiff-backup.

Build the container

First you have to clone the repo and build the container

git clone https://github.com/dugite-code/volume-backup.git
cd volume-backup
docker image build -t vbackup:1.0 .

Backup

Now you can run the container mounting the [volume-name] at /volume and your [backup-dir] at /backup

docker run -v [volume-name]:/volume -v [backup-dir]:/backup --rm vbackup:1.0 backup

Additional rdiff-backup options can be passed via the -o switch and a quoted option, for example -o "--exclude ignore.me"

Restore

To restore you must supply some form of options, e.g. -o "-r 10D" to restore the backup from 10 days ago

docker run -v [volume-name]:/volume -v [backup-dir]:/backup --rm vbackup:1.0 restore -o "-r 10D"

Trimming old files

With incremental backups it's important to occasionally trim old increments for files that just don't exist anymore. Like restore, you must provide some form of option.

Prior to doing a backup I will run this command to remove files older than 20 Backups

docker run -v [backup-dir]:/backup --rm vbackup:1.0 remove -v -o "--remove-older-than 20B"

I hope you found this helpful. It's not a very clean script; I had to hack apart the reference script quite a bit in order to get it all working. But it serves its purpose quite well.


I am a Docker Convert

I've changed my mind quite a bit when it comes to docker. I used to be a big believer in virtual machines, I still am, but for individual 'applications' Docker makes a fair bit of sense.

Reasons I use Docker

Simplicity

Docker is the simplest way to replicate a developer's environment on your own computer. No more dealing with differing distros' varying update cycles and conflicting packages causing edge case issues, because everything is in its own little box. Nice and predictable.

This saves you time setting things up because at least all the components are included. Configuration is still a pain on some projects, but at least you're not missing any metaphorical screws.

The biggest example of this was my mailserver. I used modoboa, a great, simple mailserver package. The issues were things breaking from system package updates, and updating the package itself was damned complicated. I learnt a lot from these breakages, so much so that when I switched to Docker I switched to using docker-mailserver, an image that has no web GUI for configuration.

Updates, while problematic to monitor in Docker, are now a simple, painless affair.

Lightweight

Unlike a virtual machine, you don't need to replicate everything in a container. This makes it easier to have more services that conflict with each other running side by side. I used to have one dedicated NUC for my mailserver and another for all my other services. I've now condensed it all onto a single NUC with better overall performance, thanks to Docker.

Portability

One of the biggest advantages of Docker is portability. Take your raw data and docker-compose files, throw them onto a completely separate machine, and within a few minutes you are up and running again. For virtual machines this would take significant work and, in my experience, often fails.
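
In practice the move really is just copying the data and the compose files over and starting the stack again. A rough sketch; the paths and hostname are placeholders:

# Copy the bind-mounted data and the compose file to the new host
rsync -a /srv/appdata/ newhost:/srv/appdata/
scp docker-compose.yml newhost:~/

# Bring the stack back up on the other machine
ssh newhost "docker-compose up -d"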

The Issues I have with docker

The pre-built images

The Alpine image root issue last year, where the base image used to build a large number of Docker images shipped with a vulnerability, made it obvious that you need an actively maintained update cycle.

If the project you are using doesn't provide a Docker image or even a Dockerfile, you will often find pre-built images on Docker Hub. The big question you need to ask is whether you can trust these images. Check the source repository and decide if it would make more sense to build the image yourself.

Keeping pre-built images up to date

One of the biggest issues people have with docker is the lack of update tracking. Thankfully this can be overcome using the Watchtower image.

I set watchtower to monitor only mode because automatic updates are sometimes a terrible idea.

watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
    environment:
      - WATCHTOWER_POLL_INTERVAL=86400 #Poll every 24 hours
      - WATCHTOWER_MONITOR_ONLY=true
      - WATCHTOWER_NOTIFICATIONS=gotify
      - WATCHTOWER_NOTIFICATION_GOTIFY_URL=https://example.tld/gotify/
      - WATCHTOWER_NOTIFICATION_GOTIFY_TOKEN=###########

Importantly, for locally built images add the disable label to their docker-compose files, or you will constantly get notifications saying (info): Unable to update container /examplecontainer. Proceeding to next.

  labels:
   - com.centurylinklabs.watchtower.enable=false