Technology (old posts, page 3)

As I Tinker I Learn, Sometimes I Even Write It Down.

Caving and switching to Amazon SES

Electronic mail (email or e-mail) is a method of exchanging messages (mail) between people using electronic devices.

For almost 7 years I have run my own email server, sending and receiving email on its own IP address. A few years ago I switched to Digital Ocean as a VPS provider to reduce deliverability issues, but recently the IP block I was in got blacklisted by zen.spamhaus. The age of my 100% independent email server is now over.

Big players make it hard for small servers

As outlined in the blog post Google is eating our email and the resulting discussion on Hacker News, the big players appear to have no interest in following the standard rules when it comes to email. All the tooling is designed for big senders; Google's and Microsoft's postmaster tools don't even register statistics for a domain unless you're sending over 200 emails a day.

Unfortunately we find ourselves in the same position when it comes to something as simple as an SMTP relay service. Search online for what an SMTP relay is and you will only find marketing material claiming it is a tool for large marketers to deliver massive volumes of email into the inboxes of their customers. Not what it actually is: a forwarding service that lets multiple email servers send from one IP address, outsourcing reputation management.

Clearly the main advantage of an SMTP relay should be that small-time servers can pool together to reach the volumes the big players want to see before they will judge whether your emails are "worthy" of being treated fairly.

Imagine my frustration when trying to find any service that directly caters to this and discovering that they don't appear to exist. It's all pitched as a service for marketing email delivery; if you're not sending thousands of emails you are not the target market anymore.

Can I use Mailgun for my personal email address?

It’s not recommended.

There are plenty of hosted email services better suited for this than Mailgun; Rackspace Email, Gmail / Google Apps, Outlook, etc.

Mailgun is meant to be a tool for developers and their applications.

Fair enough if you don't have a personal email server, but what if you have a server used by a family? Not clear. Then, when you do sign up for any SMTP service, they ask for your business name, website and business use.

In the end, as I technically have donations listed on the about page of this blog, I simply used that, with the transactional emails from my Nextcloud instance as a sample of what I would be sending.

Getting out of the sandbox was a thing

I initially tried signing up for SendGrid, but they immediately lock new accounts and you have to contact support to even log in, let alone send an email. After they straight up ignored what I wrote in the support email, I signed up with AWS, then immediately discovered that even though I could log in and set things up I was isolated in a sandbox, not allowed to play with the other children.

Even though I selected an Individual account for side projects when signing up, they still wanted a business use. After I finished banging my head against the desk I submitted what info I could. Four emails later, during which among other things I updated my privacy policy page to include opt-in/out language, I was denied.

We reviewed your request and determined that your use of Amazon SES could have a negative impact on our service. We are denying this request to prevent other Amazon SES customers from experiencing interruptions in service.

There was one chance. In the correspondence they had asked very specifically:

How do you handle bounces and complaints?

Given they appeared to be concerned with negative service impacts, I discovered a method of automating the handling of bounces and complaints based on this XenForo plugin. Using Node-RED as an endpoint I can receive a notification of a bounce/complaint and shut down the server for manual review.
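For the curious, the notification that arrives at the endpoint (inside the SNS message body) looks roughly like this; a trimmed sketch of the SES notification format with placeholder addresses:

{
  "notificationType": "Bounce",
  "bounce": {
    "bounceType": "Permanent",
    "bouncedRecipients": [ { "emailAddress": "someone@example.tld" } ]
  },
  "mail": {
    "source": "me@example.tld",
    "destination": [ "someone@example.tld" ]
  }
}

A switch node keying on notificationType is all the flow needs before triggering the shutdown for review.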

Informing Amazon of this change was enough to get through the bureaucratic hurdles.

Making the switch was easy at least

Thankfully this was the least of my issues. First I created an SMTP identity in AWS, then added that to my postfix-relaymap.cf and postfix-sasl-password.cf as outlined in the docker-mailserver documentation. Add the Amazon domain key, SPF record and CNAME records for DKIM and I was off to the races.
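For reference, the relay entries end up looking something like this; a sketch assuming the us-east-1 SES endpoint and docker-mailserver's config file format (the domain and credentials are placeholders):

# postfix-relaymap.cf
@example.tld    [email-smtp.us-east-1.amazonaws.com]:587

# postfix-sasl-password.cf
@example.tld    AKIAXXXXXXXXXXXXXXXX:your-ses-smtp-password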


Minimizing my selfhosted attack surface

Tweaking my linux server

I'm always fairly wary of opening my selfhosted services up to the internet; just how sure am I that the developer has done the right due diligence? Thankfully it's relatively simple to limit which parts of a service are accessible to the open internet with Nginx and its allow and deny options.


Update

Note: If you want a docker container to access a protected service you will have to set the subnet in your docker-compose file as below:

networks:
  node-red-network:
    ipam:
      config:
        - subnet: "172.16.0.0/24"

Update 2

A more generic change is to set the default address pools in docker's /etc/docker/daemon.json file. You then just have to whitelist the 172.16.0.0/16 subnet:

{
  "default-address-pools":
  [
    {"base":"172.16.0.0/16","size":24}
  ]
}
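With either change in place, a single allow line in the include_whitelist file described below covers all of your docker networks (assuming you kept the 172.16.0.0/16 pool):

# allow docker networks
allow 172.16.0.0/16;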

First you should store this in a file so you can include it multiple times; this makes it trivial to update in the future. Create the file include_whitelist in your nginx folder, adding your own allow options between satisfy any; and deny all;.

satisfy any;

# allow localhost
allow 127.0.0.1;

# allow a single address
# allow 000.000.000.000;

# allow LAN network
allow 192.168.0.0/24;

# allow WLAN network
allow 192.168.2.0/24;

# allow VPN network
allow 10.1.4.0/24;

# drop rest of the world
deny all;

You then have to include the file in your nginx config. Here I am using TT-RSS as an example. Note that I'm excluding the API and public.php by putting them in separate location blocks that don't include the include_whitelist. This allows me to keep accessing TT-RSS on my mobile phone through the mobile application.

  location ^~ /tt-rss/ {
      include /etc/nginx/include_whitelist;

      access_log off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $remote_addr;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_pass http://127.0.0.1:8280/tt-rss/;

  }

  location ^~ /tt-rss/api {

      access_log off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $remote_addr;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_pass http://127.0.0.1:8280/tt-rss/api;

  }

  location ^~ /tt-rss/public.php {

      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $remote_addr;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_pass http://127.0.0.1:8280/tt-rss/public.php;

  }

For Node-RED I wanted an API endpoint for Tasker on my phone. Thankfully this is just as easy to define in Node-RED as it is in nginx. In Node-RED open your GET node and just add another folder to the path.

Add an extra folder to your Node-RED endpoints

An example of the Node-RED nginx configuration. Again, just like the TT-RSS example above, I have excluded an api subdirectory by giving it a separate location block.

  location ^~ /node-red/ {
    include /etc/nginx/include_whitelist;

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://127.0.0.1:1880/node-red/;
  }

  location ^~ /node-red/api/ {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://127.0.0.1:1880/node-red/api/;
  }

Now only your API endpoints are globally available. If, like me, you use a firewall, throwing a convenient geo-block up in front lowers the exposure a bit more.


Removing 3g modem from a Kindle Paperwhite 2

Why do I need the cloud?

Do I need cloud connectivity on my kindle everywhere I go? Probably not.

A few years ago I received an old Kindle Paperwhite 2 3g (2013, 6th generation) from my mother. She wasn't a fan of the screen and decided she preferred a full tablet for reading, while I quickly fell in love with the e-ink screen. However my use-case mainly involves e-novels/websites I scrape and upload via Calibre. Given how privacy conscious I have become, and how much of a power drain a 3g modem can be, I've basically had it in airplane mode since I received it.

But walking to my desktop to sync books when the device has wifi capability felt wasteful, and internet searches for ways to disable the 3g proved fruitless.

Modular hardware

While searching for a teardown video to determine what the internals even looked like, I stumbled upon this video for an older kindle model where it was pointed out how modular Kindle hardware really is. After seeing the wifi paperwhite internals and seeing this video, I felt it was likely they were still using this modular design method (after all, if it works), so I cracked open my kindle following this ifixit guide.

Well that was simpler than it looked

Thankfully removing the 3g modem is a non-destructive process (apart from the glue when removing the bezel).

Kindle paperwhite 2 internals
  1. The 3g antenna - I didn't remove this.
  2. The 3g modem - Held in by 4 screws and attached with a tiny connection plug.
  3. The simcard slot - Simply push the tray lid in the direction of the arrow to open.
3g Modem removed
  1. The 3g antenna connector - This is a simple push connector. A firm pull up, away from the board, will disconnect it without damage. I used some tape to secure the wire for re-assembly.
  2. A loose metal washer - I secured this back in place with the screw I removed previously.
  3. A loose metal washer - I secured this back in place with the screw I removed previously.

Re-assembly

This was simply a matter of re-placing the board and screws and lightly clamping the bezel back in place. The glue on the bezel was, thankfully, still sticky enough to re-attach without any issue. As the bezel has no mechanical stresses on it I don't see this ever being an issue.

My next steps

My next steps were to jailbreak/root my kindle to install a couple of useful tools (i.e. a firewall). I plan to go over this in detail, along with my remote book management solution, in another post, so stay tuned.


The Depressing Age of the Walled Garden

Sigh. I remember being excited for Android. The age of a popular Linux device was upon us! I had moved to Reddit and was seeing new and interesting things and opinions every day; the Internet and tech were vibrant! I no longer feel this way.

Android became heavily dependent on Google

Due to the quirks of mobile SoCs, updating was... complicated. Add to that OEM customizations and you get a recipe for Google to swoop in and "fix" the issue by making key components, that used to be part of AOSP, proprietary. Where once we had the assurance of open development, we now have even more black boxes of code being downloaded onto our mobile devices.

Android got DRM

Widevine DRM snuck onto our phones. Now you can have a legitimate piece of general purpose hardware where some services don't work as well as the hardware supports (looking at you, Netflix) simply because the market share of that device wasn't great enough for them to "certify" its use.

The Walled Gardens of the Internet

At some point the Internet seemed to shrink. Once there were new (admittedly sometimes terrible) sites to explore every day. Now it feels like everyone is siloed into Facebook, Twitter and Reddit. Reddit introduced anti-user "optimizations" to their mobile website in the idiotic attempt to push users into their app. In 2020 a website no longer wants to be a website built on open technologies; they want it locked down, like Twitter and Facebook, in an application.

IOT locked in the cloud

I was promised a smart home when I was a child. Now we have insecure IOT devices that are, for all intents and purposes, owned by someone else. The software can be killed at random, like Google did for some Nest devices, and open APIs can be locked down with little to no notice. Sure, we have the hacky open ecosystem around the ESP8266, and the ESP32 is great for DIY projects. However the ESP32 introduces flash encryption and secure boot, meaning that even in this wonderful open hardware hacking space, the future is a locked down dystopia.

It's all a war against General Purpose computing

Now you see things like Apple's M1 ARM based chip replacing x86 chips in their computers. They claim it's about user experience and performance, but there is no denying that moving from x86 to ARM gives them the same locked down hardware control they enjoy on the iPhone and iPad. From the articles I have read we will also begin to see them slowly port the same software controls over as well; will we soon see the death of alternative browser engines on their computers? Unless they get hit by a good anti-trust charge or two, I wouldn't be surprised if it happens in just a couple of years.

Now I read that Microsoft's Pluton hardware is coming to our general purpose x86 CPUs: cryptographic technology originally employed in the Xbox for DRM.


Why I still host my emails myself

Electronic mail (email or e-mail) is a method of exchanging messages (mail) between people using electronic devices.

Recently I have seen a fair amount of talk about email in general and self-hosting email (Email is not broken, Email is broken and Why I no longer host my emails myself). I started selfhosting email using kolab almost 7 years ago now and I would never go back to the bad old days of hosted email.

Using a well respected email stack/package like mailinabox, modoboa, mailcow or docker-mailserver is easy. They come with all the bells and whistles, with very little low-level stuff you need to configure yourself.

Some of the Advantages

Sieve Filters

Why so many providers refuse to provide something as simple and powerful as Sieve filters I'll never know. See wikipedia for an overview, but basically they're the filters you have in Outlook and Thunderbird, just on the server and with more options.
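As a taste, a minimal sieve script might look like this (the alias and folder are hypothetical):

require ["fileinto"];

# File anything sent to my shopping alias into its own folder
if address :is "to" "shopping@example.tld" {
    fileinto "Shopping";
}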

Backups

Many will tell you this is a disadvantage of selfhosting your own emails. I vehemently disagree: being able to back up your own emails without having to deal with the IMAP protocol saves you countless headaches. You have to do backups of your install anyway, and you also get a backup for when you inevitably delete that one important email from 5 years ago.

Infinite email addresses

People pay for this service and praise google for allowing the . in email addresses, letting you classify incoming emails based on the address used. Some nice examples are shopping@example.tld and newsletters@example.tld. If you have multiple accounts you can also do . addresses like john.shopping@example.tld or jane.newsletters@example.tld
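On a postfix based stack this usually comes down to a catch-all virtual alias; a minimal sketch assuming a hand-maintained hash map (the domain and mailbox are placeholders):

# /etc/postfix/main.cf
virtual_alias_maps = hash:/etc/postfix/virtual

# /etc/postfix/virtual - deliver every address at the domain to one mailbox
@example.tld    me@example.tld

Run postmap /etc/postfix/virtual and reload postfix to apply.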

No storage limits

1 gig email boxes? silly in this day and age. I just buy a new hard drive. Hosted email gets prohibitively expensive fast because the cloud is not meant for bulk storage.

Some of the Disadvantages

Spam

Honestly this is a solved problem for the most part; the big issue most people have is that they don't train the filter. I more or less use: Dovecot: Anti spam With Sieve

  • With infinite email addresses and Sieve filters you can easily move untrusted emails into specific folders
  • Spamassassin, when properly trained, catches a large number of spam emails
  • Using Postscreen for greylisting along with the great Postwhite script, I found I could cut drive-by spam massively whilst still receiving the majority of important emails quickly (a config sketch follows this list)
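The Postscreen side boils down to a couple of main.cf lines; a sketch assuming Postwhite's default output file:

# /etc/postfix/main.cf
postscreen_greet_action = enforce
postscreen_access_list = permit_mynetworks,
        cidr:/etc/postfix/postscreen_spf_whitelist.cidr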

Deliverability

The main argument against selfhosting email is deliverability and set-up time. Deliverability is a pain thanks to Google and Microsoft ignoring standards and generally being bad digital neighbors.

The simplest solution is to go with a partially selfhosted setup: receive all your emails on your own server but send your emails through an SMTP relay such as Mailgun or Amazon SES.

Or you can follow best practices and, after warming up your IP address with a good volume of emails, send them yourself (example DNS records are sketched after the list).

  • Don't send from a residential IP
  • rDNS (PTR)
  • DKIM - dkim.org
  • SPF - spfwizard.net
  • Valid SSL (i.e. Letsencrypt)
  • Add an MTA-STS record - Tutorial.
  • DMARC - dmarc.org
  • Send emails in dual format, "Plain and Rich (html) text", when possible (Thunderbird) - Google is picky about this one.
  • Avoid formatted links like google link; instead use unformatted links like https://google.com
  • Sign up for Google postmaster tools
  • Sign up for Microsoft's SNDS
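To make the SPF, DKIM and DMARC items concrete, the DNS records end up looking roughly like this; a hedged sketch where the domain, selector, policy and key are placeholders you generate yourself:

example.tld.                     IN TXT "v=spf1 mx -all"
selector._domainkey.example.tld. IN TXT "v=DKIM1; k=rsa; p=<your public key>"
_dmarc.example.tld.              IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.tld"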

TOTP with sudo (Google Auth)

I was reading the posts over on lobste.rs and saw this post: Is sudo almost useless?. Typically I see sudo as a safety belt to protect you from doing something stupid with administrator privileges rather than as a security shield. But that doesn't mean it can't be both.

As with ssh, outlined in my previous post TOTP with SSH (Google Auth), you can certainly boost sudo's security by throwing 2FA via google-authenticator-libpam on top of it.

Install google-authenticator-libpam

On debian/ubuntu:

    sudo apt update && sudo apt install google-authenticator-libpam

Set-up your secret keys

We now need to create the secret key. This should not be kept in the user's home folder; after all, what is the point of 2FA if the user we are authenticating can just read the secret files? In my case I keep them under the root dir.

Replace the variable ${USER} if/when you create a key for a user other than the active one.

sudo google-authenticator -s /root/.sudo_totp/${USER}/.google_authenticator
sudo chmod 600 -R /root/.sudo_totp/

You will see a QR code/secret key that you can scan with a TOTP app like andotp, authy or google authenticator (in my case I added it to my yubikey). There are also emergency scratch codes that you should record somewhere safe.

Enable in PAM

You now need to let PAM know it should be checking the codes. There are two ways to do this: Mandatory, or Only if secret key exists. I have it as Mandatory; any user using sudo MUST have a secret key.

In /etc/pam.d/sudo add the following configuration lines to the end of the file.

# Use Google Auth -- Mandatory
auth required pam_google_authenticator.so secret=/root/.sudo_totp/${USER}/.google_authenticator user=root

# Use Google Auth -- Only if secret key exists
# auth required pam_google_authenticator.so secret=/root/.sudo_totp/${USER}/.google_authenticator user=root nullok

Bonus: do this for su as well

You can do the same thing for su as well, however the user variable will obviously be root rather than the user attempting to elevate their privileges.

Set up the key as before, just for the root user:

sudo google-authenticator -s /root/.google_authenticator
sudo chmod 600 -R /root/.google_authenticator

In /etc/pam.d/su add the following configuration lines to the end of the file.

# Use Google Auth -- Mandatory
auth required pam_google_authenticator.so secret=/root/.google_authenticator user=root

Node-red - Phonetrack HomeAssistant Bridge

Node-RED is a flow-based programming tool, originally developed by IBM’s Emerging Technology Services team and now a part of the JS Foundation.

I threw together a quick way to bridge GPS data from the android app GPS Logger between HomeAssistant and the Nextcloud app PhoneTrack.

Note: This returns the highest http response code thrown by either service, which can result in GPS Logger submitting multiple times if there are any issues with either Nextcloud or HomeAssistant.

Extra Nodes used

  • node-red-contrib-httpauth
  • node-red-contrib-config

Configuration

Node-Red

In HomeAssistant, follow the instructions found here to obtain the GPS Logger webhook url and add this to the config node.

In Nextcloud, after creating a tracking session, click the link icon, fetch the link labelled "GpsLogger GET and POST link :" and add this to the config node.

Edit the http auth Node with your desired credentials.

GPS Logger

Go to Logging details -> Log to custom URL -> URL and add your Node-Red url: https://example.tld/node-red/gps_logger?latitude=%LAT&longitude=%LON&device=[Your_Device_Name_Here]&accuracy=%ACC&battery=%BATT&speed=%SPD&direction=%DIR&altitude=%ALT&provider=%PROV&activity=%ACT&timestamp=%TIMESTAMP - Note: edit the url and the [Your_Device_Name_Here]

Go to Logging details -> Log to custom URL -> Basic Authentication and add the username and password you set in the http auth node.

[{"id":"f5b2b1b.1f8895","type":"http in","z":"4a1f60d7.aaf398","name":"GPS Logger endpoint","url":"/gps_logger","method":"post","upload":false,"swaggerDoc":"","x":180,"y":180,"wires":[["d18bbbaf.bd8138"]]},{"id":"c8badb69.d6dc78","type":"debug","z":"4a1f60d7.aaf398","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","x":1210,"y":240,"wires":[]},{"id":"2b58f523.f2713a","type":"http request","z":"4a1f60d7.aaf398","name":"Home Assistant","method":"POST","ret":"txt","paytoqs":false,"url":"","tls":"","proxy":"","authType":"","x":660,"y":260,"wires":[["b894f1e0.b35c28"]]},{"id":"7b5c470a.a86d7","type":"change","z":"4a1f60d7.aaf398","name":"Build HomeAssistant Query","rules":[{"t":"set","p":"headers","pt":"msg","to":"{}","tot":"json"},{"t":"set","p":"headers.content-type","pt":"msg","to":"application/x-www-form-urlencoded","tot":"str"},{"t":"set","p":"url","pt":"msg","to":"homeassistant","tot":"flow"},{"t":"set","p":"payload","pt":"msg","to":"\"latitude=\" & req.query.latitude & \"&longitude=\" & req.query.longitude & \"&device=\" & req.query.device & \"&accuracy=\" & req.query.accuracy & \"&battery=\" & req.query.battery & \"&speed=\" & req.query.speed & \"&direction=\" & req.query.direction & \"&altitude=\" & req.query.altitude & \"&provider=\" & req.query.provider  & \"&activity=\" & req.query.activity","tot":"jsonata"}],"action":"","property":"","from":"","to":"","reg":false,"x":440,"y":260,"wires":[["2b58f523.f2713a"]]},{"id":"ee593f24.f06ca8","type":"change","z":"4a1f60d7.aaf398","name":"Set Phonetrack URL","rules":[{"t":"set","p":"url","pt":"msg","to":"$flowContext(\"phonetrack\") & req.query.device & \"?lat=\" & req.query.latitude & \"&lon=\" & req.query.longitude & \"&acc=\" & req.query.accuracy & \"&speed=\" & req.query.speed & \"&bearing=\" & req.query.direction & \"&timestamp=\" & req.query.timestamp & \"&battery=\" & req.query.battery","tot":"jsonata"}],"action":"","property":"","from":"","to":"","reg":false,"x":420,"y":220,"wires":[["f9487911.1c6ed8"]]},{"id":"f9487911.1c6ed8","type":"http request","z":"4a1f60d7.aaf398","name":"PhoneTrack","method":"POST","ret":"txt","paytoqs":false,"url":"","tls":"","proxy":"","authType":"","x":650,"y":220,"wires":[["b894f1e0.b35c28"]]},{"id":"1b9f05f9.a0fb4a","type":"http response","z":"4a1f60d7.aaf398","name":"","statusCode":"","headers":{},"x":1210,"y":280,"wires":[]},{"id":"d9a79cbe.4e6668","type":"config","z":"4a1f60d7.aaf398","name":"URLS","properties":[{"p":"phonetrack","pt":"flow","to":"https://example.tld/apps/phonetrack/log/gpslogger/__phonetrackid__/","tot":"str"},{"p":"homeassistant","pt":"flow","to":"https://home.example.tld/api/webhook/__webhookkey__","tot":"str"}],"active":true,"x":150,"y":140,"wires":[]},{"id":"b894f1e0.b35c28","type":"join","z":"4a1f60d7.aaf398","name":"","mode":"custom","build":"array","property":"statusCode","propertyType":"msg","key":"url","joiner":"\\n","joinerType":"str","accumulate":false,"timeout":"","count":"2","reduceRight":false,"reduceExp":"","reduceInit":"","reduceInitType":"","reduceFixup":"","x":850,"y":240,"wires":[["fc27dbec.975ca8"]]},{"id":"fc27dbec.975ca8","type":"change","z":"4a1f60d7.aaf398","name":"","rules":[{"t":"set","p":"statusCode","pt":"msg","to":"$max(statusCode.$number())","tot":"jsonata"},{"t":"set","p":"payload","pt":"msg","to":"","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":1000,"y":240,"wires":[["35d1687f.3e79d"]]},{"id":"51a8ee1e.cb9c38","type":"comment","z":"4a1f60d7.aaf398","name":"Set URLS for HomeAssistant and PhoneTrack","info":"","x":250,"y":100,"wires":[]},{"id":"35d1687f.3e79d","type":"switch","z":"4a1f60d7.aaf398","name":"","property":"statusCode","propertyType":"msg","rules":[{"t":"gt","v":"200","vt":"str"},{"t":"else"}],"checkall":"true","repair":false,"outputs":2,"x":1030,"y":280,"wires":[["c8badb69.d6dc78","1b9f05f9.a0fb4a"],["1b9f05f9.a0fb4a"]]},{"id":"d18bbbaf.bd8138","type":"node-red-contrib-httpauth","z":"4a1f60d7.aaf398","name":"","file":"","cred":"","authType":"Basic","realm":"","username":"","password":"","hashed":false,"x":220,"y":220,"wires":[["ee593f24.f06ca8","7b5c470a.a86d7"]]}]

You should probably know about LetsEncrypt DNS challenge validation

Everyone knows the basic way to renew a LetsEncrypt cert: open port 80 and let LetsEncrypt connect to your server. But what if you don't want to open your network, or you limit access to a handful of IP addresses? Well, you can just use DNS challenge validation; no need for web servers and no need for port wrangling.

For example, I use certbot-dns-cloudflare for my work intranet, allowing it to remain VPN only.
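Issuing a cert then looks something like this; a sketch assuming the plugin is installed and your Cloudflare API credentials live in a protected file (the paths and domain are placeholders):

certbot certonly \
    --dns-cloudflare \
    --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
    -d intranet.example.tld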

Another great option is acme.sh, as it supports a massive list of dns providers, including the ever popular duckdns, out of the box.
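With acme.sh and duckdns it is a one-liner once the token is exported; a sketch with a placeholder token and domain:

export DuckDNS_Token="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
acme.sh --issue --dns dns_duckdns -d example.duckdns.org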

Given that in the past I found the most fragile part of my LetsEncrypt setup was making sure port 80 was accessible to LetsEncrypt, I personally use this method even when the network is accessible from the wider internet.


Splitting a Facebook event calendar

Node-RED is a flow-based programming tool, originally developed by IBM’s Emerging Technology Services team and now a part of the JS Foundation.

My friends make liberal use of Facebook events; unfortunately I find the events interface impossible to navigate. Luckily they do have a .ics available; unluckily, events you haven't accepted are mixed in with events you have.

So I made a simple flow that splits accepted events from tentative events. This way I can subscribe to the .ics in my Nextcloud instance and give tentative events a different color. I find the calendar much more usable this way.

I used the node-red-contrib-httpauth node. The main bit is contained in a function node, mainly because I couldn't figure out how to sanely do this with the split node.

Use is simple:

  1. Update the http auth node with your preferred user:password
  2. Add your facebook event calendar url to the http request node
  3. Subscribe to user:password@example.tld/node-red/facebook/accepted or user:password@example.tld/node-red/facebook/tentative

[{"id":"30abd373.bd5524","type":"http in","z":"ce798f74.64a9d8","name":"","url":"/facebook/:request","method":"get","upload":false,"swaggerDoc":"","x":160,"y":300,"wires":[["77e1feab.382658"]]},{"id":"705a7847.bc5d","type":"debug","z":"ce798f74.64a9d8","name":"","active":false,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","x":430,"y":400,"wires":[]},{"id":"77e1feab.382658","type":"node-red-contrib-httpauth","z":"ce798f74.64a9d8","name":"","file":"","cred":"","authType":"Basic","realm":"","username":"","password":"","hashed":false,"x":240,"y":340,"wires":[["116bd38e.4b7dcc","705a7847.bc5d"]]},{"id":"e1172439.6bf4b8","type":"comment","z":"ce798f74.64a9d8","name":"Secure with Basic Auth","info":"","x":200,"y":380,"wires":[]},{"id":"116bd38e.4b7dcc","type":"switch","z":"ce798f74.64a9d8","name":"","property":"req.params.request","propertyType":"msg","rules":[{"t":"eq","v":"accepted","vt":"str"},{"t":"eq","v":"tentative","vt":"str"},{"t":"else"}],"checkall":"true","repair":false,"outputs":3,"x":430,"y":300,"wires":[["eaff2195.a68b38"],["eaff2195.a68b38"],["83c3be58.0fc9a8"]]},{"id":"4d794bb0.d8ef74","type":"function","z":"ce798f74.64a9d8","name":"Split Calendar","func":"msg.payload = msg.payload.toString('utf8');\nmsg.payload = msg.payload.replace(\"END:VCALENDAR\", \"\");\nmsg.payload = msg.payload.split(/(?=BEGIN:VEVENT)/g);\nmsg.calendar = msg.payload[0];\n\nmsg.payload.forEach(function(part, index){\n   if (part.includes(\"PARTSTAT:ACCEPTED\") && (msg.req.params.request == \"accepted\")){\n       msg.calendar += part;\n   } else if (part.includes(\"PARTSTAT:TENTATIVE\") && (msg.req.params.request == \"tentative\") ){\n       msg.calendar += part;\n   }\n});\n\nmsg.calendar += \"END:VCALENDAR\";\nreturn msg;","outputs":2,"noerr":0,"x":1000,"y":300,"wires":[["6b67410a.701ab"],[]]},{"id":"13bdf419.e491ec","type":"http request","z":"ce798f74.64a9d8","name":"","method":"GET","ret":"bin","paytoqs":false,"url":"","tls":"","proxy":"","authType":"","x":830,"y":300,"wires":[["4d794bb0.d8ef74"]]},{"id":"ab19b6e9.4b1ed8","type":"comment","z":"ce798f74.64a9d8","name":"Facebook event calendar URL","info":"","x":890,"y":340,"wires":[]},{"id":"83c3be58.0fc9a8","type":"http response","z":"ce798f74.64a9d8","name":"404","statusCode":"404","headers":{},"x":430,"y":360,"wires":[]},{"id":"6b67410a.701ab","type":"change","z":"ce798f74.64a9d8","name":"","rules":[{"t":"move","p":"calendar","pt":"msg","to":"payload","tot":"msg"},{"t":"set","p":"headers['content-type']","pt":"msg","to":"text/calendar","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":1180,"y":300,"wires":[["d1e9086f.645b78"]]},{"id":"d1e9086f.645b78","type":"http response","z":"ce798f74.64a9d8","name":"Return","statusCode":"200","headers":{},"x":1330,"y":300,"wires":[]},{"id":"eaff2195.a68b38","type":"change","z":"ce798f74.64a9d8","name":"Set Browser User Agent","rules":[{"t":"set","p":"headers.User-Agent","pt":"msg","to":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:73.0) Gecko/20100101 Firefox/73.0","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":630,"y":300,"wires":[["13bdf419.e491ec"]]}]

Docker Volume Backups

Backups are always an issue. I plan to switch to ZFS for the snapshotting and remote sync features; until then I have taken the useful volume-backup and broken it until it works with rdiff-backup.

Build the container

First you have to clone the repo and build the container

git clone https://github.com/dugite-code/volume-backup.git
cd volume-backup
docker image build -t vbackup:1.0 .

Backup

Now you can run the container, mounting the [volume-name] at /volume and your [backup-dir] at /backup:

docker run -v [volume-name]:/volume -v [backup-dir]:/backup --rm vbackup:1.0 backup

Additional rdiff-backup options can be passed via the -o switch and a quoted option, for example -o "--exclude ignore.me"
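For instance, to skip a hypothetical cache directory (remember the volume is mounted at /volume inside the container, so exclusions are relative to that):

docker run -v [volume-name]:/volume -v [backup-dir]:/backup --rm vbackup:1.0 backup -o "--exclude /volume/cache"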

Restore

To restore you must supply some form of option, i.e. -o "-r 10D" to restore the backup from 10 days ago:

docker run -v [volume-name]:/volume -v [backup-dir]:/backup --rm vbackup:1.0 restore -o "-r 10D"

Trimming old files

With incremental backups it's important to occasionally trim old files that just don't exist anymore. Like restore, you must provide some form of option.

Prior to doing a backup I will run this command to remove files older than 20 backups:

docker run -v [backup-dir]:/backup --rm vbackup:1.0 remove -v -o "--remove-older-than 20B"

I hope you found this helpful. It's not a very clean script; I had to hack apart the reference script quite a bit in order to get it all working. But it serves its purpose quite well.