Technology (old posts, page 2)

As I Tinker I Learn, Sometimes I Even Write It Down.

I am a Docker Convert

I've changed my mind quite a bit when it comes to Docker. I used to be a big believer in virtual machines, and I still am, but for individual applications Docker makes a fair bit of sense.

Reasons I use Docker

Simplicity

Docker is the simplest way to replicate a developer's environment on your own computer. No more dealing with different distros' varying update cycles and conflicting packages causing edge-case issues, because everything is in its own little box. Nice and predictable.

This saves you time setting things up because at least all the components are included. Configuration is still a pain on some projects, but at least you're not missing any metaphorical screws.
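For instance, spinning up a stock image is a single command, with every dependency already inside the image (nginx here is just a stand-in):

# All of nginx's dependencies ship inside the image itself
docker run -d --name webtest -p 8080:80 nginx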

The biggest example of this for me was my mailserver. I used Modoboa, a great, simple mailserver package. The issues were things breaking from system package updates, and updating the package itself was damned complicated. I learnt a lot from these breakages, so much so that when I switched to Docker I chose docker-mailserver, an image that has no web GUI for configuration.

Updates, while problematic to monitor in Docker, are now a simple, painless affair.
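For a compose-managed stack, the routine more or less boils down to:

# Fetch newer images, then recreate only the containers whose images changed
docker-compose pull
docker-compose up -d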

Lightweight

Unlike a virtual machine, a container doesn't need to replicate an entire system. This makes it easier to run services that would otherwise conflict with each other side by side. I used to have one dedicated NUC for my mailserver and another for all my other services. I've now condensed it all onto a single NUC with better overall performance, thanks to Docker.

Portability

One of the biggest advantages of Docker is portability. Take your raw data and docker-compose files, throw them onto a completely separate machine, and within a few minutes you are up and running again. For virtual machines this would take significant work and, in my experience, often fails.
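A minimal sketch of such a move, assuming the raw data sits alongside the compose file (the /opt/myservice path and newhost name are placeholders):

# Copy the compose file and raw data to the new host, then start everything
rsync -a /opt/myservice/ newhost:/opt/myservice/
ssh newhost 'cd /opt/myservice && docker-compose up -d'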

The issues I have with Docker

The pre-built images

The Alpine image root issue last year, where the base image used to build a large number of Docker images shipped with a vulnerability, made it obvious that you need an actively maintained update cycle.

If the project you are using doesn't provide a Docker image, or even a Dockerfile, you will often find pre-built images on Docker Hub. The big question you need to ask is whether you can trust these images. Check the source repository and decide if it would make more sense to build the image yourself.
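If you do decide to build it yourself, it's usually only a clone and a build away (the repository URL and tag below are placeholders):

# Build from the project's own Dockerfile instead of trusting an unknown image
git clone https://github.com/example/project.git
cd project
docker build -t project:local .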

Keeping pre-built images up to date

One of the biggest issues people have with Docker is the lack of update tracking. Thankfully this can be overcome using the Watchtower image.

I set Watchtower to monitor-only mode because automatic updates are sometimes a terrible idea.

watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
    environment:
      - WATCHTOWER_POLL_INTERVAL=86400 #Poll every 24 hours
      - WATCHTOWER_MONITOR_ONLY=true
      - WATCHTOWER_NOTIFICATIONS=gotify
      - WATCHTOWER_NOTIFICATION_GOTIFY_URL=https://example.tld/gotify/
      - WATCHTOWER_NOTIFICATION_GOTIFY_TOKEN=###########

Importantly, for locally built images, add the disable label to their docker-compose files, or you will constantly get notifications saying (info): Unable to update container /examplecontainer. Proceeding to next.

  labels:
    - com.centurylinklabs.watchtower.enable=false

FeedIron Updated

FeedIron, Reforge your feeds

I've done another major update for the TT-RSS plugin FeedIron. This update is mostly structural changes, as I'm trying to modularize things and reduce the amount of spaghetti code.

However, I have moved the community-submitted recipes to a separate repository. As the plugin uses the GitHub API, this will affect current and old versions of the plugin once I fully remove the recipes from the main repo. I plan to do this early next year.

You will need to either update the plugin or edit the following line in RecipeManager.php

private $recipes_location = array(
  array("url"=>"https://api.github.com/repos/m42e/ttrss_plugin-feediron/contents/recipes", "branch"=>"master"),
  array("url"=>"https://api.github.com/repos/mbirth/ttrss_plugin-af_feedmod/contents/mods", "branch"=>"master")
);

with

private $recipes_location = array(
  array("url"=>"https://api.github.com/repos/feediron/feediron-recipes/contents/general", "branch"=>"master"),
  array("url"=>"https://api.github.com/repos/mbirth/ttrss_plugin-af_feedmod/contents/mods", "branch"=>"master")
);

Given that I'm still a hobbyist coder, I'm hoping I haven't made too many mistakes. As always, I encourage pull requests and any feedback.


Node-Red Website Alerts

Node-RED is a flow-based programming tool, originally developed by IBM’s Emerging Technology Services team and now a part of the JS Foundation.

A recent question on /r/selfhosted about which selfhosted service you are missing sparked my interest. A user by the name of forthedatahorde mentioned they wanted the ability to monitor arbitrary websites for changes. I figured Node-RED is just the tool to comfortably fill this gap.

I made two options: one using Readability.js for a more generic solution and one using CSS selectors for more targeted watching needs.

The Readability.js Version:

  1. Fetches the site with an HTTP GET request.

  2. Runs it through readability.js

  3. Hashes the text

  4. Compares it with the previously stored hash

  5. Emails the change

Requires the packages node-red-contrib-md5 and node-red-contrib-readability


[{"id":"c8beb17e.4513c8","type":"http request","z":"1a1be165.968ed7","name":"","method":"GET","ret":"txt","paytoqs":false,"url":"","tls":"","proxy":"","authType":"","x":310,"y":100,"wires":[["8a94f0a4.a1f488"]]},{"id":"ae660213.061188","type":"readability","z":"1a1be165.968ed7","name":"","x":310,"y":180,"wires":[["597a982a.8c9bb"]]},{"id":"8a94f0a4.a1f488","type":"switch","z":"1a1be165.968ed7","name":"","property":"statusCode","propertyType":"msg","rules":[{"t":"eq","v":"200","vt":"str"}],"checkall":"true","repair":false,"outputs":1,"x":290,"y":140,"wires":[["ae660213.061188"]]},{"id":"c361fc4a.44f6b8","type":"change","z":"1a1be165.968ed7","name":"","rules":[{"t":"set","p":"payload","pt":"msg","to":"payload.content","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":500,"y":300,"wires":[["4e96af67.1febe"]]},{"id":"597a982a.8c9bb","type":"md5","z":"1a1be165.968ed7","name":"","fieldToHash":"payload.text","fieldTypeToHash":"msg","hashField":"md5","hashFieldType":"msg","x":470,"y":140,"wires":[["8e8bc713.5693f"]]},{"id":"ed8cd02.da0d03","type":"switch","z":"1a1be165.968ed7","name":"","property":"md5","propertyType":"msg","rules":[{"t":"neq","v":"old_hash","vt":"msg"}],"checkall":"true","repair":false,"outputs":1,"x":610,"y":180,"wires":[["d52d318b.d1ac58"]]},{"id":"5cfa0e17.f15bf","type":"change","z":"1a1be165.968ed7","name":"","rules":[{"t":"set","p":"url","pt":"msg","to":"payload","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":310,"y":60,"wires":[["c8beb17e.4513c8"]]},{"id":"4ab25748.add748","type":"change","z":"1a1be165.968ed7","name":"","rules":[{"t":"set","p":"topic","pt":"msg","to":"url","tot":"msg"},{"t":"change","p":"topic","pt":"msg","from":"^(.*)$","fromt":"re","to":"Update to $1 detected","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":500,"y":260,"wires":[["c361fc4a.44f6b8"]]},{"id":"4e96af67.1febe","type":"e-mail","z":"1a1be165.968ed7","server":"smtp.gmail.com","port":"465","secure":true,"tls":true,"name":"","dname":"","x":690,"y":300,"wires":[]},{"id":"b7b6adc.1b5c3d","type":"inject","z":"1a1be165.968ed7","name":"rammstein.de","topic":"","payload":"https://www.rammstein.de/de/","payloadType":"str","repeat":"21600","crontab":"","once":false,"onceDelay":0.1,"x":120,"y":60,"wires":[["5cfa0e17.f15bf"]]},{"id":"8e8bc713.5693f","type":"function","z":"1a1be165.968ed7","name":"Get Hash","func":"try{\n    msg.old_hash = flow.get(msg.url);\n} catch(e) {\n    msg.old_hash = \"0\";\n}\nreturn msg;","outputs":1,"noerr":0,"x":620,"y":140,"wires":[["ed8cd02.da0d03"]]},{"id":"d52d318b.d1ac58","type":"function","z":"1a1be165.968ed7","name":"Set Hash","func":"flow.set(msg.url, msg.md5);\nreturn msg;","outputs":1,"noerr":0,"x":480,"y":220,"wires":[["4ab25748.add748"]]}]

The CSS Selector Version:

  1. Fetches the site with an HTTP GET request.

  2. Filters the resulting HTML with a CSS selector

  3. Hashes the HTML

  4. Compares it with the previously stored hash

  5. Emails the change

Requires the package node-red-contrib-md5


[{"id":"c8beb17e.4513c8","type":"http request","z":"1a1be165.968ed7","name":"","method":"GET","ret":"txt","paytoqs":false,"url":"","tls":"","proxy":"","authType":"","x":350,"y":160,"wires":[["8a94f0a4.a1f488"]]},{"id":"8a94f0a4.a1f488","type":"switch","z":"1a1be165.968ed7","name":"","property":"statusCode","propertyType":"msg","rules":[{"t":"eq","v":"200","vt":"str"}],"checkall":"true","repair":false,"outputs":1,"x":330,"y":200,"wires":[["28487abf.9cf7be"]]},{"id":"597a982a.8c9bb","type":"md5","z":"1a1be165.968ed7","name":"","fieldToHash":"payload","fieldTypeToHash":"msg","hashField":"md5","hashFieldType":"msg","x":330,"y":280,"wires":[["8e8bc713.5693f"]]},{"id":"ed8cd02.da0d03","type":"switch","z":"1a1be165.968ed7","name":"","property":"md5","propertyType":"msg","rules":[{"t":"neq","v":"old_hash","vt":"msg"}],"checkall":"true","repair":false,"outputs":1,"x":330,"y":360,"wires":[["d52d318b.d1ac58"]]},{"id":"5cfa0e17.f15bf","type":"change","z":"1a1be165.968ed7","name":"","rules":[{"t":"set","p":"url","pt":"msg","to":"payload","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":350,"y":120,"wires":[["c8beb17e.4513c8"]]},{"id":"4ab25748.add748","type":"change","z":"1a1be165.968ed7","name":"","rules":[{"t":"set","p":"topic","pt":"msg","to":"url","tot":"msg"},{"t":"change","p":"topic","pt":"msg","from":"^(.*)$","fromt":"re","to":"Update to $1 detected","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":520,"y":400,"wires":[["9a394414.9956a"]]},{"id":"b7b6adc.1b5c3d","type":"inject","z":"1a1be165.968ed7","name":"rammstein.de","topic":"","payload":"https://www.rammstein.de/de/","payloadType":"str","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":110,"y":80,"wires":[["6500331b.56333c"]]},{"id":"8e8bc713.5693f","type":"function","z":"1a1be165.968ed7","name":"Get Hash","func":"try{\n    msg.old_hash = flow.get(msg.url);\n} catch(e) {\n    msg.old_hash = \"0\";\n}\nreturn msg;","outputs":1,"noerr":0,"x":340,"y":320,"wires":[["ed8cd02.da0d03"]]},{"id":"d52d318b.d1ac58","type":"function","z":"1a1be165.968ed7","name":"Set Hash","func":"flow.set(msg.url, msg.md5);\nreturn msg;","outputs":1,"noerr":0,"x":500,"y":360,"wires":[["4ab25748.add748"]]},{"id":"28487abf.9cf7be","type":"html","z":"1a1be165.968ed7","name":"","property":"payload","outproperty":"payload","tag":"","ret":"html","as":"single","x":330,"y":240,"wires":[["b8739927.661e18"]]},{"id":"6500331b.56333c","type":"change","z":"1a1be165.968ed7","name":"Selector css","rules":[{"t":"set","p":"select","pt":"msg","to":".resume-view-news","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":110,"y":120,"wires":[["5cfa0e17.f15bf"]]},{"id":"b8739927.661e18","type":"change","z":"1a1be165.968ed7","name":"","rules":[{"t":"set","p":"payload","pt":"msg","to":"payload[0]","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":520,"y":240,"wires":[["597a982a.8c9bb"]]},{"id":"9a394414.9956a","type":"e-mail","z":"1a1be165.968ed7","server":"smtp.gmail.com","port":"465","secure":true,"tls":true,"name":"","dname":"","x":710,"y":400,"wires":[]}]

I hope someone finds this helpful or interesting. Let me know if you have a better solution in the comments down below.


Nextcry and Nextcloud security

About a month ago there was an urgent security notice from the Nextcloud devs regarding a flaw in nginx's php-fpm setup and the associated Nextcloud config. Unfortunately we are now seeing it being exploited in the wild.

The Nextcloud devs have confirmed that it doesn't appear to be an issue with Nextcloud itself, and that patching and updating is highly advised.

This brings to mind some extra security measures I apply to Nextcloud on top of my standard server checklist.
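As one example of the kind of measure I mean (not an exhaustive list, and the container name nextcloud is just a placeholder), Nextcloud's built-in occ tool can confirm you are on a current release and that the core files haven't been tampered with:

# Show the installed version/status and verify core file integrity
docker exec -u www-data nextcloud php occ status
docker exec -u www-data nextcloud php occ integrity:check-core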

Have extra security advice? Let me know in the comments down below!


App Passwords for docker-mailserver

Recently I got rid of my virtual IPFire firewall and set up a Netgate SG-1100 as my home firewall. I did this mainly because the NIC on the NUC hosting IPFire was starting to fail; also, we use pfSense at work and it's good to be able to tinker on a common platform. As my email server was virtualized on the same NUC as my firewall, I switched my virtual Modoboa install over to the docker-mailserver project. This makes my mail server more portable than the old virtual machine ever was.

I then set up app-specific passwords for my email following this guide. Below are the changes I needed to make for the Docker image.

Add this to the docker-mailserver docker-compose.yml:

    volumes:
    ###################################
    #### Dovecot App Passwords Mod ####
    ###################################
    - /opt/mail/custom/dovecot/10-auth.conf:/etc/dovecot/conf.d/10-auth.conf:ro
    - /opt/mail/custom/dovecot/auth-appspecificpasswd.conf.ext:/etc/dovecot/conf.d/auth-appspecificpasswd.conf.ext:ro
    - /opt/mail/custom/dovecot/app_specific_passwd:/etc/dovecot/app_specific_passwd:ro

The /opt/mail/custom/dovecot/10-auth.conf file

auth_mechanisms = plain login
!include auth-passwdfile.inc
!include auth-appspecificpasswd.conf.ext

The /opt/mail/custom/dovecot/auth-appspecificpasswd.conf.ext file

passdb {
  driver = passwd-file
  args = scheme=SHA512-CRYPT username_format=%u /etc/dovecot/app_specific_passwd
}

The /opt/mail/custom/dovecot/app_specific_passwd file (example)

K9emaillapp:{SHA512-CRYPT}123456789...::::::user=foo

Assuming your docker-mailserver container is called mail, you can generate correctly formatted password hashes for the app_specific_passwd file using:

docker exec -it mail doveadm pw -s SHA512-CRYPT
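With the hash in hand, an entry is just a line appended to the password file in the format shown above (K9emaillapp and foo are the example names from before):

# Append an app-specific login; paste the hash doveadm printed
echo 'K9emaillapp:{SHA512-CRYPT}<hash-from-doveadm>::::::user=foo' >> /opt/mail/custom/dovecot/app_specific_passwd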

You can now use the username K9emaillapp and the associated password to log in to your email account.


Fixing a Patreon feed's cover artwork

AntennaPod

With the recent Pocket Casts PR blunder I finally decided to jump back to the open-source AntennaPod. The move has been painless, especially with the introduction of the Remove Silence feature, the main feature that kept me with Pocket Casts for so long.

The only issue I had was a private feed with broken cover artwork. This was frustrating, but it looks to be an issue on Patreon's end. Thankfully Node-RED is available to rescue the situation!

The flow is really simple; the only additional node I have added is node-red-contrib-httpauth.

  1. On an HTTP request, fetch the RSS feed.
  2. Convert the XML to an object.
  3. Replace msg.payload.rss.channel[0].image[0].url[0] with a good URL from the podcaster's website.
  4. Set the text/xml headers.
  5. Return the fixed RSS feed.

Node-RED Flow

[{"id":"8d2a4ad1.4599a","type":"http request","z":"d437ad18.0999c","name":"","method":"GET","ret":"txt","paytoqs":false,"url":"https://www.patreon.com/rss/yaddayada","tls":"","proxy":"","authType":"","x":367.5,"y":31,"wires":[["4c5cdedd.6772c8"]]},{"id":"4c5cdedd.6772c8","type":"xml","z":"d437ad18.0999c","name":"XML To Object","property":"payload","attr":"","chr":"","x":230.5,"y":78,"wires":[["adbd8c2e.7fbcd"]]},{"id":"7475a34e.9f953c","type":"http in","z":"d437ad18.0999c","name":"rss","url":"/mystupidRSS","method":"get","upload":false,"swaggerDoc":"","x":69.5,"y":31,"wires":[["26c3aede.f41d4a"]]},{"id":"26c3aede.f41d4a","type":"node-red-contrib-httpauth","z":"d437ad18.0999c","name":"","file":"","cred":"","authType":"Basic","realm":"","username":"","password":"","hashed":false,"x":213.5,"y":31,"wires":[["8d2a4ad1.4599a"]]},{"id":"56731f49.b7f77","type":"xml","z":"d437ad18.0999c","name":"Object to XML","property":"payload","attr":"","chr":"","x":647.5,"y":79,"wires":[["5aa22446.612044"]]},{"id":"5aa22446.612044","type":"change","z":"d437ad18.0999c","name":"Set Headers","rules":[{"t":"set","p":"headers","pt":"msg","to":"{}","tot":"json"},{"t":"set","p":"headers.content-type","pt":"msg","to":"text/xml","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":219,"y":131,"wires":[["8df07127.e9d0c8"]]},{"id":"8df07127.e9d0c8","type":"http response","z":"d437ad18.0999c","name":"","statusCode":"","headers":{},"x":391,"y":131,"wires":[]},{"id":"adbd8c2e.7fbcd","type":"change","z":"d437ad18.0999c","name":"Replace Cover Image","rules":[{"t":"set","p":"payload.rss.channel[0].image[0].url[0]","pt":"msg","to":"https://example.com/cover_art.png","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":440.5,"y":79,"wires":[["56731f49.b7f77"]]}]

Something neat I did with FitNotes and Tasker

FitNotes App for Android

I use the fitness tracking app FitNotes on Android. It's a great application that I have happily used for years. The greatest issue I had with it was manually entering my body weight. Well, I finally got myself into gear and fixed that using the fantastic staple of Android automation, Tasker.

Using a Xiaomi Mi Smart Scale that I hooked up to my home server via my Python gatttool wrapper, I dump its weight data into a Google spreadsheet.

Using my Simple API (because Google's own API is a pain) and the helper task getFormattedDate, I pull a Unix timestamp and the weight onto my phone. I then run an INSERT SQL command against FitNotes' database (using root, of course).

I also did a bulk body weight record import via CSV using sqlitebrowser. Now I no longer have to manually enter my body weight; I should have done this years ago.

Here is my Tasker task if you are interested:

    Healthapi (44)
        A1: Flash [ Text:%date %time Long:Off ]
        A2: Flash [ Text:Updating Health Report Long:Off ]
        A3: HTTP Get [ Server:Port:https://script.google.com Path:/macros/s/myprivatesheet/exec Attributes:key=myapikey Cookies: User Agent: Timeout:20 Mime Type: Output File: Trust Any Certificate:Off ]
        A4: Variable Set [ Name:%data To:%HTTPD Recurse Variables:Off Do Maths:Off Append:Off ]
        A5: Variable Set [ Name:%newline To:
     Recurse Variables:Off Do Maths:Off Append:Off ]
        A6: Variable Split [ Name:%data Splitter:%newline Delete Base:Off ]
        A7: Variable Split [ Name:%data1 Splitter:, Delete Base:Off ]
        A8: Read File [ File:Tasker/lastdate.dat To Var:%lastdate Continue Task After Error:On ]
        A9: If [ %lastdate neq %data11 ]
        A10: Write File [ File:Tasker/lastdate.dat Text:%data11 Append:Off Add Newline:Off ]
        A11: Perform Task [ Name:getFormattedDate Priority:%priority Parameter 1 (%par1):%data11 Parameter 2 (%par2):yyyy-mm-dd Return Value Variable:%date Stop:Off ]
        A12: Perform Task [ Name:getFormattedDate Priority:%priority Parameter 1 (%par1):%data11 Parameter 2 (%par2):hh:nn:ss Return Value Variable:%time Stop:Off ]
        A13: Variable Set [ Name:%measurement_id To:1 Do Maths:Off Append:On ]
        A14: Variable Set [ Name:%value To:%data12 Do Maths:Off Append:On ]
        A15: Variable Set [ Name:%query To:INSERT INTO MeasurementRecord (measurement_id, date, time, value, comment) VALUES ("%measurement_id", "%date", "%time", "%value",""); Do Maths:Off Append:Off ]
        A16: SQL Query [ Mode:Raw File:/data/data/com.github.jamesgay.fitnotes/databases/database.db Table: Columns: Query:%query Selection Parameters: Order By: Output Column Divider: Variable Array:%test Use Root:On ]
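Stripped of the Tasker plumbing, the query in A15/A16 amounts to something like this (the date, time, and weight values are examples):

# The same INSERT, run directly against FitNotes' database with the sqlite3 CLI (needs root)
sqlite3 /data/data/com.github.jamesgay.fitnotes/databases/database.db \
  "INSERT INTO MeasurementRecord (measurement_id, date, time, value, comment) VALUES (1, '2019-12-01', '07:30:00', 82.5, '');"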

I hope you found this interesting, if only in the abstract "hey, that's a thing you can totally do" kind of way. If you want me to write a complete how-to, let me know in the comments down below.


SSH Login Notifications with Gotify

Gotify is a simple server for sending and receiving messages

Inspired by this post, I decided to add a notification on my phone every time an SSH session begins on my servers. Seeing as I make use of Gotify for selfhosted push notifications, I used that rather than Signal.

First I created the file /usr/local/bin/sshnotif. At the top you can add your own token and Gotify URL.

Update: I had to push the current time back a full minute in order to improve consistency. I'll definitely want to revisit this at a later date.

#!/bin/bash

exec &> /dev/null #Hide output

Gotify_URL='https://example.tld/gotify'
Gotify_Token='gotify-app-token'

notify()
{

        now=$(date -d "-60 seconds" +%s) #Get current time minus 60 seconds
        end=$((SECONDS+30)) #Set 30s Timeout for loop

        while [ $SECONDS -lt $end ]; do

                SSHdate=$(date -d "$(who |grep pts|tail -1 | awk '{print $3, $4}')" +%s) #Check for the latest SSH session

                if [ $SSHdate -ge $now ]; then #Once who is updated continue with sending Notification

                        title="SSH Login for $(/bin/hostname -f)"
                        message="$(/usr/bin/who | grep pts)"

                        /usr/bin/curl -X POST -s \
                                -F "title=${title}" \
                                -F "message=${message}" \
                                -F "priority=5" \
                                "${Gotify_URL}/message?token=${Gotify_Token}"

                        break
                fi
        done

}

notify & #Run in background to prevent holding up the login process

Run the command chmod +x /usr/local/bin/sshnotif

In the file /etc/pam.d/sshd add the following line

# note optional is set to prevent ssh login failure
session optional pam_exec.so /usr/local/bin/sshnotif
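Before relying on PAM, you can sanity-check the Gotify side with the same call the script makes:

# Manual test of the Gotify endpoint used by sshnotif
curl -X POST -s \
        -F "title=test" \
        -F "message=hello from $(hostname -f)" \
        -F "priority=5" \
        "https://example.tld/gotify/message?token=gotify-app-token"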

I now get a nice notification with all the open SSH sessions listed. Unlike the post on 8192.one, I didn't want any IP address resolution using an online service. I plan on integrating the MaxMind GeoLite2 database at some point; however, as I already have Graylog set up to do this, it's not a high priority for me.

Thanks for the shoutout: https://zerosec.xyz/posts/gotify-notifications/


Getting a QNAP NAS to Log to my Graylog instance

Running old embedded devices is a pain, not to mention a major security risk. But if you are like me and are stuck with one, sometimes you can take solace in software repo projects like Entware. In this case I needed to centralize all the disparate system logs on the network so I could find issues BEFORE they cause real trouble. The problem was that the QNAP NAS I had could only send system logs over unencrypted UDP.

That's just not good enough, especially as I want to use client certs down the line. The simplest solution I found was to install syslog-ng to forward the logs securely.

Note: I'm using a Let's Encrypt cert to make my life simpler.

Setting up the NAS

Install Entware by downloading the .qpkg file, navigating to the NAS in the web browser, and then selecting the manual install option in the App Center.

Manually install the .qpkg file

SSH into the NAS and install syslog-ng

opkg update
opkg install syslog-ng

Configure syslog-ng by editing /opt/etc/syslog-ng.conf

# Important: set the right config file version
@version: 3.20

options {
};

# Listen to local syslog connection
source localhostudp {
        udp( ip("127.0.0.1") port(1514) );
};

# Forward to remote graylog server over tls to port 1514
# TODO: implement client cert
destination graylog_loghost {
        network(
                "example.com" port(1514)
                transport("tls")
                tls( ca_dir("/opt/sbin/cadir") )
        );
};

# Enable both source and destination
log {
        source(localhostudp);
        destination(graylog_loghost);
};

Set up the Let's Encrypt CA by downloading the TrustID X3 Root Certificate (formerly known as DST Root CA X3). We then need to discover the hash of the certificate using openssl, because syslog-ng requires a symlink named after the certificate hash.

The hash should be 2e5ac55d

mkdir -p /opt/sbin/cadir
cd /opt/sbin/cadir
wget https://github.com/letsencrypt/website/raw/master/static/certs/trustid-x3-root.pem

openssl x509 -noout -hash -in trustid-x3-root.pem

ln -s /opt/sbin/cadir/trustid-x3-root.pem /opt/sbin/cadir/2e5ac55d.0
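At this point it's worth checking that the CA directory actually validates the remote end (host and port taken from the config above):

# Should end with "Verify return code: 0 (ok)" if the symlinked CA is picked up
openssl s_client -connect example.com:1514 -CApath /opt/sbin/cadir </dev/null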

Via the web admin, set the NAS to log to 127.0.0.1 on local port 1514. This can be found under System Logs in the system settings category.

Control Panel -> System Logs -> Syslog Client Management

Ensure syslog-ng isn't running, then test in the foreground for any errors

/opt/etc/init.d/S01syslog-ng stop

/opt/sbin/syslog-ng -Fvde

If no errors appear, you can then start syslog-ng

/opt/etc/init.d/S01syslog-ng start

Graylog Notes

Graylog doesn't appear to directly accept the format sent by syslog-ng. While it is possible to change the format in syslog-ng, I didn't figure out the best way to do it. My solution was to set the input to Raw/Plaintext TCP and then run a GROK pattern extractor matching the connection log string

%{DATA} qlogd\[9147\]: %{DATA:facility}: Users: %{DATA:NAS_user}, Source IP: %{IP:NAS_src}, Computer name: %{DATA:NAS_id}, Connection type: %{DATA:NAS_connection}, Accessed resources: %{DATA:NAS_resource}, Action: %{GREEDYDATA:NAS_action}
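For reference, that pattern expects connection log messages shaped roughly like this (all values are placeholders):

Mar 10 12:00:00 nas qlogd[9147]: conn log: Users: admin, Source IP: 192.168.0.10, Computer name: PC-01, Connection type: SAMBA, Accessed resources: /share/docs, Action: Read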

LookOut fix version

Lookout!

In August 2018 I took over maintenance of the Thunderbird addon Lookout-fix-version. I soon set up the GitHub organization TB-throwback so that future development can be expanded and transferred more easily if I stop working on it.

It's been an interesting experience managing a small project that's over 11 years old, especially with all the changes and rapid development Thunderbird has been experiencing now that it's separated from Mozilla.

Why did I take over?

I needed to move my office away from Outlook 2010. I had no budget to upgrade the office software, but I couldn't allow the company to keep limping along with a nine-year-old product.

Thunderbird to the rescue! Except...

TNEF files, supposedly a thing of the past. Even Microsoft recommends NOT sending such files. But we have to work with people who don't upgrade and who pay the lowest bidder to configure their Exchange servers.

Unfortunately, the original Lookout was by this point unmaintained and severely out of date; similarly, Lookout+ and Lookout-fix-version hadn't seen any updates in a long time. Luckily Oleksandr was still contactable via the support email and was happy to add me as a developer on the ATN page.

My first change was a simple modification to the preferences CSS to fix changes in Thunderbird 59. I've since been working on adding debugging, improving performance, squashing bugs, and generally attempting to learn how everything is strung together.

Original TNEF file with extracted attachments

I plan on porting the addon to a WebExtension in the coming months to ensure we have this useful addon for many years to come.