The Hugo Switch

I've decided to shake things up a bit, and I'm switching my blog over from Nikola to Hugo.

Page Bundles and Obsidian

Page bundles offer great convenience for storing my blog posts in Obsidian. In Hugo they group a piece of content together with its associated resources, like images and files, and that portability means I can keep my posts inside my note structure. Nikola could provide similar capabilities to some extent, but it's worth exploring alternatives, especially since I would have to rebuild my site anyway.
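
For context, a page bundle is just a folder whose index file and resources travel together, so a post in my vault can look something like this (the layout below is only an example):

content/posts/my-post/
├── index.md      # the post itself
├── cover.jpg     # images referenced relative to the bundle
└── files/
    └── demo.zip  # other attachments live alongside the post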

Better Asset Pipelines

One of the things that really drew me to Hugo is its asset pipeline. With image scaling, format conversion (hello, webp! Everyone hates you, but Google's bots sure don't), and CSS and JS minification, Hugo makes my life easier. Plus, Hugo plays nicely with deployment tools like purgecss, and the automation reduces the chances of mistakes.

I'm even using the module.mounts feature to import my CSS and JS from a node_modules folder, making it easier to keep them up to date.
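
A minimal sketch of what that looks like in the site config, in YAML form (I'm assuming a Bootstrap-style package here; note that declaring your own mounts replaces Hugo's defaults, so the stock assets mount gets re-declared too):

module:
  mounts:
    - source: assets
      target: assets
    - source: node_modules/bootstrap/scss
      target: assets/scss/bootstrap
    - source: node_modules/bootstrap/dist/js
      target: assets/js/bootstrap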

I've Just Learned a Lot

I'll admit it—I've made my fair share of mistakes along the way with this blog. From opting for a basic Bootstrap theme with little to no dark mode support, to not optimizing my HTML as thoroughly as I could have, I've certainly had some learning moments. But it's all part of the learning experience, so I'll take this opportunity to start fresh.


Using fdupes to clean up my file server

The overall problem:

Like many of us, I am guilty of copying files haphazardly, promising myself that I'll organize them later. This has built up into a significant problem over the years, particularly with old smartphone backups. I had a bad habit of dumping photo folder backups onto my server, with each dump containing even more dumps of old photos, resulting in multiple levels of duplication. Using the command-line tool fdupes I've only just managed to get some of it under control.

fdupes is a command-line application designed to find and identify duplicate files within a directory or a set of directories. It employs various techniques to compare file contents and determine duplicates, enabling efficient cleanup and reclamation of storage space.
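
For a feel of the output, running something like fdupes -r ~/backups/phone prints each set of identical files grouped together, separated by blank lines (the paths here are only examples):

/home/user/backups/phone/2019/IMG_1024.jpg
/home/user/backups/phone/2019/old-dump/IMG_1024.jpg

/home/user/backups/phone/2020/VID_0001.mp4
/home/user/backups/phone/2020/old-dump/VID_0001.mp4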

To streamline the review process and make sure I know what's about to happen before deleting any files, I created a simple bash wrapper script. This script acts as a nice safety belt, preventing accidental fat finger deletions.

Avoiding the rm -rf Pitfall:

As many of us have learned the hard way, the rm -rf command can have disastrous consequences if misused (goodbye email server with 2000 emails). A simple typo or a wrong path can result in irreversible data loss. To mitigate this risk, the bash wrapper script avoids using rm -rf altogether. Instead, it leverages the safer alternative of moving duplicate files to a temporary trash directory for review and then subsequent manual deletion.

#!/bin/bash

TRASH="/tmp/trash"

find_files() {
  # Recursively list duplicate sets; fdupes prints every member of each set,
  # separated by blank lines, so review the list before removing anything.
  fdupes -rn "$@" > "${file}"
  echo "Duplicate files have been listed in ${file}"
}

remove_files() {
  echo "Reading from ${file}"
  echo ""
  read -p "Type yes to continue: " choice
  case "$choice" in
    yes )
      mkdir -p "${TRASH}"
      while IFS= read -r line; do
        # Skip the blank lines fdupes uses to separate duplicate sets
        [ -z "${line}" ] && continue
        mv "${line}" "${TRASH}"
      done < "${file}"
      echo "Duplicate files have been moved to ${TRASH}"
      exit ;;
    * )
      echo "Exiting"
      exit ;;
  esac
}

while getopts "rf:" option; do
  case "${option}" in
    r)
      remove=true ;;
    f)
      file="${OPTARG}" ;;
  esac
done
shift $((OPTIND - 1))

# Fall back to dupes.txt when no -f argument is given
file="${file:-dupes.txt}"

case "$remove" in
  true )
    remove_files ;;
  * )
    find_files "$@" ;;
esac

Understanding the Script:

The script utilizes the fdupes command-line tool to identify duplicate files within a given directory or set of directories. Here's how it works:

Finding Duplicate Files:
  • The find_files function invokes the fdupes command with the -r (recurse) and -n (ignore empty files) flags and redirects the results into the chosen list file.
  • If no file name is provided as an argument, the script will use the default file name dupes.txt to store the duplicate file list.
  • After the duplicates are found, the script informs us that the duplicate files have been listed in the dupes.txt file.
Removing Duplicate Files:
  • The remove_files function allows us to decide whether to remove the duplicates. Make sure to review the dupes.txt file before running.
  • If no file name is provided as an argument, the script will still refer to the default dupes.txt file to read the duplicate file list.
  • After printing the file listing the duplicates, the script prompts us to confirm our decision by typing "yes."
  • If confirmed, the script creates a temporary trash directory and proceeds to move the duplicate files to it.
  • Finally, it provides a message confirming that the duplicate files have been successfully moved to the trash directory.

Using the Script:

To utilize the script effectively, follow these steps:

  • Copy the script into a text editor and save it as ddup.sh.
  • Open a terminal and navigate to the directory containing the script.
  • Make the script executable by running the command: chmod +x ddup.sh.
  • Execute the script with appropriate options:
  • To find duplicate files: ./ddup.sh <directory>
  • To remove duplicate files: ./ddup.sh -r

Note: If you don't specify a file name using the -f argument, it will default to using the dupes.txt file for listing duplicate files.
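
Putting it all together, a typical run against a hypothetical photo directory looks like this (directory and list names are placeholders):

# 1. Build the duplicate list
./ddup.sh -f photo_dupes.txt /srv/media/photos

# 2. Edit photo_dupes.txt and delete the lines for the copies you want to KEEP

# 3. Move everything still listed into /tmp/trash for a final look before deleting
./ddup.sh -r -f photo_dupes.txt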


Making a Lazy Dark Theme with Darkreader

Dark Reader is a web browser plugin that analyzes web pages and aims to reduce eyestrain while browsing the web.

I have been using the Dark Reader plugin in Firefox for quite some time now and I quite like it. However, there is one drawback - it has made me procrastinate on building a dark theme for my blog. As it turns out, exporting and using the Dark Reader generated theme for your website is pretty damn easy.

First you need the plugin installed in your browser; you can get it from the Firefox Add-ons page here.

Next you have to switch Dark Reader to the "new design" (the old design looks more modern IMHO) from the dev tools popup in order to access the export button found in the settings dialogue.

  • Click Dark Reader icon.
  • Click Dev tools (in the bottom-right corner).
  • Click Preview new design.
  • Click on settings in the popup.
  • Click on Manage Settings.
  • Click on Export Dynamic Theme.

Source: Question: Export CSS?

Instructions

Once you have the CSS, open it up in an editor and wrap it in the CSS media rule for the dark preferred color scheme (prefers-color-scheme: dark). Then it's a simple matter of adding it to your webpage as you would any other CSS file, and boom, a dark theme for your website with little to no effort.

@media (prefers-color-scheme: dark) {
    /*
                         _______
                        /       \
                       .==.    .==.
                      ((  ))==((  ))
                     / "=="    "=="\
                    /____|| || ||___\
        ________     ____    ________  ___    ___
        |  ___  \   /    \   |  ___  \ |  |  /  /
        |  |  \  \ /  /\  \  |  |  \  \|  |_/  /
        |  |   )  /  /__\  \ |  |__/  /|  ___  \
        |  |__/  /  ______  \|  ____  \|  |  \  \
     ___|_______/__/ ____ \__\__|___\__\__|___\__\_______
    |  ___  \ |  ____/ /    \   |  ___  \ |  ____|  ___  \
    |  |  \  \|  |___ /  /\  \  |  |  \  \|  |___|  |  \  \
    |  |__/  /|  ____/  /__\  \ |  |   )  |  ____|  |__/  /
    |  ____  \|  |__/  ______  \|  |__/  /|  |___|  ____  \
    |__|   \__\____/__/      \__\_______/ |______|__|   \__\
                    https://darkreader.org
    */

    /*! Dark reader generated CSS | Licensed under MIT https://github.com/darkreader/darkreader/blob/main/LICENSE */

    [...]
}
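
If you'd rather keep the generated rules in their own file instead of pasting the @media wrapper inline, the same effect can be had by loading the stylesheet conditionally (the file name is just an example):

<link rel="stylesheet" href="/css/darkreader.css" media="(prefers-color-scheme: dark)">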

Playing with ChatGPT

ChatGPT, impressive if a bit dumb

ChatGPT has been taking the internet by storm with its impressive ability to spit out reasonable-sounding text, but is it all that great? Is it Clippy back from the dead to take revenge on us all?

Key take-aways:

  • It takes all character out of your sentences.
  • It has a habit of dropping key details for a more generic sentence.
  • The longer your inputs are, the more likely it is to spit out nonsense. Giving it small sections at a time to work on is better.
  • Everything it produces needs to be proofread.
  • It really, really likes the word additionally for some reason.

Biggest criticism

ChatGPT has a very strange ability to "corporatize" just about anything and remove everything remotely human from a sentence.

Take this for example:

Me: reword and expand: chat gtp is able to produce quite good sentences but has all the contextual intelligence of a flat worm.

ChatGTP: While ChatGPT is capable of generating high-quality sentences through natural language processing, its contextual understanding is still limited compared to human intelligence. In other words, while ChatGPT can produce coherent and grammatically correct responses, it lacks the depth of understanding and nuance that humans possess. This is because ChatGPT relies on a vast database of pre-existing text to generate its responses, rather than having the ability to truly comprehend the meaning behind language like a human brain can. Therefore, while ChatGPT can provide helpful information and engage in conversation, it should be used as a tool to assist human intelligence, rather than a replacement for it.

If that paragraph doesn’t scream corporate press release maybe you need to take a day off from work and let your brain detox.

Final thoughts

ChatGPT is a tool that, while amazing at first glance, has its issues and bugs. It's certainly a step above Clippy and has its place helping to flesh out emails, memos and cover letters, but it's formal and stiff. An article in The New Yorker titled "ChatGPT Is a Blurry JPEG of the Web" was close; ChatGPT is actually a smudge tool for text.

Now let's see ChatGPT's take on this article:

Me: write a article regarding the pros and cons of chatgtp in this style: [...]

ChatGTP: ChatGTP has been making waves on the internet with its ability to produce coherent and grammatically correct sentences through natural language processing. However, is it really as impressive as it appears to be? This article will take a closer look at the pros and cons of ChatGTP.

One of the most significant drawbacks of ChatGTP is its tendency to remove all character from your sentences. It has a habit of dropping key details to produce more generic responses, which can result in it missing the point entirely. Additionally, the longer the input, the more likely it is to produce nonsense. Therefore, giving it smaller sections to work on is generally more effective.

Another major drawback of ChatGTP is that everything it produces needs to be proofread. While it can generate coherent and grammatically correct sentences, it can also produce awkward phrasing or sentences that don't quite make sense. This means that anything produced by ChatGTP should always be double-checked to ensure accuracy and clarity.

The biggest criticism of ChatGTP, however, is its tendency to "corporatize" just about anything and remove all human qualities from a sentence. It has a knack for producing sentences that sound like corporate press releases, which can make it difficult to engage in a natural conversation.

In conclusion, while ChatGTP is an impressive tool, it is not as impressive as it initially appears to be. It has some significant drawbacks, including removing character from sentences, producing generic responses, and its tendency to "corporatize" language. Therefore, it should be used as a tool to assist human intelligence, rather


Readable Nginx configs

Configure your linux server

A recent project announcement on the subreddit /r/selfhosted reminded me to post about a simple trick I've started using to make configuring the Nginx webserver a little more ergonomic.

Nginx allows you to include files inline in your configs to make re-using code simple. An example would be all your SSL proxy settings, as generated using the Mozilla SSL Configuration Generator.

Simply add the generated config to a file like /etc/nginx/include.d/include.ssl_sec (with your cert paths modified) and include it from each server block.
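
A trimmed sketch of what that include file might hold, based on the Mozilla modern profile (cert paths are placeholders; double-check against the generator's current output):

ssl_certificate     /etc/ssl/example.tld/fullchain.pem;
ssl_certificate_key /etc/ssl/example.tld/privkey.pem;

ssl_session_timeout 1d;
ssl_session_cache   shared:MozSSL:10m;
ssl_session_tickets off;

# Modern profile: TLS 1.3 only, client chooses the cipher
ssl_protocols TLSv1.3;
ssl_prefer_server_ciphers off;

# HSTS (2 years)
add_header Strict-Transport-Security "max-age=63072000" always;

A service config then boils down to something like this: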

upstream example_service {
  server 127.0.0.1:8080;
  keepalive 32;
}

server {
  server_name example.tld;

  #Mozilla modern tls config
  include /etc/nginx/include.d/include.ssl_sec;

  location / {
    #Common Proxy settings
    include /etc/nginx/include.d/include.proxy_settings;

    proxy_pass http://example_service/;
  }
}
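
And the include.proxy_settings file referenced above is just the usual proxy boilerplate, roughly along these lines (a sketch; the last two directives assume an upstream block with keepalive like the one above):

proxy_set_header Host              $host;
proxy_set_header X-Real-IP         $remote_addr;
proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;

# HTTP/1.1 and an empty Connection header let the upstream keepalive pool be reused
proxy_http_version 1.1;
proxy_set_header Connection "";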

Now you have a nice, easy config file that can be used as a template for new services. Splitting common settings out into include files really makes it quick and easy to deploy new services without needing more complicated projects like Nginx Proxy Manager.


Eking out some Nextcloud performance

Tweaking my linux server

Nextcloud is notorious in the selfhosted community for being difficult to get to a decent level of performance. After enabling basic caching with both APCu and Redis there are several options to trim some fat. Once all the easy stuff is taken care of, the hidden bottlenecks are where I am focusing my efforts. So far I have had some success by switching to UNIX sockets in my dockerised Nextcloud deployment.

Generally I've found:

  • Shipping file logging off to syslog made a noticeable visual difference over logging to the nextcloud.log file (see the config snippet just after this list).
  • Using PostgreSQL is often touted as a decent option for easy performance gains.
  • Using the Preview Generator app alongside Imaginary makes images less of an issue for general browsing.
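
For reference, pointing Nextcloud's logging at syslog is a small config.php change (a sketch; the tag and log level are just the values I'd reach for):

'log_type' => 'syslog',
'syslog_tag' => 'nextcloud',
'loglevel' => 2,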

But what else can you do after that? Try to find the bottlenecks in your setup. Be it spinning rust vs SSD vs M.2 drives, there is usually some form of low-hanging fruit causing issues. A big potential issue is of course your abstraction layer, in my case Docker. Docker adds some minor overhead to any service, a trade-off for simplifying deployment and replication, and one of these overheads is the networking stack. My understanding is that Docker's networking, when not in host mode, acts as a NAT, even when one container is talking to another. One way to bypass the networking overhead between local services is to use UNIX sockets.

In researching how to achieve this I found @jonbaldie's post on How to Connect to Redis with Unix Sockets in Docker. A few modifications and I was ready to test and verify that this made a difference.

Setup

These are the modifications made to my docker-compose file. Note that I have made a few changes to avoid having to set the shared folder and socket permissions to 777. This is mainly handled by running each container with the group ID of the www-data group from the Nextcloud app container.

version: '2'

services:
    #Temporary busybox container to set correct permissions to shared socket folder
    tmp:
      image: busybox
      command: sh -c "chown -R 33:33 /tmp/docker/ && chmod -R 770 /tmp/docker/"
      volumes:
        - /tmp/docker/

    db:
      container_name: nextcloud_db
      image: postgres:14-alpine
      restart: always
      volumes:
        - ./volumes/postgresql:/var/lib/postgresql/data
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
      env_file:
        - db.env
      # Unix socket modifications
      # Run as a member of the www-data GID 33 group but keep postgres uid as 70
      user: "70:33"
      # Add the /tmp/docker/ socket folder to postgres
      command: postgres -c unix_socket_directories='/var/run/postgresql/,/tmp/docker/'
      depends_on:
        - tmp
      # Add shared volume from Temporary busybox container
      volumes_from:
        - tmp

    redis:
      container_name: nextcloud_redis
      image: redis:alpine
      restart: always
      volumes:
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
      # Unix socket modifications
        - ./volumes/redis.conf:/etc/redis.conf
      # Run redis with custom config
      command: redis-server /etc/redis.conf
      # Run as a member of the www-data GID 33 group but keep redis uid as 999
      user: "999:33"
      depends_on:
        - tmp
      # Add shared volume from Temporary busybox container
      volumes_from:
        - tmp

    app:
      container_name: nextcloud_app
      image: nextcloud:apache
      restart: always
      ports:
        - 127.0.0.1:9001:80
      volumes:
        - ./volumes/nextcloud:/var/www/html
        - ./volumes/php.ini:/usr/local/etc/php/conf.d/zzz-custom.ini
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
      depends_on:
        - db
        - redis
      # Unix socket modifications
      # Add shared volume from Temporary busybox container
      volumes_from:
        - tmp

This is the redis.conf file that tells Redis to only listen on the unix socket, and what permissions to use on said socket. Note I have a password enabled here; it isn't really needed if Redis is not exposed publicly, but I've used it as best practice.

# 0 = do not listen on a port
port 0

# listen on localhost only
bind 127.0.0.1

# create a unix domain socket to listen on
unixsocket /tmp/docker/redis.sock

# set permissions for the socket
unixsocketperm 770

requirepass [password]

Finally, the Nextcloud config.php updated to reflect the connection changes:

'dbtype' => 'pgsql',
'dbhost' => '/tmp/docker/',
'dbname' => 'nextcloud',
'dbuser' => 'nextcloud',
'dbpassword' => '{password}',

'memcache.local' => '\\OC\\Memcache\\APCu',
'memcache.distributed' => '\\OC\\Memcache\\Redis',
'memcache.locking' => '\\OC\\Memcache\\Redis',
'redis' =>
array (
  'host' => '/tmp/docker/redis.sock',
  'port' => 0,
  'dbindex' => 0,
  'password' => '{password}',
  'timeout' => 1.5,
),
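
Before pointing Nextcloud at the sockets, it's worth checking from inside the app container that they exist and carry the expected group (a quick sanity check, not part of the setup itself):

sudo docker exec -it nextcloud_app ls -la /tmp/docker/
# you should see .s.PGSQL.5432 and redis.sock, both group-owned by GID 33 (www-data)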

Verifying the changes made a difference.

There is not much point in doing this without verification, otherwise we are all just participating in a cargo cult seeking performance enlightenment. With that in mind I set out to do some very basic benchmarks to ensure the performance gain I felt when navigating my Nextcloud install was in fact happening.

I did all my testing inside my Nextcloud container to better simulate a real-world result. I temporarily modified redis.conf to allow both socket and TCP connections, then installed the redis-tools and postgresql-contrib packages to get the tools required.

# 0 = do not listen on a port
# port 0
port 6379

# listen on localhost only
# bind 127.0.0.1
bind 0.0.0.0

sudo docker exec -it nextcloud_app bash

apt update && apt install redis-tools && apt install postgresql-contrib

I then performed the same tests as @jonbaldie, using the commands time redis-benchmark -a [password] -h redis -p 6379 and time redis-benchmark -a [password] -s /tmp/docker/redis.sock

REDIS    TCP (s)   UNIX (s)   % Diff
Real     242.8     165.5      32%
User     63.4      60.9       4%
Sys      132.1     70.6       47%
Total    438.4     297.1      32%

As you can see, on my system there was a staggering 32% difference compared to @jonbaldie's 13%. Clearly the Redis socket is a very worthwhile modification.

Using some of what I learned from reading this article, I now wanted to test my Postgres database using its benchmarking tool, pgbench. I did a quick database backup just in case, but it shouldn't harm the Nextcloud db as it only adds the tables pgbench_accounts, pgbench_branches, pgbench_tellers and pgbench_history to perform the tests.
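
The quick backup itself is a one-liner against the db container (container, user and database names as per the compose file above; depending on your pg_hba settings you may need to supply the password, for example via PGPASSWORD):

sudo docker exec nextcloud_db pg_dump -U nextcloud nextcloud > nextcloud_backup.sql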

First, the initialisation of the testing tables over TCP:

pgbench -h db -i -p 5432 -U nextcloud -d nextcloud

...

done in 1.85 s (drop tables 0.00 s, create tables 0.13 s, client-side generate 0.60 s, vacuum 0.60 s, primary keys 0.51 s)

Then I ran 3 tests using the command pgbench -h db -c 10 -p 5432 -U nextcloud -d nextcloud, simulating 10 clients.

Postgres TCP                               1           2           3           Average
latency average (ms)                       265.887     333.644     280.873     293.468
tps (including connections establishing)   37.60993    29.972067   35.603308   34.3951016666667
tps (excluding connections establishing)   38.089613   30.24576    35.997626   34.7776663333333

Clean up in between tests:

psql -h /tmp/docker/ -U nextcloud -d nextcloud

DROP TABLE pgbench_accounts, pgbench_branches, pgbench_tellers, pgbench_history;

Then the initialisation of the testing tables again, this time over the socket:

pgbench -h /tmp/docker/ -i -U nextcloud -d nextcloud

...

done in 1.42 s (drop tables 0.00 s, create tables 0.11 s, client-side generate 0.68 s, vacuum 0.25 s, primary keys 0.38 s).

Then I ran 3 tests using the command pgbench -h /tmp/docker/ -c 10 -U nextcloud -d nextcloud, simulating 10 clients.

Postgres UNIX                              1           2           3           Average
latency average (ms)                       291.566     290.129     222.446     268.047
tps (including connections establishing)   34.297528   34.467479   44.954712   37.906573
tps (excluding connections establishing)   34.397523   34.570084   45.137941   38.0351826666667

My results show a much more modest performance difference with the database. But it's still an unambiguous improvement so well worth the minor amount of effort.

                                           % Diff
latency average                            9.00%
tps (including connections establishing)   10.00%
tps (excluding connections establishing)   9.00%
testing tables initialisation              23.00%

Finding, testing and minimising bottlenecks is possibly the most difficult task for any selfhosting admin. I hope you found this of use in your own bottleneck hunting journey.


Authentik Gotify Login Notifications

SSO all the things

Continuing with my journey of utilising Authentik for my SSO: after reading a rather good comment by /u/internallogictv over on the subreddit /r/selfhosted, I wanted to add a few more protections. The simplest of these is to send myself a notification whenever a login or a failed login occurs.

Step 1

First things first, we create a new application in Gotify in order to generate a token for Authentik to use. Select the Apps tab and press the Create Application button.

Gotify create an application

Step 2

Create a new Gotify property mapping under Admin Interface -> Customisation -> Property Mappings.

I've built this so a failed login is set to the maximum Gotify priority level regardless of the user group. For successful logins I divide the levels based on the group gotify-users. I also create a geo URI for mapping applications on Android: you will be able to click the notification and it will open the city co-ordinates, although you may have to skip this if you don't have the geoipupdate container configured.

try:
    # Get the login failed username
    event_user = notification.event.context["username"]
except:
    # Get the login succeeded username
    event_user = notification.event.user["username"]

if notification.event.action == "login_failed":
    priority = 7
    severity = "warning"
elif ak_is_group_member( ak_user_by(username=event_user), name="gotify-users" ): # Check if the user belongs to group
    priority = 1
    severity = notification.severity
else: # default notification settings
    priority = 0
    severity = notification.severity

# Build a geo uri for opening a mapping applications from the gotify notification.
geo_uri = f"geo:{notification.event.context['geo']['lat']},{notification.event.context['geo']['long']}?q={notification.event.context['geo']['lat']},{notification.event.context['geo']['long']}"

title = f"{severity} from authentik {notification.event.action.replace('_', ' ')}".capitalize()

message = f"New {notification.event.action.replace('_', ' ')} for {event_user} was detected coming from {notification.event.context['geo']['city']} {notification.event.context['geo']['country']} from the IP address: {str(notification.event.client_ip)}".capitalize()

# Build the gotify payload
gotify_payload = {
    "title": title,
    "message": message,
    "priority": priority,
    "extras": { "client::notification": { "click": { "url": geo_uri } }},
}

return gotify_payload

Step 3

Create a new notification transport under Admin Interface -> Events -> Notification Transports using the Webhook (generic) mode, with the webhook URL set to your Gotify message URL including the token created in step one: https://example.tld/gotify/message?token=yourtokenhere
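
Before wiring it into authentik, the URL and token can be sanity-checked with a quick curl against the same example URL:

curl "https://example.tld/gotify/message?token=yourtokenhere" \
  -F "title=Test from authentik" \
  -F "message=Webhook reachable" \
  -F "priority=5"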

Step 4

Finally we create the notification rule that actually calls the notification transport. Under Admin Interface -> Events -> Notification Rules, create a new rule login-notification sending to the group of your choice (this doesn't really matter, but it will display an ugly JSON string as the notification in the web UI). Select the Gotify notification transport you created and set the Severity to Notice.

Now we have to bind the policies authentik-core-login and authentik-core-login-failed to the rule. Expand the login-notification rule and press Create Policy. Select Event Matcher Policy, name it authentik-core-login, enable the Execution Logging option, then select the Login action and the authentik Core App. Finish, and repeat for the Login Failed action.

Now you should be receiving Login and Login Failed notifications from your Authentik instance over Gotify. I hope I'll be able to update this to pull different tokens from the user/group attributes in the future to better separate notifications to individual users/admins.


Node-Red SSO with Authentik

Node-RED is a flow-based programming tool, originally developed by IBM’s Emerging Technology Services team and now a part of the JS Foundation.

Following my last post regarding SSO with Authentik, I thought I should post my passportjs configuration for Node-RED and OpenID Connect. Currently user accounts work, however I haven't got group-based permissions set up yet.

Note This guide is based off the Gitea integration guide from the Authentik docs.

Preparation

The following placeholders will be used:

authentik.company is the FQDN of authentik.

nodered.company is the FQDN of nodered.

Step 1

In authentik, create an OAuth2/OpenID Provider (under Resources/Providers) with these settings:

note

Only settings that have been modified from default have been listed.

Protocol Settings

Name: nodered
Signing Key: Select any available key

note

Take note of the Client ID and Client Secret, you'll need to give them to nodered in Step 3.

Step 2

In authentik, create an application (under Resources/Applications) which uses this provider. Optionally apply access restrictions to the application using policy bindings.

note

Only settings that have been modified from default have been listed.

Name: nodered
Slug: nodered-slug
Provider: nodered

Step 3

note

We are assuming node-red is installed under docker

Navigate to the node-red data volume data/node_modules/. Alternatively, enter the docker container with sudo docker exec -it nodered bash and cd /data/node_modules

Use npm to install passport-openidconnect: npm install passport-openidconnect

Edit the node-red settings.js file /data/settings.js

adminAuth: {
    type: "strategy",
    strategy: {
        name: "openidconnect",
        label: 'Sign in with authentik',
        icon: "fa-cloud",
        strategy: require("passport-openidconnect").Strategy,
        options: {
            issuer: 'https://authentik.company/application/o/<application-slug>/',
            authorizationURL: 'https://authentik.company/application/o/authorize/',
            tokenURL: 'https://authentik.company/application/o/token/',
            userInfoURL: 'https://authentik.company/application/o/userinfo/',
            clientID: '<Client ID (Key): Step 1>',
            clientSecret: '<Client Secret: Step 1>',
            callbackURL: 'https://nodered.company/auth/strategy/callback/',
            scope: ['email', 'profile', 'openid'],
            proxy: true,
            verify: function(issuer, profile, done) {
                done(null, profile)
            }
        }
    },
    users: function(user) {
        return Promise.resolve({ username: user, permissions: "*" });
    }
},
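
After saving settings.js, restart the container so the new adminAuth strategy is picked up (assuming the container is named nodered as above):

sudo docker restart nodered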

SSO with Authentik

SSO all the things

A while back I wrote about minimising my attack surface by utilising default deny and whitelists in Nginx. Now I've gotten into the weeds with authentication and deployed an SSO (single sign-on) service on my selfhosted infrastructure.

What is Authentik?

Authentik is an SSO (single sign-on) provider: much like with Google's services, you sign in once and then you can access all your services. Authentication has been a big bugbear with selfhosted applications, with the Roundcubemail TTRSS plugin, auto authentication for Tiny Tiny RSS against an IMAP Server and Codiad External Authentication via IMAP being a few of the workarounds I have hacked together over the years.

Most importantly for my use case is the single pane of glass to access my services:

A nice dashboard really brings it all together

The Issues

Introducing an SSO system adds complexity and potential problems, so it's not all smooth sailing; passwords are still a thing because they are simple, reliable and understandable.

New project, new problems, limited reviews

Authentik's first beta release was in Jan 2020, so it's very new and has had a few teething issues and quite a few bugs. I highly recommend utilising additional security methods in front of Authentik (IDS/IPS, geo blocking and ideally using a VPN for access) until it reaches maturity.

Poor Documentation

Quite frankly, the documentation isn't great if you are attempting to figure out HOW it's supposed to work. Thankfully the integration guides included in the docs cover some of the gaps, but some reading between the lines is needed for a while yet.

Limited compatibility

Not everything has SSO support (SAML, OAuth/OpenID Connect or reverse proxy authentication); thankfully this isn't as hard to deal with as it once was:

The main issue I have faced is with HomeAssistant. The developers have been reluctant/resistant to adding additional authentication methods to the project. There is the hass-auth-header project created by the developer of Authentik, however the HomeAssistant Android app is frustratingly a major sticking point.


New maintainers needed for the Thunderbird Lookout Fix-version add-on

Lookout!

In August 2018 I took over maintenance of the Thunderbird add-on Lookout-fix-version. I soon set up the GitHub organisation TB-throwback so that future development could be expanded and transferred more easily if I stopped working on it.

LookOut Fix-Version is a plugin which allows Thunderbird to interface with Microsoft’s mail tools by decoding metadata and attachments encapsulated/embedded in a TNEF (Transport Neutral Encapsulation Format) encoded attachment (aka winmail.dat).

Transport Neutral Encapsulation Format or TNEF is a proprietary email attachment format used by Microsoft Outlook and Microsoft Exchange Server. An attached file with TNEF encoding is most often named winmail.dat or win.dat, and has a MIME type of Application/MS-TNEF. The official (IANA) media type, however, is application/vnd.ms-tnef. Source: https://en.wikipedia.org/wiki/Transport_Neutral_Encapsulation_Format

Unfortunately I had to stop working on LookOut a while ago because my employer switched us back to Outlook, leaving me with no company time to maintain the project any more. For the last major release of Thunderbird I received a lot of help from the TB team's John Bieling to bootstrap a workaround and get things rolling again; now Thunderbird 91 is coming and it's another case of the old XUL add-ons not working in the WebExtension world.

After receiving no takers to my maintainer request over on the mailing list, I have decided that if I am unable to locate new maintainers I will archive the project on GitHub. I hope the Thunderbird devs finally assign Bug 77811 to someone, as I know proper integration for decoding TNEF emails was added to the roadmap. In the meantime, if you are interested in updating the project please let me know, as I'll be actively monitoring the GitHub project and the issue calling for new maintainers.