Posts for year 2022

Readable Nginx configs

Configure your linux server

A recent project announcement on the subreddit /r/selfhosted reminded me to post about a simple trick I've started using to make configuring the Nginx webserver a little more ergonomic.

Nginx allows you to include files inline in your configs, which makes re-using code simple. An example would be all your SSL settings, as generated using the Mozilla ssl-config generator.
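
To give an idea of what that include might contain, here is a trimmed sketch based on the Mozilla modern profile (the certificate paths are placeholders, and your generated config may differ):

listen 443 ssl http2;
listen [::]:443 ssl http2;

ssl_certificate /path/to/fullchain.pem;
ssl_certificate_key /path/to/privkey.pem;

# Mozilla "modern" profile: TLSv1.3 only
ssl_protocols TLSv1.3;
ssl_prefer_server_ciphers off;

ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;

# HSTS (max-age of two years)
add_header Strict-Transport-Security "max-age=63072000" always;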

Simply save that config to a file like /etc/nginx/include.d/include.ssl_sec, with your cert paths modified, and include it in your config:

upstream example_service {
  server 127.0.0.1:8080;
  keepalive 32;
}

server {
  server_name example.tld;

  #Mozilla modern tls config
  include /etc/nginx/include.d/include.ssl_sec;

  location / {
    #Common Proxy settings
    include /etc/nginx/include.d/include.proxy_settings;

    proxy_pass http://example_service/;
  }
}
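
The include.proxy_settings file referenced above isn't shown here; as an assumption of what such a file commonly contains, a sketch might look like this:

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;

# HTTP/1.1 and a cleared Connection header are required for the
# upstream keepalive pool to actually be used
proxy_http_version 1.1;
proxy_set_header Connection "";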

Now you have a nice clean config file that can be used as a template for new services. Moving common configuration out into include files makes it quick and easy to deploy new services without needing more complicated projects like Nginx Proxy Manager.


Eking out some Nextcloud performance

Tweaking my linux server

Nextcloud is notorious in the selfhosted community for being difficult to bring up to a decent level of performance. After enabling the basic caching with both APCu and Redis, there are several options to trim some fat. Once all the easy stuff is taken care of, the hidden bottlenecks are where I am focusing my efforts. So far I have had some success by switching to UNIX sockets in my dockerised Nextcloud deployment.

Generally I've found:

  • Shipping logging off to syslog made a noticeable difference in perceived speed compared to logging to the nextcloud.log file (see the config snippet after this list).
  • Using PostgreSQL is often touted as a decent option for easy performance gains.
  • Using the Preview Generator app alongside Imaginary makes images less of an issue for general browsing.
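
For the syslog change, the relevant config.php options are the following (a minimal sketch; the tag value is my own choice):

'log_type' => 'syslog',
'syslog_tag' => 'nextcloud',
'loglevel' => 2,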

But what else can you do after that? Try to find the bottlenecks in your setup. Be it spinning rust vs SSD vs M.2 drives, there is usually some low hanging fruit causing issues. A big potential issue is of course your abstraction layer, in my case Docker. Docker adds some minor overhead to any service, a trade-off for simplifying deployment and replication, and one of these overheads is the networking stack. My understanding is that Docker's networking, when not in host mode, acts as a NAT, even when one container is talking to another. One method of bypassing networking overhead between local services is the use of unix sockets.

In researching how to achieve this, I found @jonbaldie's post on How to Connect to Redis with Unix Sockets in Docker. A few modifications later, I was ready to test and verify that this made a difference.

Setup

These are the modifications made to my docker-compose file. Note that I have adjusted things to avoid setting the folder and socket permissions to 777; this is mainly handled by running each container with the group ID of the www-data group from the Nextcloud app container.

version: '2'

services:
    #Temporary busybox container to set correct permissions to shared socket folder
    tmp:
      image: busybox
      command: sh -c "chown -R 33:33 /tmp/docker/ && chmod -R 770 /tmp/docker/"
      volumes:
        - /tmp/docker/

    db:
      container_name: nextcloud_db
      image: postgres:14-alpine
      restart: always
      volumes:
        - ./volumes/postgresql:/var/lib/postgresql/data
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
      env_file:
        - db.env
      # Unix socket modifications
      # Run as a member of the www-data GID 33 group but keep postgres uid as 70
      user: "70:33"
      # Add the /tmp/docker/ socket folder to postgres
      command: postgres -c unix_socket_directories='/var/run/postgresql/,/tmp/docker/'
      depends_on:
        - tmp
      # Add shared volume from Temporary busybox container
      volumes_from:
        - tmp

    redis:
      container_name: nextcloud_redis
      image: redis:alpine
      restart: always
      volumes:
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
      # Unix socket modifications
        - ./volumes/redis.conf:/etc/redis.conf
      # Run redis with custom config
      command: redis-server /etc/redis.conf
      # Run as a member of the www-data GID 33 group but keep redis uid as 999
      user: "999:33"
      depends_on:
        - tmp
      # Add shared volume from Temporary busybox container
      volumes_from:
        - tmp

    app:
      container_name: nextcloud_app
      image: nextcloud:apache
      restart: always
      ports:
        - 127.0.0.1:9001:80
      volumes:
        - ./volumes/nextcloud:/var/www/html
        - ./volumes/php.ini:/usr/local/etc/php/conf.d/zzz-custom.ini
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
      depends_on:
        - db
        - redis
      # Unix socket modifications
      # Add shared volume from Temporary busybox container
      volumes_from:
        - tmp
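
With the stack up, and assuming the container names above, you can quickly confirm that the sockets exist with the expected ownership:

sudo docker exec nextcloud_app ls -l /tmp/docker/
# Expect redis.sock and the postgres .s.PGSQL.5432 socket, group-owned by www-data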

This is the redis.conf file that tells Redis to listen only on the unix socket, and what permissions to use on said socket. Note I have a password enabled here; it is not really needed if the socket is not exposed publicly, but I've used it as a best practice.

# 0 = do not listen on a port
port 0

# listen on localhost only
bind 127.0.0.1

# create a unix domain socket to listen on
unixsocket /tmp/docker/redis.sock

# set permissions for the socket
unixsocketperm 770

requirepass [password]
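
Once the containers are up, you can sanity-check the socket from inside the app container (it needs redis-tools, which is installed in the verification section below):

redis-cli -s /tmp/docker/redis.sock -a [password] ping
# Expected reply: PONG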

Finally, I updated the Nextcloud config.php to reflect the connection changes:

'dbtype' => 'pgsql',
'dbhost' => '/tmp/docker/',
'dbname' => 'nextcloud',
'dbuser' => 'nextcloud',
'dbpassword' => '{password}',

'memcache.local' => '\\OC\\Memcache\\APCu',
'memcache.distributed' => '\\OC\\Memcache\\Redis',
'memcache.locking' => '\\OC\\Memcache\\Redis',
'redis' =>
array (
  'host' => '/tmp/docker/redis.sock',
  'port' => 0,
  'dbindex' => 0,
  'password' => '{password}',
  'timeout' => 1.5,
),
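
With the config in place, a quick way to confirm Nextcloud can still reach both services is the occ status command (assuming the container name used above):

sudo docker exec -u www-data nextcloud_app php occ status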

Verifying the changes made a difference

There is not much point in doing this without verification; otherwise we are all just participating in a cargo cult seeking performance enlightenment. With that in mind, I set out to do some very basic benchmarks to ensure the performance gain I felt when navigating my Nextcloud install was in fact happening.

I did all my testing inside my Nextcloud container to better simulate a real-world result. I temporarily modified the redis.conf to allow both socket and TCP/IP connections, then installed the redis-tools and postgresql-contrib packages to get the tools required.

# 0 = do not listen on a port
# port 0
port 6379

# listen on localhost only
# bind 127.0.0.1
bind 0.0.0.0

Then I entered the app container and installed the tools:

sudo docker exec -it nextcloud_app bash

apt update && apt install redis-tools postgresql-contrib

I then performed the same tests as @jonbaldie, using the commands:

time redis-benchmark -a [password] -h redis -p 6379
time redis-benchmark -a [password] -s /tmp/docker/redis.sock

REDIS    TCP (s)    UNIX (s)    % Diff
Real     242.8      165.5       32%
User     63.4       60.9        4%
Sys      132.1      70.6        47%
Total    438.4      297.1       32%

As you can see, on my system I saw a staggering 32% difference, compared to @jonbaldie's 13%. Clearly the Redis socket is a very worthwhile modification.

Using some of what I learned from reading this article, I now wanted to test my Postgres database using its benchmarking tool pgbench. I did a quick database backup just in case, but it shouldn't harm the Nextcloud db, as it only adds the tables pgbench_accounts, pgbench_branches, pgbench_tellers and pgbench_history to perform the tests.

First, I timed the initialisation of the test tables over TCP:

pgbench -h db -i -p 5432 -U nextcloud -d nextcloud

...

done in 1.85 s (drop tables 0.00 s, create tables 0.13 s, client-side generate 0.60 s, vacuum 0.60 s, primary keys 0.51 s)

Then I ran 3 tests using the command pgbench -h db -c 10 -p 5432 -U nextcloud -d nextcloud, simulating 10 clients.

Postgres TCP                                1          2          3          Average
latency average (ms)                        265.887    333.644    280.873    293.468
tps (including connections establishing)    37.610     29.972     35.603     34.395
tps (excluding connections establishing)    38.090     30.246     35.998     34.778

Cleaning up in between tests:

psql -h /tmp/docker/ -U nextcloud -d nextcloud

DROP TABLE pgbench_accounts, pgbench_branches, pgbench_tellers, pgbench_history;

Then I timed the test table initialisation again, this time over the unix socket:

pgbench -h /tmp/docker/ -i -U nextcloud -d nextcloud

...

done in 1.42 s (drop tables 0.00 s, create tables 0.11 s, client-side generate 0.68 s, vacuum 0.25 s, primary keys 0.38 s)

Then I ran 3 tests using the command pgbench -h /tmp/docker/ -c 10 -U nextcloud -d nextcloud, simulating 10 clients.

Postgres UNIX                               1          2          3          Average
latency average (ms)                        291.566    290.129    222.446    268.047
tps (including connections establishing)    34.298     34.467     44.955     37.907
tps (excluding connections establishing)    34.398     34.570     45.138     38.035

My results show a much more modest performance difference for the database, but it's still an unambiguous improvement, so well worth the minor effort.

                                            % Diff
latency average                             9%
tps (including connections establishing)    10%
tps (excluding connections establishing)    9%
testing tables initialisation               23%

Finding, testing and minimising bottlenecks is possibly the most difficult task for any selfhosting admin. I hope you found this useful in your own bottleneck hunting journey.


Authentik Gotify Login Notifications

SSO all the things

Continuing my journey of utilising Authentik for SSO: after reading a rather good comment by /u/internallogictv over on the subreddit /r/selfhosted, I wanted to add a few more protections. The simplest of these is to send myself a notification whenever a login or a failed login occurs.

Step 1

First things first, we create a new application in Gotify in order to generate a token for Authentik to use. Select the Apps tab and press the Create Application button.

Gotify create an application
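
If you prefer the command line, the application can also be created via Gotify's REST API; a sketch, assuming admin credentials and the https://example.tld/gotify/ path used later in this post:

curl -u admin:yourpassword \
  -H "Content-Type: application/json" \
  -d '{"name": "authentik", "description": "Login notifications"}' \
  https://example.tld/gotify/application
# The JSON response includes the application token used in Step 3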

Step 2

Create a new gotify property mapping in the Admin Interface -> Customisation -> Property Mappings.

I've built this so a failed login is set to a high Gotify priority level regardless of the user group. For successful logins, I divide the levels based on membership of the group gotify-users. I also create a geo URI for mapping applications on Android: you will be able to click the notification and it will open the city co-ordinates. You may have to skip this part if you don't have the geoipupdate container configured.

try:
    # Failed logins only store the attempted username in the event context
    event_user = notification.event.context["username"]
except KeyError:
    # Successful logins store the authenticated user on the event itself
    event_user = notification.event.user["username"]

if notification.event.action == "login_failed":
    priority = 7
    severity = "warning"
elif ak_is_group_member( ak_user_by(username=event_user), name="gotify-users" ): # Check if the user belongs to group
    priority = 1
    severity = notification.severity
else: # default notification settings
    priority = 0
    severity = notification.severity

# Build a geo uri for opening a mapping applications from the gotify notification.
geo_uri = f"geo:{notification.event.context['geo']['lat']},{notification.event.context['geo']['long']}?q={notification.event.context['geo']['lat']},{notification.event.context['geo']['long']}"

title = f"{severity} from authentik {notification.event.action.replace('_', ' ')}".capitalize()

message = f"New {notification.event.action.replace('_', ' ')} for {event_user} was detected coming from {notification.event.context['geo']['city']} {notification.event.context['geo']['country']} from the IP address: {str(notification.event.client_ip)}".capitalize()

# Build the gotify payload
gotify_payload = {
    "title": title,
    "message": message,
    "priority": priority,
    "extras": { "client::notification": { "click": { "url": geo_uri } }},
}

return gotify_payload

Step 3

Create a new notification transport under Admin Interface -> Events -> Notification Transports. Set the mode to Webhook (generic), and the webhook URL to your Gotify message URL with the token created in Step 1: https://example.tld/gotify/message?token=yourtokenhere
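
Before wiring it into Authentik, it's worth confirming the URL and token actually deliver; a quick manual test against Gotify's message endpoint:

curl -X POST "https://example.tld/gotify/message?token=yourtokenhere" \
  -F "title=Transport test" \
  -F "message=Hello from authentik setup" \
  -F "priority=5"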

Step 4

Finally, we create the notification rule that actually calls the notification transport. Under Admin Interface -> Events -> Notification Rules, create a new rule login-notification sending to the group of your choice (this doesn't really matter, but it will display an ugly JSON string as a notification on the web UI). Select the Gotify notification transport you created and set the Severity to Notice.

Now we have to bind the policies authentik-core-login and authentik-core-login-failed to the rule. Expand the login-notification rule and press Create Policy. Select Event Matcher Policy, name it authentik-core-login, enable the Execution Logging option, then select the Login action and the authentik Core app. Finish, and repeat for the Login Failed action.

Now you should be receiving Login and Login Failed notifications from your Authentik instance over Gotify. I hope to update this in the future to pull different tokens from user/group attributes, to better separate notifications for individual users and admins.


Node-Red SSO with Authentik

Node-RED is a flow-based programming tool, originally developed by IBM’s Emerging Technology Services team and now a part of the JS Foundation.

Following my last post regarding SSO with Authentik, I thought I should post my passportjs configuration for Node-RED and OpenID Connect. Currently user accounts work; however, I haven't gotten group-based permissions set up yet.

Note: This guide is based on the Gitea integration guide from the Authentik docs.

Preparation

The following placeholders will be used:

authentik.company is the FQDN of authentik.

nodered.company is the FQDN of nodered.

Step 1

In authentik, create an OAuth2/OpenID Provider (under Resources/Providers) with these settings:

note

Only settings that have been modified from default have been listed.

Protocol Settings

Name: nodered
Signing Key: Select any available key

note

Take note of the Client ID and Client Secret; you'll need to give them to nodered in Step 3.

Step 2

In authentik, create an application (under Resources/Applications) which uses this provider. Optionally apply access restrictions to the application using policy bindings.

note

Only settings that have been modified from default have been listed.

Name: nodered
Slug: nodered-slug
Provider: nodered

Step 3

note

We are assuming node-red is installed under docker

Navigate to the node-red data volume data/node_modules/. Alternatively, enter the docker container with sudo docker exec -it nodered bash and cd /data/node_modules.

Use npm to install passport-openidconnect:

npm install passport-openidconnect

Edit the node-red settings.js file /data/settings.js:

adminAuth: {
    type: "strategy",
    strategy: {
        name: "openidconnect",
        label: "Sign in with authentik",
        icon: "fa-cloud",
        strategy: require("passport-openidconnect").Strategy,
        options: {
            issuer: "https://authentik.company/application/o/<application-slug>/",
            authorizationURL: "https://authentik.company/application/o/authorize/",
            tokenURL: "https://authentik.company/application/o/token/",
            userInfoURL: "https://authentik.company/application/o/userinfo/",
            clientID: "<Client ID (Key): Step 1>",
            clientSecret: "<Client Secret: Step 1>",
            callbackURL: "https://nodered.company/auth/strategy/callback/",
            scope: ["email", "profile", "openid"],
            proxy: true,
            verify: function(issuer, profile, done) {
                done(null, profile);
            }
        }
    },
    users: function(user) {
        return Promise.resolve({ username: user, permissions: "*" });
    }
},
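
After saving settings.js, restart the container (assuming it's named nodered as above) so the new strategy is loaded:

sudo docker restart nodered
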

SSO with Authentik

SSO all the things

A while back I wrote about minimising my attack surface by utilising default deny and whitelists in Nginx. Now I've gotten into the weeds with authentication and deployed an SSO (single sign-on) service on my selfhosted infrastructure.

What is Authentik?

Authentik is an SSO (single sign-on) provider: much like with Google's services, you sign in once and can then access all your services. This has been a big bugbear with selfhosted applications; the Roundcubemail TTRSS plugin, auto authentication for Tiny Tiny RSS against an IMAP server and Codiad external authentication via IMAP are a few of the workarounds to this issue I have hacked together over the years.

Most importantly for my use case is the single pane of glass to access my services:

A nice dashboard really brings it all together

The Issues

Introducing an SSO system adds complexity and potential problems, so it's not all smooth sailing; passwords are still a thing because they are simple, reliable and understandable.

New Project new problems, limited reviews

Authentik's first beta release was in Jan 2020, so it's very new and has had a few teething issues and quite a few bugs. I highly recommend utilising additional security measures in front of Authentik (IDS/IPS, geo blocking and ideally VPN-only access) until it reaches maturity.

Poor Documentation

Quite frankly, the documentation isn't great if you are attempting to figure out HOW it's supposed to work. Thankfully the integration guides included in the docs cover some of the gaps, but some reading between the lines is needed for a while yet.

Limited compatibility

Not everything has SSO support (SAML, OAuth/OpenID Connect or reverse proxy authentication); thankfully this isn't as hard to deal with as it once was.

The main issue I have faced is with HomeAssistant, whose developers have been reluctant to add additional authentication methods to the project. There is the hass-auth-header project created by the developer of Authentik; however, the HomeAssistant Android app is frustratingly a major sticking point.