
Posts about security

I shouldn't use sudo nano

Over on /r/linux a user going by /u/AlternOSx posted a short You Should Know (YSK): do not use 'sudo vim/nano/emacs..' to edit a file. Instead, set your $EDITOR and use sudoedit or sudo -e.

Long story short, sudoedit copies the file you want to edit to /tmp/file.XXXX and then opens that copy with an unprivileged instance of your editor of choice. When you finish editing, it writes the copy back over the source file. This protects you from accidental privilege escalation through your text editor, for example shell commands spawned from within the editor running as root.
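
Here's what the workflow looks like in practice (a minimal sketch; /etc/hosts is just a stand-in for any root-owned file):

# Pick an editor for this shell session.
export EDITOR=nano

# Edit a root-owned file without running the editor as root.
# sudoedit is equivalent to sudo -e.
sudoedit /etc/hosts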

Knowing this, I came up with a quick way to enforce this best practice by adding the function below to my .bashrc file. Hopefully I can retrain myself not to use sudo nano all the time.

# Define the default editor, in this case nano.
EDITOR=nano

# Catch calls to sudo.
function sudo() {
  if [[ $1 == "$EDITOR" ]]; then
    # The editor has been called.

    if [ -w "$2" ]; then
      # The file is writable by the current user; just use the editor as normal.
      command $EDITOR "$2"
    else
      # The file is not writable; use sudoedit.
      command sudoedit "$2"
    fi
  else
    # Use sudo as normal.
    command /usr/bin/sudo "$@"
  fi
}
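
With the wrapper in place the behaviour looks something like this (the file names are just illustrative):

sudo nano ~/notes.txt       # writable: opens plain, unprivileged nano
sudo nano /etc/hosts        # not writable: transparently becomes sudoedit /etc/hosts
sudo systemctl status ssh   # anything else falls through to the real sudo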

Minimizing my selfhosted attack surface

Tweaking my Linux server, I'm always fairly wary of opening my self-hosted services up to the internet; just how sure am I that the developer has done the right due diligence? Thankfully it's relatively simple to at least limit which parts of a service are accessible to the open internet with Nginx and its allow and deny directives.


Update

Note: if you want a Docker container to access a protected service you will have to pin the subnet in your docker-compose file, as below, so you have a known range to whitelist:

networks:
  node-red-network:
    ipam:
      config:
        - subnet: "172.16.0.0/24"
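
For completeness, a minimal sketch of a service attached to that network (the image name is only an example):

services:
  node-red:
    image: nodered/node-red   # example image; substitute your own service
    networks:
      - node-red-network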

Update 2

A more generic change is to set the default address pools in Docker's /etc/docker/daemon.json file. You then just have to whitelist the 172.16.0.0/16 range.

{
  "default-address-pools":
  [
    {"base":"172.16.0.0/16","size":24}
  ]
}
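
Docker only reads daemon.json at start-up, so apply the change by restarting the daemon (assuming a systemd-based host):

sudo systemctl restart docker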

First you should store the rules in a file; that way you can include it multiple times, which makes it trivial to update in the future. Create the file include_whitelist in your nginx folder, adding your own allow options between satisfy any; and deny all;.

satisfy any;

# allow localhost
allow 127.0.0.1;

# allow a single address
# allow 000.000.000.000;

# allow LAN network
allow 192.168.0.0/24;

# allow WLAN network
allow 192.168.2.0/24;

# allow VPN network
allow 10.1.4.0/24;

# drop rest of the world
deny all;

You then have to include the file in your nginx config. Here I am using TT-RSS as an example. Note that I'm excluding the API and public.php by putting them in separate location blocks that do not include the include_whitelist. This allows me to keep accessing TT-RSS on my mobile phone through the mobile application.

  location ^~ /tt-rss/ {
      include /etc/nginx/include_whitelist;

      access_log off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $remote_addr;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_pass http://127.0.0.1:8280/tt-rss/;

  }

  location ^~ /tt-rss/api {

      access_log off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $remote_addr;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_pass http://127.0.0.1:8280/tt-rss/api;

  }

  location ^~ /tt-rss/public.php {

      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $remote_addr;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_pass http://127.0.0.1:8280/tt-rss/public.php;

  }
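
A quick sanity check of the whitelist (the hostname is a placeholder): requests from an allowed network should get through, while anything else should be rejected by nginx.

# From the LAN/VPN: expect a 200 (or a redirect to the login page)
curl -I https://example.tld/tt-rss/

# From outside (e.g. mobile data): expect 403 Forbidden
curl -I https://example.tld/tt-rss/

# The API location has no whitelist, so it stays reachable from anywhere
curl -I https://example.tld/tt-rss/api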

For Node-RED I wanted an API endpoint for Tasker on my phone. Thankfully this is just as easy to define in Node-RED as it is in nginx: open your GET node and add an extra folder to the endpoint path.

Add an extra folder to your Node-RED endpoints

An example of the Node-RED nginx configuration. Again, just like the TT-RSS example above, I have excluded an api subdirectory by giving it a separate location block.

  location ^~ /node-red/ {
    include /etc/nginx/include_whitelist;

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://127.0.0.1:1880/node-red/;
  }

  location ^~ /node-red/api/ {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://127.0.0.1:1880/node-red/api/;
  }

Now only your API endpoints are globally available. If, like me, you use a firewall, throwing a convenient geo-block up in front lowers the exposure a bit more.


TOTP with sudo (Google Auth)

I was reading the posts over on lobste.rs and saw this post: Is sudo almost useless?. Typically I see sudo as a safety belt to protect you from doing something stupid with administrator privileges rather than as a security shield. But that doesn't mean it can't be both.

As with ssh, outlined in my previous post TOTP with SSH (Google Auth), you can certainly boost sudo's usefulness, security-wise, by throwing 2FA via google-authenticator-libpam on top of it.

Install google-authenticator-libpam

On Debian/Ubuntu (where the package is named libpam-google-authenticator):

    sudo apt update && sudo apt install libpam-google-authenticator

Set up your secret keys

We now need to create the secret key. It should not be kept in the user's home folder; after all, what is the point of 2FA if the user we are authenticating can simply read or modify the secret file? In my case I keep the keys under /root.

Replace the variable ${USER} if/when you create a key for a user other than the active one.

sudo google-authenticator -s /root/.sudo_totp/${USER}/.google_authenticator
sudo chmod 600 -R /root/.sudo_totp/

You will see a QR code/secret key that you can scan with a TOTP app like andOTP, Authy or Google Authenticator; in my case I added it to my YubiKey. You will also get emergency scratch codes that you should record somewhere safe.

Enable in PAM

You now need to let PAM know it should be checking the codes. There are two ways to do this: mandatory, or only if a secret key exists. I have it as mandatory: any user using sudo MUST have a secret key.

In /etc/pam.d/sudo add the following configuration lines to the end of the file.

# Use Google Auth -- Mandatory
auth required pam_google_authenticator.so secret=/root/.sudo_totp/${USER}/.google_authenticator user=root

# Use Google Auth -- Only if secret key exists
# auth required pam_google_authenticator.so secret=/root/.sudo_totp/${USER}/.google_authenticator user=root nullok
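
After saving, it's worth testing from a fresh terminal, keeping a root shell open elsewhere in case you lock yourself out:

# Drop cached sudo credentials, then trigger a fresh authentication
sudo -k
sudo true   # you should now be asked for a verification code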

Bonus: do this for su as well

You can do the same thing for su as well; however, the user variable will obviously be root rather than the user attempting to elevate their privileges.

Set up the key as before, just for the root user:

sudo google-authenticator -s /root/.google_authenticator
sudo chmod 600 -R /root/.google_authenticator

In /etc/pam.d/su add the following configuration lines to the end of the file.

# Use Google Auth -- Mandatory
auth required pam_google_authenticator.so secret=/root/.google_authenticator user=root

Nextcry and Nextcloud security

About a month ago there was an urgent security notice from the Nextcloud devs regarding a flaw in Nginx, php-fpm and the associated Nextcloud config. Unfortunately we are now seeing it being exploited in the wild.

The Nextcloud devs have confirmed that it doesn't appear to be an issue with Nextcloud itself and that patching and updating is highly advised.

This brings to mind some extra security measures I apply to Nextcloud on top of my standard server checklist.

Have extra security advice? Let me know in the comments down below!


SSH Login Notifications with Gotify

Inspired by this post I decided to add a notification on my phone every time an ssh session begins on my servers. Seeing as I make use of Gotify for self-hosted push notifications, I used that rather than Signal.

First I created the file /usr/local/bin/sshnotif. At the top you can add your own token and Gotify URL.

Update: I had to push the current time back a full minute in order to improve consistency. I'll definitely want to revisit this at a later date.

#!/bin/bash

exec &> /dev/null #Hide output

Gotify_URL='https://example.tld/gotify'
Gotify_Token='gotify-app-token'

notify()
{

        now=$(date -d "-60 seconds" +%s) #Get current time minus 60 seconds
        end=$((SECONDS+30)) #Set a 30s timeout for the loop

        while [ "$SECONDS" -lt "$end" ]; do

                SSHdate=$(date -d "$(who | grep pts | tail -1 | awk '{print $3, $4}')" +%s) #Timestamp of the latest SSH session

                if [ "$SSHdate" -ge "$now" ]; then #Once who shows the new session, send the notification

                        title="SSH Login for $(/bin/hostname -f)"
                        message="$(/usr/bin/who | grep pts)"

                        /usr/bin/curl -X POST -s \
                                -F "title=${title}" \
                                -F "message=${message}" \
                                -F "priority=5" \
                                "${Gotify_URL}/message?token=${Gotify_Token}"

                        break
                fi

                sleep 1 #Avoid busy-waiting while who catches up
        done

}

notify & #Run in background to prevent holding up the login process

Run the command sudo chmod +x /usr/local/bin/sshnotif to make the script executable.
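
Before wiring the script into PAM you can test the Gotify credentials by hand, using the placeholder URL and token from the top of the script:

curl -X POST -s \
    -F "title=test" -F "message=hello" -F "priority=5" \
    "https://example.tld/gotify/message?token=gotify-app-token"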

In the file /etc/pam.d/sshd add the following line

# note optional is set to prevent ssh login failure
session optional pam_exec.so /usr/local/bin/sshnotif

I now get a nice notification with all the open SSH sessions listed. Unlike the post on 8192.one, I didn't want any IP address resolution using an online service. I plan on integrating the MaxMind GeoLite2 database at some point; however, as I already have Graylog set up to do this, it's not a high priority for me.


Server set-up Checklist

I often see questions on /r/selfhosted about how to configure and secure a Linux server. Here is a quick checklist of things you might want to look into.

Follow best practices for the basics

Lock down the Server

  • Disable root login via SSH
  • Close all unused incoming ports via UFW/iptables
  • Limit outgoing ports as well as incoming using UFW/iptables
  • Watch for credential stuffing/brute force attacks with Fail2ban

Backup your configs/files

  • Securely encrypted backups via Duplicity
  • Local backup to an external drive
  • Remote backup, either via a regularly swapped-out external drive or via the cloud

Set up monitoring services to let you know when something goes wrong

Here are a few extra things you can do to bolster your ssh security

Useful resources


Dugite is this video of you?

Ohhh a virus!

This morning I received a link in Facebook Messenger from an old friend, someone I haven't spoken to in years. It was obviously a phishing link: the title was wrong ("Dugite is this a video of you?") and the preview image was blurry, as if it was stuck loading.

Needless to say I didn't click and messaged them back saying "Probably should change your password?"

What interests me is the reaction when this friend posted on their wall, within minutes, not to open any messages from them because they "HAVE A VIRUS ON MY PHONE". Out of 7 people, 3 opened the obviously shady link.

Not one of the people who opened the link mentioned that they would now need to change their password; in fact one person even said it should all be OK once it had gone through their entire address book. The non-technical people, to me at least, seem to treat phishing and malware like you would the common cold: it'll pass, a fact of life and not a real concern.

They should, of course, be very concerned. With the majority of web browsing now occurring on mobile, the amount of data potentially stored on a phone is astronomically large: from your banking app to your photos, everything you can gain access to through your email accounts is at risk. Mobile operating systems are, thankfully, very locked down, with each app operating somewhat isolated from the others, but that's not a guarantee.

I fear the next decade of tech breaches are going to get ugly and there just isn't a technical solution to user apathy.


Locking your ssh port with fwknop

UPDATED: Thanks to a commenter for the bits I missed.

In my last post I described how I decrypt my home server remotely with ssh. Today I would like to share how I lock/unlock my ssh port with fwknop, an encrypted port-knocking implementation.

The issue with port knocking

On the face of it port knocking looks like a good idea: lock down your ssh port until you need it, avoiding any zero-day issues with the ssh daemon. The problem is this: the knock is sent in the clear over the network. Anyone looking can see your knock "code"; much like with a secret door knock, someone around the corner could hear the pattern of your knocks.

This is where fwknop comes in: its SPA (Single Packet Authorization) packets cannot be re-sent, they are one-time only. Not to mention it's faster, as you are only sending the one packet.

The main issue I had with fwknop is that, by default, you have to specify the source IP address that will be allowed to access your server. I found this quite painful to set up, so I found a simple way around the issue.

Note: this only works if you are blocking ports by default. I use UFW to simplify that process; see this Digital Ocean tutorial on the basics of UFW.

Server Side:

On Debian-based systems fwknop is split into fwknop-client and fwknop-server. We will want both of them:

sudo apt install fwknop-server fwknop-client

First we need to enable the fwknop server in the /etc/default/fwknop-server file by changing the line START_DAEMON="no" to "yes":

# Default settings for fwknopd.

# Change it to yes if you would like fwknopd to be started at boot time.
#
# START_DAEMON="no"
START_DAEMON="yes"

# Add any options you would like to pass to the daemon when started
# For example if you would like to add an override file for your setup, this
# can be achieved this way:
#
#     DAEMON_ARGS="--override-config /root/fwknopd.override.conf"
DAEMON_ARGS=""

Next we need to set up the basic config rules on the server, found in /etc/fwknop/fwknopd.conf. Newer Debian and Ubuntu releases use predictable network interface names (e.g. enp3s0 rather than the old eth0), so set PCAP_INTF to whatever your interface is actually called. We can also change the listening port here.

PCAP_INTF               enp3s0;

# change your port to your desired listening port.
PCAP_FILTER                 udp port 62201;
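
If you're not sure what your interface is called, it's easy to check:

ip -br link   # brief list of interfaces, e.g. lo, enp3s0, wlp2s0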

Now we use fwknop to generate our keys. We could use GPG here, but I didn't feel the extra encryption brings much to the table, as we are only opening the ssh port and I have public-key authentication and TOTP enabled.

fwknop -A tcp/22 -D example.tld --key-gen --use-hmac --save-rc-stanza

You can now find the KEY_BASE64 and HMAC_KEY_BASE64 values in ~/.fwknoprc; we will need these for the /etc/fwknop/access.conf file and the client.

Next, fill in the /etc/fwknop/access.conf file. Note: I substituted ufw commands for the iptables ones. We don't have to worry about our ssh session being kicked, as once it's connected the CMD_CYCLE_CLOSE (at least with ufw) won't close the existing connection.

SOURCE                          ANY

# Limit the Ports able to be opened
OPEN_PORTS                      tcp/22

# Keys from ~/.fwknoprc
KEY_BASE64                      [...]
HMAC_KEY_BASE64                 [...]

# Optionally use iptables
# CMD_CYCLE_OPEN                /sbin/iptables -A INPUT -p $PROTO --dport $PORT -j ACCEPT
# CMD_CYCLE_CLOSE               iptables -D INPUT -p $PROTO --dport $PORT -j ACCEPT

CMD_CYCLE_OPEN                  /usr/sbin/ufw allow $PORT
CMD_CYCLE_CLOSE                 /usr/sbin/ufw delete allow $PORT

# Default cycle time Mandatory for CMD_CYCLE_OPEN/CLOSE
CMD_CYCLE_TIMER                 180

A word of warning: fwknop can run arbitrary commands if ENABLE_CMD_EXEC is enabled. I don't see why you would ever really want to do that. You can also run any bash script from CMD_CYCLE_OPEN and CMD_CYCLE_CLOSE with the optional variables $PROTO, $PORT and $SRC. You can potentially get yourself in a lot of trouble if you do this, so proceed with caution.

Now we need to set up the systemd unit file /etc/systemd/system/fwknop-server.service. Note: on an Ubuntu (18.04.4 LTS) install I had to create the folder /var/fwknop/.

NOTE: a commenter also mentioned that the PID file needs to be in the /run/fwknop dir on Ubuntu 20.04.

[Unit]
Description=Firewall Knock Operator Daemon
After=network-online.target

[Service]
Type=forking
PIDFile=/var/fwknop/fwknopd.pid
ExecStart=/usr/sbin/fwknopd
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target

Then we just enable and start the service:

sudo systemctl enable fwknop-server.service && sudo systemctl start fwknop-server.service

Running sudo systemctl status fwknop-server.service should now show the service as Active: active (running). Note that if you have already allowed port 22 with ufw, it will stay open until the first time you cycle fwknop with a client.

Client Side:

You have three options: fwknop-client, fwknop2 on Android ([F-Droid] / [Google Play]), or fwknop-gui, available on Windows, Mac and Linux.

In fwknop2 and fwknop-gui:

  • KEY_BASE64 -> Rijndael Key
  • Key Is Base 64 - Checkbox below key entry
  • HMAC_KEY_BASE64 -> HMAC Key
  • HMAC Is Base 64 - Checkbox below key entry
  • Allow IP - This can be anything as we are ignoring this setting
  • Access Ports: tcp/22

The firewall timeout is in seconds and can be anything, as long as it's long enough for you to authenticate. Remember, if you have the same set-up as I do, you won't get kicked after the timeout.

And there we go, a nicely locked ssh port. You will now have to send an SPA packet to your server prior to connecting with your ssh client.
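
From a Linux client, since we saved an rc stanza earlier with --save-rc-stanza, unlocking should be roughly this simple (hostname and username are the placeholders from above):

fwknop -n example.tld    # send the SPA packet using the saved stanza
ssh user@example.tld     # then connect as usual, within the timeout window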


Securing My Server With Dropbear SSH

Having a small home server I've always wanted to encrypt my files; however, I have never wanted to be locked out if I'm far away. Enter Dropbear SSH, a small, lightweight ssh server already packaged in Debian to run prior to decryption.

Install

sudo apt update && sudo apt install dropbear-initramfs

Note: initramfs will kick up an error after installing dropbear-initramfs. This is solved once you add your public key.

Add your ssh key

ssh-keygen -f ~/.ssh/dropbear.id_rsa
cat ~/.ssh/dropbear.id_rsa.pub | sudo tee /etc/dropbear-initramfs/authorized_keys

Changing the port

Edit /etc/dropbear-initramfs/config and set:

DROPBEAR_OPTIONS="-p 3000"

A little extra security

You can further secure dropbear by disabling forwarding and limiting it to only executing the cryptroot-unlock command.

Just add no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command="/bin/cryptroot-unlock" to the authorized_keys file in front of the ssh public key

It should look something like this:

no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command="/bin/cryptroot-unlock" ssh-rsa AAAAB3Nza[...]

Finishing up

sudo update-initramfs -u
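
On the next reboot you can then unlock the machine remotely; something like this (the hostname is a placeholder):

# Connect to dropbear on the custom port with the matching key;
# the forced command runs cryptroot-unlock, which asks for the passphrase
ssh -i ~/.ssh/dropbear.id_rsa -p 3000 root@server.example.tld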

See the dropbear manpage for further details.


Better SSH Management with KeePass and PuTTY

Out of the box KeePass recognizes the ssh:// URI and will open it with PuTTY. However, it is limited: you can't change ports from the default 22, nor can you save a convenient list of port forwards. Thankfully this is something you can change.

Things you will need:

  1. KeePass
  2. PuTTY
  3. KeeAgent

Alternatively, you can do an easy install with the Windows package manager Chocolatey:

choco install putty.install keepass.install keepass-plugin-keeagent -y

URL overrides

We will now define a new ssh:// override globally in KeePass. It is also possible to do so per entry, for portability; however, I do not use this feature as I run Linux at home and use a separate override on that system.

  1. Tools -> Options
  2. Integration tab
  3. URL Overrides
  4. Click the Add button
  5. Enter ssh in the Scheme field
  6. Enter: cmd://putty {T-REPLACE-RX:/{S:Forwards}/\{S:Forwards\}/ /} -P {T-REPLACE-RX:/{BASE:PORT}/-1/22 /} {BASE:HOST} -l {USERNAME} in the URL override field. Note: add -pw {PASSWORD} to the end if you wish to auto-submit your password. Just be aware this could be considered slightly insecure.

The KeePass entry

  1. Create an entry as you normally would, adding the ssh:// URL.

Note: to add a port just use ssh://example.tld:222

  2. If you need port forwards, add them under the Advanced tab as a String Field entry in the following format: -L 6080:127.0.0.1:6080 -L 444:10.1.1.1:444

Now when you open the url you will have your putty session with port changes and port forwards.
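
As a worked example: an entry with the URL ssh://example.tld:222, username myuser and a Forwards string field of -L 6080:127.0.0.1:6080 would expand to roughly:

putty -L 6080:127.0.0.1:6080 -P 222 example.tld -l myuser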

Breaking it down

  1. cmd://putty

    Opens putty via a shell command

  2. {T-REPLACE-RX:/{S:Forwards}/\{S:Forwards\}/ /}

    If the string field Forwards doesn't exist, remove the literal string {S:Forwards}; if it does exist, its contents are passed through

  3. -P {T-REPLACE-RX:/{BASE:PORT}/-1/22 /}

    The {BASE:PORT} placeholder returns -1 if a port is not defined. If this happens we replace it with the default ssh port 22

  4. {BASE:HOST}

    The Hostname/IP address part of the URL

  5. -l {USERNAME} -pw {PASSWORD}

    Login with the username and (optionally) password of the entry