Minimal WordPress Fail2ban integration

I used to have a fail2ban filter set up to look for POST requests to wp-login.php, but the size of the Apache log files on one server made this infeasible (it took fail2ban too long to parse/process the files). Also, filtering the Apache log file for POST /wp-login … catches successful logins as well as failures.

Perhaps this is a better approach:

Assumptions

  • You’re using PHP configured with error_log = /var/log/php.log

    • If this isn’t configured, PHP will probably log to the webserver’s error log file (/var/log/apache2/error.log perhaps).

  • The Apache/PHP processes are able to write to the error_log file.
  • You’re using Debian or Ubuntu Linux

Add a ‘must use’ WordPress plugin

Put this in … /path/to/your/site/wp-content/mu-plugins/log-auth-failures.php

(It must be wp-content/mu-plugins … )

<?php
add_action( 'wp_login_failed', 'login_failed' );
function login_failed( $username ) {
    error_log( "WORDPRESS LOGIN FAILURE {$_SERVER['REMOTE_ADDR']} - user $username from " . __FILE__ );
}

(Yes, obviously you don’t have to use error_log; you could do something else. There’s also a good argument not to log $username, as it’s ultimately user-supplied data that might just mess things up.)
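If you want to keep logging the username but blunt the injection risk, one option is to whitelist the characters before they hit the log. This is a minimal sketch (untested, and the allowed character set is my own assumption), which keeps the log line format the fail2ban filter below expects:

<?php
// Hypothetical variant: sanitise the user-supplied username before logging,
// so a crafted username can't forge extra "WORDPRESS LOGIN FAILURE" lines.
add_action( 'wp_login_failed', 'login_failed' );
function login_failed( $username ) {
    // Keep only characters plausible in a username; drop everything else.
    $safe = preg_replace( '/[^A-Za-z0-9_.@-]/', '', (string) $username );
    error_log( "WORDPRESS LOGIN FAILURE {$_SERVER['REMOTE_ADDR']} - user $safe from " . __FILE__ );
}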

Fail2ban config

Then in /etc/fail2ban/jail.d/wordpress-login.conf:

[wordpress-login]
enabled = true
filter = wordpress-login
action = iptables-multiport[name=wp, port="80,443", protocol=tcp]
logpath = /var/log/php.log
maxretry = 5

If you have PHP logging somewhere else, change the logpath appropriately.

Finally, in /etc/fail2ban/filter.d/wordpress-login.conf put:

[Definition]

# PHP error_log is logging to /var/log/php.log something like :
#[31-Jan-2024 20:34:10 UTC] WORDPRESS LOGIN FAILURE 1.2.3.4 - user admin.person from /var/www/vhosts/mysite/httpdocs/wp-content/mu-plugins/log-auth-failures.php

failregex = WORDPRESS LOGIN FAILURE <HOST> - user 


ignoreregex =
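Before (re)starting fail2ban, you can sanity-check the filter against your real log file; fail2ban ships a test tool for exactly this:

fail2ban-regex /var/log/php.log /etc/fail2ban/filter.d/wordpress-login.conf

It prints how many lines matched the failregex and how many timestamps it could parse.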

Extra bonus points for making the failregex stricter, or for not including $username in the log output at all (as logging it perhaps makes you vulnerable to some sort of injection attack).
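For instance, a stricter pattern could anchor on the rest of the line too. This is an untested sketch; check it with fail2ban-regex before relying on it:

failregex = WORDPRESS LOGIN FAILURE <HOST> - user \S+ from \S+$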

There’s probably a good argument for using a different file (not the PHP error_log) so other random error messages can’t confuse fail2ban, which might also allow you to use a more fail2ban-friendly date format, etc.
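PHP’s error_log() can append to an arbitrary file when called with message type 3, so a variant along these lines should work. The path is my own example; the file must be writable by the webserver user, and type 3 adds neither a timestamp nor a newline, so you have to supply both:

<?php
// Hypothetical variant: log to a dedicated file rather than the PHP error_log.
// error_log() with message type 3 appends the raw message to the given file.
add_action( 'wp_login_failed', 'login_failed' );
function login_failed( $username ) {
    $line = '[' . gmdate( 'd-M-Y H:i:s' ) . ' UTC] '
          . "WORDPRESS LOGIN FAILURE {$_SERVER['REMOTE_ADDR']} - user $username from " . __FILE__ . "\n";
    error_log( $line, 3, '/var/log/wp-auth.log' );
}

You’d then point the jail’s logpath at /var/log/wp-auth.log instead.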

Finally …

Restart fail2ban and watch its log file (and /var/log/php.log).

service fail2ban restart
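Once it’s running, fail2ban-client will show whether the jail is up and what it has banned:

fail2ban-client status wordpress-login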

Excessive uptime(!?)

Somewhere on the internet there’s a mailserver with a larger uptime, I guess?

[root@xxxxxxxx ~]# uname -a
Linux xxxxxxxxxxxxxxx 2.6.18-419.el5 #1 SMP Fri Feb 24 22:47:42 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

[root@xxxxxxxx ~]# uptime
09:34:38 up 2290 days,  1:47,  ....

I don’t think anyone dares to reboot it …. (this is a server the customer was going to migrate off about 5 years ago …. somehow it’s still in use)

(2290 days is a little over 6 years)

Intel NUC D54250WYK (Haswell) ~10 years later

This little NUC I bought ages ago is still chugging along, in continual use (albeit only as a backup ‘server’ with a large 4 TiB SSD in it).

It’s recently had ‘open heart’ surgery to replace a failing fan and to clean the dust out of it (for the first time in 10 years).

Wow, it’s quiet now.

In other news, I’m tempted to buy a new desktop mini-pc to replace the ASUS PN50 I have (which seems to struggle a little, perhaps due to me having 3 monitors and it having a relatively weak graphics card).

So I’m now torn between waiting a bit longer, getting a NUC 13 Pro or ASUS PN53, or hoping BeeLink or someone releases something. I’m skeptical any of the cheaper Chinese manufacturers will produce anything that’ll last more than 10 years, though.

Traefik + Azure Kubernetes

Just a random note or two …

At work we moved to use Azure for most of our hosting, for ‘reasons’. We run much of our workload through Kubernetes.

The Azure portal has a nice integration to easily deploy a project from a GitHub repo into Kubernetes, and when it does, it puts each project in its own namespace.

In order to deploy some new functionality, I finally bit the bullet and tried to get some sort of Ingress router in place. I chose to use Traefik.

Some random notes ….

  1. You need to configure/run Traefik with --providers.kubernetescrd.allowCrossNamespace=true; without this it’s not possible for e.g. Traefik (in the ‘traefik’ namespace) to use MyCoolApi in the ‘api’ namespace. The IngressRoute HAS to be in the same namespace as Traefik is running in, and the IngressRoute needs to reference a service in a different namespace (see the snippet after this list).
  2. While you’re poking around, you probably want to load Traefik with --log.level=DEBUG
  3. Use cert-manager for LetsEncrypt certificates (see https://www.andyroberts.nz/posts/aks-traefik-https/ for some details)
  4. You need to make sure you’re using a fairly recent Kubernetes version – ours was on 1.19.something, which helpfully just silently “didn’t work” when trying to get the cross-namespace stuff working.
  5. Use k9s as a quick way to view logs/pods within the cluster.
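If you deploy Traefik via its official Helm chart, the flags from points 1 and 2 can be passed through like this (a sketch of a values.yaml fragment; additionalArguments is the chart’s pass-through for static configuration flags):

# Fragment of a values.yaml for the official Traefik Helm chart (assumed setup).
additionalArguments:
  - "--providers.kubernetescrd.allowCrossNamespace=true"
  - "--log.level=DEBUG"   # remove this once things work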

Example IngressRoute

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  namespace: traefik
  name: projectx-ingressroute
  annotations:
    kubernetes.io/ingress.class: traefik
    cert-manager.io/cluster-issuer: my-ssl-cert

spec:
  entryPoints:
    - websecure    
  routes:
    - kind: Rule
      match: Host(`mydomain.com`) && PathPrefix(`/foo`) 
      services:
        - name: foo-api-service
          namespace: foo-namespace
          port: 80
  tls:
    secretName: my-ssl-cert-tls
    domains:
    - main: mydomain.com

Initially I tried to use Traefik’s inbuilt LetsEncrypt provider support, and wanted a shared filesystem (Azure storage, CIFS etc.) so multiple Traefik replicas could share the same certificate store. Unfortunately this just won’t work, as the CIFS share gets mounted with 777 permissions, which Traefik refuses to put up with.

Brewdog vs Beer52

I’m obviously not cut out to be a beer critic. But I thought I might as well moan to no one….

Beer52 (https://www.beer52.com/join/KJSLYS)

I had a monthly beer52 subscription for about a year. They’ve recently launched ‘wine52’ as well which has been quite good.

Beer52’s monthly subscription costs about £27/month for 8 different beers in a range of sizes (330ml cans or bottles and larger), plus a magazine I never really read and a snack. You can choose a ‘mixed’ or ‘light’ theme to influence the selection.

After a month or so Beer52 try to supersize you: “for only £2 more a month you can have 10 beers! For only £2 more a month you can have 12 beers!” etc.

The beers are well packaged, and tasted better than most of what my local corner shop sells. There was a good variety of flavours and types.

Brewdog (“Brewdog and Friends”)

I had this as a birthday present from work. It’s about £20/month for 8 × 330ml cans.

The cans have always arrived with dents in them (is that Yodel’s fault, or Brewdog’s, or just crap packaging?).

There are four different beers, so you get two of each.

Each can is 330ml. There’s a very thin magazine and no snack.

Summary

Choose Beer52. Don’t choose Brewdog.

After having Brewdog for a couple of months, if I wanted a beer subscription I would definitely not choose them. While their Birmingham pub was a good experience when I went there last, their beer subscription has been bland, with beers that are all pretty much the same (high alcohol percentage / sweet / pale / lager-like). There’s no real variety, and it’s not worth the small saving over Beer52.

While I doubt it affects the taste, the cans arriving battered gives me a bad impression. The beer hasn’t been anything interesting (I could buy better stuff at my local corner shop) and there’s been a lack of variety.

hotel booking / wp_mphb_sync_logs

Just a random post about a WordPress plugin (hotel booking).

For the site in question, I have a script running which alerts me to any long-running (>600s) MySQL queries (or causes of deadlock) and attempts to kill them. When it does this, it emails me.

So, the site/MySQL was trying to run queries like this:

DELETE FROM wp_mphb_sync_logs WHERE queue_id IN (SELECT queue_id FROM wp_mphb_sync_queue WHERE queue_name < '1639573980');

When I ran EXPLAIN on it, it showed there were more than 5 million rows to examine (none of which were actually deleted by the query, so I assume the 5 million rows are all for now-invalid queue_id entries).

Adding an index on the wp_mphb_sync_logs.queue_id field didn’t really help speed up the delete, and googling around and checking the plugin’s source code led me to think it was safe (enough) to do a ‘TRUNCATE wp_mphb_sync_logs’.
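For reference, this is roughly what I ran (the index name is my own choice, and TRUNCATE throws away every row in the table, so only do it if you’ve convinced yourself, as I had, that the log table is disposable):

-- Attempted speed-up (didn't help much in my case):
ALTER TABLE wp_mphb_sync_logs ADD INDEX idx_queue_id (queue_id);

-- The eventual fix: throw the log table away entirely.
TRUNCATE TABLE wp_mphb_sync_logs;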

Now that’s done, the table has remained empty 12 hours later, so I think everything’s fine.

This post is mostly because the plugin’s forum requires a paid membership to contribute; and I’m not paying $400 just to post “doing this worked for me”.

Recompressing BTRFS files

Ages ago, I reconfigured my postfix/dovecot mail server to use BTRFS for its mail store, thinking that the mail files would compress fairly well, so it’d be an efficient use of disk space.

I’d just mounted the volume with compress=lzo and not thought any more about it.

Yesterday, Icinga/Nagios started nagging me that the disk was 80% full…..

So I thought I’d see if changing the compression algorithm to zstd would make any difference.
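Switching the mount option only affects newly written data, so step one is something like this (the fstab line is illustrative, swapping compress=lzo for compress=zstd):

# /etc/fstab – illustrative entry; was compress=lzo
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /srv/mail  btrfs  defaults,compress=zstd  0  2

mount -o remount,compress=zstd /srv/mail

Existing files keep their lzo compression until they’re rewritten, so to recompress them in place: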

btrfs filesystem defrag -czstd -r /srv/mail -v

Result – disk usage went from 80% (40GB) to 57% (28GB) of a 50GB volume.
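If you want exact numbers rather than trusting df, the compsize tool (packaged as btrfs-compsize on Debian/Ubuntu) reports per-algorithm compression ratios for a given path:

compsize /srv/mail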