Well, I never really ‘got’ the block editor (Gutenberg, the WordPress block editor) – probably due to a lack of use … so I’ve moved this site over to ClassicPress
( thank you LWN – ClassicPress article )
A few years ago I set up an Azure Function App to retrieve a LetsEncrypt certificate for a few $work services.
Annoyingly, that silently stopped renewing things.
Given I’ve no idea how to update or really investigate it (it’s too much of a black box), I decided to replace it with certbot, hopefully run through a scheduled GitHub Action.
To keep things interesting, I need to use the Route53 DNS stuff to verify domain ownership.
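The scheduled GitHub Action part might look something like the following, untested sketch – the workflow name, secret names and the renew.sh wrapper script are all placeholders, not something I’ve actually run:

```yaml
# .github/workflows/renew-cert.yml (hypothetical)
name: renew-certificate
on:
  schedule:
    - cron: '0 3 * * 1'   # weekly, Monday 03:00 UTC
  workflow_dispatch: {}    # allow manual runs too
jobs:
  renew:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Renew via certbot/dns-route53
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: ./renew.sh   # a script wrapping the docker run shown in this post
```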
Random bits :
docker run --rm -e AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
-e AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
-v $(pwd)/certs:/etc/letsencrypt/ \
-u $(id -u ${USER}):$(id -g ${USER}) \
certbot/dns-route53 certonly \
--agree-tos \
--email=me@example.com \
--server https://acme-v02.api.letsencrypt.org/directory \
--dns-route53 \
--rsa-key-size 2048 \
--key-type rsa \
--keep-until-expiring \
--preferred-challenges dns \
--non-interactive \
--work-dir /etc/letsencrypt/work \
--logs-dir /etc/letsencrypt/logs \
--config-dir /etc/letsencrypt/ \
-d mydomain.example.com
Azure needs the rsa-key-size 2048 and the key type to be specified. I tried 4096 and it told me to f.off.
Once that’s done, the following seems to produce a certificate that keyvault will accept, and the load balancer can use, that includes an intermediate certificate / some sort of chain.
cat certs/live/mydomain.example.com/{fullchain.pem,privkey.pem} > certs/mydomain.pem
openssl pkcs12 -in certs/mydomain.pem -keypbe NONE -certpbe NONE -nomaciter -passout pass:something -out certs/something.pfx -export
az keyvault certificate import --vault-name my-azure-vault -n certificate-name -f certs/something.pfx --password something
Thankfully that seems to get accepted by Azure, and when it’s applied to an application gateway listener, clients see an appropriate chain.
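To double-check what actually ended up in the PFX, openssl pkcs12 -info will list the bundled certificates – here’s a self-contained version using a throwaway self-signed certificate in place of the real LetsEncrypt files (all the /tmp paths are just for the demo):

```shell
# Throwaway key + cert standing in for privkey.pem / fullchain.pem
openssl req -x509 -newkey rsa:2048 -days 1 -nodes -subj "/CN=mydomain.example.com" \
    -keyout /tmp/key.pem -out /tmp/cert.pem 2>/dev/null
cat /tmp/cert.pem /tmp/key.pem > /tmp/bundle.pem
# Same export incantation as above
openssl pkcs12 -in /tmp/bundle.pem -keypbe NONE -certpbe NONE -nomaciter \
    -passout pass:something -out /tmp/test.pfx -export
# List the certificates (and their subjects) that made it into the PFX
openssl pkcs12 -info -in /tmp/test.pfx -passin pass:something -nokeys 2>/dev/null | grep -i subject
```

With the real fullchain.pem you should see the intermediate’s subject listed as well as your own.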
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
git config --global commit.gpgsign true
Hopefully that’ll result in my github commits being signed…. and when I forget how to do it …
I used to have a fail2ban filter set up to look for POST requests to wp-login.php, but the size of the Apache log files on one server made this infeasible (it took fail2ban too long to parse/process the files). Also, a filter on the Apache log file looking for POST /wp-login … means you are also catching people successfully logging in.
Perhaps this is a better approach :
Assumptions
error_log = /var/log/php.log
Put this in … /path/to/your/site/wp-content/mu-plugins/log-auth-failures.php
(It must be wp-content/mu-plugins … )
<?php
add_action( 'wp_login_failed', 'login_failed' );
function login_failed( $username ) {
	error_log( "WORDPRESS LOGIN FAILURE {$_SERVER['REMOTE_ADDR']} - user $username from " . __FILE__ );
}
(Yes, obviously you don’t have to use error_log, you could do something else, and there’s a good argument not to log $username as it’s ultimately user supplied data that might just mess things up)
Then in /etc/fail2ban/jail.d/wordpress-login.conf :
[wordpress-login]
enabled = true
filter = wordpress-login
action = iptables-multiport[name=wp, port="80,443", protocol=tcp]
logpath = /var/log/php.log
maxretry = 5
If you have PHP logging somewhere else, change the logpath appropriately.
Finally in /etc/fail2ban/filter.d/wordpress-login.conf put :
[Definition]
# PHP error_log is logging to /var/log/php.log something like :
#[31-Jan-2024 20:34:10 UTC] WORDPRESS LOGIN FAILURE 1.2.3.4 - user admin.person from /var/www/vhosts/mysite/httpdocs/wp-content/mu-plugins/log-auth-failures.php
failregex = WORDPRESS LOGIN FAILURE <HOST> - user
ignoreregex =
Extra bonus points for making the failregex stricter, or for not including $username in the log output (which perhaps makes it vulnerable to some sort of injection attack).
There’s probably a good argument for using a different file (not the PHP error_log) so other random error messages can’t confuse fail2ban, which might also allow you to specify a more friendly date format for fail2ban etc….
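You can sanity-check the pattern against a sample log line before waiting for a real attack – the IPv4 regex below is just my rough approximation of what fail2ban expands <HOST> into; the proper tool for this job is fail2ban-regex /var/log/php.log wordpress-login:

```shell
# Sample line, as PHP's error_log would write it
line='[31-Jan-2024 20:34:10 UTC] WORDPRESS LOGIN FAILURE 1.2.3.4 - user admin.person from /var/www/vhosts/mysite/httpdocs/wp-content/mu-plugins/log-auth-failures.php'
# <HOST> is (roughly) an IP/hostname capture group; approximate it with an IPv4 regex
if echo "$line" | grep -Eq 'WORDPRESS LOGIN FAILURE ([0-9]{1,3}\.){3}[0-9]{1,3} - user'; then
    echo "failregex would match"
fi
```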
Restart fail2ban and watch its log file (and /var/log/php.log).
service fail2ban restart
Well, I sort of realised I had a web server or two that were still on Debian Buster, and it was time to move to Bullseye or Bookworm. As usual, the Debian upgrade procedure was mostly straightforward and uneventful.
Interesting findings: hitch’s configuration can be test-parsed before restarting with –
hitch -t --config /etc/hitch/hitch.conf
In other news, I noticed this post where someone moaned about systemd-resolved the other day – https://www.reddit.com/r/linux/comments/18kh1r5/im_shocked_that_almost_no_one_is_talking_about/ – I’ve had similar problems to the people on the thread (resolved stops working etc) so thought it was time to try and use ‘unbound‘ instead.
apt-get install unbound
and then tell /etc/resolv.conf to use 127.0.0.1 for DNS.
Annoyingly, unbound-control stats isn’t quite as pretty as resolvectl statistics, but oh well.
echo -e "nameserver 127.0.0.1\nnameserver 8.8.8.8\noptions timeout:4" > /etc/resolv.conf
and an /etc/unbound/unbound.conf file that looks perhaps like :
server:
interface: 127.0.0.1
access-control: 127.0.0.0/8 allow
access-control: ::1/128 allow
# The following line will configure unbound to perform cryptographic
# DNSSEC validation using the root trust anchor.
auto-trust-anchor-file: "/var/lib/unbound/root.key"
tls-cert-bundle: "/etc/ssl/certs/ca-certificates.crt"
remote-control:
control-enable: yes
# by default the control interface is 127.0.0.1 and ::1 and port 8953
# it is possible to use a unix socket too
control-interface: /run/unbound.ctl
forward-zone:
name: "."
forward-tls-upstream: yes
forward-addr: 1.1.1.1@853#cloudflare-dns.com
forward-addr: 1.0.0.1@853#cloudflare-dns.com
(Unfortunately my ISP is shitty, and doesn’t yet give me an ipv6 address).
Looking at https://1.1.1.1/help – I do sometimes see that ‘DNS over TLS’ is “yes”…. so I guess something is right; annoyingly I don’t see anything useful from unbound’s stats (unbound-control stats) to show it’s done a secure query…
“unbound-host” (another debian package) – will helpfully tell you whether a lookup was done ‘securely’ or not – e.g.
$ unbound-host google.com -D -v
google.com has address 142.250.178.14 (insecure)
google.com has IPv6 address 2a00:1450:4009:815::200e (insecure)
google.com mail is handled by 10 smtp.google.com. (insecure)
which seems a little odd to me (I’d have thought Google would support DNSSEC), but some domains do work – e.g.
$ unbound-host mythic-beasts.com -D -v
mythic-beasts.com has address 93.93.130.166 (secure)
mythic-beasts.com has IPv6 address 2a00:1098:0:82:1000:0:1:2 (secure)
mythic-beasts.com mail is handled by 10 mx1.mythic-beasts.com. (secure)
mythic-beasts.com mail is handled by 10 mx2.mythic-beasts.com. (secure)
Does anyone else care about having a blog any longer?
“New PC Time”
I’ve had an ASUS PN50 (AMD 4800u processor) as my desktop/daily driver for some time, and it’s nice and power efficient, but increasingly I found it being slow.
I eventually discovered I could turn on the CPU ‘boost’ feature (doh!) – but doing that seemed to result in it crashing within the next 24-48 hours…. which isn’t good. I don’t know if it’s a hardware or Linux problem – but I had already sort of decided it was time to consider upgrading to something with more ‘ooomph’.
So, I came across a slightly dodgy looking listing on Amazon for a Beelink SER6 Max (32GB RAM, 500GiB SSD). The SER6 Max is a fairly new release, and Beelink are a relatively cheap, newish supplier of hardware with some past quality issues. Anyway, I thought I’d stop dithering over it, and buy it and rely on Amazon’s returns policy if there were problems with the PC/hardware.
My reason for choosing the SER6 Max was that it had enough rear ports for all three of my monitors, most other minipc variants don’t. I did contemplate the Geekom AS6 (which is an ASUS PN53 with the same CPU as this beelink, but it has slower RAM and I was concerned it might be noisy).
So, I “pulled the trigger” on https://www.amazon.co.uk/dp/B0C279T4P6 and on a whim I tried installing Siduction Linux…. so now I’ve got full disk encryption and what looks like a fairly up to date stack of stuff (with XFCE).
The SER6 has at least passed a token memory test, and some system tests – so I’m fairly optimistic about it, although I did have one hard lock up / crash yesterday which is unexplained.
(1 week later, and it seems well stable/reliable … )
Escaping quotes within variables is always painful in bash (somehow) – e.g.
foo"bar
and it’s not obvious that you’d need to write e.g.
"foo"\""bar"
(at least to me).
Thankfully a bash built in magical thing can be used to do the escaping for you.
In my case, I need to pass a ‘PASSWORD’ variable through to run within a container. The PASSWORD variable needs escaping so it can safely contain things like ; or quote marks (" or ').
e.g. docker compose run app /bin/bash -c "echo $PASSWORD > /some/file"
or e.g. ssh user@server "echo $PASSWORD > /tmp/something"
The fix is to use the ${PASSWORD@Q} variable syntax – for example:
#!/bin/bash
FOO="bar'\"baz"
ssh user@server "echo $FOO > /tmp/something"
This will fail, with something like: bash: -c: line 1: unexpected EOF while looking for matching `''
as the shell at the remote end is seeing echo bar'"baz
and expects the quote mark to be closed.
So using the @Q magic –
ssh user@server "echo ${FOO@Q} > /tmp/something"
which will result in /tmp/something containing bar'"baz – which is correct.
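A self-contained demonstration (needs bash ≥ 4.4 for @Q; printf %q is the older equivalent):

```shell
#!/bin/bash
FOO="bar'\"baz"
# ${FOO@Q} expands to a version of the value quoted for safe reuse as shell input
echo "${FOO@Q}"
# printf %q has done the same job since much older bash versions
printf '%q\n' "$FOO"
# Round-trip: eval-ing the quoted form reproduces the original value
eval "COPY=${FOO@Q}"
[ "$COPY" = "$FOO" ] && echo "round-trip OK"
```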
I’ve been using an ASUS PN50 (that’s a mini pc, with an AMD Ryzen 4800u processor – so sort of a laptop without a screen) as my desktop for ages.
Increasingly I’ve found it sluggish and I was contemplating replacing it with something newer, and then I discovered why the CPU speed in /proc/cpuinfo was always 1400MHz….
I needed to :
echo 1 > /sys/devices/system/cpu/cpufreq/boost
Once that’s done, the CPU cores can go up to about 4.2GHz … #doh
In other news – https://www.phoronix.com/news/Linux-Per-Policy-CPUFreq-Boost looks interesting.
Unfortunately now my minipc’s fan is always speeding up / slowing down when it used to be pretty quiet :-/
Thanks to https://www.reddit.com/r/MiniPCs/comments/16cuzd8/asus_pn50_unlock_cpu_speed_under_linux/
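To make that boost setting survive a reboot, one option (assuming systemd-tmpfiles, which Debian and most systemd distros run at boot) is a tmpfiles.d entry like:

```
# /etc/tmpfiles.d/cpufreq-boost.conf
# 'w' writes the given value to the path at boot
w /sys/devices/system/cpu/cpufreq/boost - - - - 1
```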
Random notes on resizing a disk attached to an Azure VM …
Check what you have already –
az disk list --resource-group MyResourceGroup --query '[*].{Name:name,Gb:diskSizeGb,Tier:accountType}' --output table
might output something a bit like :
Name       Gb
---------  ----
foo-os     30
bar-os     30
foo-data   512
bar-data   256
So here, we can see the ‘bar-data’ disk is only 256GB.
Assuming you want to change it to be 512GB (Azure doesn’t support arbitrary sizes; you need to choose a supported size…)
az disk update --resource-group MyResourceGroup --name bar-data --size-gb 512
Then wait a bit …
In my case, the VMs are running Debian Buster, and I see this within the ‘dmesg‘ output after the resize has completed (on the server itself).
[31197927.047562] sd 1:0:0:0: [storvsc] Sense Key : Unit Attention [current]
[31197927.053777] sd 1:0:0:0: [storvsc] Add. Sense: Capacity data has changed
[31197927.058993] sd 1:0:0:0: Capacity data has changed
Unfortunately the new size doesn’t show up straight away to the O/S, so I think you either need to reboot the VM or (what I do) –
echo 1 > /sys/class/block/sda/device/rescan
at which point the newer size appears within your ‘lsblk‘ output – and the filesystem can be resized using e.g. resize2fs
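If you want to see the filesystem-growing step in action without a real disk, resize2fs happily operates on image files too – a throwaway simulation of the Azure disk growing (needs e2fsprogs; the /tmp path is just for the demo):

```shell
# Create a small file-backed ext4 filesystem, standing in for /dev/sda
truncate -s 64M /tmp/disk.img
mkfs.ext4 -q -F /tmp/disk.img
# Simulate Azure growing the underlying disk
truncate -s 128M /tmp/disk.img
# resize2fs wants a clean, recently-checked filesystem before an offline resize
e2fsck -f -p /tmp/disk.img
# Grow the filesystem to fill the new size
resize2fs /tmp/disk.img
```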