Upgrade some things

Well, I sort of realised I had a web server or two that were still on Debian Buster, and it was time to move to Bullseye or Bookworm. As usual the Debian upgrade procedure was mostly straightforward and uneventful.

Interesting findings:

  • hitch, which I use as an SSL frontend to varnish, doesn’t seem to get along all that well with systemd and silently fails if your config has the “daemon = on” setting in /etc/hitch/hitch.conf. Annoyingly, when trying to test the configuration with “hitch -t” you will get an error like: “No x509 certificate PEM file specified for frontend ‘default’!” – the solution is to specify the config file explicitly, i.e.: hitch -t --config /etc/hitch/hitch.conf
  • hitch hasn’t had a release in its packagecloud.io repository for the last 3 years, so the Debian-supported variant looks more appealing.

In other news, I noticed this post where someone moaned about systemd-resolved the other day – https://www.reddit.com/r/linux/comments/18kh1r5/im_shocked_that_almost_no_one_is_talking_about/ – I’ve had similar problems to the people on the thread (resolved stops working etc) so thought it was time to try and use ‘unbound‘ instead.

apt-get install unbound

and then tell /etc/resolv.conf to use 127.0.0.1 for DNS.

Annoyingly, unbound-control stats isn’t quite as pretty as resolvectl statistics, but oh well.

echo -e "nameserver 127.0.0.1\nnameserver 8.8.8.8\noptions timeout:4" > /etc/resolv.conf

and an /etc/unbound/unbound.conf file that looks perhaps like :

server:
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow
    access-control: ::1/128 allow
    # The following line will configure unbound to perform cryptographic
    # DNSSEC validation using the root trust anchor.
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    tls-cert-bundle: "/etc/ssl/certs/ca-certificates.crt"

remote-control:
    control-enable: yes
    # by default the control interface is 127.0.0.1 and ::1, port 8953;
    # it is possible to use a unix socket too
    control-interface: /run/unbound.ctl

forward-zone:
    name: "."
    forward-tls-upstream: yes
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
    forward-addr: 1.0.0.1@853#cloudflare-dns.com

(Unfortunately my ISP is shitty, and doesn’t yet give me an IPv6 address).
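
Once the config is in place, a rough sketch of the switch-over (assuming systemd-resolved was previously in use, and that /etc/resolv.conf isn’t a symlink it keeps rewriting):

# stop systemd-resolved so it no longer owns DNS / resolv.conf
systemctl disable --now systemd-resolved
# make sure unbound is running and comes back after a reboot
systemctl enable --now unbound
# quick sanity check that the local resolver answers
dig @127.0.0.1 debian.org +short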

Looking at https://1.1.1.1/help – I do sometimes see that ‘DNS over TLS’ is “yes”…. so I guess something is right; annoyingly I don’t see anything useful from unbound’s stats (unbound-control stats) to show it’s done a secure query…

“unbound-host” (another Debian package) will helpfully tell you whether a lookup was done ‘securely’ or not – e.g.

$ unbound-host google.com -D -v
google.com has address 142.250.178.14 (insecure)
google.com has IPv6 address 2a00:1450:4009:815::200e (insecure)
google.com mail is handled by 10 smtp.google.com. (insecure)

which seems a little odd to me (I’d have thought Google would support DNSSEC), but some domains do work – e.g.

$ unbound-host mythic-beasts.com -D -v
mythic-beasts.com has address 93.93.130.166 (secure)
mythic-beasts.com has IPv6 address 2a00:1098:0:82:1000:0:1:2 (secure)
mythic-beasts.com mail is handled by 10 mx1.mythic-beasts.com. (secure)
mythic-beasts.com mail is handled by 10 mx2.mythic-beasts.com. (secure)
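
A roughly equivalent check with plain dig – a validated answer includes “ad” (authenticated data) in the flags line:

dig @127.0.0.1 mythic-beasts.com +dnssec | grep flags
# e.g. ;; flags: qr rd ra ad; ... - the "ad" bit is what we're after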

bash – escaping variables for use within commands

Escaping quotes within variables is always painful in bash (somehow) – e.g.

foo"bar

and it’s not obvious that you’d need to write e.g.

"foo"\""bar"

(at least to me).

Thankfully a bash built-in magical thing can be used to do the escaping for you.

In my case, I need to pass a ‘PASSWORD’ variable through to run within a container. The PASSWORD variable needs escaping so it can safely contain things like ; or quote marks (" or ').

e.g. docker compose run app /bin/bash -c "echo $PASSWORD > /some/file"

or e.g. ssh user@server "echo $PASSWORD > /tmp/something"

The fix is to use the ${PASSWORD@Q} variable syntax – for example:

#!/bin/bash

FOO="bar'\"baz"

ssh user@server "echo $FOO > /tmp/something"

This will fail, with something like: “bash: -c: line 1: unexpected EOF while looking for matching `''”

The shell at the remote end sees echo bar'"baz and expects the quote mark to be closed.

So using the @Q magic –

ssh user@server "echo ${FOO@Q} > /tmp/something"

which will result in /tmp/something containing bar'"baz – which is correct.
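
A quick way to see what @Q actually produces (just a local demo, nothing remote involved):

#!/bin/bash
FOO="bar'\"baz"
echo "$FOO"       # prints the raw value: bar'"baz
echo "${FOO@Q}"   # prints a shell-quoted form that is safe to embed in another command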

See also https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html#Shell-Parameter-Expansion

asus pn50 and cpufreq/boost

I’ve been using an ASUS PN50 (that’s a mini pc, with an AMD Ryzen 4800u processor – so sort of a laptop without a screen) as my desktop for ages.

Increasingly I’ve found it sluggish and I was contemplating replacing it with something newer, and then I discovered why the CPU speed in /proc/cpuinfo was always 1400MHz…

I needed to :

echo 1 > /sys/devices/system/cpu/cpufreq/boost

Once that’s done, the CPU cores can go up to about 4.2GHz … #doh
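
That sysfs write won’t survive a reboot; one way to make it stick (an assumption on my part that systemd-tmpfiles is the tidiest route) is a tmpfiles.d entry:

# write "1" into the boost knob at every boot
echo 'w /sys/devices/system/cpu/cpufreq/boost - - - - 1' > /etc/tmpfiles.d/cpufreq-boost.conf
# apply it immediately without rebooting
systemd-tmpfiles --create /etc/tmpfiles.d/cpufreq-boost.conf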

In other news – https://www.phoronix.com/news/Linux-Per-Policy-CPUFreq-Boost looks interesting.

Unfortunately now my minipc’s fan is always speeding up / slowing down when it used to be pretty quiet :-/

Thanks to https://www.reddit.com/r/MiniPCs/comments/16cuzd8/asus_pn50_unlock_cpu_speed_under_linux/

Resizing a VM’s disk within Azure

Random notes on resizing a disk attached to an Azure VM …

Check what you have already –

az disk list --resource-group MyResourceGroup --query '[*].{Name:name,Gb:diskSizeGb,Tier:accountType}' --output table

might output something a bit like :

Name      Gb
--------  ----
foo-os    30
bar-os    30
foo-data  512
bar-data  256

So here, we can see the ‘bar-data’ disk is only 256Gb.

Assuming you want to change it to be 512Gb (Azure doesn’t support an arbitrary size, you need to choose a supported size…)

az disk update --resource-group MyResourceGroup --name bar-data --size-gb 512

Then wait a bit …

In my case, the VMs are running Debian Buster, and I see this within the ‘dmesg‘ output after the resize has completed (on the server itself).

[31197927.047562] sd 1:0:0:0: [storvsc] Sense Key : Unit Attention [current]
[31197927.053777] sd 1:0:0:0: [storvsc] Add. Sense: Capacity data has changed
[31197927.058993] sd 1:0:0:0: Capacity data has changed

Unfortunately the new size doesn’t show up straight away to the O/S, so I think you either need to reboot the VM or (what I do) –

echo 1 > /sys/class/block/sda/device/rescan

at which point the newer size appears within your ‘lsblk‘ output – and the filesystem can be resized using e.g. resize2fs
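
Roughly the whole sequence in one go – assuming the disk is /dev/sda, the filesystem is ext4 and lives in partition 1 (adjust device names to suit):

# tell the kernel to pick up the new capacity
echo 1 > /sys/class/block/sda/device/rescan
# if the filesystem lives inside a partition, grow the partition first (growpart is in cloud-guest-utils)
growpart /dev/sda 1
# then grow the ext4 filesystem to fill the new space
resize2fs /dev/sda1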

Don’t forget to defragment /home if you’re using BTRFS

As root: (as a regular user it just won’t work) –

btrfs filesystem defragment /home -r

You probably want to run that weekly.
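
For example, a weekly entry in /etc/cron.d (it has to run as root, per the above):

# defragment /home every Sunday at 03:30
echo '30 3 * * 0 root btrfs filesystem defragment -r /home' > /etc/cron.d/btrfs-defrag-home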

I eventually noticed Thunderbird and phpStorm were being really slow and laggy … at which point I realised the cron job I had (as my non-root user) wasn’t working.

(using filefrag /path/to/file you can see the number of extents change after defragmenting)

btrfs & ext4 – error handling when the hardware fails …

I have a mini PC (old intel NUC) I use for taking backups of my desktop. It has a single 4TiB ssd in it.

Filesystem Type Size Used Avail Use% Mounted on
/dev/sda3 ext4 916G 80G 790G 10% /
/dev/sda4 btrfs 2.8T 106G 2.7T 4% /backup

I’ve been using btrfs for ages for /backup as I use the snapshot functionality of btrfs with an hourly rsync job from my desktop to copy changes over.

Recently the fan on the NUC failed, and while overheating (I think) it appears to have written garbage in various places (this was seen on the ext4 rootfs as well as the /backup btrfs volume).

BTRFS

Trying to scrub the filesystem highlights the problems –

root@nectarine:~# btrfs scrub status /backup
UUID:             36f93b26-6187-4874-8cc6-4d4bd092e7d8
Scrub resumed:    Sat Jun 17 13:48:33 2023
Status:           finished
Duration:         1:21:28
Total to scrub:   1.23TiB
Rate:             263.66MiB/s
Error summary:    csum=60
  Corrected:      0
  Uncorrectable:  60
  Unverified:     0

(As I only have one underlying block device, it’s not possible for it to repair itself).

I now also see messages like this in ‘dmesg’ –

[ 3570.123946] BTRFS error (device sda4): unable to fixup (regular) error at logical 1870167986176 on dev /dev/sda4
[ 3570.128866] BTRFS error (device sda4): bdev /dev/sda4 errs: wr 0, rd 0, flush 0, corrupt 199, gen 0
[ 3570.128862] BTRFS warning (device sda4): checksum error at logical 1870167683072 on dev /dev/sda4, physical 1477245284352, root 8890, inode 3750321, offset 384077824, length 4096, links 1 (path: .icedove/e1kre066.default-release-2/ImapMail/imap.gmail-2.com/INBOX-1)
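
The warning lines handily include the affected path, but for the “unable to fixup” errors that only give a logical address, btrfs can map it back to a file – e.g. (using the address from the log above):

btrfs inspect-internal logical-resolve 1870167986176 /backup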

Before trying to re-initialise the checksum tree (and then just letting the corrupt files expire out of the filesystem with time as they get rsync’ed over) I thought I’d try:

root@nectarine:~# btrfs check -p /dev/sda4 
Opening filesystem to check...
Checking filesystem on /dev/sda4
UUID: 36f93b26-6187-4874-8cc6-4d4bd092e7d8
[1/7] checking root items                      (0:00:10 elapsed, 6406461 items checked)
Segmentation faultents                         (0:00:02 elapsed, 7542 items checked)

So that didn’t work very well.

So I thought I might as well try just re-initialising the checksum tree –

root@nectarine:~# btrfs check -p --init-csum-tree /dev/sda4 
Creating a new CRC tree
WARNING:

	Do not use --repair unless you are advised to do so by a developer
	or an experienced user, and then only after having accepted that no
	fsck can successfully repair all types of filesystem corruption. Eg.
	some software or hardware bugs can fatally damage a volume.
	The operation will start in 10 seconds.
	Use Ctrl-C to stop it.
10 9 8 7 6 5 4 3 2 1
Starting repair.
Opening filesystem to check...
Checking filesystem on /dev/sda4
UUID: 36f93b26-6187-4874-8cc6-4d4bd092e7d8
Reinitialize checksum tree
kernel-shared/extent_io.c:650: free_extent_buffer_internal: BUG_ON `eb->refs < 0` triggered, value 1
btrfs(+0x2b1f7)[0x5590e079d1f7]
btrfs(+0x2b381)[0x5590e079d381]
btrfs(+0x2b68e)[0x5590e079d68e]
btrfs(alloc_extent_buffer+0x77)[0x5590e079e740]
btrfs(read_tree_block+0x47)[0x5590e0796066]
btrfs(read_node_slot+0x47)[0x5590e078f7fd]
btrfs(btrfs_next_sibling_tree_block+0x95)[0x5590e0792900]
btrfs(+0x19e14)[0x5590e078be14]
btrfs(+0x1a8a8)[0x5590e078c8a8]
btrfs(iterate_extent_inodes+0x68)[0x5590e078d5dc]
btrfs(fill_csum_tree+0x46b)[0x5590e07f9440]
btrfs(+0x74bf2)[0x5590e07e6bf2]
btrfs(main+0x3d3)[0x5590e078a203]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xea)[0x7ff38d37fd0a]
btrfs(_start+0x2a)[0x5590e078a86a]
Aborted

So I don’t feel that worked all that well.

I guess I’ll copy off the data I don’t want to lose, and just reformat it. I was hoping the repair tools (btrfs-progs v6.2, kernel 6.1.34) had matured since I last broke a btrfs filesystem (a few years ago). I guess not?

I know btrfs is at least alerting me to issues with the data – which ext4 definitely isn’t (given /var/lib/dpkg/status contained a load of trash) – so I’ll give it credit for that. It’s just a shame the ‘repair’ tools aren’t working that well.

ext4

This isn’t written to much on this system – there’s a munin daemon running (so /var/lib/munin will have been written to) and a few log files.

Interestingly, when I first noticed a problem with the device, after logging in, I instinctively ran ‘apt-get update’ (I was hoping a reboot would fix it, at which point I might as well make sure any updates were installed).

Running ‘apt-get update’ resulted in /var/lib/dpkg/status being full of rubbish.

After the PC had been turned on for a few hours, ext4 eventually figured out there were problems with it – by logging this:

[11591.230282] munin-html[22255]: segfault at a400000e ip 0000557783eaf0e9 sp 00007ffca1d969f0 error 4 in perl[557783de1000+185000] likely on CPU 3 (core 1, socket 0)
[11591.230298] Code: 4e 0c 89 56 08 83 e9 09 83 f9 01 76 14 83 fa 01 76 3f 83 ea 01 89 55 08 48 83 c4 10 5d c3 0f 1f 00 48 8b 70 08 48 85 f6 74 e3 <f6> 46 0e 10 74 dd 48 c7 40 08 00 00 00 00 8b 56 08 83 fa 01 76 22
[11591.432906] munin-graph[22257]: segfault at 55a6b77c7df0 ip 000055a64601ebc2 sp 00007ffcd88c5150 error 4 in perl[55a645fc0000+185000] likely on CPU 3 (core 1, socket 0)
[11591.432927] Code: 0f 1f 84 00 00 00 00 00 48 8b 4f 10 48 85 c9 74 5f 48 83 ec 08 48 8b 87 30 01 00 00 48 8b 50 10 48 39 d1 75 4c 48 85 f6 74 55 <48> 8b 04 f1 48 85 c0 74 20 48 8d 97 50 01 00 00 48 39 d0 74 14 8b
[12723.693630] EXT4-fs error (device sda3): htree_dirblock_to_tree:1080: inode #28706704: comm find: Directory block failed checksum
[12723.693673] Aborting journal on device sda3-8.
[12723.696920] EXT4-fs error (device sda3): ext4_journal_check_start:83: comm systemd-journal: Detected aborted journal
[12723.696945] EXT4-fs error (device sda3): ext4_journal_check_start:83: comm rs:main Q:Reg: Detected aborted journal
[12723.708257] EXT4-fs (sda3): Remounting filesystem read-only

Rebooting and running : fsck -Cy /dev/sda3 MIGHT have fixed the rootfs.

systemd-resolve (DNS is always to blame)

For the record, this is using systemd v247, from Debian’s buster-backports.

I think I was enticed by the Kool-Aid, hoping to be able to have DNSSEC or DNSoverTLS …. and caching … and to be fair, it appeared to work on all the servers I’d installed it on (although they were just ‘boring’ LAMP style webservers).

Anyway, everything seemed to be going well, with the default /etc/resolv.conf like :

nameserver 127.0.0.53

options edns0

and /etc/systemd/resolved.conf looking like :

[Resolve]
DNS=8.8.8.8#dns.google 8.8.4.4#dns.google 1.1.1.1
FallbackDNS=1.1.1.1 8.8.4.4 9.9.9.9
LLMNR=no
DNSOverTLS=opportunistic
DNSSEC=no
Cache=yes

Unfortunately, on one relatively busy server which makes multiple HTTP requests out every second, I saw sporadic failures where curl would report a timeout for e.g. graph.facebook.com (connect time >10s).

The timeouts seemed to be grouped together (no timeouts for a number of hours, and then a load of requests would fail) and obviously to be annoying this only happened in production and wasn’t something I could reproduce.

As best I can tell, a failed lookup was being cached, so all requests for a specific hostname would then fail until the cache entry expired (30 seconds?)

So I end up having /etc/resolv.conf looking a bit more like a traditional one with 8.8.8.8 as the first nameserver and some custom options to lower the retry time and hopefully trigger multiple DNS lookup attempts.
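
Something along these lines (the second nameserver and exact option values are illustrative rather than a recommendation):

nameserver 8.8.8.8
nameserver 1.1.1.1
options timeout:2 attempts:3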

So, perhaps …. perhaps … systemd-resolved isn’t quite ready for production yet?

faster rsync (ssh cipher choice)

Perhaps the bottleneck isn’t always bandwidth – but does changing ssh cipher make any difference?

Using a derivative of :

rsync -W --delete --no-owner --no-group --no-perms \
    -e ssh \
    -arv /source/ remote@destination:/destination/path/

In unscientific tests, it looks like ssh parameters might do something when copying a 4GiB file between two random virtual machines in different data centres, but both in London.

SSH Variant                                          Speed
-e "ssh"                                             ~45MB/s
-e "ssh -x -T"                                       ~44MB/s
-e "ssh -x -T -c chacha20-poly1305@openssh.com"      ~42MB/s
-e "ssh -x -T -c aes128-ctr"                         ~47MB/s
-e "ssh -x -T -c aes256-gcm@openssh.com"             ~50MB/s
-e "ssh -x -T -c aes128-gcm@openssh.com"             ~45MB/s

I’m not sure if these results are particularly insightful / useful.
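
If you do want to pin the (apparently marginally faster) cipher, it just goes into the -e argument – e.g.:

rsync -W --delete --no-owner --no-group --no-perms \
    -e "ssh -x -T -c aes256-gcm@openssh.com" \
    -arv /source/ remote@destination:/destination/path/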

(re)building varnish modules

I’m using Varnish 6 LTS in some places, and need a way to rebuild dependent modules …. which seem to need recompiling even for a minor feature release (e.g. 6.0.1 to 6.0.2).

I use dynamic (DNS routing), var and vsthrottle.

Firstly, here’s a Dockerfile –

FROM debian:buster as builder

ARG VARNISH_VERSION=6.0.8-1~buster

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get -qy update && \
    apt-get -qy install eatmydata apt-transport-https lsb-release ca-certificates curl gnupg wget && \
    apt-get clean

# pin varnish and varnish-dev to the exact version (apt needs a blank line between the two stanzas)
RUN printf '%s\n' \
    "Package: varnish" \
    "Pin: version ${VARNISH_VERSION}" \
    "Pin-Priority: 1001" \
    "" \
    "Package: varnish-dev" \
    "Pin: version ${VARNISH_VERSION}" \
    "Pin-Priority: 1001" \
    >> /etc/apt/preferences.d/varnish

RUN echo "deb https://packagecloud.io/varnishcache/varnish60lts/debian/ buster main" > /etc/apt/sources.list.d/varnish.list

RUN wget -qO /tmp/varnish.gpg https://packagecloud.io/varnishcache/varnish60lts/gpgkey && \
    apt-key add /tmp/varnish.gpg && \
    apt-get -q update && \
    eatmydata -- apt-get -qy install varnish varnish-dev automake libtool make libncurses-dev pkg-config python3-docutils unzip libgetdns10 libgetdns-dev

RUN apt-cache policy varnish

WORKDIR /tmp

RUN wget -qO /tmp/varnish.zip https://github.com/varnish/varnish-modules/archive/refs/heads/6.0.zip && \
    unzip /tmp/varnish.zip && \
    cd varnish-modules-6.0 && \
    bash bootstrap && \
    ./configure --disable-dependency-tracking && \
    make && \
    make check && \
    make install 

RUN wget -qO /tmp/dynamic.zip https://github.com/nigoroll/libvmod-dynamic/archive/refs/heads/6.0.zip && \
    unzip /tmp/dynamic.zip && \
    cd libvmod-dynamic-6.0 && \
    bash autogen.sh && \
    bash configure && \
    make && \
    make install


FROM debian:buster
    
WORKDIR /srv/export
COPY --from=builder /usr/lib/varnish/vmods/libvmod_dynamic.so /srv/export/
COPY --from=builder /usr/lib/varnish/vmods/libvmod_proxy.so /srv/export/
COPY --from=builder /usr/lib/varnish/vmods/libvmod_var.so /srv/export/
COPY --from=builder /usr/lib/varnish/vmods/libvmod_vsthrottle.so /srv/export/
COPY --from=builder /usr/lib/varnish/vmods/libvmod_header.so /srv/export/

and then I copy the files out of that build pipeline (dare I call it that?) with this shell script:

#!/bin/bash

set -eux

# Build a new set of varnish modules.

# Each version of varnish needs its own build of some modules - moving from e.g. varnish 6.0.7~1-stretch to 6.0.8~1-stretch
# isn't possible without these modules being rebuilt.

[ -d $(pwd)/tmp ] && rm -Rf $(pwd)/tmp

docker build --pull -f Dockerfile -t builder .

mkdir tmp

docker run -v $(pwd)/tmp:/srv/tmp -ti builder bash -c 'cp /srv/export/* /srv/tmp'

Then it’s just a case of running ‘build.sh’ and waiting …. and you’ll find the files you want in ‘tmp’.
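
To actually use them, I then copy the .so files into varnish’s vmod directory on the target machine – something like this (the hostname is made up; /usr/lib/varnish/vmods/ is the path the Dockerfile above copies from):

scp tmp/*.so root@varnishserver:/usr/lib/varnish/vmods/
ssh root@varnishserver systemctl restart varnish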

AWS vs Azure … round 1, fight!

So, for whatever reason, I need to move some virtual machines and things from AWS (EC2, RDS) to Azure. I have a few years’ experience with AWS, but until recently I’ve not really used Azure ….

Here are some initial notes……

  • AWS tooling feels more mature (with the ‘stock’ ansible that ships with Ubuntu 20.10, I’m not able to create a virtual machine in Azure without having python module errors appear)
  • AWS EBS disks are more flexible – I can enlarge and/or change their performance profile at runtime (no downtime). With Azure, I have to shutdown the server before I can change them.
  • AWS SSL certificates are better (for Azure I had to install a LetsEncrypt application and integrate it with my DNS provider, e.g. https://github.com/shibayan/keyvault-acmebot). AWS has its certificate service that issues free certs built in, and if the domain is already in Route53 there’s hardly anything to do.
  • Azure gives you more control over availability (with its concept of availability sets, it allows you to have some control over VM placement and order of updates being applied). It also gives Placement Groups – allowing you to influence physical placement of resources to reduce latency etc.
  • Azure feels more ‘commercial’ (with the various different third party products appearing in the portal when you search etc).
  • Azure has worse support for IPv6 (e.g. if you have a VPN within your Virtual Network you can’t have IPv6).
  • Azure doesn’t seem to offer ARM-based virtual machines (see also: EC2 Graviton 2), and has fewer AMD options.
  • Azure’s pricing feels harder to understand – there’s often a ‘standard’ and ‘premium’ option for most products, but the description of differences is often buried in documentation away from the portal ….. I often see ‘Pricing unavailable’.
    • Do I want a premium IP address?
    • Do I need Ultra or Premium SSDs or will Standard SSD suffice? Will I be able to change/revert if I’ve chosen the wrong one without deleting and recreating something?
    • Why do I need to choose a VPN server SKU?
  • Azure networks all have outbound NAT based internet access by default – so even if you’ve not assigned a public IP address to the resource, it can reach out. At the same time, you can also buy a NAT Gateway. If you give a VM a public IP address then it will use that for its outbound traffic.
  • Azure has a lot of services in ‘preview’ (i.e. beta). At the time of writing (March 2021), it doesn’t yet offer a production ready ….
    • MySQL database service that has zone redundancy (i.e. no real high availability)
    • Storage equivalent of EFS (NFS is in preview)
  • Azure does provide a working serial console for VMs, which is quite handy when systemd decides to throw a fit on bootup (2021/04/02 – AWS apparently now provides this too!).
  • Azure doesn’t let you detach the root volume from a stopped server to mount it elsewhere (e.g. for maintenance to fix something that won’t boot up!).
  • When deleting a VM in Azure, it’s necessary to manually delete linked disks. In AWS they can be cleaned up at the same time.