Upgrade some things

Well, I sort of realised I had a web server or two that were still on Debian Buster, and it was time to move to Bullseye or Bookworm. As usual, the Debian upgrade procedure was mostly straightforward and uneventful.
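
For the record, the upgrade itself is roughly the usual dance – this is a minimal sketch rather than the full release-notes procedure, and note the security suite was renamed between Buster and Bullseye, so a blind search-and-replace on sources.list isn’t quite enough:

# Point apt at the new release (the security suite is named differently on
# bullseye), then upgrade in two stages and reboot:
sed -i -e 's#buster/updates#bullseye-security#g' -e 's/buster/bullseye/g' \
    /etc/apt/sources.list /etc/apt/sources.list.d/*.list
apt update
apt upgrade --without-new-pkgs
apt full-upgrade
reboot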

Interesting findings:

  • hitch, which I use as an SSL frontend to varnish, doesn’t seem to get along all that well with systemd and silently fails if your config has the “daemon = on” setting in /etc/hitch/hitch.conf (see the sketch after this list). Annoyingly, when trying to test the configuration with “hitch -t” you will get an error like: “No x509 certificate PEM file specified for frontend ‘default’!” – the solution is to specify the config file explicitly, i.e.: hitch -t --config /etc/hitch/hitch.conf
  • hitch hasn’t had a release in its packagecloud.io repository for the last 3 years, so the Debian-supported variant looks more appealing.
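
In my case the cure was just to stop hitch daemonising, so systemd can supervise it itself – a minimal sketch, assuming the stock config path and that the offending setting is actually present:

# Turn off self-daemonising (let systemd do the supervising), then test and restart:
sed -i 's/^daemon = on/daemon = off/' /etc/hitch/hitch.conf
hitch -t --config /etc/hitch/hitch.conf
systemctl restart hitch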

In other news, I noticed this post where someone moaned about systemd-resolved the other day – https://www.reddit.com/r/linux/comments/18kh1r5/im_shocked_that_almost_no_one_is_talking_about/ – I’ve had similar problems to the people on the thread (resolved stops working etc.), so I thought it was time to try using ‘unbound’ instead.

apt-get install unbound

and then tell /etc/resolv.conf to use 127.0.0.1 for DNS.

Annoyingly, unbound-control stats isn’t quite as pretty as resolvectl statistics, but oh well.

echo -e "nameserver 127.0.0.1\nnameserver 8.8.8.8\noptions timeout:4" > /etc/resolv.conf

and an /etc/unbound/unbound.conf file that looks perhaps like:

server:
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow
    access-control: ::1/128 allow
    # The following line will configure unbound to perform cryptographic
    # DNSSEC validation using the root trust anchor.
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    tls-cert-bundle: "/etc/ssl/certs/ca-certificates.crt"

remote-control:
    control-enable: yes
    # By default the control interface is 127.0.0.1 and ::1 on port 8953;
    # it is possible to use a unix socket too.
    control-interface: /run/unbound.ctl

forward-zone:
    name: "."
    forward-tls-upstream: yes
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
    forward-addr: 1.0.0.1@853#cloudflare-dns.com

(Unfortunately my ISP is shitty, and doesn’t yet give me an IPv6 address.)

Looking at https://1.1.1.1/help – I do sometimes see that ‘DNS over TLS’ is “yes”… so I guess something is right; annoyingly I don’t see anything useful from unbound’s stats (unbound-control stats) to show it’s done a secure query.
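
The closest I’ve got is eyeballing the general counters – a rough sketch (stat names as of recent unbound releases; you need access to the control socket configured above):

# Show query / cache counters without resetting them:
unbound-control stats_noreset | grep -E 'total\.num\.(queries|cachehits|cachemiss)'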

“unbound-host” (another Debian package) will helpfully tell you whether a lookup was done ‘securely’ or not – e.g.

$ unbound-host google.com -D -v
google.com has address 142.250.178.14 (insecure)
google.com has IPv6 address 2a00:1450:4009:815::200e (insecure)
google.com mail is handled by 10 smtp.google.com. (insecure)

which seems a little odd to me (I’d have thought Google would support DNSSEC), but some domains do work – e.g.

$ unbound-host mythic-beasts.com -D -v
mythic-beasts.com has address 93.93.130.166 (secure)
mythic-beasts.com has IPv6 address 2a00:1098:0:82:1000:0:1:2 (secure)
mythic-beasts.com mail is handled by 10 mx1.mythic-beasts.com. (secure)
mythic-beasts.com mail is handled by 10 mx2.mythic-beasts.com. (secure)
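
As a sanity check that validation is actually being enforced (rather than everything just coming back ‘insecure’), a deliberately broken zone should fail outright – a sketch using the well-known dnssec-failed.org test domain:

# With DNSSEC validation active this lookup should fail / be reported as bogus,
# not come back as "insecure":
unbound-host dnssec-failed.org -D -v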

(re)building varnish modules

I’m using Varnish 6.0 LTS in some places, and need a way to rebuild dependent modules, which seem to need recompiling even for a minor feature release (e.g. 6.0.1 to 6.0.2).

I use the dynamic (DNS routing), var and vsthrottle vmods.

Firstly, here’s a Dockerfile –

FROM debian:buster as builder

ARG VARNISH_VERSION=6.0.8-1~buster

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get -qy update && \
    apt-get -qy install eatmydata apt-transport-https lsb-release ca-certificates curl gnupg wget && \
    apt-get clean

# Pin both varnish and varnish-dev to the requested version
# (apt preferences stanzas must be separated by a blank line).
RUN echo "\
Package: varnish\n\
Pin: version ${VARNISH_VERSION}\n\
Pin-Priority: 1001\n\
\n\
Package: varnish-dev\n\
Pin: version ${VARNISH_VERSION}\n\
Pin-Priority: 1001\n\
" >> /etc/apt/preferences.d/varnish

RUN echo "deb https://packagecloud.io/varnishcache/varnish60lts/debian/ buster main" > /etc/apt/sources.list.d/varnish.list

RUN wget -qO /tmp/varnish.gpg https://packagecloud.io/varnishcache/varnish60lts/gpgkey && \
    apt-key add /tmp/varnish.gpg && \
    apt-get -q update && \
    eatmydata -- apt-get -qy install varnish varnish-dev automake libtool make libncurses-dev pkg-config python3-docutils unzip libgetdns10 libgetdns-dev

RUN apt-cache policy varnish

WORKDIR /tmp

RUN wget -qO /tmp/varnish.zip https://github.com/varnish/varnish-modules/archive/refs/heads/6.0.zip && \
    unzip /tmp/varnish.zip && \
    cd varnish-modules-6.0 && \
    bash bootstrap && \
    ./configure --disable-dependency-tracking && \
    make && \
    make check && \
    make install 

RUN wget -qO /tmp/dynamic.zip https://github.com/nigoroll/libvmod-dynamic/archive/refs/heads/6.0.zip && \
    unzip /tmp/dynamic.zip && \
    cd libvmod-dynamic-6.0 && \
    bash autogen.sh && \
    bash configure && \
    make && \
    make install


FROM debian:buster
    
WORKDIR /srv/export
COPY --from=builder /usr/lib/varnish/vmods/libvmod_dynamic.so /srv/export/
COPY --from=builder /usr/lib/varnish/vmods/libvmod_proxy.so /srv/export/
COPY --from=builder /usr/lib/varnish/vmods/libvmod_var.so /srv/export/
COPY --from=builder /usr/lib/varnish/vmods/libvmod_vsthrottle.so /srv/export/
COPY --from=builder /usr/lib/varnish/vmods/libvmod_header.so /srv/export/

and then I copy the files out of that build pipeline (dare I call it that?) with this shell script:

#!/bin/bash

set -eux

# Build a new set of varnish modules.

# Each version of varnish needs its own build of some modules - moving from e.g. varnish 6.0.7~1-stretch to 6.0.8~1-stretch
# isn't possible without these modules being rebuilt.

[ -d $(pwd)/tmp ] && rm -Rf $(pwd)/tmp

docker build --pull -f Dockerfile -t builder .

mkdir tmp

docker run -v $(pwd)/tmp:/srv/tmp -ti builder bash -c 'cp /srv/export/* /srv/tmp'

Then it’s just a case of running ‘build.sh’ and waiting… and you’ll find the files you want in ‘tmp’.
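
From there it’s a manual copy onto the varnish box – a sketch, assuming the target server keeps its vmods in the same place as the builder image:

# Copy the freshly built modules into place and restart varnish
# (back up the old .so files first if you're nervous):
sudo cp tmp/libvmod_*.so /usr/lib/varnish/vmods/
sudo systemctl restart varnish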

Using hitch with varnish on Debian Jessie

I ended up needing to install hitch on a server recently, so the https:// traffic could be routed through Varnish (along with the existing ‘http’ stuff) for performance reasons.

The server only runs WordPress sites, so there are WordPress specific things in the Varnish configuration (vcl) file below.

Versions: Varnish 5.2, Hitch 1.4.4, Apache 2.4 and Debian Jessie.


Checking varnish configuration syntax

If you’ve updated your varnish server’s configuration, there doesn’t seem to be an equivalent of ‘apachectl configtest’ for it, but you can do:

varnishd -C -f /etc/varnish/default.vcl

If everything is correct, varnish will then dump out the generated configuration. Otherwise you’ll get an error message pointing you to a specific line number.
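
Since varnishd exits non-zero when the VCL fails to compile, the same check drops neatly into a deploy script – a small sketch (the reload step is an assumption about how you run varnish; adjust to your setup):

# Only reload varnish if the new VCL actually compiles:
if varnishd -C -f /etc/varnish/default.vcl > /dev/null; then
    systemctl reload varnish
else
    echo "VCL failed to compile - not reloading" >&2
fi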

Varnish + Zope – Multiple zope instances behind a single varnish cache

I run multiple Zope instances on one server. Each Zope instance listens on a different port (localhost:100xx). Historically I’ve just used Apache as a front end which forwards requests to the Zope instance.

Unfortunately there are periods of the year when one site gets a deluge of requests (for example; when hosting a school site, if it snows overnight, all the parents will check the site in the morning at around about 8am).

Zope is not particularly quick on its own – Apache’s “ab” reports that a dual core server with plenty of RAM can manage about 7-14 requests per second – which isn’t that many when you consider each page on a Plone site will have a large number of dependencies (CSS/JS/PNGs etc.).

Varnish is a reverse HTTP proxy – meaning it sits in-front of the real web server, caching content.

So, as I’m using Debian Lenny….

  1. apt-get install -t lenny-backports varnish
  2. Edit /etc/varnish/default.vcl
  3. Edit Apache virtual hosts to route requests through varnish (rather than directly to Zope)
  4. I didn’t need to change /etc/default/varnish.

In my case there are a number of Zope instances on the same server, but I only wanted to have one instance of varnish running. This is possible – but it requires me to look at the URL requested to determine which Zope instance to route through to.

So, for example, SiteA runs on a Zope instance on localhost:10021/sites/sitea. My original Apache configuration would contain something like :

<IfModule mod_rewrite.c>
   RewriteEngine on
   RewriteRule ^/(.*) http://127.0.0.1:10021/VirtualHostBase/http/www.sitea.com:80/sites/sitea/VirtualHostRoot/$1 [L,P]
 </IfModule>

To use varnish, I’ll firstly need to tell Varnish how to recognise requests for sitea (and other sites), so it can forward a cache miss to the right place, and then reconfigure Apache – so it sends requests into varnish and not directly to Zope.

So, firstly, in Varnish’s configuration (/etc/varnish/default.vcl), we need to define the different backend servers we want varnish to proxy/cache. In my case they’re on the same server –

backend zope1 {
    .host = "127.0.0.1";
    .port = "10021";
}

backend zope2 {
    .host = "127.0.0.1";
    .port = "10022";
}

Then, in the 'sub vcl_recv' section, use logic like:

if (req.url ~ "/sites/sitea/VirtualHostRoot") {
    set req.backend = zope1;
}
if (req.url ~ "/siteb/VirtualHostRoot") {
    set req.backend = zope2;
}

With the above in place, I can now just tell Apache to rewrite Sitea to :

RewriteRule ^/(.*) http://127.0.0.1:6081/VirtualHostBase/http/www.sitea.com:80/sites/sitea/VirtualHostRoot/$1 [L,P]

Instead… and now we’ll find that our site is much quicker 🙂 (This assumes your varnish listens on localhost:6081.)
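
A quick way to convince yourself varnish is actually answering on that port and routing to the right backend is to request the rewritten URL directly – a sketch reusing the sitea example above (the X-Varnish header is added by varnish itself):

# Ask varnish for sitea's rewritten URL and inspect the response headers:
curl -sI 'http://127.0.0.1:6081/VirtualHostBase/http/www.sitea.com:80/sites/sitea/VirtualHostRoot/' \
  | grep -Ei 'x-varnish|age|server'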

There are a few additional snippets I found – in the vcl_fetch { … } block, I’ve told Varnish to always cache items for 30 seconds, and also to overwrite the default Server header given out by Apache etc., namely:

sub vcl_fetch {

    # ..... <snip> <snip>

    # force minimum ttl for objects
    if (obj.ttl < 30s) {
        set obj.ttl = 30s;
    }

    # ... <snip> <snip>

    unset obj.http.Server;
    set obj.http.Server = "Apache/2 Varnish";

    return (deliver);
}

I'm happy anyway. :)
Use 'varnishlog', 'varnishtop' and 'varnishhist' to monitor varnish.
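
For example, to get a live view of the most requested URLs (RxURL is the tag name in the Varnish 2.x era used here; later versions renamed it to ReqURL):

# "top"-style view of the most requested URLs:
varnishtop -i RxURL
# Raw shared-memory log and a response-time histogram:
varnishlog
varnishhist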