As a random, useful, console tool … try atuin for some magical shell history searching.
I’m too lazy to try and record a semi-useful demo of it, but it’s replaced my Ctrl+R history lookup and has a good search interface.
Escaping quotes within variables is always painful in bash (somehow) – e.g.
foo"bar
and it’s not obvious that you’d need to write e.g.
"foo"\""bar"
(at least to me).
Thankfully a bash built in magical thing can be used to do the escaping for you.
In my case, I need to pass a ‘PASSWORD’ variable through to a command run within a container. The PASSWORD variable needs escaping so it can safely contain things like ; or quote marks (" or ').
e.g. docker compose run app /bin/bash -c "echo $PASSWORD > /some/file"
or e.g. ssh user@server "echo $PASSWORD > /tmp/something"
The fix is to use the ${PASSWORD@Q} variable syntax – for example:
#!/bin/bash
FOO="bar'\"baz"
ssh user@server "echo $FOO > /tmp/something"
This will fail, with something like: bash: -c: line 1: unexpected EOF while looking for matching `''
This is because the shell at the remote end sees echo bar'"baz
and expects the quote mark to be closed.
So using the @Q magic –
ssh user@server "echo ${FOO@Q} > /tmp/something"
which will result in /tmp/something containing bar'"baz, which is correct.
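The same trick applies to the docker compose example from earlier – a minimal sketch (the app service name, the example password and /some/file are just the placeholders from above):
#!/bin/bash
# PASSWORD may contain ; or quote marks - @Q quotes it for the shell inside the container
PASSWORD='pa;ss"word'
docker compose run app /bin/bash -c "echo ${PASSWORD@Q} > /some/file"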
I decided to stop using my hacky Perl script for Postfix policyd stuff, as it’s been ages since I wrote any Perl, and instead switched to postscreen the other day.
Postscreen setup was fairly easy – there’s a load of config below.
Gotchas – Spamhaus doesn’t like you if your DNS queries go out via a public resolver (e.g. 8.8.8.8), so you need to restrict the return codes you accept with an =127.0.0.[2..11] suffix (as in the config below), otherwise its error responses get treated as listings.
It also logs quite a lot.
Current Postfix postscreen main.cf config :
postscreen_access_list = permit_mynetworks, cidr:/etc/postfix/postscreen_access.cidr
postscreen_dnsbl_threshold = 2
postscreen_dnsbl_sites = zen.spamhaus.org=127.0.0.[2..11]*2
    bl.spamcop.net*1
    b.barracudacentral.org=127.0.0.2*1
    bl.spameatingmonkey.net*1
    bl.mailspike.net*1
    tor.ahnl.org*1
    dnsbl.justspam.org=127.0.0.2*1
    bip.virusfree.cz*1
    spam.dnsbl.sorbs.net=127.0.0.6*1
postscreen_greet_action = enforce
postscreen_greet_wait = 5s
postscreen_greet_ttl = 2d
postscreen_blacklist_action = drop
postscreen_dnsbl_ttl = 2h
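Once main.cf is updated (the master.cf changes needed to actually enable postscreen aren’t shown here), a reload picks it up, and the mail log shows how chatty it is – e.g. on a Debian-ish box:
postfix reload
tail -f /var/log/mail.log | grep postscreen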
SMTP Auth whitelisting …
My server allows people to send outbound mail, authenticated, on port 25, but postscreen doesn’t seem to be aware of this when it runs, so such users can still be blocked if their IP happens to be on a DNS blacklist. They therefore need explicit whitelisting via a dovecot postlogin script (example below), which, if used, requires the postscreen_access_list to change to something like:
postscreen_access_list = permit_mynetworks
    cidr:/etc/postfix/postscreen_access.cidr
    mysql:/etc/postfix/mysql/check_mail_log.cf
and /etc/postfix/mysql/check_mail_log.cf looks like :
user = mail_log
password = something
hosts = 127.0.0.1
dbname = mail_log
query = SELECT 'permit' FROM mail_log WHERE ip_address = '%s' UNION SELECT 'dunno' LIMIT 1 ;
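A quick way to sanity-check that lookup (assuming the postfix-mysql map support is installed) is postmap – per the query above it should print permit for an IP that’s in the table and dunno otherwise; the address here is just an example:
postmap -q "203.0.113.5" mysql:/etc/postfix/mysql/check_mail_log.cf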
The dovecot config change(s) are – in /etc/dovecot/dovecot.conf
....
service pop3 {
  executable = pop3 postlogin
}
service imap {
  executable = imap postlogin
}
service postlogin {
  executable = script-login /etc/dovecot/postlogin.sh
  user = $default_internal_user
  unix_listener postlogin {
  }
}
....
and /etc/dovecot/postlogin.sh looks a bit like :
#!/bin/bash
# Record the authenticated user + IP so the postscreen mysql lookup above can whitelist them.
if [ "x${IP}" != "x" ]; then
  if [ ! "$IP" = "127.0.0.1" ]; then
    echo "INSERT INTO mail_log (username, ip_address) VALUES ('$USER', '$IP')" | mysql --defaults-extra-file=/etc/dovecot/mysql.cnf mail_log
  fi
fi
exec "$@"
exit 0
The /etc/dovecot/postlogin.sh will need to be executable.
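i.e. something like this (plus a dovecot restart/reload so the service changes above get picked up):
chmod +x /etc/dovecot/postlogin.sh
systemctl restart dovecot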
/etc/dovecot/mysql.cnf just looks like a normal MySQL cnf file –
[client]
user = mail_log
password = something
database = mail_log
The SQL schema for the mail_log table looks like:
CREATE TABLE `mail_log` (
`username` varchar(255) NOT NULL,
`ip_address` varchar(255) NOT NULL,
`dt` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
KEY `mlip` (`ip_address`(191)),
KEY `dt_idx` (`dt`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
Ideally I suppose you’d add a cron job to prune entries in mail_log older than a set time, and probably have a unique key on username with some sort of "INSERT INTO x ON DUPLICATE …" change to the postlogin.sh script above – roughly sketched below.
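Untested, and the unique key on username plus the 7-day retention period are just my assumptions:
# assumes a unique key has been added: ALTER TABLE mail_log ADD UNIQUE KEY uniq_username (username);
# postlogin.sh could then upsert rather than insert a new row per login:
echo "INSERT INTO mail_log (username, ip_address) VALUES ('$USER', '$IP') ON DUPLICATE KEY UPDATE ip_address = '$IP', dt = NOW()" | mysql --defaults-extra-file=/etc/dovecot/mysql.cnf mail_log
# and a daily cron job could prune anything older than, say, a week:
echo "DELETE FROM mail_log WHERE dt < NOW() - INTERVAL 7 DAY" | mysql --defaults-extra-file=/etc/dovecot/mysql.cnf mail_log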
I’m using Varnish 6.0 LTS in some places, and need a way to rebuild dependent modules … which seem to need recompiling even for a minor feature release (e.g. 6.0.1 to 6.0.2).
I use the dynamic (DNS routing), var and vsthrottle vmods.
Firstly, here’s a Dockerfile –
FROM debian:buster as builder
ARG VARNISH_VERSION=6.0.8-1~buster
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get -qy update && \
    apt-get -qy install eatmydata apt-transport-https lsb-release ca-certificates curl gnupg wget && \
    apt-get clean
RUN echo "\
Package: varnish\n\
Pin: version ${VARNISH_VERSION}\n\
Pin-Priority: 1001\n\
\n\
Package: varnish-dev\n\
Pin: version ${VARNISH_VERSION}\n\
Pin-Priority: 1001\n\
" >> /etc/apt/preferences.d/varnish
RUN echo "deb https://packagecloud.io/varnishcache/varnish60lts/debian/ buster main" > /etc/apt/sources.list.d/varnish.list
RUN wget -qO /tmp/varnish.gpg https://packagecloud.io/varnishcache/varnish60lts/gpgkey && \
    apt-key add /tmp/varnish.gpg && \
    apt-get -q update && \
    eatmydata -- apt-get -qy install varnish varnish-dev automake libtool make libncurses-dev pkg-config python3-docutils unzip libgetdns10 libgetdns-dev
RUN apt-cache policy varnish
WORKDIR /tmp
RUN wget -qO /tmp/varnish.zip https://github.com/varnish/varnish-modules/archive/refs/heads/6.0.zip && \
    unzip /tmp/varnish.zip && \
    cd varnish-modules-6.0 && \
    bash bootstrap && \
    ./configure --disable-dependency-tracking && \
    make && \
    make check && \
    make install
RUN wget -qO /tmp/dynamic.zip https://github.com/nigoroll/libvmod-dynamic/archive/refs/heads/6.0.zip && \
    unzip /tmp/dynamic.zip && \
    cd libvmod-dynamic-6.0 && \
    bash autogen.sh && \
    bash configure && \
    make && \
    make install
FROM debian:buster
WORKDIR /srv/export
COPY --from=builder /usr/lib/varnish/vmods/libvmod_dynamic.so /srv/export/
COPY --from=builder /usr/lib/varnish/vmods/libvmod_proxy.so /srv/export/
COPY --from=builder /usr/lib/varnish/vmods/libvmod_var.so /srv/export/
COPY --from=builder /usr/lib/varnish/vmods/libvmod_vsthrottle.so /srv/export/
COPY --from=builder /usr/lib/varnish/vmods/libvmod_header.so /srv/export/
and then I copy the files out of that build pipeline (dare I call it that?) with this shell script:
#!/bin/bash
set -eux
# Build a new set of varnish modules.
# Each version of varnish needs its own build of some modules - moving from e.g. varnish 6.0.7~1-stretch to 6.0.8~1-stretch
# isn't possible without these modules being rebuilt.
[ -d $(pwd)/tmp ] && rm -Rf $(pwd)/tmp
docker build --pull -f Dockerfile -t builder .
mkdir tmp
docker run -v $(pwd)/tmp:/srv/tmp -ti builder bash -c 'cp /srv/export/* /srv/tmp'
Then it’s just a case of running ‘build.sh’ and waiting …. and you’ll find the files you want in ‘tmp’.
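From there it’s a copy onto whatever is actually running Varnish – a minimal sketch, assuming the target keeps its vmods in the same /usr/lib/varnish/vmods path as the build image and that user@varnish-host is a placeholder for your server:
scp tmp/*.so user@varnish-host:/usr/lib/varnish/vmods/
ssh user@varnish-host 'systemctl restart varnish'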
I “woke up” and realised I often do this wrong … too many connections / individual queries to MySQL, when the query is only returning simple values (single-word-like things) for each field.
As an example, a ‘before’ query:
MYSQL="mysql -NB my_database"
IDENTIFIER=some_value
# Obviously $VAR1/$VAR2 can contain spaces etc in this example -
# but this is two separate round trips to MySQL for two fields.
VAR1=$(echo "SELECT field1 FROM some_table WHERE some_key = '${IDENTIFIER}'" | $MYSQL)
VAR2=$(echo "SELECT field2 FROM some_table WHERE some_key = '${IDENTIFIER}'" | $MYSQL)
....
And perhaps a nicer way might look more like :
#!/bin/bash
# Chuck output from a query into an array (retrieve many fields at once)
# coalesce( ... ) copes with a field that may be null.
BITS=( $(echo "SELECT coalesce(field1,200) as field1, field2, field3 FROM some_table WHERE some_key = '${IDENTIFIER}'" | $MYSQL ) )
# and then check we had enough values back
if [ ${#BITS[@]} -lt 3 ]; then
  echo "handle this error..."
fi
# This will not work if field1/field2/field3 contain spaces.
VAR1=${BITS[0]}
VAR2=${BITS[1]}
VAR3=${BITS[2]}
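If the fields might themselves contain spaces, one alternative (a sketch, still using the made-up field1/field2/field3 columns) is to split mysql’s tab-separated output with read:
# mysql -NB separates columns with tabs, so split on tab rather than any whitespace
IFS=$'\t' read -r VAR1 VAR2 VAR3 < <(echo "SELECT coalesce(field1,200), field2, field3 FROM some_table WHERE some_key = '${IDENTIFIER}'" | $MYSQL)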
I keep forgetting the syntax for these two things – curl and jq with slightly dynamic input to a service – so there’s a chance writing it here will help me remember.
Possibly of use/relevance for Elasticsearch or Debezium …
I have a TP-Link HS110 plug (probably identical to the HS100 … but I thought being able to query it through the app to find out energy usage would be neat …).
Anyway, it originally didn’t seem to let me schedule it through the app, so I dug around and wrote a crap shell script I can prod via cron.
Usage examples:
1. tplink.sh -u my@email.com -p mypassword -o off -> turns the first device found off.
2. tplink.sh -u my@email.com -p mypassword -o on -> turns the first device found on.
3. tplink.sh -u my@email.com -p mypassword -d TpLinkDeviceId -o on -> now for a specific device.
4. tplink.sh -u …. -p …. -d "?" -> dumps device list output …
5. tplink.sh -t tpLinktoken -d DeviceId -o on|off …
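And a crontab sketch for the “prod via cron” bit – the schedule, script path and credentials here are just made-up placeholders:
# /etc/cron.d/tplink - switch the plug on at 18:00 and off at 23:30
0 18 * * * someuser /usr/local/bin/tplink.sh -u my@email.com -p mypassword -o on
30 23 * * * someuser /usr/local/bin/tplink.sh -u my@email.com -p mypassword -o off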
Some alternative bash things … (avoiding unnecessary use of cat / awk / grep …)
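For instance – my guesses at the sort of substitutions meant, using plain bash instead of spawning extra processes:
# instead of: cat /etc/passwd | grep root
grep root /etc/passwd
# instead of: echo "$line" | awk -F: '{print $1}'
echo "${line%%:*}"
# instead of: echo "$string" | grep -q foo
[[ "$string" == *foo* ]] && echo matched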