bash – escaping variables for use within commands

Escaping quotes within variables is always painful in bash (somehow) – e.g.

foo"bar

and it’s not obvious that you’d need to write e.g.

"foo"\""bar"

(at least to me).
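For reference, a few equivalent ways of getting a literal double quote into a variable (plain bash, nothing exotic):

FOO='foo"bar'      # single quotes: no escaping needed
FOO=foo\"bar       # backslash escape, unquoted
FOO="foo\"bar"     # backslash escape inside double quotes
echo "$FOO"        # prints: foo"bar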

Thankfully, a built-in bit of bash magic can do the escaping for you.

In my case, I need to pass a ‘PASSWORD’ variable through to run within a container. The PASSWORD variable needs escaping so it can safely contain things like ; or quote marks (" or ').

e.g. docker compose run app /bin/bash -c "echo $PASSWORD > /some/file"

or e.g. ssh user@server "echo $PASSWORD > /tmp/something"

The fix is to use the ${PASSWORD@Q} variable syntax – for example:

#!/bin/bash

FOO="bar'\"baz"

ssh user@server "echo $FOO > /tmp/something"

This will fail with something like: bash: -c: line 1: unexpected EOF while looking for matching `''

The shell at the remote end sees echo bar'"baz and expects the quote mark to be closed.

So using the @Q magic –

ssh user@server "echo ${FOO@Q} > /tmp/something"

which will result in /tmp/something containing bar'"baz, which is correct.
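To see what the expansion actually produces, a quick check (the exact quoting style may vary a little between versions, and @Q needs bash 4.4 or newer):

FOO="bar'\"baz"
echo "${FOO@Q}"
# prints something like: 'bar'\''"baz'
# i.e. a fully shell-quoted copy of $FOO that a remote shell can safely re-parse

And the same idea applied back to the original PASSWORD / docker case would look something like:

docker compose run app /bin/bash -c "echo ${PASSWORD@Q} > /some/file"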

See also https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html#Shell-Parameter-Expansion

Bash / MySQL queries…

I “woke up” and realised I often do this wrong (too many connections / individual queries to MySQL) when the query is returning simple values (single-word-like things) for each field.

By way of example, a “before” query:

MYSQL="mysql -NB my_database"
IDENTIFIER=some_value

# Each value is fetched with its own mysql connection/query.
# ($VAR1/$VAR2 can safely contain spaces etc in this version.)
VAR1=$(echo "SELECT field1 FROM some_table WHERE some_key = '${IDENTIFIER}'" | $MYSQL)
VAR2=$(echo "SELECT field2 FROM some_table WHERE some_key = '${IDENTIFIER}'" | $MYSQL)
....

And a nicer way might look more like:

#!/bin/bash
# Chuck output from a query into an array (retrieve many fields at once)
# coalesce( ... ) copes with a field that may be null.

BITS=( $(echo "SELECT coalesce(field1,200) as field1, field2, field3 FROM some_table WHERE some_key = '${IDENTIFIER}'" | $MYSQL ) )

# and then check we had enough values back
if [ ${#BITS[@]} -lt 3 ]; then
    echo "handle this error..."
fi
# This will not work if field1/field2/field3 contain spaces.
VAR1=${BITS[0]}
VAR2=${BITS[1]}
VAR3=${BITS[2]}
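If the fields themselves might contain spaces, one possible sketch (assuming the mysql -NB wrapper from above, so the output is one tab-separated row, and that the data contains no embedded tabs or newlines) is to read each column into its own variable rather than relying on word splitting:

#!/bin/bash
# Read one row of tab-separated output straight into named variables.
IFS=$'\t' read -r VAR1 VAR2 VAR3 < <(
    echo "SELECT coalesce(field1,200), field2, field3 FROM some_table WHERE some_key = '${IDENTIFIER}'" | $MYSQL
)

# crude check that we got all three columns back
if [ -z "$VAR3" ]; then
    echo "handle this error..."
fi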

Bash random number generation

Historically I’ve used $RANDOM as a random number source in bash — a bit like :

RAND=$(( $RANDOM % 10 )) 

when I’ve needed a random number out of 0,1,2,3,4,5,6,7,8 and 9

One problem with this is that $RANDOM itself is populated by the shell with a value between 0 and 32767 – and since 32768 isn't a multiple of 10, the % 10 result isn't perfectly evenly distributed (0–7 turn up fractionally more often than 8 and 9).

Finally, I discovered ‘shuf’ — usage like :

shuf -i 1-100 -n 1

-i RangeFrom-RangeTo
-n how many

So –

RAND=$(shuf -i 0-9 -n 1)

Sponge – Shell command

Today, my sed kung-foo seemed to be lacking, so I ended up having to split the sed command over a zillion lines…

Normally I’d do something like :

sed 's/foo/bar/g' tmp.txt > tmp2.txt
sed 's/fo2/blah/g' tmp2.txt > tmp3.txt

But this obviously gets painful after a while; a different approach would be to use sponge, where we can do:

sed 's/foo/bar/g' tmp.txt | sponge tmp.txt
sed 's/fo2/blah/g' tmp.txt | sponge tmp.txt

Whereby ‘sponge’ soaks up standard input and, only once there's no more, opens the output file and writes to it. This gets around the obvious problem that:

sed 's/foo/bar/g' tmp.txt > tmp.txt

doesn’t work because the shell opens (and overwrites) tmp.txt  before sed’s had a chance to do anything.
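(sponge comes from the moreutils package, if it's not already installed.) Multiple edits can also be rolled into a single pipeline this way – a small sketch:

sed -e 's/foo/bar/g' -e 's/fo2/blah/g' tmp.txt | sponge tmp.txt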