Squid 3.4.x with transparent SSL proxying support for Debian Wheezy

I needed a variant of Squid which supported transparent SSL interception (i.e. via iptables redirection) so I could log outgoing HTTPS requests without the client being aware.

The stock Wheezy variant doesn’t support SSL (see: Debian Bug Report).

Even after recompiling Wheezy’s squid3 it didn’t seem to work (perhaps my stupidity), so I ended up moving to the latest-and-greatest Squid (3.4.9 at the time of writing) and getting that to work. Brief notes follow.
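To give a flavour of the pieces involved, the setup looks roughly like this (a sketch only, not the full notes; the install prefix, interface name, CA certificate and ssl_db paths below are placeholders, and clients need to trust that CA for the interception to be silent):

# Build Squid 3.4.x with SSL support:
./configure --prefix=/opt/squid34 --enable-ssl --enable-ssl-crtd --with-openssl
make && make install

# Initialise the dynamic certificate cache directory:
/opt/squid34/libexec/ssl_crtd -c -s /opt/squid34/var/ssl_db

# squid.conf: an intercept port for redirected HTTPS traffic
https_port 3129 intercept ssl-bump generate-host-certificates=on cert=/opt/squid34/etc/proxyCA.pem
ssl_bump server-first all
sslcrtd_program /opt/squid34/libexec/ssl_crtd -s /opt/squid34/var/ssl_db -M 4MB

# iptables: redirect outbound HTTPS from clients to the intercept port
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 443 -j REDIRECT --to-ports 3129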


VirtualBox 4.2 VM autostart on Debian Squeeze & Wheezy

One new feature of VirtualBox 4.2 is support for auto-starting VMs on bootup of the host server (via init etc.). This means I can remove my hackish 'su - vbox -c "VBoxHeadless --startvm VMName &"' additions in /etc/rc.local, and the VMs will also hopefully be terminated gracefully on shutdown.

The docs/guides I could find online were a bit cryptic or incomplete, so here’s what I ended up doing:
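In outline it went something like this (a sketch reconstructed from the VirtualBox 4.2 manual rather than the full writeup; it assumes Oracle’s virtualbox-4.2 packages and a ‘vbox’ user owning the VMs):

# /etc/default/virtualbox:
VBOXAUTOSTART_DB=/etc/vbox
VBOXAUTOSTART_CONFIG=/etc/vbox/autostart.cfg

# /etc/vbox/autostart.cfg: which users may autostart VMs
default_policy = deny
vbox = {
    allow = true
}

# The autostart database directory must be group-writable for vboxusers:
chgrp vboxusers /etc/vbox
chmod 1775 /etc/vbox

# As the VM's owner, point VBoxManage at the database and flag the VM:
VBoxManage setproperty autostartdbpath /etc/vbox
VBoxManage modifyvm "VMName" --autostart-enabled on

# The bundled init script then starts/stops marked VMs on boot/shutdown:
/etc/init.d/vboxautostart-service start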


Migrating an ext3 filesystem to ext4 (Debian Squeeze)

Interestingly (well, perhaps not really) this is very easy.

In my case, I’m hoping the migration will lead to faster fsck times (currently an fsck takes about an hour each time the server crashes for whatever reason, which is somewhat excessive).

Here the filesystem is /dev/md0, mounted at /home; change the bits below as appropriate.
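Roughly, the steps are along these lines (a sketch only, not the full writeup; take backups first, and note that files created before the conversion keep their old non-extent layout):

# Unmount the filesystem first (use a rescue environment if it's the root fs):
umount /home

# Enable the main ext4 features on the existing ext3 filesystem:
tune2fs -O extents,uninit_bg,dir_index /dev/md0

# A full fsck is mandatory after changing the feature flags:
fsck.ext4 -fDC0 /dev/md0

# Change the fstab entry for /home from ext3 to ext4, then remount:
mount /home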

netstat --tcp -lp output not showing a process id

I often use ‘netstat --tcp -lpn’ to display a list of open ports on a server, so I can check things aren’t listening where they shouldn’t be (e.g. MySQL accepting connections from the world) and so on. Obviously I firewall boxes, but I like to have reasonable defaults in case the firewall decides to flush itself randomly or whatever.

Anyway, I ran ‘netstat --tcp -lpn’ and saw something like the following:

tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      3355/mysqld     
tcp        0      0 0.0.0.0:54283           0.0.0.0:*               LISTEN      -               
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1940/portmap

Now ‘mysqld’ looks OK, and so does portmap (well, I need it on this box). But what on earth was listening on port 54283, and why is there no process name/pid attached to it?

After lots of rummaging, and paranoia where I thought perhaps the box had been rooted, I discovered it was from an NFS mount (which explains the lack of a pid, as the socket is kernel-based).

lsof -i tcp:54283

That didn’t help either; unmounting the NFS filesystem was what identified the problem, and the entry went away.
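A couple of other things worth trying if you hit a similar pid-less listener (general suggestions, not part of the original investigation):

# ss gives much the same listener view on newer systems:
ss -ltnp

# If portmap/rpcbind is running, check whether the port belongs to a
# kernel RPC service (lockd, statd, NFS callbacks and so on):
rpcinfo -p

# ...and list any NFS mounts that might explain it:
mount -t nfs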

Checking varnish configuration syntax

If you’ve updated your varnish server’s configuration, there doesn’t seem to be an equivalent of ‘apachectl configtest’ for it, but you can do:

varnishd -C -f /etc/varnish/default.vcl

If everything is correct, varnish will then dump out the generated configuration. Otherwise you’ll get an error message pointing you to a specific line number.
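If you want to automate that check before reloading, a wrapper along these lines works (hypothetical; adjust the VCL path and init script name to taste):

#!/bin/sh
# Refuse to reload varnish if the VCL doesn't compile
if varnishd -C -f /etc/varnish/default.vcl > /dev/null; then
        echo "VCL compiles OK - reloading varnish"
        /etc/init.d/varnish reload
else
        echo "VCL has errors - not reloading" >&2
        exit 1
fi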

Automated snapshot backup of an Amazon EBS volume

I found the following Python script online, but it didn’t really work too well:

http://aws-musings.com/manage-ebs-snapshots-with-a-python-script/

(EBS being Amazon’s Elastic Block Store, i.e. the persistent volumes you attach to EC2 instances.)

I had to easy_install boto to get it to work; I’m not sure the Debian python-boto package in Lenny is up to date.

Anyway, $server now has:

from boto.ec2.connection import EC2Connection
from boto.ec2.regioninfo import RegionInfo

from datetime import datetime
import sys

# Substitute your access key and secret key here
aws_access_key = 'MY_AWS_ACCESS_KEY'
aws_secret_key = 'MY_AWS_SECRET_KEY'
# Change to your region/endpoint...
region = RegionInfo(endpoint='eu-west-1.ec2.amazonaws.com', name='eu-west-1')

if len(sys.argv) < 3:
    print "Usage: python manage_snapshots.py volume_id number_of_snapshots_to_keep description"     
    print "volume id and number of snapshots to keep are required. description is optional"
    sys.exit(1) 

vol_id = sys.argv[1] 
keep = int(sys.argv[2]) 
conn = EC2Connection(aws_access_key, aws_secret_key, region=region) 
volumes = conn.get_all_volumes([vol_id]) 
print "%s" % repr(volumes) 
volume = volumes[0] 
description = 'Created by manage_snapshots.py at ' + datetime.today().isoformat(' ') 
if len(sys.argv) > 3:
    description = sys.argv[3]

if volume.create_snapshot(description):
    print 'Snapshot created with description: ' + description

# Fetch all existing snapshots of this volume
snapshots = volume.snapshots()

# cmp-style comparator: order snapshots oldest-first by start time
def date_compare(snap1, snap2):
    if snap1.start_time < snap2.start_time:
        return -1
    elif snap1.start_time == snap2.start_time:
        return 0
    return 1

# Sort oldest-first, then delete everything beyond the newest 'keep' snapshots
snapshots.sort(date_compare)
delta = len(snapshots) - keep
for i in range(delta):
    print 'Deleting snapshot ' + snapshots[i].description
    snapshots[i].delete()

And then plonk something like the following in /etc/cron.daily/backup_ebs:

#!/bin/sh
for volume in vol-xxxx vol-yyyyy vol-zzzz
do
	/path/to/above/python/script.py $volume 7 "Backup of $volume on $(date +%F-%H:%M)"
done

This keeps 7 snapshots for each volume, with a time/date stamp in each description.
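One gotcha: run-parts only executes files in /etc/cron.daily that are executable (and whose names contain no dots), so remember:

chmod +x /etc/cron.daily/backup_ebs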