Don’t forget to defragment /home if you’re using BTRFS

As root (as a regular user it just won’t work):

btrfs filesystem defragment -r /home

You probably want to run that weekly.

I eventually noticed Thunderbird and phpStorm were being really slow and laggy … at which point I realised the cron job I had (as my non-root user) wasn’t working.

(Using filefrag /path/to/file you can see the number of extents change after defragmenting.)

MySQL update/write query analysis (query profiling)

Do you have a slow MySQL update/insert/delete query?

Obviously, for ‘SELECT’ queries you can prepend the query with “EXPLAIN ” – however that doesn’t work for the other query types (UPDATE/INSERT/DELETE).

So, one way to find out why the query is slow is to turn on MySQL’s profiling functionality, as in the following example:
Continue reading “MySQL update/write query analysis (query profiling)”
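In short, the idea is something like this (a rough sketch only – the connection details and the UPDATE query are invented, and it assumes a MySQL version that supports SHOW PROFILE):

<?php
// Rough sketch only - connection details and the UPDATE are invented.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'password');

// Enable per-query profiling for this session.
$pdo->exec('SET profiling = 1');

// Run the slow write query you want to analyse.
$pdo->exec("UPDATE some_table SET some_flag = 1 WHERE created_at < '2011-01-01'");

// List the profiled queries (Query_ID, Duration, Query).
foreach ($pdo->query('SHOW PROFILES') as $row) {
    printf("#%d %.6fs %s\n", $row['Query_ID'], $row['Duration'], $row['Query']);
}

// Break down where query 1 spent its time (updating, sending data, etc).
foreach ($pdo->query('SHOW PROFILE FOR QUERY 1') as $row) {
    printf("%-30s %.6fs\n", $row['Status'], $row['Duration']);
}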

Zend_Cache – automatic cache cleaning can be bad, mmkay?

$customer uses Zend_Cache in their codebase – and I noticed that every so often a page request would take ~5 seconds (for no apparent reason), while normally they take < 1 second …

Some rummaging and profiling with Xdebug showed that some requests looked like:

[Image: xdebug profiling output - note lots of zend_cache stuff]

Note how there are 25,000 or so calls for various Zend_Cache_Backend_File thingys (fetch meta data, load contents, flock etc etc).

This alternative rendering might make it clearer – especially when compared with the image afterwards:

[Image: zend cache dominating the call map]

while a normal request should look more like:

a "normal" request - note Zend_Cache is not dominating
a "normal" request - note Zend_Cache is not dominating

Zend_Cache has an ‘automatic_cleaning_factor’ frontend parameter, which defaults to 10 (i.e. roughly 1 in 10 write requests to the cache triggers a check for anything to garbage collect/clean). Since we’re nearly always writing something to the cache, this means about 10% of requests run the cleaning logic.

See http://framework.zend.com/manual/en/zend.cache.frontends.html.
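Presumably the first half of the fix is to turn the automatic cleaning off on the web-facing frontend, i.e. set automatic_cleaning_factor to 0 when building the cache – roughly like this (the lifetime and cache_dir values are made up):

<?php
// Rough sketch - lifetime and cache_dir are made up for illustration.
$frontendOptions = array(
    'lifetime'                  => 3600,
    'automatic_serialization'   => true,
    'automatic_cleaning_factor' => 0,   // never garbage collect during a web request
);
$backendOptions = array(
    'cache_dir' => '/tmp/zend_cache',
);
$cache_instance = Zend_Cache::factory('Core', 'File', $frontendOptions, $backendOptions);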


The cleaning is now run via a cron job that does something like:

$cache_instance->clean(Zend_Cache::CLEANING_MODE_OLD);
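i.e. a small standalone script invoked from crontab, roughly like this (the bootstrap include and bootstrap_cache() function are hypothetical – it just needs to build the same cache instance the application uses):

<?php
// cleanup_cache.php - run from cron (e.g. hourly).
// bootstrap_cache() is hypothetical: it should build and return the same
// Zend_Cache instance (frontend + file backend) that the application uses.
require_once '/path/to/bootstrap_cache.php';

$cache_instance = bootstrap_cache();

// Remove only entries whose lifetime has expired.
$cache_instance->clean(Zend_Cache::CLEANING_MODE_OLD);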


Late to the performance party

Everyone else probably already knows this, but $project is/was doing two queries on the MySQL database every time the end user typed in something to search on:

  1. one to get the data for a given range (SELECT x, y … LIMIT n OFFSET m, or whatever) and
  2. another to get the total count of records (SELECT count(field) ….).

This is all very well, until there is sufficiently different logic in each query that, when I deliberately set the offset in query #1 to 0 and the limit very high, the number of rows returned by the two queries doesn’t match (which leads to broken paging, for example).

Then I thought – surely everyone else doesn’t do a count query and then repeat it for the range of data they want back – there must be a better way… mustn’t there?

At which point I found:
http://forge.mysql.com/wiki/Top10SQLPerformanceTips
and
http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_found-rows

See also the comment at the bottom of http://php.net/manual/en/pdostatement.rowcount.php which gives a good enough example (Search for SQL_CALC_FOUND_ROWS)
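Roughly, the pattern is (the table, columns and search term below are invented; note that FOUND_ROWS() has to be read on the same connection, straight after the main query):

<?php
// Rough sketch - table, columns and search term are invented.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'password');

// One query: fetch the requested page AND ask MySQL to remember how many
// rows there would have been without the LIMIT.
$stmt = $pdo->prepare(
    'SELECT SQL_CALC_FOUND_ROWS id, title
       FROM articles
      WHERE title LIKE :term
   ORDER BY id
      LIMIT 20 OFFSET 0'
);
$stmt->execute(array(':term' => '%search%'));
$rows = $stmt->fetchAll();

// Same connection, straight afterwards: the total without the LIMIT.
$total = (int) $pdo->query('SELECT FOUND_ROWS()')->fetchColumn();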

A few modifications later, run unit tests… they all pass…. all good.

I also found some interesting code like :

$total = sizeof($blah);
if($total == 0) { … }
elseif ($total != 0) { …. }
elseif ($something) { // WTF? }
else { // WTF? }

(The WTF comments were added by me… and I did check that I wasn’t just stupidly tired and misunderstanding what was going on).

The joys of software maintenance.