A week of fire fighting (aka why you should <3 unit tests).

I feel like I’ve spent most of this week debugging some PHP code and writing unit tests for it. Thankfully, I think this firefighting exercise is nearly over.

Perhaps the alarm bells should have been ringing a bit louder in my head when it was implied that the code was being written in a quick-and-dirty manner (“as long as it works…”) and the customer kept adding requirements – to the extent that it’s no longer clear where the project ends.

“It’s really simple – you just need to get a price back from two APIs”

then they added in:

“Some customers need to see the cheapest, some should only see some specific providers …”

and then:

“We want a global markup to be applied to all quotes”

and then:

“We also want a per customer markup”

and so on….

And the customer didn’t provide us with any verified test data (e.g. for a quote consisting of X, it should cost A+B+C+D=Y).

The end result is an application which talks to two remote APIs to retrieve quotes. Users are asked at least 6 questions per quote (so there are a significant number of variations).

Experience made me slightly paranoid about editing the code base – I was worried I’d fix a bug in one pathway only to break another. On top of that, I initially had no real idea whether it was broken or not, because I didn’t know what the correct (£) answers were.

Anyway, to start with, it was a bit like this:

  • Deployment via some weird copy+paste process, due to Windows file permissions
  • No unit test coverage
  • No logging
  • Apparently finished
  • Apparently working (but the customer kept asking for more changes)

Now:

  • Deployment through git; stupid Windows file permissions fixed.
  • Merged in changes by the third-party graphics designer – it should be much easier to track any changes he makes in the future
  • ~80% test code coverage. I’ve had to modify the ‘real’ scripts to accept a new parameter which makes them read a local XML file (rather than hitting the remote API/service), so that the tests are reproducible (and quick to run) – there’s a rough sketch of this after the list
  • Logging in place, so there’s some traceability
  • Better error handling
  • Calculations manually checked and confirmed.
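
For what it’s worth, the local-XML-file switch looks roughly like this. The function name, URL and XML structure below are made up for illustration; the idea is simply that an optional fixture-file parameter bypasses the remote call:

    <?php
    // Sketch only: an optional $fixtureFile parameter lets the tests feed in a
    // saved XML response instead of hitting the remote quoting service.
    function getProviderQuote($postcode, $speed, $fixtureFile = null)
    {
        if ($fixtureFile !== null) {
            // Test mode: read a canned response from disk (fast and reproducible).
            $xml = file_get_contents($fixtureFile);
        } else {
            // Live mode: call the real remote API.
            $client = new Zend_Http_Client('https://api.example.com/quote');
            $client->setParameterGet(array('postcode' => $postcode, 'speed' => $speed));
            $xml = $client->request()->getBody();
        }

        $quote = simplexml_load_string($xml);
        return (float) $quote->price;
    }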

Interesting things I didn’t like about the code:

  • A premature attempt at dependency injection – the few functions that exist have a $db variable passed in, but often don’t use it.
  • There is significant duplication in the code base still.
  • The code didn’t (and still doesn’t really) support testing particularly well – I’m having to retrieve output through a Zend_Http_Client call (i.e. mimicking a browser), which means I can’t easily get code coverage stats. There’s a rough example of this after the list.
  • In some places a variable (e.g. $speed) was being set to 0 and then used to flag a special case (if $speed == 0). Giving the same variable multiple meanings makes it difficult to follow what’s going on – and is a pain when the customer asks for slightly different behaviour. Really a separate parameter should have been used; a small before/after sketch follows below.
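
To give an idea of the browser-mimicking tests – the URL, parameters and expected values here are invented, but the shape is accurate: the test drives the deployed script over HTTP, which is why coverage is hard to collect:

    <?php
    // Rough illustration of a test that hits the running script over HTTP.
    class QuotePageTest extends PHPUnit_Framework_TestCase
    {
        public function testQuoteIsCalculatedFromLocalFixture()
        {
            $client = new Zend_Http_Client('http://localhost/quote.php');
            $client->setParameterGet(array(
                'postcode' => 'AB1 2CD',
                'fixture'  => 'tests/fixtures/provider_a.xml',
            ));

            $response = $client->request();

            // Assert on the rendered output, since that's all we can see from here.
            $this->assertEquals(200, $response->getStatus());
            $this->assertContains('123.45', $response->getBody());
        }
    }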
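
And on the overloaded $speed variable, a minimal sketch of the kind of change I mean (function names and prices are made up):

    <?php
    // Before (roughly): $speed == 0 secretly meant "apply the special case",
    // so one variable carried two meanings.
    function calculateQuote($speed)
    {
        if ($speed == 0) {
            return 10.00; // special-case price
        }
        return 5.00 + (0.5 * $speed); // illustrative normal pricing
    }

    // After: the special case gets its own explicit parameter,
    // so $speed only ever means speed.
    function calculateQuoteExplicit($speed, $useSpecialPricing = false)
    {
        if ($useSpecialPricing) {
            return 10.00;
        }
        return 5.00 + (0.5 * $speed);
    }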
