
The Pomodoro Technique – Scrum in the small

Over the past month I’ve been experimenting with the pomodoro technique of time management, with great success.

The technique is surprisingly simple, yet I’ve found it contains a wealth of physical and emotional benefits. For context: I’m using it as a programmer on an agile scrum team, and I typically program using TDD techniques. That said, I don’t see why it wouldn’t be applicable to most “desk” based jobs.

A pomodoro is a unit of focused, uninterrupted time; measured by an egg timer. For me, 25 minutes works well.

At the beginning of my work day, I write a collection of tasks that I think I can achieve during the day onto a fresh piece of paper (my todo list). I estimate how long each task will take in units of pomodoros, and next to each task I draw a number of boxes, one for each pomodoro unit. I make sure not to plan more pomodoro units than I achieved yesterday, and I try to estimate tasks based on how long similar tasks actually took me in the past.

Then I wind up my egg timer, place it visibly on my desk and begin the first task. The ritual of winding up the timer, placing it down and hearing it tick helps me to drop into the zone of full concentration – and lets my team know that they shouldn’t interrupt me.
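
If you’d rather not buy an egg timer, a tiny shell function can stand in for it. This is a sketch of my own (the function name and default length are my choices, not part of the technique):

```shell
# Minimal pomodoro timer: announce the start, sleep, then "ring".
# Usage: pomodoro [minutes]   (defaults to 25)
pomodoro() {
    mins=${1:-25}
    echo "Pomodoro started at $(date +%H:%M) for ${mins} minute(s)"
    sleep $(( mins * 60 ))
    echo "Brrriiinng! Pomodoro up - finish the current test and stop."
}
```

Run `pomodoro 25 &` and get on with the task; the shell “rings” in your terminal when the unit is up.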

Brrrriiinng. Pomodoro up: finish the current test and stop. Cross out one of the boxes on my todo list. Get up and leave my desk: stretch, drink some water, focus on something far away to relax the eyes, and go and speak to anyone who came past during my pomodoro and was waved away.

Then back to the desk, reassess which is now the most important task to get on with, and start the next pomodoro.

At the end of the day I transcribe the results of my todo list onto a records sheet, update our project management software (VersionOne), and leave, satisfied that I achieved what I set out to do.

I’ve found that running my day like this greatly increases my job satisfaction & efficiency.

Firstly, I’m breaking my addiction to hopium, and no longer setting myself up to fail every day. I used to live in a lala land called “I have 8 hours of productive work time each day”. Empirical reality shows that I usually do 5 – 8 pomodoro units a day – so much more like 3 – 4 hours. The rest gets gobbled up by meetings, emails and conversations. So it’s no wonder that I used to achieve half of what I thought I would each day, and left work feeling disappointed.

Secondly, having a forced reset every 25 mins really helps me to stop falling down rabbit holes. I’ll often be trying to solve a problem with a specific technique that just isn’t working, and if I’m not careful I can spend a whole afternoon bashing my head against a wall. With the forced breaks, I’ll often find that when I sit back down to the problem, I’ll have a flash of inspiration for a much simpler way to solve it, or realise that I don’t even need to solve it in the first place!

Thirdly, being reminded to get away from my desk frequently really helps physically. I’ve experienced much less “mouse shoulder” and dry eyes.

The technique is also really helpful when pairing, for keeping meetings from rambling, for staying focussed on one task (rather than having to check email or twitter every 10 seconds), and for getting going on a large, daunting task.

If you struggle with hopium like me, I’d really encourage you to give the Pomodoro technique a try for 2 weeks, and let me know how you get on in the comments to this post.

Brrriiinng :)


How to uninstall a broken MSI

I’m busy creating an MSI installer package at work, and I’ve managed to get my system into a bit of a knot.

Basically, my custom action crashes on uninstall – so when I try to remove the broken MSI, it throws an error and rolls back the uninstall process.

AAARGH! How do I remove the broken MSI now that I’ve fixed the bug?

Fortunately MSKB supplies this helpful little tool:

Windows Installer CleanUp Utility

Simply run it, select the offending MSI, and it will forcibly remove any MSI registration from your system.

Got me out of a pickle; and will hopefully do the same for you.

Try/Catch for SQL!?

Thanks to Nick Sertis for this trick – who knew TSQL could do try/catch statements!

Very useful when you need to write data manipulation scripts for production databases.



        BEGIN TRY
            --Some SQL
        END TRY
        BEGIN CATCH
            -- Catch the errors on the inserts
            SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage
        END CATCH



For a software craftsman, reducing technical debt should be as much of a habit as typing

I was involved in an interesting group discussion with fellow craftsmen yesterday on Technical Debt at the 2009 Software Craftsmanship conference.

The question put to the group was: “How should a team make time to reduce technical debt?”

I was interested that there was a totally unanimous response: “You shouldn’t. You should be doing tiny pieces of technical debt reduction all the time.”

Previously I have advocated creating technical debt reduction stories and trying to schedule those into the iteration plan. People thought this was in principle the wrong strategy; and indeed, in my experience this approach doesn’t work.

The group felt that in general, tackling technical debt reduction through large scale refactorings was the wrong approach – rather, a craftsman should be making incremental improvements every time they touch the code.

Bob Martin’s Boy Scout Rule – check in your code a little cleaner than you checked it out – encapsulates this principle. It’s the little refactorings that you make – removing a tiny piece of duplication, changing a variable name to better reveal intent, extracting an expression into an intention-revealing method – that, over time, result in a clean, maintainable code base.
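
To make that concrete, here’s a hypothetical boy-scout-sized cleanup sketched in shell (the snippet and names are my own invention): a rename plus an extracted intention-revealing function.

```shell
# Before: a magic number and a name that reveals nothing
#   x=$(( $(date +%s) - 86400 ))

# After: the same expression, checked in a little cleaner than it was found
seconds_per_day=86400

yesterday_epoch() {
    echo $(( $(date +%s) - seconds_per_day ))
}

yesterday_epoch
```

Thirty seconds of work, and the next reader doesn’t have to puzzle over what 86400 meant.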

In a way, this is similar to applying the “Broken Windows Theory” to software development. The theory is that having a zero tolerance attitude towards the little things makes a huge impact on the so called “bigger” things.

It’s perhaps easier understood if you consider what happens if you don’t care about the little things. It’s about attitude: if I don’t care enough to clean up a messy bit of code, will my team mates care about a few broken tests? If it’s okay to have a few broken tests, then it’s probably okay to ignore some bugs. If it’s okay to ignore bugs, then who really needs to care about well-defined acceptance tests? And if the team doesn’t care about precise acceptance tests, why should the business care about unambiguous requirements? You get the picture.

It’s the little things, added up, that result in technical debt reduction.

Functions with side effects are just rude!

Today I fell into a trap when using a function that had a side effect – it unexpectedly changed an input parameter, causing a later statement to fail. Debugging took an age!

For example, consider the following function:

      string StringReplace(string haystack, string needle)

If this function is side-effect free, we can use it without fear like this:

        string menagerie = "cat,dog,bee,llama";
        string catFreeMenagerie = StringReplace(menagerie, "cat");
        string beeFreeMenagerie = StringReplace(menagerie, "bee");

        Assert.AreEqual(",dog,bee,llama", catFreeMenagerie);
        Assert.AreEqual("cat,dog,,llama", beeFreeMenagerie);

However, if StringReplace() had the side effect of also changing the passed-in haystack, then the second Assert would fail, because the first call would have unexpectedly changed menagerie before the second call read it.

Evans, in the DDD book, has quite a bit to say about this, arguing that having side-effect-free functions goes a long way towards making a supple design.

Side-effect-free functions also make testing & refactoring easier (less state to worry about, etc.).

Remember, a function that changes its parameters is rude, and should not be trusted!
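
The trap isn’t unique to any one language. Here’s a sketch of the same rudeness in shell (the function and variable names are my own invention): the “rude” version quietly clobbers a global that the caller still needs, while the pure version leaves its inputs alone.

```shell
menagerie="cat,dog,bee,llama"

# Rude: reads AND overwrites the global it was given - surprising!
rude_remove() {
    menagerie=$(printf '%s' "$menagerie" | sed "s/$1//")
    printf '%s\n' "$menagerie"
}

# Polite: returns a result and touches nothing else
pure_remove() {
    printf '%s\n' "$2" | sed "s/$1//"
}

pure_remove cat "$menagerie"   # prints ,dog,bee,llama; menagerie untouched
rude_remove bee                # prints cat,dog,,llama AND silently changes menagerie
printf '%s\n' "$menagerie"     # cat,dog,,llama - the input was clobbered
```

Any later code that expects `$menagerie` to still contain its original value will now fail, just like my second Assert did.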

PS: Eric the half a bee lyrics

Selenium gotcha – selenium.GetHtmlSource() returns processed HTML

Whilst writing some Selenium-based acceptance tests today, I bumped into a hair-pulling gotcha. Hopefully this post will save you from the same pain.

The test was to check whether some tracking tag javascript was being inserted into the page correctly or not.

I assumed that I could get the page source as it was delivered to the browser by calling selenium.GetHtmlSource(), and then check it for the javascript string I was expecting.

Unfortunately, GetHtmlSource is just a proxy for the browser’s DOM innerHTML property, and that returns the HTML after it has been preprocessed by the browser.

Turns out that preprocessing does a couple of funky things, including:

  • Changing line-endings (Firefox)
  • Changing capitalization (IE6)
  • Seemingly random removal / insertion of " & ' (IE6)

So, when I was expecting a string like this:

   var amPid = '206';
   var amPPid = '4803';
   if (document.location.protocol=='https:')

IE6 was presenting me with:

   var amPid = 206;
   var amPPid = 4803;
   if (document.location.protocol==https:)

A possible solution is to ignore case, whitespace and quotes when doing the comparison, with a helper method like this:

        /// Use this to compare strings to those returned from selenium.GetHtmlSource for an Internet Explorer instance
        /// (IE6 seems to change case and inclusion of quotes, especially for Javascript.)
        private static void AssertStringContainsIgnoreCaseWhiteSpaceAndQuotes(string expected, string actual)
        {
            string expectedClean = Regex.Replace(expected, @"\s", "").ToLower().Replace("\"", "").Replace("'", "");
            string actualClean = Regex.Replace(actual, @"\s", "").ToLower().Replace("\"", "").Replace("'", "");
            Assert.IsTrue(actualClean.Contains(expectedClean),
                string.Format("Expected string \n\n{0} \n\nis not contained within \n\n{1}", expected, actual));
        }

It was the line endings that really floored me; because they were automatically normalized/corrected by my test runner when displaying the error. Aaargh!

Apache2 on Ubuntu 8.04LTS; restrict access to PAM authenticated users

I have a couple of static pages that I want to restrict access to.

I don’t want to manage another set of usernames & passwords, so I’d like apache2 to authenticate against the standard users on my system, via PAM.

To get this to work, you need to install and configure mod_auth_pam and mod_auth_shadow:

aptitude install libapache2-mod-auth-pam libapache2-mod-auth-shadow

Ensure the www-data user is part of the shadow group, so apache2 can read the password hashes:

usermod -a -G shadow www-data

And set up the relevant virtual host – these directives go inside the <Directory> or <Location> block that you want to protect:


                AuthPAM_Enabled On
                AuthShadow on
                AuthPAM_FallThrough Off
                AuthBasicAuthoritative Off
                AuthType Basic
                AuthName "Restricted to group: sysadmins"
                AuthUserFile /dev/null
                Require group sysadmins

Restart apache, and you’re done!

Self Cert SSL certificate for Apache2 on Ubuntu 8.04LTS

Generate a self cert certificate:
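
One way to do this is with openssl; this is a sketch (the 365-day validity and 2048-bit key size are my choices, and the CN matches the ServerName used below – adjust it to your own hostname):

```shell
# Generate a self-signed certificate and key (no passphrase on the key,
# so apache can start unattended). Adjust the CN to your server's hostname.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout server.key -out server.crt \
    -subj "/CN=webangle-www1.everyangle.co.uk"
```

Then move server.crt to /etc/ssl/certs/ and server.key to /etc/ssl/private/ so they match the SSLCertificateFile and SSLCertificateKeyFile directives below.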


Create a new virtual host (you can only have one SSL virtual host per IP):

sudo cp /etc/apache2/sites-available/default /etc/apache2/sites-available/ssl

Edit ssl so that it looks like this:

NameVirtualHost *:443
<VirtualHost *:443>

        ServerName webangle-www1.everyangle.co.uk
        ServerAdmin [email protected]

        DocumentRoot /var/www/

        SSLEngine on
        SSLOptions +StrictRequire

        SSLCertificateFile /etc/ssl/certs/server.crt
        SSLCertificateKeyFile /etc/ssl/private/server.key

</VirtualHost>

Finally, if you want to force all traffic to a certain folder (e.g. /phpmyadmin) through SSL, add the following to /etc/apache2/sites-available/default:

#Redirect traffic to /phpmyadmin through https
        RewriteEngine   on
        RewriteCond     %{SERVER_PORT} ^80$
        RewriteRule     ^/phpmyadmin(.*)$ https://%{SERVER_NAME}/phpmyadmin$1 [L,R]

Enable it:

sudo a2ensite ssl
sudo /etc/init.d/apache2 reload

Automount remote filesystem over SSH

Previously I posted on how I back up my server’s data to rsync.net’s remote storage.

A convenient way to access that remote storage is to mount it over sshfs:

sudo aptitude install sshfs
mkdir /mnt/sshfs
mkdir /mnt/sshfs/rsync.net
sshfs **username**@ch-s011.rsync.net: /mnt/sshfs/rsync.net

Now, test that you can access /mnt/sshfs/rsync.net, and copy a few files to your remote storage. If all works well, the next step is to have sshfs connect automatically whenever we try to access the directory.

First, unmount:

fusermount -u /mnt/sshfs/rsync.net

Then, install autofs, and edit the config file:

sudo aptitude install autofs
sudo vi /etc/auto.master

Add the following line:

/mnt/sshfs /etc/auto.sshfs --timeout=30,--ghost


Then create the sshfs map file:

sudo vi /etc/auto.sshfs

And add the following entry:

rsync.net -fstype=fuse,rw,nodev,nonempty,noatime,allow_other,max_read=65536 :sshfs#**username**@ch-s011.rsync.net:


And finally, restart autofs:

sudo /etc/init.d/autofs restart


Now, when you cd into /mnt/sshfs/rsync.net, after a short delay you will automatically be connected to the remote filesystem over SSH. After 30 seconds of inactivity, the connection will be closed.