
Fitnesse Smell – Executable Requirements that look like Scripts

Gojko (who wrote the Fitnesse book) has this interesting discussion on what makes a good acceptance test in Fitnesse.

http://gojko.net/2010/06/16/anatomy-of-a-good-acceptance-test/

His point seems to be that Fitnesse is a good tool for documenting specifications and continuously automating their validation. When your Fitnesse tests become “scripts” (which is how developers are trained to see the world), Fitnesse makes for a pretty crummy test execution environment – at that point, just use a unit test runner!

Interestingly, alternative tools – http://www.concordion.org, http://wiki.github.com/aslakhellesoy/cucumber/ and http://www.specflow.org/ – have arisen that deliberately limit the power of the “test description language”, to prevent acceptance tests from becoming script-like.

Interesting food for thought – I know that many of my own Fitnesse tests exhibit this code smell.
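To make the smell concrete, here is a sketch of my own (the fixture and column names are hypothetical, not an example from Gojko’s post). The script-like version has to be executed in your head before the intent emerges:

    |script|order page|
    |enter|quantity|100|
    |enter|price|10.50|
    |click|submit order|
    |check|status message|Order accepted|

The specification-like version states the business rule directly, and reads as documentation:

    |place order|
    |quantity|price|order accepted?|
    |100|10.50|yes|
    |-5|10.50|no|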


Why Pair Programming Doesn’t Reduce Productivity

The other day I was asked why pair programming doesn’t reduce productivity, and it’s taken me a few days to come up with this succinct answer:

because we’re building a system to release software changes rapidly over a long period of time, not typing more lines of code to reach some predefined goalpost

The purpose of a software team is to deliver a working software solution that solves customers’ problems over a long period of time.

This would be easy if:

  1. Customers knew what they wanted
  2. Developers knew how to deliver the features
  3. The world remained the same between the day the software was designed and the day it is released.

The trouble is that none of these are true. We have to guess at the right solution, get some real world experience with it, and then optimise when we know more about the problem domain (aka, after we have delivered the feature for the first time).

The way to do this better is to reduce the length of the feedback cycle (think 5 days, not 6 months), and grow a system that can rapidly and repeatedly deliver changes to the software over the life (years) of that software.

Pair programming contributes directly to growing this system by:

  • Facilitating communication about the architecture & design of the system, and ensuring everyone actually understands it
  • Reducing brittleness & bottlenecks caused by one person “owning” a core module
  • Improving consistency and adherence to common standards
  • Catching bugs at the cheapest time to fix them

Counterintuitively, it also tends to ensure that developers actually spend longer working on the software by:

  • Reducing “wander time”. You are less likely to get sidetracked into email, Facebook or some interesting blog article when pairing.
  • Reducing “stuck time”. Two perspectives on a problem give you twice as many solutions to try out.


To conclude, pair programming would be unproductive if developers had the perfect solution in their heads, and programming were just the task of typing it into the computer to release a single, perfect version of the software.

But in the real world we’re in the business of creating a system that can rapidly deliver changes to the software as it narrows in on, and adapts to, the best solution to the problem at hand. And at that, pair programming excels.


The Pomodoro Technique – Scrum in the small

Over the past month I’ve been experimenting with the Pomodoro Technique of time management, with great success.

The technique is surprisingly simple, yet I’ve found it brings a wealth of physical and emotional benefits. For some context: I’m using it as a programmer on an agile scrum team, and I typically program using TDD techniques. That said, I don’t see why it wouldn’t be applicable to most “desk” based jobs.

A pomodoro is a unit of focused, uninterrupted time; measured by an egg timer. For me, 25 minutes works well.

At the beginning of my work day, I write a collection of tasks that I think I can achieve during the day onto a fresh piece of paper (my todo list). I estimate how long each task will take in pomodoro units, and next to each task I draw one box per pomodoro unit. I make sure not to plan more pomodoro units than I achieved yesterday, and I try to estimate tasks based on how long similar tasks actually took me in the past.
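A day’s sheet ends up looking something like this (the tasks here are made up for illustration):

    Fix login timeout bug            [ ] [ ] [ ]
    Pair with Bob on order importer  [ ] [ ]
    Update acceptance tests          [ ]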

Then I wind up my egg timer, place it visibly on my desk and begin the first task. The ritual of winding up the timer, placing it down and hearing it tick helps me to drop into the zone of full concentration – and let my team know that they shouldn’t interrupt me.

Brrrriiinng. Pomodoro up; finish the current test and stop. Cross out one of the boxes on my todo list. Get up and leave my desk: stretch, drink some water, focus on something far away to relax the eyes, and go and speak to anyone who came past during my pomodoro and was waved away.

Then it’s back to the desk to reassess which is now the most important task to get on with, and start the next pomodoro.

At the end of the day I transcribe the results of my todo list back to a records sheet; update our project management software (VersionOne); and leave, satisfied that I achieved what I set out to do.

I’ve found that running my day like this greatly increases my job satisfaction & efficiency.

Firstly, I’m breaking my addiction to hopium, and no longer setting myself up to fail every day. I used to live in a lala land called “I have 8 hours of productive work time each day”. Empirical reality shows that I usually manage 5 – 8 pomodoro units a day – more like 2 – 3.5 hours of focused time. The rest gets gobbled up by meetings, emails and conversations. So it’s no wonder that I used to achieve half of what I thought I would each day, and left work feeling disappointed.

Secondly, having a forced reset every 25 mins really helps me to stop falling down rabbit holes. I’ll often be trying to solve a problem with a specific technique that just isn’t working, and if I’m not careful I can spend a whole afternoon bashing my head against a wall. With the forced breaks, I’ll often find that when I sit back down to the problem, I’ll have a flash of inspiration for a much simpler way to solve it, or realise that I don’t even need to solve it in the first place!

Thirdly, being reminded to get away from my desk frequently really helps physically. I’ve experienced much less “mouse shoulder” and dry eyes.

The technique is also really helpful when pairing, for keeping meetings from rambling, for staying focused on one task (rather than checking email or twitter every 10 seconds), and for getting going on a large, daunting task.

If you struggle with hopium like me, I’d really encourage you to give the Pomodoro Technique a try for 2 weeks, and let me know how you get on in the comments to this post.

Brrriiinng :)

Resources
www.pomodorotechnique.com


Functions with side effects are just rude!

Today I fell into a trap when using a function that had a side effect – it unexpectedly changed an input parameter, causing a later statement to fail. Debugging took an age!

For example, consider the following function:

      string StringReplace(string haystack, string needle)

If this function is side-effect free (say it simply removes the needle from the haystack), we can use it without fear, like this:

        string menagerie = "cat,dog,bee,llama";
        string catFreeMenagerie = StringReplace(menagerie, "cat");
        string beeFreeMenagerie = StringReplace(menagerie, "bee");

        Assert.AreEqual(",dog,bee,llama", catFreeMenagerie);
        Assert.AreEqual("cat,dog,,llama", beeFreeMenagerie);

However, if StringReplace() had the side effect of also changing the passed-in haystack, the second Assert would fail, because the first call would have silently removed “cat” from menagerie itself.
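In C# strings are immutable, so the rude variant of this trap needs a mutable argument – a list, say. Here is a minimal sketch of my own (not the original offending code) showing a rude, mutating function against a polite, side-effect-free one:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class MenagerieExample
    {
        // Rude: quietly mutates the caller's list as a side effect.
        public static List<string> RemoveRude(List<string> animals, string needle)
        {
            animals.RemoveAll(a => a == needle);   // the input parameter is changed!
            return animals;
        }

        // Polite: leaves the input untouched and returns a new list.
        public static List<string> Remove(IEnumerable<string> animals, string needle)
        {
            return animals.Where(a => a != needle).ToList();
        }

        public static void Main()
        {
            var menagerie = new List<string> { "cat", "dog", "bee", "llama" };

            var catFree = RemoveRude(menagerie, "cat");
            var beeFree = RemoveRude(menagerie, "bee");   // operates on an already-mangled list

            // We expected "cat,dog,llama", but the cat is long gone:
            Console.WriteLine(string.Join(",", beeFree)); // prints "dog,llama"
        }
    }

Swap RemoveRude for Remove and both calls see the original menagerie; the surprise disappears.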

Evans, in the DDD book, has quite a bit to say about this, arguing that side-effect-free functions go a long way towards making a supple design.

Side-effect-free functions also make testing & refactoring easier (there is less state to worry about).

Remember, a function that changes its parameters is rude, and should not be trusted!

PS: Eric the half a bee lyrics


Selenium gotcha – selenium.GetHtmlSource() returns processed HTML

Whilst writing some Selenium-based acceptance tests today, I bumped into a hair-pulling gotcha. Hopefully this post will spare you the same pain.

The test was to check whether some tracking tag javascript was being inserted into the page correctly or not.

I assumed that I could get the page source as it was delivered to the browser by calling selenium.GetHtmlSource(), and then check it for the javascript string I was expecting.

Unfortunately, GetHtmlSource is just a proxy for the browser’s DOM innerHTML property, and that returns the Html after it has been preprocessed by the browser.

Turns out that preprocessing does a couple of funky things, including:

  • Changing line-endings (Firefox)
  • Changing capitalization (IE6)
  • Seemingly random removal / insertion of " & ' (IE6)

So, when I was expecting a string like this:

<!--
   var amPid = "206";
   var amPPid = "4803";
   if (document.location.protocol == "https:")
...[snip]...

IE6 was presenting me with:

<!--
   var amPid = '206';
   var amPPid = '4803';
   if (document.location.protocol=='https:')
...[snip]...

A possible solution is to ignore case, whitespace and quotes when doing the comparison, with a helper method like this:

    /// <summary>
    /// Use this to compare strings to those returned from selenium.GetHtmlSource for an Internet Explorer instance
    /// (IE6 seems to change case and the inclusion of quotes, especially for Javascript).
    /// </summary>
    /// <param name="expected">The string we expect to find</param>
    /// <param name="actual">The actual HTML source returned by the browser</param>
    private static void AssertStringContainsIgnoreCaseWhiteSpaceAndQuotes(string expected, string actual)
    {
        string expectedClean = Regex.Replace(expected, @"\s", "").ToLower().Replace("\"", "").Replace("'", "");
        string actualClean = Regex.Replace(actual, @"\s", "").ToLower().Replace("\"", "").Replace("'", "");
        StringAssert.Contains(expectedClean, actualClean,
                              string.Format("Expected string \n\n{0}\n\nis not contained within\n\n{1}", expected, actual));
    }
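Usage is then straightforward (a sketch: here selenium is an initialised Selenium RC session, and the expected snippet is whatever you hope to find in the page):

    string expectedTag = "var amPid = \"206\";";
    AssertStringContainsIgnoreCaseWhiteSpaceAndQuotes(expectedTag, selenium.GetHtmlSource());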

It was the line endings that really floored me, because they were automatically normalized by my test runner when it displayed the error. Aaargh!


The Correlation between Schedule Pressure & Low Quality

Research suggests that

  • 40% of all software errors are caused by pressure on developers to complete quicker (Glass 2004)
  • Under extreme schedule pressure, code defects increase by 400% (Jones 2004)
  • Projects which aim to have the lowest number of defects also have the shortest schedules (Jones 2000)

This makes sense if you consider that good engineering practices are the first to leave the building under schedule pressure; most teams revert to quick & dirty hacks to get things implemented, without complete testing.

My personal opinion is that the only way to shorten development cycles is to reduce the feature set. It’s pleasing to see that the research seems to back this up.

When deciding which features to drop, I think it’s worth revisiting the business requirements that are driving a particular set of features. In many cases a simpler “design” will suffice: a fancy calendar widget could be replaced with a simple textbox; a little-used settings screen could be retired in favour of manually changing config files; or an overly complex but little-used workflow could be put on the back burner.

I maintain that a lot of “features” can be dropped, without actually impairing the business functionality of the system.

Just remember, whatever you do DON’T consider dropping testing or QA in an effort to meet your deadline – unless you want to guarantee that you will continue to miss all future deadlines until the project gets cancelled!


Announcing the TDD TestHelpers opensource project

Whenever I start working on a project, I invariably find myself writing a collection of TDD test helper methods. A quick survey of other TDDers reveals the same; and thus the birth of my latest opensource project, TestHelpers (http://code.google.com/p/testhelpers/).

The aim of the project is to centralise all those little test helper methods you end up creating into a useful assembly you can use to jumpstart your next project.  Things like:

  • Comparers
    • Generic object comparers
    • DataSet comparers
  • Test Data generators
    • Builder pattern
  • Automocking containers

For example, I’ve just added an “AssertValues” functor, which helps you check whether the values of two object instances are the same.

One area I keep using asserts like this is in integration tests; where I want to check that the objects I’m persisting to the database via my ORM actually end up in the database in a non-mangled form.  In this case, I new up entityA, persist it, reload it into entityB and then need to check that all the values in entityB are the same as those in entityA.

A standard Assert.AreEqual will fail, because entityA and entityB are different instances.  But, my helper method AssertValues.AreEqual will pass, because it checks the (serialized) string values of entityA and entityB.
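To illustrate the idea, a minimal serialization-based implementation could look like this (a sketch only – the code in the actual project may differ):

    using System.IO;
    using System.Xml.Serialization;
    using NUnit.Framework;

    public static class AssertValues
    {
        // Compares two object instances by value, by serializing each to a string.
        public static void AreEqual<T>(T expected, T actual)
        {
            Assert.AreEqual(Serialize(expected), Serialize(actual));
        }

        private static string Serialize<T>(T value)
        {
            var serializer = new XmlSerializer(typeof(T));
            using (var writer = new StringWriter())
            {
                serializer.Serialize(writer, value);
                return writer.ToString();
            }
        }
    }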

Here is another, simpler example to illustrate the concept.

    [TestFixture]
    public class StandardObjectsTests
    {
        public class StringContainer
        {
            public string String1 { get; set; }
            public string String2 { get; set; }
        }

        [Test]
        public void ObjectsWithSameValue_ShouldBeEqual()
        {
            var stringContainer1 = new StringContainer {String1 = "Test String1", String2 = "Test String 2"};
            var stringContainer2 = new StringContainer {String1 = "Test String1", String2 = "Test String 2"};

            Assert.AreNotEqual(stringContainer1, stringContainer2);

            AssertValues.AreEqual(stringContainer1, stringContainer2);
        }
    }

I’m sure you have a bunch of similar helper methods lying about your projects.

How about contributing them to the TestHelper project?


ALT.NET; London; 13 Sept 2008

Intro

Debate over what ALT.NET is; should it have a set of guiding principles like the Agile manifesto?

Continuous integration & deployment

There seemed to be 3 major areas where people encountered difficulties doing continuous integration & deployment:

  1. Configuration files
  2. DB schema migrations
  3. Data migrations

Best practice approaches discussed were:

Config files
  1. Make sure that your config files are small and contain only the config data that changes often (DB connection strings, file paths etc). Put all your “static” config data into separate files (DI injection config etc).
  2. Consider templated config files, where environment-specific values are injected during the deploy process.
  3. Keep all config in simple text files in source control.
DB schema migrations
  1. Migration techniques borrowed from Ruby on Rails – generate change scripts by hand or with tools like SQL Compare, then apply them using a versioning tool like dbdeploy.
DB data migrations
  1. Take a backup before the data migration.
  2. Ensure the app fails quickly if there is a problem, because if data has changed since deployment you cannot roll back.
  3. Consider apps upgrading themselves and running smoke tests on startup – refusing to run if there is a problem. This technique is used by established opensource projects such as WordPress, Drupal and Joomla.
Mentioned tools: TFS, Subversion, CC.NET, Jetbrains TeamCity, dbdeploy, SQL compare.
Acceptance testing
It seemed to me that the majority of pain experienced in this area results from a lack of a ubiquitous domain specific language:
  • Build a DSL incrementally during short iterations.  Gives you opportunity to refine, fill in gaps, and train whole team to use same language.
  • Without a DSL, acceptance testing via the UI becomes brittle: you end up specifying your tests at too low a level (click button A, then check for the result in cell B), rather than translating from acceptance tests in a higher-level DSL to specific UI components.
  • Consider prioritised tests – have a set of face-saving / smoke tests that always work, and ensure the major things are still working (is the company phone number correct? does the Submit order button still work?). Acceptance tests can be thrown away once they have served their function of evolving the design / team understanding.
  • The acceptance testing trio: developers test for success, so automated tests only cover the happy flow; you still need exploratory testing by someone with a testing mindset (what happens if you do weird stuff?), and that tester must have domain knowledge. The business defines what should happen – don’t force developers to make up business rules.
  • Ensure all layers of stack (tests, manuals, code, unit tests) use the same DSL language.
  • How do you get workable acceptance tests – see Requirements Workshops book
  • Short iterations – more focus, incremental specs, opportunity to discuss missing test examples.
  • The key is having a ubiquitous language encoded as a DSL (domain specific language) – it develops over time and enables automated acceptance tests.
  • Sign off against acceptance tests (Green Pepper tool – capture & approve acceptance tests)
  • Talk: The Yawning Gap of ?? doom – infoQ, Martin Fowler
  • Avoid describing these activities as “testing” – people avoid them because testing has low social status.
Mentioned tools:  White for Windows GUI testing
Domain driven design
  • Discussion around the difference between DDD, where we treat the domain concepts & actions as central; DB-centered design, where the data is central; and UI-centered design, where the screens are central.
  • Consensus was that the domain shouldn’t be tightly bound to the DB or the UI.
  • Ideas around passing DTO objects up to the view (UI, webservices etc), and passing change messages back from the view indicating how the domain should be changed (rather than passing back the whole DTO, where you don’t know what has changed).
BDD
  • Defined as Dan North’s Given, When, Then.
  • Is it any different from acceptance testing? Only in branding – BDD doesn’t have the word “testing” in it, which stops people switching off when discussing specifications.
  • BDD is writing failing acceptance tests first, before writing code.
  • Unit testing ensures that the code is built right; acceptance testing / BDD ensures that the right code is built.
  • The toolset is still immature. Fitnesse .NET & Java tooling is the most mature. Many BDD tools (other than Ruby’s RSpec) have been started and abandoned (NBehave, NSpec etc).
  • BDD is not about testing; it’s about communicating and automating the DSL. Be wary of implementing BDD in developer-only tools (e.g. NUnit), which prevents other team members (business, customer, testers) from accessing the specs.
  • Refactoring can break Fitnesse tests, because they aren’t part of the code base.
  • Executable specs (via acceptance tests) are the only way to ensure documentation / test suites stay up to date & trustworthy.
  • Agile is about surfacing problems early (rather than hiding them until it’s too late to address them). So when writing acceptance tests up front is difficult, that’s good – you are surfacing the communication problems early.
  • The real value is in building a shared understanding via acceptance criteria, rather than building an automated regression test suite.
  • Requirements workshops can degenerate into long boring meetings.  To mitigate this problem
Tools:  Ruby Rspec, JBehave, Twist, Green Pepper
Feedback
In the post-conference feedback everyone was overwhelmingly positive, and found the open spaces format very energising: fantastic sharing of real-world experiences, introductions to new approaches, nuggets of information, and great corridor conversations – a format that allows human interaction.
Next ALT.NET beers on 14th Oct.
Next ALT.NET group therapy in Jan 2009, with a larger venue.

Using Acceptance Criteria & Text based stories to build a common domain language between business and developer


Besides precisely pinning down functionality, writing text-based stories has another – and some would argue more important – benefit: developing a shared domain language between the business & developers.

A large part of developing a new software application is defining and codifying a business solution. To do this, both sides of the equation must be molded to fit the constraints of the other – the business process needs to be expressed in a precise manner that can be automated in software, and the software must be molded to fit the use cases of its users.

The mismatch between the way the business sees the solution and the way the developers view it becomes painfully obvious about halfway into the project, when you start trying to match the data fields labeled on the UI with what they are called in the database / object model.

I’ve worked on what should have been simple projects, where maintenance is an exercise in hair-pulling as you try to figure out what data input produces the weird output in a report.

The root problem is the lack of a shared domain language. Projects naturally evolve domain languages, and unless guided, you can guarantee that the language in the customer’s requirements spec and the language in the code base will diverge rapidly.

Sitting developers, testers and the customer together to produce written text user stories following Dan North’s classic BDD story structure goes a long way towards resolving this issue.

Talking through how functionality will work, and being forced to formalize it by writing things down, helps the domain understanding and language evolve naturally, influenced equally by the customer’s domain understanding and the constraints the developer must work within.

It’s vital that this is done before coding begins, for the following reasons:

  • All stakeholders have been indoctrinated in the same domain language
  • Names for domain concepts are at hand when the developer needs them, resulting in better named domain objects.
  • Both the developer and customer know exactly what functionality is expected; helping to keep both focused on solving the right problems.
  • Ongoing conversations are easier as the solution evolves. Evolving a shared language is difficult, and best done at the beginning of the project whilst everyone’s enthusiasm is high; with that hurdle out of the way, the temptation to just guess, or to devolve into an us-vs-them mentality, is greatly reduced.

During release planning, the high-level “As a [x] I want [Y] so that [Z]” is probably sufficient, with the “Given, When, Then” acceptance scenarios being fleshed out at the beginning of each sprint.
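For reference, a story in this format looks something like the classic ATM example from Dan North’s “What’s in a Story?” article:

    Story: Account holder withdraws cash

    As an account holder
    I want to withdraw cash from an ATM
    So that I can get money when the bank is closed

    Scenario: Account has sufficient funds
    Given the account balance is $100
    And the card is valid
    When the account holder requests $20
    Then the ATM should dispense $20
    And the account balance should be $80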

Specifying your functional requirements as text stories leads to some exciting opportunities:

  1. Your “unit” of work is immediately available, and understood by all. This makes prioritizing which stories to include, work on next, or drop much easier
  2. It’s possible to turn the stories into executable specifications.

The Ruby community has made the most progress in the latter opportunity, with their rSpec story runner.

Consider the possibilities of the following development practice:

  • The team begin by specifying text stories & acceptance criteria.
  • The testers turn this into an executable spec, with each step “pending”
  • The developers then work from the executable spec, coding the functionality to make each step pass one by one
  • When the customers receive a new version, the first thing they do is execute the stories, to see exactly which functionality has been implemented, and prove that all is still working as expected.

At any stage it’s possible to see how far along the team is (how many steps pass?); speculative complexity is reduced because the developers focus on only what the spec requires; and all the while a suite of regression tests is being built up!

