
Functions with side effects are just rude!

Today I fell into a trap when using a function that had a side effect – it unexpectedly changed an input parameter, causing a later statement to fail. Debugging took an age!

For example, consider the following function:

      string StringReplace(string haystack, string needle)

If this function is side-effect free, we can use it without fear like this:

        string menagerie = "cat,dog,bee,llama";
        string catFreeMenagerie = StringReplace(menagerie, "cat");
        string beeFreeMenagerie = StringReplace(menagerie, "bee");

        Assert.AreEqual(",dog,bee,llama", catFreeMenagerie);
        Assert.AreEqual("cat,dog,,llama", beeFreeMenagerie);

However, if StringReplace() had the side effect of also changing the passed-in haystack, the second Assert would fail: the first call would have unexpectedly modified menagerie before the second call used it.
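
As an aside, in C# a plain string parameter can't actually be mutated in place (strings are immutable), so this trap usually bites with mutable reference types. Here's a minimal sketch of the rude vs polite versions using a List<string>; all the names here are mine, purely for illustration:

    using System.Collections.Generic;

    public static class Menageries
    {
        // Rude: quietly mutates the caller's list as a side effect.
        public static List<string> RemoveAnimalRude(List<string> menagerie, string animal)
        {
            menagerie.Remove(animal);   // the caller's list has now changed!
            return menagerie;
        }

        // Polite (side-effect free): works on a copy, leaving the input untouched.
        public static List<string> RemoveAnimal(List<string> menagerie, string animal)
        {
            var copy = new List<string>(menagerie);
            copy.Remove(animal);
            return copy;
        }
    }

With the rude version, a second removal operates on a list that has already lost "cat", and the equivalent of the second Assert above fails.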

Evans, in the DDD book, has quite a bit to say about this, arguing that side-effect-free functions go a long way towards making a supple design.

Side-effect-free functions also make testing and refactoring easier (less state to worry about, etc.).

Remember, a function that changes its parameters is rude, and should not be trusted!

PS: Eric the half a bee lyrics


Selenium gotcha – selenium.GetHtmlSource() returns processed HTML

Whilst writing some Selenium-based acceptance tests today, I bumped into a hair-pulling gotcha. Hopefully this post will save you from the same pain.

The test was to check whether some tracking-tag JavaScript was being inserted into the page correctly.

I assumed that I could get the page source as it was delivered to the browser by calling selenium.GetHtmlSource(), and then check it for the JavaScript string I was expecting.

Unfortunately, GetHtmlSource() is just a proxy for the browser's DOM innerHTML property, which returns the HTML after the browser has preprocessed it.

It turns out that this preprocessing does a couple of funky things, including:

  • Changing line-endings (Firefox)
  • Changing capitalization (IE6)
  • Seemingly random removal / insertion of " and ' (IE6)

So, when I was expecting a string like this:


<!--
   var amPid = '206';
   var amPPid = '4803';
   if (document.location.protocol=='https:')
...[snip]...

IE6 was presenting me with:


<!--
   var amPid = "206";
   var amPPid = "4803";
   if (document.location.protocol=="https:")
...[snip]...

A possible solution is to ignore case, whitespace and quotes when doing the comparison, with a helper method like this:

        /// <summary>
        /// Use this to compare strings to those returned from selenium.GetHtmlSource for an Internet Explorer instance
        /// (IE6 seems to change case and inclusion of quotes, especially for Javascript).
        /// </summary>
        /// <param name="expected">The string expected to be found within the page source</param>
        /// <param name="actual">The page source returned by selenium.GetHtmlSource()</param>
        private static void AssertStringContainsIgnoreCaseWhiteSpaceAndQuotes(string expected, string actual)
        {
            string expectedClean = Regex.Replace(expected, @"\s", "").ToLower().Replace("\"", "").Replace("'", "");
            string actualClean = Regex.Replace(actual, @"\s", "").ToLower().Replace("\"", "").Replace("'", "");
            StringAssert.Contains(expectedClean, actualClean,
                                  string.Format("Expected string \n\n{0} \n\nis not contained within \n\n{1}", expected, actual));
        }
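
For completeness, here's how the helper might be called from a test (the expected snippet is just the example from above; selenium is a standard Selenium RC ISelenium instance):

    // Tolerates IE6's case/quote/whitespace mangling of the page source
    string pageSource = selenium.GetHtmlSource();
    AssertStringContainsIgnoreCaseWhiteSpaceAndQuotes("var amPid = '206';", pageSource);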

It was the line endings that really floored me, because my test runner automatically normalized them when displaying the error. Aaargh!


Announcing the TDD TestHelpers opensource project

Whenever I start working on a project, I invariably find myself writing a collection of TDD test-helper methods. A quick survey of other TDDers reveals the same, and thus the birth of my latest opensource project, TestHelpers (http://code.google.com/p/testhelpers/).

The aim of the project is to centralise all those little test helper methods you end up creating into a useful assembly you can use to jumpstart your next project.  Things like:

  • Comparers
    • Generic object comparers
    • DataSet comparers
  • Test Data generators
    • Builder pattern
  • Automocking containers

For example, I've just added an "AssertValues" functor, which helps you check whether the values of two object instances are the same.

One area where I keep using asserts like this is integration tests, where I want to check that the objects I'm persisting to the database via my ORM actually end up there in non-mangled form. In this case, I new up entityA, persist it, reload it into entityB, and then need to check that all the values in entityB match those in entityA.

A standard Assert.AreEqual will fail, because entityA and entityB are different instances. But my helper method AssertValues.AreEqual will pass, because it compares the (serialized) string values of entityA and entityB.
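
To make that concrete, here's a minimal sketch of how such an assert can be built on serialization. It's illustrative only, and assumes the types are XML-serializable; the real TestHelpers implementation may differ:

    using System.IO;
    using System.Xml.Serialization;
    using NUnit.Framework;

    public static class AssertValues
    {
        // Passes if both instances serialize to the same string, i.e. hold the same values.
        public static void AreEqual<T>(T expected, T actual)
        {
            Assert.AreEqual(ToXml(expected), ToXml(actual));
        }

        private static string ToXml<T>(T instance)
        {
            var serializer = new XmlSerializer(typeof(T));
            using (var writer = new StringWriter())
            {
                serializer.Serialize(writer, instance);
                return writer.ToString();
            }
        }
    }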

Here is another, simpler example to illustrate the concept.

    [TestFixture]
    public class StandardObjectsTests
    {
        public class StringContainer
        {
            public string String1 { get; set; }
            public string String2 { get; set; }
        }

        [Test]
        public void ObjectsWithSameValue_ShouldBeEqual()
        {
            var stringContainer1 = new StringContainer {String1 = "Test String1", String2 = "Test String 2"};
            var stringContainer2 = new StringContainer {String1 = "Test String1", String2 = "Test String 2"};

            Assert.AreNotEqual(stringContainer1, stringContainer2);

            AssertValues.AreEqual(stringContainer1, stringContainer2);
        }
    }

I’m sure you have a bunch of similar helper methods lying about your projects.

How about contributing them to the TestHelpers project?


ALT.NET, London, 13 Sept 2008

Intro

Debate over what ALT.NET is: should it have a set of guiding principles, like the Agile manifesto?

Continuous integration & deployment

There seemed to be three major areas where people encountered difficulties doing continuous integration & deployment:

  1. Configuration files
  2. DB schema migrations
  3. Data migrations

Best practice approaches discussed were:

Config files

  1. Make sure that your config files are small and contain only the config data that changes often (DB connection strings, file paths etc). Put all your "static" config data into separate files (DI config etc).
  2. Consider templated config files, where specific values are injected during the deploy process (a sketch follows below).
  3. Keep all config in simple text files in source control.
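
As an illustration of point 2, here's a minimal token-replacement sketch (the @TOKEN@ format and all names are my own invention, not any specific tool):

    using System.Collections.Generic;
    using System.IO;
    using System.Text.RegularExpressions;

    public static class ConfigTemplater
    {
        // Replaces @TOKEN@ placeholders in a template file with
        // environment-specific values supplied by the deploy script.
        public static void Expand(string templatePath, string outputPath,
                                  IDictionary<string, string> values)
        {
            string template = File.ReadAllText(templatePath);
            string expanded = Regex.Replace(template, @"@([A-Z_]+)@",
                                            match => values[match.Groups[1].Value]);
            File.WriteAllText(outputPath, expanded);
        }
    }

Deploying to staging then becomes a one-liner, e.g. ConfigTemplater.Expand("Web.config.template", "Web.config", stagingValues).
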
DB schema migrations

  1. Migration techniques borrowed from Ruby on Rails – generate change scripts by hand or with tools like SQL Compare, then apply them using a versioning tool like dbdeploy.

DB data migrations

  1. Take a backup before the data migration.
  2. Ensure the app fails quickly if there is a problem, because if data has changed since deployment you cannot roll back.
  3. Consider apps upgrading themselves and running smoke tests on startup, refusing to run if there is a problem – a technique used by established opensource projects such as WordPress, Drupal and Joomla. A sketch of the idea follows below.
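
A minimal sketch of the "refuse to run" idea, assuming a dbdeploy-style changelog table (the table/column names and version constant are assumptions):

    using System;
    using System.Data.SqlClient;

    public static class StartupSmokeTest
    {
        // Bumped whenever a new change script is added to the build.
        private const int ExpectedSchemaVersion = 42;

        public static void AssertDatabaseIsCompatible(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                using (var command = connection.CreateCommand())
                {
                    // dbdeploy-style changelog of applied change scripts
                    command.CommandText = "SELECT MAX(change_number) FROM changelog";
                    int actualVersion = Convert.ToInt32(command.ExecuteScalar());

                    if (actualVersion != ExpectedSchemaVersion)
                        throw new InvalidOperationException(string.Format(
                            "Schema is at version {0} but this build expects {1}; refusing to start.",
                            actualVersion, ExpectedSchemaVersion));
                }
            }
        }
    }
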
Mentioned tools: TFS, Subversion, CC.NET, JetBrains TeamCity, dbdeploy, SQL Compare.

Acceptance testing

It seemed to me that the majority of pain experienced in this area results from the lack of a ubiquitous domain-specific language:

  • Build a DSL incrementally during short iterations. This gives you the opportunity to refine it, fill in gaps, and train the whole team to use the same language.
  • Without a DSL, acceptance testing via the UI becomes brittle, as you end up specifying your tests at too low a level (click button A, then check for the result in cell B), rather than translating from acceptance tests in a higher-level DSL down to specific UI components.
  • Consider prioritised tests – have a set of face-saving tests / smoke tests that always work and check the major things (is the company phone number correct? does the Submit order button still work?). Acceptance tests can be thrown away once they have served their function of evolving the design / team understanding.
  • The acceptance testing trio: developers test for success, so automated testing only covers the happy flow; you still need exploratory testing by someone with a testing mindset (what happens if you do weird stuff?), and that tester must have domain knowledge; and the business defines what should happen – don't force developers to make up business rules.
  • Ensure all layers of the stack (tests, manuals, code, unit tests) use the same DSL.
  • How do you get workable acceptance tests? See the Requirements Workshops book.
  • Short iterations – more focus, incremental specs, and the opportunity to discuss missing test examples.
  • The key is having a ubiquitous language encoded as a DSL (domain-specific language) – it develops over time and enables automated acceptance tests.
  • Sign off against acceptance tests (the Green Pepper tool captures & approves acceptance tests).
  • Talk: The Yawning Crevasse of Doom – InfoQ, Martin Fowler.
  • Avoid describing these activities as "testing" – people avoid them because testing has low social status.

Mentioned tools: White for Windows GUI testing.

Domain driven design

  • Discussion around the difference between DDD, where we treat the concepts & actions as central; DB-centred design, where we treat the data as central; and UI-centred design, where the screens are considered central.
  • Consensus was that the domain shouldn't be tightly bound to the DB or the UI.
  • Ideas around passing DTO objects up to the view (UI, webservices etc), and change messages back from the view indicating how the domain should be changed (rather than passing back the whole DTO, where you don't know what has changed). A sketch of the idea follows below.
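
A tiny sketch of that last idea (all names hypothetical):

    // DTO passed up to the view: a flattened snapshot of the domain object.
    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
    }

    // Message passed back from the view: names the one change being requested,
    // rather than returning the whole (possibly stale) DTO.
    public class ChangeCustomerEmailMessage
    {
        public int CustomerId { get; set; }
        public string NewEmail { get; set; }
    }
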
BDD

  • Defined as Dan North's Given, When, Then.
  • Is it any different from acceptance testing? Only the branding is better: BDD doesn't have the word "testing" in it, which stops people switching off when specifications are being discussed.
  • BDD is writing failing acceptance tests first, before writing code.
  • Unit testing ensures that the code is built right; acceptance testing / BDD ensures that the right code is built.
  • The toolset is still immature. FitNesse's .NET & Java tooling is the most mature. Many BDD tools (other than Ruby's RSpec) have been started and abandoned (NBehave, NSpec etc).
  • BDD is not about testing, it's about communicating and automating the DSL. Be wary of implementing BDD in developer tools (e.g. NUnit), which prevent other team members (business, customer, testers) from accessing them.
  • Refactoring can break FitNesse tests, because they aren't part of the code base.
  • Executable specs (via acceptance tests) are the only way to ensure documentation / test suites stay up to date and trustworthy.
  • Agile is about surfacing problems early (rather than hiding them until it's too late to address them). So when writing acceptance tests up front is difficult, that is good: you are surfacing the communication problems early.
  • The real value is in building a shared understanding via acceptance criteria, rather than in building an automated regression test suite.
  • Requirements workshops can degenerate into long, boring meetings. To mitigate this, keep them short and focused on concrete examples.

Tools: Ruby RSpec, JBehave, Twist, Green Pepper.

Feedback

In the post-conference feedback, everyone was overwhelmingly positive and found the open spaces format very energising: fantastic sharing of real-world experiences, introductions to new approaches, nuggets of information, and great corridor conversations – a format that allows human interaction.

Next ALT.NET beers on 14th Oct.

Next ALT.NET group therapy in Jan 2009, with a larger venue.

Using Acceptance Criteria & Text based stories to build a common domain language between business and developer


Besides precisely pinning down functionality, writing text-based stories has another – and some would argue more important – benefit: developing a shared domain language between the business and developers.

A large part of developing a new software application is defining and codifying a business solution. To do this, both sides of the equation must be molded to fit the constraints of the other – the business process needs to be expressed in a precise manner that can be automated in software, and the software must be molded to fit the use cases of its users.

The mismatch between the way the business sees the solution and the way the developers view it becomes painfully obvious about halfway into the project, when you start trying to match the data fields as they are labelled on the UI with what they are called in the database / object model.

I've worked on what should have been simple projects, where maintenance is an exercise in hair-pulling as you try to figure out what data input results in the weird output in a report.

The root problem is the lack of a shared domain language. Projects naturally evolve domain languages, and unless guided, you can guarantee that the language in the customer's requirements spec and that in the code base will diverge rapidly.

Sitting developers, testers and the customer together to produce written text user stories following Dan North’s classic BDD story structure goes a long way towards resolving this issue.

Talking through how functionality will work, and being forced to formalize it by writing things down, helps the domain understanding and language evolve naturally, influenced equally by the customer's domain understanding and the constraints the developer must work within.

It's vital that this is done before coding begins, for the following reasons:

  • All stakeholders have been indoctrinated in the same domain language.
  • Names for domain concepts are at hand when the developer needs them, resulting in better-named domain objects.
  • Both the developer and customer know exactly what functionality is expected, helping to keep both focused on solving the right problems.
  • Ongoing conversations are facilitated as the solution evolves. Evolving a shared language is difficult, and better done at the beginning of the project while everyone's enthusiasm is high. With that hurdle out of the way, ongoing conversations are easier, and the temptation to just guess, or to devolve into an us-vs-them mentality, is greatly reduced.

During release planning, the high-level "As a [X] I want [Y] so that [Z]" is probably sufficient, with the "Given, When, Then" acceptance scenarios being fleshed out at the beginning of each sprint.
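
For illustration, here's what a story in that format looks like, along the lines of Dan North's classic ATM example:

    As a customer
    I want to withdraw cash from an ATM
    So that I don't have to wait in line at the bank

    Scenario: Account is in credit
      Given the account is in credit
      And the card is valid
      When the customer requests cash
      Then ensure cash is dispensed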

Specifying your functional requirements as text stories leads to some exciting opportunities:

  1. Your "unit" of work is immediately available, and understood by all. This makes it much easier to prioritise which stories to include, work on next, or drop.
  2. It's possible to turn the stories into executable specifications.

The Ruby community has made the most progress on the latter opportunity, with their RSpec story runner.

Consider the possibilities of the following development practice:

  • The team begins by specifying text stories & acceptance criteria.
  • The testers turn these into an executable spec, with each step "pending".
  • The developers then work from the executable spec, coding the functionality to make each step pass one by one (see the sketch after this list).
  • When the customers receive a new version, the first thing they do is execute the stories, to see exactly which functionality has been implemented and prove that everything still works as expected.
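
Ruby's RSpec story runner supports pending steps natively. As a rough sketch of the same shape in C#/NUnit terms (all names invented; Assert.Ignore marks a step as pending):

    using NUnit.Framework;

    [TestFixture]
    public class WithdrawCashStory
    {
        [Test]
        public void AccountIsInCredit()
        {
            GivenTheAccountIsInCredit();
            WhenTheCustomerRequestsCash();
            ThenCashIsDispensed();
        }

        // Each step starts out pending; the developer replaces Assert.Ignore
        // with real automation to make the step pass, one by one.
        private void GivenTheAccountIsInCredit() { Assert.Ignore("Pending"); }
        private void WhenTheCustomerRequestsCash() { Assert.Ignore("Pending"); }
        private void ThenCashIsDispensed() { Assert.Ignore("Pending"); }
    }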

At any stage it's possible to see how far the team has got (how many steps pass?), speculative complexity is reduced because the developers focus on building only what the spec requires, and all the while a suite of regression tests is being built up!


100% code coverage is only part of the QA story

Luke Francl of Tumblon has a nice summary, with backing research, showing that unit testing is only part of the QA story.

It's important to realise that developers by nature only test the happy path; hence it's likely that those diabolical edge cases will remain untested by a developer.

Another important factor is that most bugs come from missing features or requirements, and it's impossible to unit test what isn't there.

See more at http://railspikes.com/2008/7/11/testing-is-overrated, specifically Luke's great Venn diagram showing how different types of testing (unit, user, code review etc) uncover different types of defects.

This isn't to say that unit testing isn't worthwhile; it just frees your testers up to concentrate on the unexpected, non-logical things that users are sure to try to use your software for :)