Posts tagged with: Agile

Using Acceptance Criteria & Text based stories to build a common domain language between business and developer


Besides precisely pinning down functionality, writing text-based stories has another (some would argue more important) benefit: developing a shared domain language between the business and the developers.

A large part of developing a new software application is defining and codifying a business solution. To do this, both sides of the equation must be molded to fit the constraints of the other – the business process needs to be expressed in a precise manner that can be automated in software, and the software must be shaped to fit the use cases of its users.

The mismatch between the way the business sees the solution and the way the developers view it becomes painfully obvious about halfway into the project, when you start trying to match how data fields are labeled in the UI with what they are called in the database or object model.

I’ve worked on what should have been simple projects where maintenance is an exercise in hair-pulling as you try to figure out which data input produces the weird output in a report.

The root problem is the lack of a shared domain language. Projects naturally evolve domain languages, and unless guided, you can guarantee that the language in the customer’s requirements spec and the language in the code base will diverge rapidly.

Sitting developers, testers and the customer together to write text-based user stories following Dan North’s classic BDD story structure goes a long way towards resolving this issue.

Talking through how functionality will work, and being forced to formalize it by writing things down, helps the domain language evolve naturally, influenced equally by the customer’s understanding of the domain and the constraints the developer must work within.

It’s vital that this is done before coding begins, for the following reasons:

  • All stakeholders have been indoctrinated in the same domain language.
  • Names for domain concepts are at hand when the developer needs them, resulting in better-named domain objects.
  • Both the developer and the customer know exactly what functionality is expected, helping to keep both focused on solving the right problems.
  • It facilitates ongoing conversations as the solution evolves. Evolving a shared language is difficult, and is better done at the beginning of the project while everyone’s enthusiasm is high. With that hurdle out of the way, ongoing conversations are easier, and the temptation to just guess, or to devolve into an us-vs-them mentality, is greatly reduced.

During release planning, the high-level “As a [X] I want [Y] so that [Z]” is probably sufficient, with the “Given, When, Then” acceptance scenarios being fleshed out at the beginning of each sprint.
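For example, borrowing the classic ATM example from Dan North’s “Introducing BDD” (the account details and amounts here are illustrative):

```gherkin
Story: Account holder withdraws cash

As an account holder
I want to withdraw cash from an ATM
So that I can get money when the bank is closed

Scenario: Account has sufficient funds
  Given the account balance is $100
  And the card is valid
  When the account holder requests $20
  Then the ATM should dispense $20
  And the account balance should be $80
```

Note how the “So that” clause captures the business motivation, while each scenario pins down one concrete, testable path through the feature.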

Specifying your functional requirements as text stories leads to some exciting opportunities:

  1. Your “unit” of work is immediately available, and understood by all. This makes prioritizing which stories to include, work on next, or drop much easier.
  2. It’s possible to turn the stories into executable specifications.

The Ruby community has made the most progress on the latter opportunity, with the RSpec story runner.

Consider the possibilities of the following development practice:

  • The team begin by specifying text stories & acceptance criteria.
  • The testers turn this into an executable spec, with each step “pending”
  • The developers then work from the executable spec, coding the functionality to make each step pass one by one
  • When the customers receive a new version, the first thing they do is execute the stories, to see exactly which functionality has been implemented, and prove that all is still working as expected.

At any stage it’s possible to see how far along the team is (how many steps pass?), speculative complexity is reduced because the developers are focused on developing only what the tests require, and all the while a suite of regression tests is being built up!
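To make the pending-to-passing workflow concrete, here is a minimal hand-rolled sketch in Ruby. This is not the actual RSpec story runner API; it is an invented toy that just shows the idea of steps starting out pending and being made to pass one by one:

```ruby
# Toy story runner (illustrative only; the real RSpec story runner
# has its own, richer API).
class StoryRunner
  Step = Struct.new(:name, :block)

  def initialize
    @steps = []
  end

  # Register a step; a step declared without a block counts as pending.
  def step(name, &block)
    @steps << Step.new(name, block)
  end

  # Run every step and tally passed / pending / failed counts,
  # so "how far along are we?" is always one call away.
  def run
    summary = { passed: 0, pending: 0, failed: 0 }
    @steps.each do |s|
      if s.block.nil?
        summary[:pending] += 1
      else
        begin
          s.block.call
          summary[:passed] += 1
        rescue StandardError
          summary[:failed] += 1
        end
      end
    end
    summary
  end
end

# A story with two implemented steps and one the testers have
# sketched but the developers haven't made pass yet.
runner = StoryRunner.new
runner.step("Given the account is in credit") { raise unless 100 > 0 }
runner.step("When the customer requests cash") { }  # implemented, passes
runner.step("Then ensure the cash is dispensed")    # no block yet: pending

puts runner.run  # e.g. 2 passed, 1 pending, 0 failed
```

The tally from `run` is exactly the progress measure described above: as developers implement each step’s block, the pending count falls and the passed count rises.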


What is the definition of Done?

Scott Hanselman has a pretty interesting discussion with Scrum co-creator Ken Schwaber about when a story is Done.


Ken raises some interesting points, most notably that a well-defined concept of Done, understood by all members of the project, is a cornerstone of a good Scrum process. Without it, you can guarantee that you are building up technical debt, and your software won’t be in a releasable state once you have “Done” all the features, which kind of defeats the point of release planning!

So, what is your definition of done?

  • All acceptance cases / test scenarios pass?
  • Unit tests pass?
  • Performance tests pass?
  • Customers have used and approved the feature?

Story points = Complexity points / relative size

One of the ideas many people seem to struggle with in Agile projects is that of Story Points.

In an agile project, the time to implement a story (a feature) is deliberately estimated in a weird unit called story points, rather than in hours or days.

The most important thing to remember is that story points do NOT equal units of time.  Initially you will naturally find yourself trying to convert story points to days, or estimating in days or hours, and then trying to convert that to story points.

RESIST this temptation!  There is a method behind the madness.

  • Research has shown that people are better at estimating relative sizes (A – C is twice as far as A – B, Basket X is about 1/3 the weight of Basket Y) than coming up with absolute estimates (A to B is 15km, Basket X is 7.5kg)
  • Days are a very subjective unit of measure.  Depending on other commitments, your ideal days are very different from mine.
  • Estimating relative size is much quicker, and you need less information to get started (you don’t have to know how long anything will actually take, just how the stories compare to each other).

With a new project it’s impossible to know how quickly features will be produced. There are just too many variables: learning the domain and toolset, reaching agreement within the team, stabilizing work patterns.

What you do is complete a couple of iterations and then measure how many story points you delivered on average. This becomes your velocity, which you can use to derive an estimated completion range from the remaining story points.
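As a toy example of the arithmetic (all the numbers here are invented for illustration):

```ruby
# Story points delivered in each completed iteration (invented figures).
completed_per_iteration = [18, 22, 20]

# Velocity = average story points delivered per iteration.
velocity = completed_per_iteration.sum / completed_per_iteration.length.to_f

# With 120 points left on the backlog, estimate the iterations remaining.
remaining_points = 120.0
iterations_left  = (remaining_points / velocity).ceil

puts velocity         # 20.0
puts iterations_left  # 6
```

In practice you would quote a range rather than a single number, e.g. by recomputing with your slowest and fastest observed iterations instead of the plain average.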

Note that with this technique your story points remain valid, since they are just measures of relative size/complexity. The only time you really need to re-estimate story points is when you got the relative size of a story wrong – perhaps it turns out to be much easier to send emails than you thought, or much harder to draw graphs.