
The Complexity Retrospective

Many projects go awry due to excessive complexity, so it's always worth evaluating whether your team is approaching things in the simplest way that can work, especially when the deadlines begin to loom.

I recently led a retrospective with my team focusing on complexity across all the areas of our project, using a handful of techniques from “Agile Retrospectives – Making Good Teams Great” (a must-have for every agile team).

As suggested, we structured the hour-long retrospective into 5 parts:

  1. Setting the stage
  2. Gather data
  3. Generate insights
  4. Decide what to do
  5. Close / action plan

The purpose of “Setting the stage” is to get everyone engaged and thinking about the same theme. To do this, I reminded people of the actions we had set ourselves at the last retrospective, and then asked each person to complete the sentence “If we were a military commando unit, and our mission was last retrospective’s actions, we would be _______”

  • Awarded medals
  • Promoted
  • Ready for another mission
  • Court martialled
  • Dead

I then told the team that in this retrospective we would be considering complexity, and whether we had too much, or just the right amount in each of the areas of our project.

To gather data, I drew a complexity radar, with each spoke representing a different area of complexity. I began by suggesting a couple of generic spoke names (Data model, Workflow), and then got the team to suggest the other areas. Using dot voting, everyone voted on where they felt we ranked on each spoke, with closer to the centre being just right, and further away being overly complex. Joining the clustered dots produced the following radar map:
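(If you'd like to recreate a radar like this digitally after the session, a minimal matplotlib sketch along these lines works; the spoke names and scores below are hypothetical, purely for illustration.)

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical spokes and averaged dot-vote scores
# (0 = just right, 5 = overly complex)
spokes = ["Data model", "Workflow", "Build", "UI", "Integration", "Deployment"]
scores = [2, 4, 3, 1, 5, 2]

# One angle per spoke, then repeat the first point to close the polygon
angles = np.linspace(0, 2 * np.pi, len(spokes), endpoint=False).tolist()
angles += angles[:1]
scores += scores[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, scores, marker="o")
ax.fill(angles, scores, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(spokes)
ax.set_ylim(0, 5)
ax.set_title("Complexity radar (0 = just right, 5 = overly complex)")
plt.show()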

To generate insights we used the 5 Whys exercise. I asked people to break into groups of 2, preferably cross-discipline, and assigned each group 2 of the high-ranking spokes. They were then tasked with asking each other “why is <spoke area> complex?”, then “why is <answer>?” and so on, until 5 whys had been asked. The answer to the 5th why was considered the root cause of the complexity, and recorded on a card. As the root cause cards came in, I grouped them, and when everyone was done, read the root cause groups out.

To decide what to do, we constructed 2 more histograms, one considering the risk of each root cause, and the other the difficulty of addressing it. I then asked each person to vote for the 2 root causes they felt had the highest impact, and the 2 that would be least difficult to address. This produced the following histograms.

Finally, we combined the impact & difficulty histograms into the following map:

My intention was that the final exercise would make it simple to choose the actions to take forward for the next sprint (basically, choose the low-hanging fruit: the easiest things to address which offered the biggest risk reduction), but there wasn’t a clear winner shown on the graph. Generating actions took a bit more discussion.
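(As an aside, if you want to redraw the combined impact/difficulty map digitally, a simple scatter plot does the job. The root causes and vote counts below are made up for illustration; the low-hanging fruit you're after sits in the high-impact, low-difficulty corner.)

import matplotlib.pyplot as plt

# Hypothetical root causes with (impact votes, difficulty votes)
root_causes = {
    "Duplicated data model": (7, 3),
    "Hand-rolled workflow engine": (5, 6),
    "Configuration sprawl": (3, 2),
    "Legacy integration layer": (6, 7),
}

fig, ax = plt.subplots()
for name, (impact, difficulty) in root_causes.items():
    ax.scatter(difficulty, impact)
    ax.annotate(name, (difficulty, impact), textcoords="offset points", xytext=(5, 5))

ax.set_xlabel("Difficulty to address (votes)")
ax.set_ylabel("Risk / impact (votes)")
ax.set_title("Root causes: pick from the top-left (high impact, low difficulty)")
plt.show()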

We found that this format was a fun and effective way to address the complexity problem.

Hopefully you’ll find running something similar with your own team helpful!


100% code coverage is only part of the QA story

Luke Francl of Tumblon has a nice summary with backing research showing how unit testing is only part of the QA story.

It’s important to realise that developers, by nature, will only test the happy path; hence it’s likely that those diabolical edge cases will remain untested by the developer.
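As a toy illustration (the parse_age function here is hypothetical), the first test below is the happy path a developer naturally writes; the edge cases in the comments are the ones that tend to go untested:

import unittest

def parse_age(text):
    # Hypothetical function under test: parse a user-supplied age string
    return int(text)

class ParseAgeTests(unittest.TestCase):
    def test_valid_age(self):
        # The happy path a developer writes first
        self.assertEqual(parse_age("42"), 42)

    # The diabolical edge cases that tend to stay untested:
    #   parse_age("")      -> ValueError
    #   parse_age("-3")    -> accepted; is a negative age valid?
    #   parse_age("42.5")  -> ValueError
    #   parse_age(None)    -> TypeError

if __name__ == "__main__":
    unittest.main()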

Another important factor is that most bugs come from missing features or requirements, and it’s impossible to unit test what isn’t there.

See more at http://railspikes.com/2008/7/11/testing-is-overrated; specifically, Luke’s great Venn diagram showing how different types of testing (unit, user, code review etc.) uncover different types of defects.

This isn’t to say that unit testing isn’t worthwhile; it just frees your testers up to concentrate on the unexpected, non-logical things that users are sure to try to use your software for :)


What is the definition of Done?

Scott Hanselman has a pretty interesting discussion with Scrum co-creator Ken Schwaber around the concept of when a story is Done.

http://www.hanselman.com/blog/HanselminutesPodcast119WhatIsDoneWithScrumCoCreatorKenSchwaber.aspx

Ken raises some interesting points, most notably that a well-defined concept of Done, understood by all members of the project, is a cornerstone of a good Scrum process. Without it, you can guarantee that you are building up technical debt, and your software won’t be in a releasable state once you have “Done” all the features, which rather defeats the point of release planning!

So, what is your definition of done?

  • All acceptance cases / test scenarios pass?
  • Unit tests pass?
  • Performance tests pass?
  • Customers have used and approved the feature?

When to make technical stories?

During initial sprint planning, stories correspond to user features, and typically follow a

As a [user type]
I can [some action]
So that [some benefit]

structure.

It’s important to keep the stories focused on features, rather than on tasks, because we need the users / product owner to be able to decide which stories to add or remove. (A user cannot decide which tasks to add or remove, because the dependencies aren’t obvious.)

However, during development of a particular story, you will often come across an area of the code that needs to be refactored. A classic example is the removal of duplication: as the design has evolved, we discover additional areas of common functionality.

It can be tempting to work this refactoring into the current story, and if the refactoring is relatively small, this is a good idea.

However, in many cases the refactoring is too large to do without increasing the complexity of the story so much that it might not get finished in the current sprint.

This is the time to create a new “technical story”, which encompasses the refactoring (and perhaps any related work).

It’s important that this block of work becomes a story, to increase its visibility to the team and to the product owner. I’ve found that other team members always have useful input (hey, area Y of the team needs that too), and the product owner gets to prioritise the refactoring along with the other stories.

This also makes it plain to all the stakeholders why technical debt is increasing, if too many of these technical stories have been neglected in favour of new features.

