DIY Sprint Retrospective Techniques
A few months ago I wrote about several Sprint Retrospective Techniques that I have found to be particularly effective. I noted that having several techniques in my arsenal allowed me to keep retrospective sessions fresh for my team. This has helped to ensure that we have a steady flow of corrective actions to aid us in our quest for continuous improvement. Since writing that post I have been rooting around on the web looking for other techniques to add to my tool kit.
One useful source for retrospective techniques I have found is the Agile Retrospective Resource Wiki. It lists more than a dozen retrospective techniques in varying levels of detail. While I have made use of several of the techniques listed in the wiki, I have found that none of the remaining ones fit my team’s current requirements. Some strike me as being overly complex (simplicity is a major factor when it comes to an effective technique) while others are designed for a specific type of issue that we do not need to explore at this time. Rather than give up and stick with the same set of retrospective techniques, I thought I would have a go at creating one of my own. This post describes my experience creating my first DIY Sprint Retrospective Technique.
I reckoned that creating a workable general purpose technique would be beyond me on my first attempt. By general purpose I mean that you can apply it to identify any kind of issue. Good examples of general purpose techniques are my three favourites: The Wheel, The Sail Boat and Mad Sad Glad. However, special purpose techniques, which aim to address a specific type of issue, have their place. So I settled on creating a special purpose technique to address a particular problem my team was having.
The issue in question was boredom. My background as a software developer leads me to take this issue seriously because I know from experience that developers are usually creative individuals. They like to solve interesting problems and stretch their minds. While not every task can be interesting, an overload of dull activities can demotivate a team of developers. Demotivation will slowly kill a team’s velocity and has to be addressed. Additionally, boring tasks can be an indication of repetition and unnecessary work. Eliminating repetition through automation can improve a team’s velocity.
I observed signs of boredom creeping in during the stand-ups, expressed in the language and tone the team used when describing their progress. I decided to nip this in the bud by exploring the issue in the next retrospective using a new custom retrospective technique.
For the new technique I drew inspiration from an unlikely source: a Dilbert book I owned more than a decade ago. It was a type of employee survival handbook written in the Dilbert style. I recalled that it contained a four-square grid diagram that was used to describe different types of manager. I have recreated this from memory here but probably do not have the labels quite right.
Despite the obvious comedy around this example (you really don’t want your own boss to be in the smart/evil quadrant), the basic idea of the four-square grid looked suitable for my needs. I modified it to my requirements and ran the retrospective as follows.
Step 1: First of all I drew a square on a whiteboard and divided it horizontally into two halves. One half was labelled “Interesting” while the other was labelled “Dull”.
Step 2: I asked the team to think about the things they had done over the sprint and write out a sticky for each. For each task they were to decide whether or not they had found the work to be dull or interesting and place the sticky in the appropriate half.
Step 3: Once all of the tasks had been placed on the board, I asked the team to write the rough amount of time each task took on their own stickies. These values could be expressed in terms of hours or days and did not need to be exact. I then divided the square vertically into two further halves, which I labelled Short and Long. With this second split in place I asked the team to split the tasks into one time category or the other depending on how long they had taken relative to each other. Tasks that took hours moved to the Short quadrants while those that took days migrated to the Long quadrants.
Step 4: We now had four quadrants representing the following groupings of tasks: short interesting tasks, long interesting tasks, short dull tasks and long dull tasks. Of most interest among these were the long dull tasks. These represented the types of work which generated the most boredom for the team and perhaps the best opportunity for automation. We discussed each of the long and dull task stickies in turn and identified actions to shorten or eliminate them. While the main corrective actions fell out of this quadrant of the grid, we also found it fruitful to have a discussion about the shorter dull tasks and the remaining work which was considered to be more interesting.
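The four steps above amount to a simple two-dimensional classification. As a minimal sketch in Python (the task names, durations and the hours-vs-days threshold are all made up for illustration):

```python
# Quartering: classify sprint tasks into four quadrants by
# interest (interesting/dull) and duration (short/long).
from collections import defaultdict

# (task name, interesting?, hours taken) - as gathered on the stickies
tasks = [
    ("Design new API", True, 6),
    ("Manual regression testing", False, 20),
    ("Update release spreadsheet", False, 2),
    ("Prototype search feature", True, 16),
]

# Rough cut between "hours" and "days" of work
LONG_THRESHOLD_HOURS = 8

quadrants = defaultdict(list)
for name, interesting, hours in tasks:
    interest = "interesting" if interesting else "dull"
    duration = "long" if hours >= LONG_THRESHOLD_HOURS else "short"
    quadrants[(duration, interest)].append(name)

# The ("long", "dull") quadrant holds the prime automation candidates.
print(quadrants[("long", "dull")])  # → ['Manual regression testing']
```

In the real session the team makes these judgements collectively on the board; the code merely shows that the grid is doing nothing more than answering two yes/no questions per task.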
All in all I think that the session went rather well. We identified several corrective actions for the longest boring tasks. When implemented, these actions have the potential to make work more interesting for the team and eliminate wasted effort in the project.
I could have run the session far more simply. I could instead have asked the team to share which tasks they had found to be long and dull. I could still have used the grid for this by drawing it in its finished form upfront and asking the team to place tasks in the dull and long quadrant. However, I do not think that we would have benefited as much from this shortened approach. This is because it would have been much harder for team members to answer one big question with several dimensions than to follow the process outlined above. The technique answered the same big question by asking a series of simpler questions. The long running dull tasks simply emerge by following the steps.
Focussing on one question at a time is the key to why this technique works:
- Of the tasks you did which were dull and which were interesting?
- How long did each task take?
- Which tasks took the shortest and longest time?
From these simple questions and the division of the grid the issues are automatically identified ready to be addressed.
With the success of the technique in identifying long-running dull tasks, I gave some consideration to generalising the technique to identify other types of issues. It turns out that this can be achieved easily with only two variables: the subject written on the stickies and the labels used on the grid.
The type of subject used on the grid can be anything. For the above example it was tasks from the last sprint. It could just as easily be bugs, stories, support issues, or anything else that the team routinely does in a sprint.
The grid labels can also change. The above example uses interesting vs dull and short vs long. Some other examples are:
- Shorter v Longer (than estimated velocity)
- Missing v Complete (story requirements)
- Collaborative v Non-Collaborative (task execution)
- Simpler v More Complex (than planned)
- Planned v Unplanned (stories in sprint)
- Finished v Unfinished (stories in sprint)
Knowing what goes on the stickies and what labels to use on the grid is a matter of knowing what impediments a team is suffering from. This is a core skill for any effective Scrum Master.
Here is another concrete example I intend to use for my team’s next retrospective. For this retrospective I want to identify where missing story requirements are hurting our velocity. I will run the same technique as before with a couple of modifications. The changes will be to have the team post up user stories they completed in the last sprint and to use shorter v longer (than estimate) for the x-axis and missing v complete (requirements) for the y-axis. Any stories that wind up in the bottom-right quadrant (longer than estimated stories with missing requirements) may be instances where missing requirements are impeding us to a large degree. Corrective actions identified here may help us eliminate the issue in future.
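Since only the sticky subject and the axis labels vary between runs, the generalised technique can be sketched as one small parameterised function. The function name, story data and field names below are hypothetical, purely to illustrate the idea:

```python
def quarter(items, x_labels, y_labels, x_test, y_test):
    """Split items into four quadrants using two yes/no questions.

    x_test and y_test decide which side of each axis an item falls on;
    x_labels and y_labels are (yes, no) label pairs for those axes.
    Returns a dict keyed by (x_label, y_label) pairs.
    """
    quadrants = {}
    for item in items:
        key = (x_labels[0] if x_test(item) else x_labels[1],
               y_labels[0] if y_test(item) else y_labels[1])
        quadrants.setdefault(key, []).append(item)
    return quadrants

# Example: stories from the last sprint, using the shorter/longer
# and missing/complete axes described above (data is invented).
stories = [
    {"name": "Login page", "over_estimate": True, "missing_reqs": True},
    {"name": "Audit log", "over_estimate": False, "missing_reqs": False},
]
result = quarter(
    stories,
    ("Longer", "Shorter"), ("Missing", "Complete"),
    lambda s: s["over_estimate"], lambda s: s["missing_reqs"],
)
# Stories in the ("Longer", "Missing") quadrant are the ones where
# missing requirements may be hurting velocity the most.
```

Swapping in a different subject or label pair is just a matter of changing the arguments, which mirrors how the physical technique is re-labelled from one retrospective to the next.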
Despite my coming up with this technique independently I doubt it is entirely original. However, I feel I have to give it a name anyway. The name I have chosen is Quartering. I think this is quite descriptive as that is exactly how the technique operates. The team’s activities in the last sprint are split into four quarters to reveal which may have been executed less than optimally.
The main lesson I have learned from this whole exercise is that I do not need to rely on others to come up with useful, general-purpose retrospective techniques. Anyone can do it themselves with a little imagination and a minimal amount of effort. There is no excuse not to continually change retrospective approaches when you can invent your own. This is especially true if people publish their own working ideas as is the case with the Agile Retrospective Resource Wiki.
On that note, if you think that Quartering may be of some use within your own retrospectives then please feel free to use it. I would be interested to hear about your experiences with it and any of your own DIY Sprint Retrospective Techniques.