In the morning, I flipped through Patrick Kua's The Retrospective Handbook for inspiration. I think I had come across a reference to this book on Martin Fowler's bliki, but it had been some time since I read it. Kua describes a format called "Solution-Focused Goal-Driven Retrospectives," which he credits to Jason Yip—in fact, Kua's presentation is just a copy of Yip's blog post, so if you read that, you've got the idea. Two things struck me about this format, suggesting it would be useful for my purposes. First, by starting with "the miracle question," you can help the team think about observable properties of desirable end states. This seems more likely to lead to the identification of shared goals than my traditional retrospective format. Second, it still results in measurable actions to take in the coming iteration to incrementally improve.
I rolled this out on Friday right after our Sprint Review meeting. We primed with the Retrospective Prime Directive, and I pointed out—honestly, and hopefully not too judgmentally—that the team had had four failed sprints in a row, and that I was changing the retrospective format to help them identify shared goals. I wrote the Miracle Question on the board: "Imagine that a miracle occurred and all our problems have been solved. How could you tell? What would be different?" There was a palpable sense of surprise and shock in the room. A few students gasped, and some said things like, "Whoa... I don't know" as they started turning the question over in their minds. One even claimed that he did not like the question, but this was clearly said in a way that indicated he did like the question: he didn't like the fact that it was so hard for him to answer!
When I use my traditional retrospective approach, I invite the students to organically form clusters as they post their notes, but the clustering is usually pretty loose. Themes that are distributed across multiple columns cannot be clustered at all. Most students get up, post their notes, then sit down, so it's really just the last few who are making clusters. For this exercise, we moved all the tables back so that everyone would have room to reach the board at once. Three or four students still retreated to the small gap behind the tables, but I called them out on this and made them join the group at the front—and I'm glad I did, despite a little whinging from them. As they formed clusters around goals, I asked them to articulate those goals. This, too, was much more active than my usual approach, with students passing markers around, dividing big clusters, and revising each other's articulations of the goals.
|The finished board with 13 shared goals|
This led nicely into a discussion of what specific practices we wanted to adopt to take our 3 to a 4 over the next short sprint. Most of the suggestions were clear and came to quick consensus. One had some contention, though, as we tried to sort out the root of the problem. It started with a suggestion to clarify the conditions of satisfaction on the user stories, but when I looked at the conditions of satisfaction, I couldn't see that they were unclear. Of course, I acknowledged that I had written them, and so the root problem could have been that I was assuming domain knowledge that the team didn't have. The discussion raised a bigger problem, though, which I think was the real root: team members had focused myopically on completing the tasks that we had identified during sprint planning, but they never actually went back and read the story name and conditions of satisfaction as they considered validation. Hence, individual tasks were deemed validated when considered atomically, in a way they never would have been had they been held to the conditions of satisfaction. The result was that the tasks were all complete but the story was not satisfied. We never did settle on a concrete action item to solve this, although we agreed that the discussion would make us more sensitive to the articulation of both tasks and conditions of satisfaction in the coming planning meeting. As I look back on it, this may have been a good opportunity to deploy Five Whys to try to get to root causes, but the truth is we were also fighting the clock at this point.
I think this format helped my team articulate and discuss critical team issues that they had not been confronting before. Whether or not it makes an observable impact in the coming sprint will have to wait for a future blog post. I will need to think about adding this format to my tool belt. I am not sure whether I want it to replace my traditional format as my go-to structure, but I would like to try deploying it earlier with a team to see if it helps with the identification of shared goals.