I recently finished Fifty Quick Ideas to Improve Your User Stories by Gojko Adzic and David Evans. The book was strongly recommended by Dave Farley on his Modern Software Engineering channel. I have been using user stories for many years as part of my teaching and practice with agile software development, and I hoped that the book might give me some new ideas and perspectives. I read it with a particular interest in helping students manage game development projects.
The book itself is laid out simply: every two-page spread presents one of their recommendations. Each has a narrative, a summary of benefits, and then practical recommendations for getting started. From the title, I expected the book to be short and punchy like The Pocket Universal Principles of Design, but it contains a lot more detail. The authors are clear that the book is not an introduction to user stories; they assume the reader is already familiar with user stories and the jargon of agile software development. This is good for a reader like me, except that I had earlier lent the book to a student who was just learning the fundamentals. I would use a different resource for that purpose if I could do it again.
Here are suggestions from the book that struck me as potentially helpful to my teaching and mentoring. I have included the page numbers for ease of future reference.
Describe a behavior change (p14)
This is the second time recently that I have come across Bill Wake's INVEST mnemonic for user stories—that stories should be Independent, Negotiable, Valuable, Estimable, Small, and Testable. The other place was Vasco Duarte's No Estimates, which I wrote about earlier this year.
The idea behind this recommendation is to quantify a change in behavior. For new capabilities, "start to..." or "stop..." are simple quantifiers: for example, "players start to save their progress between sessions" describes a change in what users do rather than a feature. I feel like there's something useful in here for introducing students to user stories.
Approach stories as survivable experiments (p18)
This tip is about the shift in perspective on what stories allow. I often see students mistake stories for traditional requirements, probably because traditional requirements look more like school. Framing the stories as experiments may help students see that the work is more creative and exploratory.
Put a "Best Before..." date on stories (p24)
I wish I had thought of this one. I have mentored student teams that had stories in the backlog that only made sense to do by a certain date or milestone. It's an easy one to remember, and it falls into a common pattern in the book: make the stories your own.
Aside: It reminds me of the advice about managing my curriculum vitae that I received from my colleague Mellisa Holtzman many years ago. She said that your vita is your own, and that you should use it to tell a story rather than slavishly follow a template. She was right, both philosophically and rhetorically. My university recently moved to using a one-size-fits-none digital solution, and I was disappointed that there has been no discussion about the impoverished epistemology of such a move.
Set deadlines for addressing major risks (p28)
This is similar to the previous one, which it references as a technique: one can address major risks with learning stories that have "Best Before..." dates.
The authors distinguish between learning stories and earning stories, referencing an old presentation by Alistair Cockburn. This is another nice distinction that I am sure I can use.
Use hierarchical backlogs (p30)
Having spent the last year mentoring three teams with flat backlogs, I found myself missing story maps, and I look forward to going back to them. Story maps come up explicitly on page 36 under the recommendation Use User Story Maps, which is just after the recommendation on page 34 to Use Impact Maps. Gojko Adzic also wrote a book on impact mapping. I have read most of it and find it intriguing, and I can see how it could be useful in a business environment. It didn't strike me as immediately helpful for my needs teaching game design and development or the basics of agile software development.
Set out global concerns at the start of a milestone (p40)
The authors acknowledge that user stories cannot and should not capture everything that a team has to do. Writing emails or reading applicant vitae are examples of crucial work that is not directly related to a user story.
Cross-cutting concerns such as security, performance, and usability should still be discussed, but they should manifest as something like a checklist that accompanies the whole process—not embedded or wedged into a user story.
Use low-tech for story conversations (p54)
The recommendation for whiteboards and sticky notes comes as no surprise to me. What did surprise me was that they caution against using a room with a large central table. Such a configuration makes people sit and opine rather than converse, and the artifacts that come out of the conversation (what we often call the "user stories" themselves) are just markers of that conversation.
Diverge and merge (p58)
When a large team does user story analysis, the authors recommend breaking it into smaller groups. Each group comes up with examples of user stories, and then the groups compare their results. They suggest 10-15 minutes of divergence.
In last year's game production studio, one team was much larger than the others, and I recommended that they break down story analysis by feature domain. I wanted each group to have representatives from Computer Science and from Animation. It worked reasonably well, but it was slow, and there were still a lot of holes in the analysis: cross-cutting ideas were lost. (Also, most of the students didn't know anything about user stories, and some harbored significant misunderstandings.) Next time, I will keep this advice in mind and have the groups work on the same context rather than different ones. I regularly have students push back on this kind of activity as redundant, but that is because they have not yet experienced the productivity drop that comes from discovering later that the requirements were wrong.
Measure alignment using feedback exercises (p62)
This one jumped out to me because it sounds like it came right from a book about active learning techniques for the classroom.
During user story analysis, rather than ask if anyone has questions, give the discussants a sample edge case and ask them an open-ended question such as, "What happens next?" Each writes their response, and then the team compares them. This shows whether there is shared understanding or not.
Reading this gets me fired up to use it in all of my teaching from now on. That's awkward, since I won't be teaching again until Spring 2026. That gives me all the more reason to write this blog post.
Split learning from earning (p90)
As mentioned above, this separates two kinds of stories. Learning stories still need a concrete goal that is valuable to stakeholders. An interesting corollary to this advice is that every story should be either learning or earning; if not, it is not really a user story at all.
Don't push everything into stories (p98)
I foreshadowed this one earlier as well. They advise strongly against tracking non-story work with user stories. An example they give is insightful: if such work is tracked in the backlog, and the stakeholders sort the backlog, then they will deprioritize work that does not generate value. The non-story work then won't get addressed until it becomes a crisis, at which point it jumps to the highest priority, so there was no value in having it in the backlog in the first place.
Avoid using numeric story sizes (p102)
The two-page spread summarizes some of the key arguments behind the No Estimates movement. Here, they recommend that anyone using T-shirt sizes stick to at most three different sizes. Better yet, they prefer Goldilocks sizing: a story is either too big, too small, or just right. Their conclusion is similar to Duarte's: just count stories.
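To illustrate the counting approach with some back-of-the-envelope arithmetic (my example, not the book's): if a team finished 24 "just right" stories over its last three iterations, a reasonable forecast is about eight stories in the next one, with no point totals needed.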
Estimate capacity based on analysis time (p106)
If we know that planning for an iteration takes 5%-10% of the iteration time, then we can timebox the planning itself and take on only the work that was actually covered in that planning meeting. That is, use the planning meeting's duration as a heuristic for the scope of an iteration. This is clever, and the authors acknowledge that the moderator needs to prevent the meeting from getting too far into the weeds.
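Putting rough numbers on it (mine, not the authors'): a two-week iteration has about ten working days, so 5%-10% comes to somewhere between half a day and a full day of planning, and whatever the team cannot discuss in that window waits for a future iteration.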
Split UX improvements from consistency work
They recommend assigning UX changes to a "mini-team" that explores and prototypes them. The output of this mini-team's work is a set of user stories for the whole team, not an exhaustive design specification. The work of this exploratory team is on the learning side of the learning/earning divide. The mini-team can slipstream back into the main team to help with implementation.
This interested me, considering how often I see teams struggle with questions of gameplay and engagement. "Will this be more fun?" is a good design question that can be approached through prototyping. Unfortunately, my teams often want to do this work as if the answer were a fait accompli. I wonder if this recommendation, together with the learning/earning split, will help me better frame for them what it means to answer the design question with a prototype first. The outcome would be the stories for the whole team to actually engineer a solution. I suspect I will encounter much of the same resistance I mentioned previously, where teams assume that working on the "product" is more efficient than working in prototypes because they have never seen a product fail under bad design or bad engineering. I think this is why I prefer working with students who have tried hard things and seen how mistakes have consequences beyond a scolding or a bad grade.
Check outcomes with real users (p116)
The authors proclaim this as potentially the most important thing a team does. The important corollary is that it requires one to write user stories the way one approaches TDD, or even the way one ought to design learning experiences: by focusing from the beginning on an assessment plan.