Tuesday, February 11, 2025

Notes from "No Estimates"

I recently finished reading Vasco Duarte's No Estimates after hearing about it on Dave Farley's Continuous Delivery YouTube channel and Allen Holub's #NoEstimates talk. I had been curious about the #NoEstimates movement for some time, reading an article here and there, but this was my first real attempt to understand it. The book itself is clear and direct, interleaving traditional content with an ongoing fictional narrative that motivates and reinforces the ideas. I found many connections to my research and teaching interests. In this blog post, I will share a few findings from my notes and reflections.

Estimates

One of the foundational principles of the book is fairly simple but not something I had considered before: an estimate communicates the peak of a probability distribution. For example, if I estimate a task to take two hours, I am saying that the most likely case is two hours; it could take as little as zero or negligible time, but there is also a long, low-probability tail in which it takes far longer. The cumulative probability is the area under the curve, and because the distribution is cut off at zero on the left but stretches out indefinitely on the right, more of that area lies beyond the peak than before it. From this, we can conclude that the probability of being late is much higher than the probability of being early.
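To convince myself of this, I wrote a little simulation of my own (not something from the book). It models task duration as a right-skewed lognormal distribution whose peak plays the role of the estimate; the parameters are arbitrary, but the asymmetry is the point.

import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 1.0, 0.6                 # assumed, arbitrary lognormal parameters
estimate = np.exp(mu - sigma ** 2)   # the mode, i.e. the peak of the density

durations = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

print(f"estimate (mode of the distribution): {estimate:.2f} hours")
print(f"P(finish by the estimate): {np.mean(durations <= estimate):.2f}")
print(f"P(run over the estimate):  {np.mean(durations > estimate):.2f}")

With these made-up parameters, only about a quarter of the simulated tasks finish by the estimate and roughly three quarters run over, which is exactly the asymmetry Duarte describes.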

Duarte classifies estimates as waste in the Lean sense: doing more of it will not make the product better from the user's point of view. He clarifies that "no estimates" isn't a goal but a vision: estimates won't be axiomatically removed, but they will be minimized. The discussion of waste got me thinking about QA testing in games, a point I will return to below.

Managing time, scope, cost, and quality

I regularly talk to my students about how cost, quality, and scope are the three levers we can control in project management. Since our work is constrained by the semester's schedule, I point out that we cannot shift cost, which in our case is essentially time: we cannot simply add more weeks to the end of the semester to get our projects done. I also argue that quality is non-negotiable: the point of undergraduate education is to learn how to work well, so sacrificing quality is against the telos of the endeavor. Therefore, the only lever we can manipulate is scope. This perspective seems to help students understand why we focus on user story analysis, prioritizing the features that add value to the users.

I wish I could remember where I encountered that heuristic, since it is distinct from a similar concept that dominates Web searches: the project management triangle, also known as the triple-constraint model or the iron triangle. This model explains how the constraints of scope, cost, and time are connected such that cutting one without changing the others will result in a loss of quality. Duarte uses this model in his book to draw a distinction between value-driven and scope-driven projects. Traditional management approaches are scope-driven: the scope is fixed, and so cost and time are unbounded. Value-driven projects instead fix time and cost, leaving scope flexible, which leads to the approach of delivering the most valuable work first. This is a standard agile perspective, but I previously didn't have the nomenclature of "value-driven" and "scope-driven," perhaps in part because in my academic environment I rely on the alternative model described above.

Reducing variability in throughput

In Chapter 3, Duarte offers techniques for reducing the variability in a development team's throughput. I have used many of them before in mentoring student teams. These include using stable, cross-functional teams; having clearly defined priorities; not passing defects down the line; standardizing and automating when possible; freezing scope within iterations; and protecting the team from outside interruptions. He also suggests reducing dependencies so that people can work on one thing at a time. This got me thinking about how often my teams end up with coupled user stories, such that completing one requires work on another. Creating independent user stories comes up more than once in the book, and it's something I can watch for opportunities to practice and teach.

Duarte points out that a good approach to requirements must allow measuring progress early and often, and it must be flexible enough that decisions about which parts to implement can be deferred until later, after the system is better understood. This leads to a discussion of user stories and the claim that the only real metric of progress is Running Tested Stories, which he abbreviates "RTS." Teams can be managed toward consistent throughput by ensuring that no story is larger than half an iteration, that several independent stories can be completed in each iteration, and that the distribution of story sizes stays about the same throughout.

The book references a 2003 article by Bill Wake about the "INVEST" acronym, which I had not seen before. Wake describes how user stories need to be Independent, Negotiable, Valuable, Estimable, Small, and Testable. "Negotiable" here means that they deal with the essence and not the details: they are not contracts about technical details. Wake's definition of "Small" is between half a day's effort and a day's effort. Duarte adapts "Estimable" to be Essential, which is sensible given his specialization. He includes the term blink estimation, which he attributes to Angel Medinilla and which was new to me. The idea is that one makes a snap judgement about whether a story fits within two weeks or not, and that this blink estimation is usually all that is needed. Regardless of which expansion I use, INVEST may be a helpful heuristic to give to teams who are breaking down a big problem, such as a game design, into smaller, valuable pieces.

Planning the details just in time

I started using Scrum with multidisciplinary undergraduate game development teams many years ago, and it has been a valuable practice. I was usually the Product Owner, responsible for articulating the work as user stories and prioritizing the backlog. Teams pulled stories from the Product Backlog to the Sprint Backlog during our planning meetings, as per traditional Scrum. When my teams found that a one-dimensional Product Backlog made it hard to see the big picture, we adopted Story Maps, which ameliorated the problem. Although we tracked each Sprint's progress using burndown charts, I never bothered to compute velocity. Teams tended to get a good sense of how much they could do in two weeks by around week ten, and since I was in charge of the backlog, I could cut scope to fit the time remaining.

My preference for agility caused some friction when I tried to apply Richard Lemarchand's Playful Production Process with my last two cohorts of game development students. Although Lemarchand calls for concentric development, he has relatively little to say about how to implement it. More importantly, his approach for each phase of production is to start by enumerating all the work to be done, estimating how long it will take, and then working toward that goal. A careful reader will recognize this as scope-driven management, and a cultural observer will note that the games industry is beset by death marches and crunch.

Duarte's alternative is rooted in agile principles: plan the details of the imminent iteration, breaking the work into user stories that can be completed in a day or two, and let future work remain as coarse-grained epic stories. He suggests not planning more than about two months' worth of work because of how much will be learned about the system in that time.

This caused some stress for me since it ran quite counter to one of my ongoing research projects. I have been thinking about how to combine some of Lemarchand's ideas with ideas I took from Allen Holub's #NoEstimates talk. One of Holub's primary arguments is that we can simplify our planning, and get equivalent results, by counting each user story as a single unit of work. I have been investigating the differences between tracking work items as single units versus tracking estimated hours remaining. For example, consider these two perspectives from the end of a team's Alpha phase of production.

These are two perspectives on the same period of time, an Alpha phase that lasted about three months. The first shows the number of stories in the backlog, and the second shows the total estimated number of hours. The top chart shows how the team cut a significant number of features around a third of the way through Alpha. For the next third, they added stories at about the same rate as they completed them, demonstrating how they were working to reshape the project based on the initial overestimate. We set up a nifty toolchain for tracking these data in real time using a combination of Hacknplan and Google Sheets. I even gave a workshop about this at GDEX a few months ago. But the whole thing hinges on having those planning data at the end of August for a milestone that's coming up in early December.
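For readers who want to picture the difference between the two tracking schemes, here is a rough sketch of my own. It assumes a trivially simple data shape rather than the actual Hacknplan and Google Sheets toolchain, but it produces the two series we were charting: stories remaining versus estimated hours remaining, sampled weekly.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Story:
    estimated_hours: float
    completed_week: Optional[int]  # None means the story is still open

# Hypothetical backlog, for illustration only
backlog = [
    Story(8, 1), Story(4, 2), Story(12, None),
    Story(6, 3), Story(10, None), Story(3, 2),
]

for week in range(1, 5):
    open_stories = [s for s in backlog
                    if s.completed_week is None or s.completed_week > week]
    hours_left = sum(s.estimated_hours for s in open_stories)
    print(f"week {week}: {len(open_stories)} stories remaining, {hours_left} estimated hours remaining")

The first number corresponds to Holub's single-unit-of-work counting; the second requires all of those hour estimates.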

Duarte suggests a radically different model. Break down the problem so that the stories for the current sprint are independent and small (taking no more than half a sprint to complete). Track how many of those the team can get done in an iteration. Do that for a few iterations, and you have a good sense of how much work the team can accomplish in future iterations, which lets you control scope. More specifically, you can measure a team's User Story Velocity and its Feature Velocity, where "Feature" is what is elsewhere called an epic story or an activity.
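Here is a minimal sketch of what that forecasting might look like, using made-up numbers and my own reading of the approach rather than anything prescribed by the book: count the stories completed in past iterations and project a range for the remaining backlog.

import math
import statistics

# Hypothetical throughput history: user stories completed per iteration
completed_per_iteration = [4, 6, 5, 5, 7]
remaining_stories = 23

# Forecast a range rather than a single number: the slowest and fastest
# observed iterations bound the projection, and the median gives a midpoint.
scenarios = {
    "pessimistic": min(completed_per_iteration),
    "typical": statistics.median(completed_per_iteration),
    "optimistic": max(completed_per_iteration),
}

for label, velocity in scenarios.items():
    iterations = math.ceil(remaining_stories / velocity)
    print(f"{label}: {velocity} stories/iteration -> about {iterations} more iterations")

The only inputs are counts of completed stories; no hours are estimated anywhere.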

I like the sound of that. It was clear from watching student teams try to estimate an entire Alpha phase that they knew their estimates were shoddy. Worse, for some of them, it planted a seed in their minds that they already knew enough about their projects to plan the whole phase when, in fact, they had yet to find the fun. Switching to the #NoEstimates approach would require me to supplement or replace Lemarchand's recommendations, including finding new ways to use project management tools.

The medium is the message

Incorporating #NoEstimates would also mean rethinking the relationship between the developers and the artists. When I was mentoring single-semester projects with a small art team, there was seldom any trouble: artists moved fluidly from concept art, sketches, and low-fidelity assets toward production-quality assets as the semester went on. When artists struggled to match the iterative flow of the programmers, we adopted swimlanes as recommended by Clinton Keith.

It wasn't clear to me how to scale this up to teams where half the members may be artists. I reached out to Duarte himself, and he was kind enough to talk with me about my questions. We had a fruitful discussion, and he helped me see something that I didn't understand before: there is a whole category of practices that, fundamentally, are symptoms of a failure to integrate regularly. Swimlanes are one example, but there are countless more, some as overt as separate physical locations and some as mundane as job titles and org charts. If we consider that the running tested story is the only way to measure progress, then anything that does not support it is a potential distraction. It is the kind of observation that would make Marshall McLuhan smile: the presence of a swimlane says more about the team than anything in the swimlane itself.

Using social complexity to determine tactics

Buying the book also grants access to a keynote presentation that Duarte gave some years ago. It's an excellent talk and a good complement to the book. One particular element jumped off the screen and into my notebook, and that is Duarte's matrix for dealing with user stories. It deals with the problem of Social Complexity, which can be summarized as "the number of people in the organization you have to talk to about it." Here is a quick reproduction from his talk:


This captures something I have tried to express to many teams but never articulated so clearly. It relates to the four conclusions of his presentation:
  1. Predict progress with #NoEstimates
  2. Break things down by value, not effort
  3. Agree on meaning with social and technical complexity, reducing risk
  4. Use RIDICULOUSLY SHORT timeboxes
He points out that if you can only do one of these, do the fourth one, since it is the essential practice from which the others derive.

Closing thoughts

I taught the first two cohorts of the game production sequence, but I am stepping away for the third one. There are a few reasons behind this, but primarily it's so that my new colleague has an opportunity to try his hand at it. I expect to be back in the saddle with the fourth cohort, who will start in Spring 2026. Writing up these notes took much longer than I expected, especially as I began to reflect on the substantial differences between what I have done in the past, what I did following Lemarchand, and what I might like to do in the future. For now, I need to put down this line of inquiry, but formalizing these notes gives me a point of entry when I need to refresh myself on these topics.

In the meantime, if you have thoughts, feedback, stories, or reflections, please feel free to share them in the comments.
