Friday, May 1, 2026

A new way to start classes

Now listen! and before I start,
give me your hearing and your heart,
for words will quickly disappear,
if they aren't heard in heart and ear.
Some men will hear and then commend
things that they cannot comprehend.
Their sense of hearing lets them hear it,
but once the heart has lost the spirit,
the words will fall upon the ears
just like the wind that blows and veers.
The words don't linger there or stay;
in a short while they fly away,
if the unwary heart's asleep,
because the heart alone can keep
the words enclosed. The ears, they say,
are just the channel and the way
by which the voice comes to the heart.
But the heart's able to impart
the voice that enters through the ears
unto the breast of him who hears.
So he who would hear me must start
by giving me his ears and heart,
because, however it may seem,
it's not a lie, tall tale, nor dream.

Chrétien de Troyes, Yvain, the Knight of the Lion, translated by Ruth Harwood Cline (University of Georgia Press, 1975), lines 141–164.

I am thinking of reciting that passage as a way to start next semester's classes.

Thursday, April 30, 2026

Reflecting on CS390, Spring 2026 edition

I taught CS390 Game Studio Preproduction after a year off, and it was to have been the beginning of a three-semester sequence. However, there was an administrative error: an overestimate of the number of students who would need the sequence meant that we had two sections of CS390 running but only enough students to justify one. The compromise with the administration was that we would collapse the two cohorts into one. For various good reasons, mine was the sequence that was cancelled, so my small number of students will be joining with another cohort for next year's production sequence. 

I open with this story because it cast the whole course in a strange light. My original plan for the course was that I would be preparing the students to work under my supervision next year, and I had set up scaffolding to support that growth. When I found out that these students would go to another faculty member, I realized that this aspect of my plans was essentially moot: the other faculty member and I have different approaches. At one point, I laid out on the board a list of topics I had planned to introduce and enforce with the preproduction students, and I left it up to them to vote on which ones sounded interesting. In the end, the only one they voted for was a bootcamp on command-line git, but I don't think even that "stuck," since I had no follow-up activities and never saw students working through it for comprehension; many seemed hooked on WYSIAYG GUI tools instead.

It was a good group of students, and it was fun to work with them on the journey. The first several weeks, we explored ideation and prototyping. There were a few concepts left on the table here that I think are absolutely worth pursuing. They ended up settling on two designs to move forward through the formal preproduction process. These were presented to my Games Advisory Board, who generously volunteered their time to give feedback to the students. This pitch presentation was one of the three major deliverables of the course, the other two being a vertical slice and a concept document. All the details, of course, are in the course plan.

This was my third time teaching this course, and a major change is that I did not require reading Lemarchand's Playful Production Process as I have in the past. This was an experiment to see how the students would do with less structure. I am not sure I have a strong conclusion. Parts of the book are really powerful, and I found myself missing them. I did not miss the niggling doubt about whether a student actually read and thought about the content. I feel like if I brought the book back, I would need to do it in the way I evolved my handling of Clean Code in CS222, incorporating more explicit instruction on how to take notes and how to transfer from the text to the task at hand. I will need to revisit this before Spring 2027, when I expect to teach CS390 again.

I invested a lot of time during my sabbatical trying to come up with a good concept document framework. I blended together some of my favorite aspects of the old Tim Ryan article, Lemarchand's book, and Sellers' Advanced Game Design. It mostly worked, but there were three pain points in students' documents that I think I can alleviate by improving the recommendations. 

The first relates to project goals. Lemarchand has a whole chapter on this topic in which he distinguishes between experience goals and project goals. I abstracted that into what I called design goals, but I think I would do better to simply adopt Lemarchand's approach (which he credits to Fullerton, I believe). My students clearly struggled with the idea of making goals for the player's experience. They kept listing their own goals instead. That is, they talked about means where the goals should be about ends. For example, one team listed drafting as a core mechanic because they wanted drafting, but they never talked about the question that drafting answers. Curiously, this should tie back to MDA, which I know we talk about a lot in CS215 as an analytical tool, but perhaps we need to emphasize it more as a generative tool. Even after I pointed the teams specifically to Lemarchand's examples from Uncharted 2 in his book, the teams still did not seem to get it. In fact, I realized that they didn't get it in the same way that my CS222 students kept wrapping up technical requirements as user stories instead of doing proper user story analysis. It makes me wonder whether this idea of user-centeredness or player-centeredness ought to be a program-level learning objective, at least for the GDD concentration. (I was tempted to put that previous sentence in bold, but that looked really aggressive. Do you think I'll remember this otherwise?)

The second pain point was that I did not have an equivalent of Lemarchand's macro chart. He puts it as part of his "game design macro," which is equivalent to my "concept document." Without such instruction, the students waved their hands around things like how many levels there would be and what powerups might be available. Both teams, loving them as I do, really dropped the ball here: they both described games with a scope that was essentially unbounded due to their lack of quantifiable analysis. A macro chart really would have helped them face the work to be done, especially if I had pushed them to include incidentals that they tend to forget, such as marketing materials and playtesting time. I don't think an entire schedule is necessary, but the macro chart makes one face facts.

The third is that I did not require them to go deeply into what Sellers calls the detailed design. I can see in my own documentation that I shifted toward narrative explanation of core loops using Chambers' approach, but I cannot recall why I stopped there. I don't want to go as far as Sellers' formal system modeling, but I think more documented detail would have helped communicate the interactions among systems. During the pitches, one of the board members recognized that a project had simple systems with multiplicative effects: that's the kind of observation I want my students to make about their own projects, not something a third party has to point out to them.

Taking all that together, it points to something crucial that I need to remember next time I teach the course: push the students to make smaller games. I am not sure how to quantify or enforce that, but it has to be done. They just don't have the experience to interpret subtle or even overt feedback in this regard. Baxter-Webb brings this up in a helpful video, arguing that people making "indie games" need to be familiar with what small games are like. If my students are coming in inspired by large games, they will be thinking of making large games. There may be opportunities here to require a more structured research phase to help set expectations.

For the "final exam," we met and talked about the semester. After some open discussion, I gave them three questions to answer, one at a time, and we went around and shared answers to each. It was a wonderful conversation; I could write a whole blog post about how the students articulated what they learned. The final question I asked was what advice they would give to their past selves or to new students coming into preproduction. The answer that sticks in my memory is to think of game design as making a gift for someone else, not as tinkering for yourself. This wraps up a lot of the meaningful conversation the students and I had about design goals, experience goals, playtesting, humility, and risk-taking. 

That covers the major parts of what I learned from teaching the course this past semester. I am grateful to have had a small group that worked so well together; I look forward to watching their projects from afar during their senior year. In the meantime, I will put this train away, knowing that I will be able to dig it back out in late Fall, when I get ready for the next group.

Reflecting on CS222, Spring 2026 edition

I had a fun cohort of students in CS222 this semester. Each team designed a final project of the "connect to a Web service and do something with the data" variety. In many ways, their foibles and experiences were akin to those of past students, but something that stood out to me was how well they asked each other questions. It is not clear whether I did something different to foster this; one possible factor is that I have improved the requirement that teams give a technical lesson as part of their presentations.

We cover a lot of ground in CS222, and so I am concerned that next time, I will have only fourteen weeks of instruction rather than fifteen. The university is changing the academic calendar as of Fall 2026, and it will be hard to find a week's worth of work to remove. Also, students coming in will have two fewer weeks of classroom programming experience; given that many students do not do much independent practice, this will have a significant effect on downstream courses. Coincidentally, I just got out of a meeting this morning where we talked about how much more we want to teach within the undergraduate major, and yet we will have eight fewer weeks in which to do what we are already struggling to fit in.

My course requires students to read and evaluate "project code," which is a term I define but that a majority of students don't understand. Either they don't read the instructions or they lack the ability to distinguish between code-for-learning and code-to-do-something. As a result, many of them look back at their own code, which was purely pedagogic; others find code on GitHub, which I recognize is a leap of faith, but they land on code that is just someone else's classroom or tutorial project. This leaps out at me upon inspection, but because they have only ever read pedagogic code, I don't think they recognize it. Hence, rather than just refine the definition, I think I need to point them toward specific examples of appropriate projects. I hate to do this because there is so much code out there, and I want them to find projects that are interesting to them. Yet, despite this encouragement, almost no one has ever done that, so maybe that's a dream not worth chasing. I would also need to set up some guardrails to prevent two students from evaluating the same block of code, which is another thing I am not keen on. Perhaps I need to set up some kind of draft.

I have had a note in my planning spreadsheet for some time to remind students that a good way to think about SRP is responsibility to whom. Also, though, there's a second edition of Clean Code out now that I haven't read yet. I need to see if the new edition is worth switching to, especially around explanations of OO principles that often trip students.

My final note is about the final exam. Many years ago, it comprised only reflective questions, and those gave me insight into the student experience. A few semesters ago, I was concerned about students blowing smoke, and so I added a content question as well, to make sure students could express some of the fundamental class topics. One of these questions has been perennially challenging to articulate correctly. Essentially, the question is trying to see whether students understand that they should be able to make a list of unit tests they have yet to write and that these should be SMART. The trouble is that if I remind them that each step has to be SMART, then I have given them the answer. What happens is that students will say that their first step is to "write some tests," for example. That is not wrong, but it also doesn't illustrate what I'm trying to get at. Relatedly, I think many student teams continue to struggle with this idea throughout the semester: that they should always have a goal to work toward. TDD is supposed to push them in this way, but in practice, many fall back on old habits of just plowing forward without discipline. So, I think this points toward two more action items for me: first, to use more class time to have them practice articulating their immediate next steps, as Kent Beck does in Test-Driven Development By Example; second, to use a similar framing to ensure that they can do this individually, as an outcome of the class.

Monday, April 27, 2026

Reflecting on CS215, Spring 2026 edition

Introduction

It was a delight to teach CS215 Introduction to Game Design this past semester. I had not taught the class since Fall 2022, and I didn't realize how much I missed it. I had a really wonderful cohort of students, as good as I could ask for. They were eager with questions and comments. More than once, I was just warming up a story, and hands would go up with thoughts and reactions, before I even got to the good stuff. I am glad to have these students in the program and look forward to watching them over the next few years.

The class was overall a great success. I followed my tried-and-true approach, using the first half of the semester to go through readings and exercises that helped lay out what is known about game design and the second half to work on final projects. 

Course Structure

As in the past, I still feel like there are missed opportunities to help students deploy the ideas from the first half into the second half. One complication is that, because students design their own projects, there's no way to know which ideas from the first half will be salient to their work. I would like to get them thinking more about this, but at the same time, I don't want to add more load without a good reason.

The one universal is the iterative design process. This came up in informal meetings among the faculty who teach in the CS GDD concentration, and we agreed that it is the most important objective for students to meet. Currently, my final project structure recommends a particular sequence, but I do not mandate it. I wonder if it would help the students learn the importance of playtesting earlier if I required them to do it earlier, the way that I require acceptance testing in CS222 for example.

Player Logs

I asked the students to complete five player logs during the semester. These logs involved briefly describing a game they played and reflecting on what they learned from it. Because we were working in the analog game design space, I required that at least half be analog games or digital interpretations thereof. My hope was that they would be playing games anyway and that this would help integrate their personal and academic experiences. Judging from their selections, I think many chose games specifically for this exercise, although, curiously, not the ones I suggested as good starting points. I did not ask them to explain why they chose the games they did, but I might ask that in the future. No one should be choosing a knock-off of Uno over a brilliant design like Carcassonne.

The player logs were evenly spaced throughout the semester. This had the advantage of appearing elegant on the calendar but the disadvantage of colliding with other deadlines. Of course, many students did the work right before the deadline, but this meant it competed for attention with other work rather than complementing a continuous practice of study. I may need to reframe the exercise either as formal assignments or as the building of a portfolio that is routinely verified.

I realized too late that there was an opportunity to tie the player logs into the final projects. Next time, for late-semester player logs, I should require that the game be related somehow (mechanically, thematically) to their final projects. A lot of the students admitted to having explored a genre in which they had little experience, and this could be a way to help them understand the design space they are in.

Design Logs

Once again, I required my students to maintain design logs during their final project. I love this idea, and I am sure it helps them think through their practice. However, I think I can make the process a bit smoother. For example, I required them to follow a particular document structure, but many struggled to read and follow the instructions. Rather than be indignant about this, I could provide a template to remove some friction. Similarly, I gave them some latitude as to what specifics go into the design logs, but they probably lack the experience and wisdom to make this decision well. I have them read Dan Cook's article, where he appropriately provides some guidance but no rules. I think I should give them a few more rules in the spirit of shuhari.

Workshopping

We used six weeks of class for workshopping. I split the class into four groups, and since we met twice a week, each group had three days to focus on workshopping their games. Each presenter took a section of the room, and the rest of the class moved among the stations. For the first round, this was done ad hoc; I set a timer, but moving was a recommendation, not a requirement. We ran a little postmortem after that iteration, and the students liked the ad hoc groupings but not the optionality of rotation. Thereafter, groups had to move when the timer went off, and it went well.

I gave them a feedback structure that I learned from Lemarchand's Playful Production Process. Any feedback during workshopping had to be framed as, "I like..., I wish..., What if...?" The students who tried this loved it, but some were too eager to give advice without the framing. They desired some kind of accountability, so I decided that we needed a magic word that could be invoked: what better for a game design course than "xyzzy"? If anyone heard feedback without the format, they could say, "Xyzzy!" and the hearer would have to re-frame their discourse. I think I was the only one who actually used this, but it was a good reminder to have on the board.

Similar to the feedback structure, starting in the second iteration, I asked students to begin each workshopping session by giving their elevator pitch and stating their design question. This was not always followed, and a little more accountability could have helped here, akin to the magic word for feedback. Students also struggled to articulate a design question, but I did not give much guidance here either. Many students came in with questions like, "Is this fun?" despite our earlier conversations about the danger of the F-word. I think this is a place where I can help them develop better practices and tie together the two halves of the semester. For example, I could give them a template like, "What mechanic can I add or remove in order to improve [emotion] from playing my game?" or "What goal can I add for [player type]?" These would require students to think about their games in the context of the theory we studied rather than falling back on colloquialisms like, "Is this better than that?"

Along those lines, I wonder if it would help the students to articulate design goals, to be explicit about what their game is supposed to do. This is something my preproduction students have struggled mightily with, and I don't know if this means that it should be introduced earlier or that it requires more wisdom than many new game designers possess.

Rules

I was initially surprised, when reading their final submissions, that their rulebooks were of low quality. Many were missing fundamental details, making it clear that the rules articulations themselves had not been tested. My evaluation rubric enthroned a clear rules articulation as a requirement for a good grade on the final project. Reflecting on the course, I realized that most of the students had completed the semester without ever reading an actual rulebook. For their player logs, they either used a digital interpretation of a game, such as on Board Game Arena, or they were taught a game by a friend. Digital adaptations are deceiving here because software prevents players from doing things they ought not do, whereas in a tabletop game, you are beholden only to the laws of nature and localized miracles.

The students had never worked with the genre of game rules, and so clearly I could not hold them accountable to that genre's standards. It still bothers me that the CS majors in particular did not deploy their programming knowledge to write clear rules anyway, since rulebooks are essentially programs run on people, but that doesn't change the fact that I did not scaffold a good learning experience to make them good at it.

Being able to articulate the rules should be part of the class. After all, I encouraged them to think about posting their work on itch.io or someplace similar, but if the work is unclear, then it may do more harm than good. Next time, I need to think about this particular deliverable the way that I handle other incremental development tasks: give them a scaffolded experience with deadlines for draft submissions and/or rules-based playtesting.

Tuesday, April 21, 2026

What we learned in CS222, Spring 2026 edition

I worked my CS222 students through my usual semester reflection exercise today. I gave them five minutes to write down everything they learned this semester that was related to CS222, then we set a timer for 30 minutes while they offered items and I wrote them on the board. For some reason, we only had nine people in class today, but these nine came up with 88 items. I was going to give them six votes (because floor(lg(88)) = 6), but a student in the front agreed with me that five was more symbolic, so they got five votes to mark the items most important to them.
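As an aside, the vote-budget rule — roughly the binary logarithm of the item count, rounded down — can be sketched in a few lines. This is just my own illustration of the arithmetic; the function name is mine:

```python
import math

def vote_budget(item_count: int) -> int:
    """Vote budget as the floor of the binary log of the item count."""
    return math.floor(math.log2(item_count))

# lg(88) is about 6.46, so 88 items yields a budget of 6 votes.
print(vote_budget(88))  # → 6
```

The nice property of a logarithmic budget is that it grows very slowly: doubling the number of items on the board only adds one vote.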

The distribution of votes was unlike any I have seen before, which may be because of the low attendance. Two items rose quickly to the top, with five and four votes respectively, but the next tier was items with two votes, of which there were six. After a brief reflection, I decided to seize the opportunity and mark all eight of these as "top items" for use in the final exam. These items, and their vote counts, are as follows:

  • Clean Code (5)
  • TDD (4)
  • Version control (2)
  • Using an API (2)
  • SRP (2)
  • Coding with a team (2)
  • Red-Green-Refactor (2)
  • Always run all the tests (2)

There is clearly some overlap among these, but that is fine. I look forward to seeing what the students have to say about these topics in the final exam.

Friday, April 17, 2026

Vlaada Chvátil's observations on diplomacy and winning

I just finished listening to Vlaada Chvátil's appearance on Justin Gary's Think Like a Game Designer podcast. It's definitely worth a listen for game designers and fans of Chvátil's work, and there were a few surprises along the way. Today, I want to share two brief observations that were shared at the end of the episode.

First, Chvátil mentions that he has lost interest in making or playing games of diplomacy: the metagame is entirely about convincing the table to go after the person in the lead and never after yourself. All the resource management that comprises the systemic game becomes moot in the face of the human game of negotiation (or, as Gary quipped, of who whines best). This is particularly interesting to me because of some conversations with my game design students this semester. I have pushed several to think about the player as a first-order component of the game—something you should contemplate when designing systems. Obviously, player interaction can be fun and interesting, but Chvátil is pointing out a limit. It is like an amortized analysis of an algorithm: when one factor dominates, the rest become irrelevant.

Second, Chvátil says he does not care for games where the dominant strategy is to hide that you are winning: it is unsatisfying to other players if you can obscure your standing and then surprise everyone with your win. This played into a theme of the whole interview, in which Gary was trying to understand what connects Chvátil's incredibly diverse oeuvre. Chvátil mentioned offhandedly that "winning a game is overrated." That is, what matters is that everyone has a good time. I also discussed this with my students, since many of them are still struggling to understand why few contemporary designers use player elimination or even Uno-style "skip turn" actions. I encouraged my students to think about it narratively: you invite someone to your house to play a game, and then during the game, you tell them that they don't get to play any more. It's just not polite. Yet students who are all-in on player elimination will find a justification!

Monday, March 23, 2026

Paper prototyping, digital prototyping, and the DORA Metrics

My video game preproduction class formed teams and started work on their final projects a few weeks ago. Each group started working on digital prototypes before they had really settled on the core gameplay. I encouraged them to move toward paper prototyping so that they could make changes faster, even radical changes. I am pleased to report that they have embraced nondigital tools as part of their process, although I'm still concerned that they are not iterating fast enough. 

This was on my mind when I watched Steve Smith's latest video from the Modern Software Engineering YouTube channel, which reminded me how the DORA metrics can positively influence software development practice. It got me thinking about the change lead time metric, which measures how long it takes for a change to go from committed to deployed in production. I realized that this is essentially the same idea I have been advocating for their prototypes, whether analog or digital: the less time it takes for an idea to go from implemented to tested, the better. It doesn't matter whether we are committing in the sense of version control or in the sense of having made a coherent change to a paper prototype. I suspect that the DORA metrics would still hold: a team that reduces that time is going to be more productive.
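To make the metric concrete, here is a minimal sketch of how one might compute change lead time from a list of (committed, deployed) timestamp pairs. The function name and the sample data are hypothetical, purely for illustration:

```python
from datetime import datetime
from statistics import median

def change_lead_times(changes):
    """Hours from commit to deploy for each change; lower is better."""
    return [(deployed - committed).total_seconds() / 3600
            for committed, deployed in changes]

# Hypothetical (committed, deployed) pairs for three changes.
changes = [
    (datetime(2026, 3, 20, 9, 0),  datetime(2026, 3, 20, 15, 0)),  # 6 hours
    (datetime(2026, 3, 21, 10, 0), datetime(2026, 3, 22, 10, 0)),  # 24 hours
    (datetime(2026, 3, 22, 8, 0),  datetime(2026, 3, 22, 20, 0)),  # 12 hours
]
print(median(change_lead_times(changes)))  # → 12.0
```

The same bookkeeping works for a paper prototype: replace the commit timestamp with "made a coherent change to the prototype" and the deploy timestamp with "put it in front of a playtester."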

In today's meeting, we talked about what this means for us as game developers. I borrowed part of Smith's analysis, discriminating between activity metrics, output metrics, and outcome metrics. The students got the idea that the outcome metric is most important, and that this could be something like profit or it could be something like delight. That is, if we are making a game that is designed to bring joy to people, then the amount of joy we bring the player is the intended outcome, so we should measure that. In fact, we often call that playtesting.

This led me to hypothesize that we can use change lead time as a tool to determine when to switch between forms of prototyping. I have found that students sometimes have difficulty understanding physical prototyping and other forms of lightweight prototyping when they are developing video games. The DORA metric points toward a solution: one should change modes of prototyping when doing so reduces change lead time. That is, given an intended change to the game system, one can ask which mode of prototyping allows us to go most quickly from implementation to playtesting. As long as that is paper, stay with paper. At some point, though, there will be more systems or crafted aesthetics than the low-fidelity form can accurately convey, and then we use higher-fidelity prototyping instead.

There are at least two potentially confounding factors within this hypothesis. The first is that it assumes we can accurately assess whether a particular system can be meaningfully playtested in a given prototyping medium. Drawing a physical card feels different than tapping a depiction of a deck, for example. The second is that it may require an amortized analysis when getting started: engineering a system that allows for rapidly changing design components requires some up-front work. The point remains, though, that one cannot build software that is infinitely malleable, and the low-fidelity prototyping ought to inform which parts of an implementation are expected to change and which are not. Like the first challenge, this deals with the inherent risks of software development: the only way to know exactly how to build it is to have already built it.