Thursday, February 15, 2024

Reaping the benefits of automated integration testing in game development

This academic year, I am working on a research and development project: a game to teach middle-school and early high-school youth about paths to STEM careers. I have a small team, and we are funded by the Indiana Space Grant Consortium. It's been a rewarding project that I hope to write more about later.

In the game, the player goes through four years of high school, interacting with a small cast of characters. We are designing narrative events based on real and fictional stories around how people get interested in STEM. Here is an example of how the project looked this morning:

This vignette is defined by a script that encodes all of the text, the options the player has, the options' effects, and whether the encounter is specific to a particular character, location, or year.
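To give a rough sense of the shape of these scripts, here is a hypothetical sketch in Python. The field names and values are invented for illustration; our actual script format differs in its details.

    # A hypothetical story script, written as a Python dictionary for illustration.
    # The field names here are invented; the real scripts differ in their details.
    skip_math_story = {
        "id": "skip-math",
        "character": None,  # None meaning any character can deliver this story
        "location": None,   # None meaning any location
        "year": None,       # None meaning any of the four years
        "narrative": [
            "Your friend mentions that they are thinking about skipping math next year.",
            "They ask what you would do in their place.",
        ],
        "options": [
            {
                "text": "Encourage them to stick with it.",
                "effects": {"friendship": 1, "stem_interest": 1},
            },
            {
                "text": "Tell them it's their decision to make.",
                "effects": {"friendship": 0},
            },
        ],
    }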

We settled on the overall look and feel several months ago, and in that discussion, we recognized that there was a danger in the design: if the number of lines of text in the options buttons (in the lower right) was too high, the UI would break down. That is, we needed to be sure that none of the stories ever had so many options, or too much text, that the buttons wouldn't fit in their allocated space.

The team already had integration tests configured to ensure that the scripts were formatted correctly. For example, our game engine expects narrative elements to be either strings or arrays of strings, so we have a test that ensures this is the case. The tests are run as pre-commit hooks as well as on the CI server before a build. My original suggestion was to develop a heuristic that would tell us if the text was likely too long, but my student research assistant took a different tack: he used our unit testing framework's ability to test the actual in-game layout to ensure that no story's text would overrun our allocated space.
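As a sketch of what that kind of format check might look like, here is a minimal example in Python, assuming the scripts are loaded as dictionaries like the hypothetical one above. Our real tests run in the game's own testing framework, and the layout test in particular depends on engine-specific APIs that I won't try to reproduce here.

    # Minimal sketch of a script-format check, assuming scripts load as Python
    # dictionaries like the hypothetical example above. The real tests live in
    # the game engine's testing framework, so the details differ.

    def is_valid_narrative(value):
        """Narrative elements must be a string or a list of strings."""
        if isinstance(value, str):
            return True
        return isinstance(value, list) and all(isinstance(item, str) for item in value)


    ALL_SCRIPTS = [skip_math_story]  # in practice, every script in the project


    def test_narratives_are_strings_or_lists_of_strings():
        for script in ALL_SCRIPTS:
            assert is_valid_narrative(script["narrative"]), script["id"]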

In yesterday's meeting, the team's art specialist pointed out that the bottom-left corner of the UI would look better if the inner blue panel were rounded. She mentioned that doing so would also require moving the player stats panel up and over a little so that it didn't poke the rounded corner. I knew how to do this, so I worked on it this morning. It's a small and worthwhile improvement: a cleaner UI with just a little bit of configuration. 

I ran the game locally to make sure it looked right, and it did. Satisfied with my contribution, I typed up my commit message and then was surprised to see the tests fail. How could that be, when I had not changed any logic of the program? Looking at the output, I saw that it was the line-length integration test that had failed, specifically on the "skip math" story. I loaded that one up to take a look. Sure enough, the 10-pixel change in the stat block's position had changed the line-wrapping in this one particular story. Here's how it looked:


Notice how the stat block is no longer formatted correctly: it has been stretched vertically because the white buttons next to it have exceeded their allocated space. 

This is an unmitigated win for automated testing. Who knows if or when we would have found this defect by manual testing? We have a major event coming up on Monday where we will be demonstrating the game, and it would have been embarrassing to have this come up then. Not only does this show the benefit of automated testing, it is also a humbling reminder that my heuristic approach likely would not have caught this error, while the student's more rigorous approach did.
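For comparison, the heuristic I had in mind would have looked something like the sketch below (the limit is an invented number, not a real project constant). Because it only counts characters and never measures the rendered layout, a pixel-level change like moving the stats panel could never trip it, which is exactly why it would have missed this defect.

    # Sketch of the character-count heuristic I had originally suggested.
    # The limit below is an invented number, not a real project constant.
    MAX_OPTION_CHARS = 90

    def option_text_probably_fits(option_text):
        """Guess whether an option's text will fit, ignoring the actual rendered layout."""
        return len(option_text) <= MAX_OPTION_CHARS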

I tweaked the "skip math" story text, and you can see the result below. This particular story can come from any character in any location, and so this time, it's Steven in the cafeteria instead of Hilda in the classroom.

We will be formally launching the project before the end of the semester. It will be free, open source, and playable in the browser.

Friday, February 9, 2024

Tales of Preproduction: Refining the prototyping procedure

I am teaching the Game Preproduction class for the second time this semester, and this time I am joined by Antonio Sanders as a team-teacher from the School of Art. There are already a lot of interesting things happening as we have a class that is now half Computer Science majors and half Animation majors. We have also extended the class time to a "studio" duration, so we meet twice a week for three hours per meeting instead of the 75 minutes I had with my inaugural group last year.

Given that quick summary of our context, I want to share a significant change that we made to the prototyping process from last year. Last year, the team adjusted the schedule because we hadn't dedicated enough ideation time to prototyping, and so this year, we set aside five days for it. Each day, the students are supposed to bring in a prototype that answers a design question. I remember this also being challenging last year, and it wasn't until late in that process that we remembered the seven questions that Lemarchand poses about prototypes in A Playful Production Process.

In an effort to get the students thinking more critically about their prototypes, we have required them to write short prototype reports that address Lemarchand's seven questions. The last of the questions, which Lemarchand himself typesets in bold to show its importance, is, "What question does this prototype answer?" What the reports help reveal, which was harder to see last year, are cases where the questions themselves are either malformed or unanswerable. That is, students are going into prototyping without a good idea of what prototyping is. Several times, I've seen students show their prototypes, and when I ask what design question they answer, the students have to look it up in their reports. This is pretty strong evidence that the questions were developed post hoc. What's most troubling is that, after four of the planned five rounds, these problems are still rampant.

Early in the process, my teaching partner suggested students think about design questions in the form "Is X Y?" where X is a capability being prototyped and Y is a design goal. For example, "Is holding the jump button down to fly giving the player a sense of freedom?" While this heuristic proved helpful, a lot of students struggled with it: in part, I think, because they didn't understand that it was only a heuristic, and in part because they haven't practiced the analysis skills required to pull a design question out of an inspiration. If I were to use this again, I'd follow the obvious-in-retrospect need to rename those variables to something like "Does this player action produce this design goal?" (Unfortunately, the discussion of design goals comes up later in the book, so maybe even this idea is too fuzzy for the students.)

Many of the questions that students want to pursue are actually research questions. I mean this in both the colloquial and the academic senses. A question like "Does adding a sudden sound make the player scared when they see the monster?" is obviously answered in the affirmative: one need only look at games that induce jump-scares to see that this is effective. Questions like "Do timers increase player stress?" are simple design truisms that are not worth prototyping. In yesterday's class, I tried to explain to the students that if a question is generic, then it's a research question, and that design questions are always about specifics. In particular, in science, we approach general questions through specific experiments that attempt to answer them; in design, we answer specific questions directly through specifics.

Reflecting on these problems, it becomes clear that the earlier parts of the semester were not goal-directed enough. Students acknowledged after our in-class brainstorming session that they were not brainstorming game ideas (but that's a topic for another post). When the students did research, much of that was not goal-directed either. Now, in prototyping, it's easier to see what students are interested in, and we can point out to them that their interests and issues can and should be solved by blue-sky ideation or by research. However, we haven't baked that into these first five weeks. Put another way, we took a waterfall approach to ideation, whereas perhaps next year we should try an iterative one.

We're in the process of collecting summaries of all the students' prototypes. I put together a form that uses this template for students to self-describe prototypes that are viable for forming teams around:

This game will be a GENRE/TYPE where the player CORE MECHANISM to GOAL/THEME. 

I'm eager to see if this was a helpful hook for the students. I will have to ask them about it on Tuesday and then see if it's something we can use with next year's cohort.