This past spring, I led another team of undergraduates in the development of an original educational game. The final result is Canning Season, a game for two or more players that is played on three or more tablets; it works best when there is at least one more tablet than there are players. The team comprised thirteen undergraduates: an English major, two Art majors, one double major in Art and Computer Science, and the remainder Computer Science majors. Late in the semester, the team named themselves Raccoon Hands Productions, after a feature suggestion that was never included.
The game itself is a playful simulation of the canning process. We were once again working with Minnetrista and so this game is something of a spiritual successor to Canning Heroes. There are three stations: carrot preparation, jar sanitizing, and jar filling. The gameplay is inspired by Overcooked, but instead of couch-cooperative play with gamepads, players have to physically run between tablet stations.
The first three weeks of the semester yielded two potential design directions. One was a very conservative design, a simple action game that the team was sure they could get done with time to spare. The other design was the one the team went for. I told them at the time that it was technically challenging. I have mentored two other studios in the creation of networked games (Collaboration Station and Children of the Sun, both of which are no longer available), and while both were ostensibly completed, both lacked polish. The team was undaunted and decided that it was better to pursue an ambitious project than to squander the opportunity on something less.
I wrote in February about some of the joys and frustrations of working with a large team of eager but inexperienced students. I will touch on a few of those ideas again today, but I will try not to repeat myself. In fact, a few puzzles from the semester remain in my mind, but they are not the kind of thing I feel I can write about here. I am still trying to discern whether they are even fruitful to bring up on the team Slack. Put another way, the meta-puzzle is: does anyone care about this the way that I do?
Let me focus on some of my conclusions from the semester, even if some of the background has to be obfuscated.
I was not satisfied with the grading system for the class. I had the students complete personal reflection essays at the end of each iteration, and as usual, I found this to be a good exercise. That is to say, I felt the work was fruitful in that it made them think. However, students often misread the instructions or submitted something clearly done in haste and without care. They never seemed to react to my feedback on these essays either: several times, I pointed to concrete steps and improvements that I thought would benefit a student, but these never showed up in action or in discussion. As I have written many times before, I have no way of knowing whether students even read my feedback.
It was clear that asking students to self-report on whether they followed the methodology was unfruitful. By late in the semester, it was obvious that many had not read the methodology, and many certainly did not understand it. The methodology opens with the seven properties of Crystal Clear, which, if you understand them, basically give your team superpowers. Never during the semester, not even when prompted fairly directly, did anyone reference or ask about those seven properties. I think the best solution is to take what I have learned from my CS315 Game Programming course and apply it here: use checklists. Students, who are brainwashed into doing the least possible work, will easily waste the opportunity for improvement that reflective self-assessment offers. Making them complete a checklist puts the issues right in front of them.
What exactly goes on the checklists is a different question, but this segues into another important issue. There were certain ideals I posed to the team at the beginning of the semester, such as the importance of everyone being able to access, update, run, and test the game. Many team members still could not do this at the end of the semester, and this contributed to several of the team's problems. I suspect they didn't even realize it, since they had no point of reference: it's like a student in CS222 looking at code they wrote before studying Clean Code and recognizing how much they didn't know at the time. A potential solution here, and one I have written about before, is to use something like specifications grading or a skill tree, though it's still not clear to me how this can be done in an interdisciplinary and equitable way. At the very least, I think it would be useful to have clear criteria such as "I can access and modify the game," "I have playtested the game with someone outside the studio," and "I have automated something tedious."
Thirteen is just too many for one person to manage, and I think it is likely too large a group for novices to self-organize within. I suspect that having some kind of assistant producer could have really helped. Here's an example: at the end of each iteration, we held a retrospective meeting, and that meeting resulted in action items. The team approved these action items, and I recorded them in a Retrospective Report... which I suspect was never read by anybody. A great many of these action items were never mentioned again, and some of those that were mentioned took weeks to implement; stranger yet, one that was not approved was implemented. My attention was occupied with helping the critical path, so I did not allocate much of it to reminding students that they had committed to something. A student assistant producer could take on this kind of role, helping guide and shape the team.
This team seemed to struggle longer than most at understanding the task board. This may be in part because of their difficulty accepting the idea that everyone is responsible for the project. I found myself wondering whether a Kanban approach would have been clearer to them. I am unsure, though, because the fact is that the team only met its commitments in one of six sprints. This makes me think that they had internalized the idea that it didn't matter whether they met their commitments, which also robbed them of the ability to actually learn from mistakes. It was nice to see a few students tackle this concept in their final exam essays, though I suspect many of them still don't see it. To put it another way, if a team says, "It is OK if we fail," and then they fail, it becomes a positive feedback loop that normalizes failure. It's not obvious to me that a different way of organizing the work would help, but I plan to spend some time this summer following up with friends in industry to collect new ideas on how to structure tasks.
These things have been on my mind in part because I'm helping spin up my department's new Game Design and Development concentration. Next year, I will be teaching the inaugural versions of CS414 and CS415, which will be a year-long project sequence. In future years, it will be an interdisciplinary course taught with musicians and artists, though for now, I think it will be just Computer Science students. A significant portion of my summer attention will be devoted to figuring out how to get that sequence up and running.