
Monday, May 9, 2022

Reflecting on the Spring 2022 Game Studio

This past Spring, I led another team of undergraduates in the development of an original educational game. The final result is Canning Season, a game for two or more players that is played on three or more tablets. It works best if the number of tablets is at least one more than the number of players. The team comprised thirteen undergraduates: an English major, two Art majors, one double major in Art and Computer Science, and the remainder, Computer Science. Late in the semester, the team named themselves Raccoon Hands Productions. The name came from a feature suggestion that was never included.

The game itself is a playful simulation of the canning process. We were once again working with Minnetrista and so this game is something of a spiritual successor to Canning Heroes. There are three stations: carrot preparation, jar sanitizing, and jar filling. The gameplay is inspired by Overcooked, but instead of couch-cooperative play with gamepads, players have to physically run between tablet stations.

The first three weeks of the semester yielded two potential design directions. One was a very conservative design, a simple action game that the team was sure they could get done with time to spare. The other design was the one the team went for. I told them at the time that it was technically challenging. I have mentored two other studios in the creation of networked games (Collaboration Station and Children of the Sun, both of which are no longer available), and while both were ostensibly completed, both lacked polish. The team was undaunted and decided that it was better to pursue an ambitious project than to squander the opportunity on something less.

I wrote in February about some of the joys and frustrations of working with a large team of eager but inexperienced students. I will be touching on a few of those ideas again today, but I will try not to repeat myself. In fact, there are a few puzzles remaining in my mind from the semester, but they are not the kind of thing I feel I can write about here. I am still trying to discern whether or not they would even be fruitful to bring up on the team Slack. Put another way, the meta-puzzle is: does anyone care about this the way that I do?

Let me focus on some of my conclusions from the semester, even if some of the background has to be obfuscated. 

I was not satisfied with the grading system for the class. I had the students complete personal reflection essays at the end of each iteration, and as usual, I found this to be a good exercise. That is to say, I felt like the work was fruitful in that it made them think. However, many times, students misread the instructions or submitted something clearly done in haste and without care. Students never seemed to react to my feedback on these, either: several times, I pointed to concrete steps and improvements that I thought would benefit the student, but these never came up in action or in discussion. As I have written about many times before, I have no way of knowing if students even read my feedback.

It was clear that asking students to self-report on whether they followed the methodology was unfruitful. It was obvious by late in the semester that many had not read the methodology, and many certainly did not understand it. The methodology opens with the seven properties of Crystal Clear, which, if you understand them, basically give your team super powers. Never during the semester, not even when prompted fairly directly, did anyone reference or ask about those seven properties. I think the best solution is to take what I have learned from my CS315 Game Programming course and apply it here: use checklists. Students, who are brainwashed into doing the least possible work, will easily waste the opportunity for improvement given by reflective self-assessments. Making them complete a checklist puts the issues in front of them. 

What exactly goes on the checklists is a different question, but it segues into another important issue. There were certain ideals I posed to the team at the beginning of the semester, such as the importance of having everyone be able to access, update, run, and test the game. Many team members still could not do this at the end of the semester, and this contributed to several of the team's problems. I suspect they didn't even realize it, since they had no point of reference: it's like a student in CS222 looking at code they wrote before studying Clean Code and realizing how much they didn't know at the time. A potential solution here, and one that I've written about before, is to use something like specifications grading or a skill tree, though it's still not clear to me how this can be done in an interdisciplinary and equitable way. At the very least, I think it would be useful to have clear criteria such as "I can access and modify the game," "I have playtested the game with someone outside the studio," and "I have automated something tedious."

Thirteen is just too many for one person to manage, and I think it's likely too large for novices to self-organize within. I suspect that having some kind of assistant producer could have really helped. Here's an example: at the end of each iteration, we held a retrospective meeting, and this meeting resulted in action items. The team approved these action items, and I recorded them in a Retrospective Report... which I suspect was never read by anybody. A great many of these action items were never mentioned again; others took weeks to implement; stranger yet, one that was not approved was implemented anyway. My attention was occupied with helping along the critical path, and so I did not allocate much of it to reminding students that they had committed to something. A student assistant producer could take on this kind of role, helping guide and shape the team.

This team seemed to struggle longer than most at understanding the task board. This may be in part because of their difficulty in understanding the idea that everyone is responsible for the project. I found myself wondering if a Kanban approach would have been more clear to them. I am unsure, though, because the fact is that the team only succeeded in one out of six sprints. This makes me think that they had internalized the idea that it didn't matter if they met their commitments or not, which also robbed them of the ability to actually learn from mistakes. It was nice to see a few students tackle this concept in their final exam essays, though I suspect many of them still don't see this. To put it another way, if a team says, "It is OK if we fail," and then they fail, it becomes a positive feedback loop that normalizes failure. It's not obvious to me that a different way of organizing the work would help, but I plan to spend some time this summer following up with some friends in industry to collect some new ideas on how to structure tasks.

These things have been on my mind in part because I'm helping spin up my department's new Game Design and Development concentration. Next year, I will be teaching the inaugural versions of CS414 and CS415, which will be a year-long project sequence. In future years, it will be an interdisciplinary course taught with musicians and artists, though for now, I think it will be just Computer Science students. A significant portion of my summer attention will be devoted to figuring out how to get that sequence up and running.

Friday, May 7, 2021

Reflecting on CS490 Software Development Studio, Spring 2021

It was an interesting experience teaching CS490 Software Development Studio this past semester. This was the second year of my experiment in letting students pursue their passion projects. The first year was intended as a one-off, but with the pandemic, it seemed a bad time to try to bring back the element of community engagement. However, next year (Spring 2022), we are already approved and funded to do another community-engaged project with Minnetrista, and I'm really looking forward to that. It means, though, that any breadcrumbs I leave for myself here may not be followed for some time.

We started the semester by having each student participate in two rounds of pitches, spent some time building prototypes from a subset of these, and by the middle of February had settled on three projects to pursue. All three were completed and published to itch.io:

The students followed the distributed methodology that I put together early in the semester. The only major edit I can remember from the semester was removing Trello as an option, since it didn't support the rest of the process as clearly as HacknPlan. At the end of each iteration, each student had to submit a statement that they had met the nine-hours-per-week labor commitment and that they had followed the methodology to the best of their understanding and ability. I thought about having them keep labor logs, as I did in game design last semester, but I decided to go with the more lightweight periodic self-reports instead. This kind of self-reporting is always open to dishonesty, given that grades were connected, but I did not have any sense of this as an endemic problem during the semester. Most students reported that they met all the goals all the time, although some admitted to cases where they were not on point. 

Last Spring, the teams started up in person and then had to move online, and I felt like we had a good working relationship throughout. This semester, we technically had a room that we could use, but that would have been extremely awkward for presentations, conversations, and collaboration. We ended up moving almost everything online, including pitches, work days, planning meetings, and presentations. Given the constraints of the alternative, it was better in all ways. We only met face-to-face a few times during the semester for whole studio meetings. 

A result of this is that I felt very much "outside" the group, and there was not a real sense of community between the teams. The most surprising way this manifested was on Slack. Originally, different teams used different communication channels, but this was hard for me to manage and prevented people from reaching out for help across teams. I brought everyone together into one Slack instance, but people, by and large, ended up staying in their team's channels anyway. I used the #general channel to share thoughts, ideas, reminders, and the like, but I am not sure that anyone else ever really used it for general studio chat. Put another way, there was no general studio chat. A few people posted links in #random, but even that was really quiet, devoid of any good memes. It makes me wonder: if I were to run a course this way again, should more effort be spent on building up a sense of commonality, or should it be spent helping teams become self-sufficient and independent?

Early in the semester, I required that each team come up with a vision statement, explaining to the students that the main value of this is to be able to say "no" to good ideas. The rest of the semester was essentially just a series of sprints, then, in terms of what students were required to do. I advised the teams toward good practices of early playtesting and setting a feature freeze date, but I did not require either of these. Conversations with the teams, including our whole-studio semester retrospective meeting, make me wonder if it would be better to frontload some deadlines around these. For example, part of the methodology could give a date by which the first playtesting is expected, so that teams can move toward this.

Now, in some ways, I see this as inherently dysfunctional. I explained to all the teams—more than once—that "done" meant tested, and so for gameplay issues, "done" meant playtested. I talked about the relative merits of internal vs external playtesting, but I got a real sense that this was falling on deaf ears. When mentoring one large team, I am active in the regular maintenance of the task board and discussion of "done," and the students get it; without that, I think these teams fell into a conventional undergraduate perspective, where "done" meant something much less rigorous. It is possible, then, that this is another case where one method might work well for whole-studio projects and another for small-team projects.

Along with vision statement articulation, external playtesting, and feature freeze dates, another item I have contemplated including is a style guide. Many of my teams have gone without one, but I have been inspired most recently by Ben Humphrey's talks and blog to think about the UX problems my students face. When there is a single team, we are fortunate when an artist sets up a palette, fonts, and a visual style, but even that is not guaranteed. Still, I think students understand that releasing a game using the engine's default styles is a faux pas. I do not want to push every team toward design bibles, but I think there might be some value in at least articulating a basic style guide early in the process.

I am proud of the work these students did, and honestly, their reliability and hard work enabled me to redirect efforts toward the unexpected picking up of CS431 these past few weeks. I hope they do not feel abandoned, but I consider myself fortunate that I had a CS490 group so consistent that I could trust them to wrap things up on their own.

In closing, I would like to share a final thought about the semester. Due to scheduling constraints and all the chaos of the pandemic, the CS490 class this semester was overloaded with technical people, mostly CS majors. I also had some of the most motivated and talented technical teams I have ever seen, and I actually think that's a separate issue from the first. That is, I had more technical people, but I also had better technical people. It was great to see these students deploy lifetime learning skills to teach themselves how to move forward on these different projects. The only downer for me, then, is that I didn't have these people together to work on a community-engaged project! I cannot help but think of what we could have accomplished if we had been able to all be together, in RB368, with coffee and donuts, meeting up for game nights, working with a community partner. 

Sunday, March 28, 2021

Contrasting Godot Engine and UE4 for use in undergraduate game programming and game production

The university is planning on having all our courses back in-person in Fall, and this means that my CS315 Game Programming class will once again be able to meet in the lab. Last year, I switched from UE4 to Godot Engine in large part because of the lack of access to lab machines: every student has a personal machine that can run Godot Engine, but many cannot run UE4. This Fall, I could just go back to UE4, but I have spent a lot of time with Godot Engine, and it has some distinct advantages for teaching.

I shared with my CS490 studio, via Slack, that I was thinking about the tradeoffs between these two engines. There are students in the virtual studio who have taken CS315 in both forms: some took it in Fall 2019 with UE4, and some in Fall 2020 with Godot Engine. Additionally, several of these people have since started learning different engines: some students who learned UE4 in class are now using Godot Engine, and some who learned Godot Engine are now using Unity. One student provided an excellent overview of his experience, but sadly, no one else engaged with my request for comments.

Last Friday was the end of the third sprint for CS490, and we all met on Zoom so that the three teams could show their increments to each other. It so happens that during the discussion, a few items came up that are salient to the issue of engine selection. One team, which is using Godot Engine, had underestimated the time it took to write an AI for some of their game characters. In another project, also using Godot Engine, a student pointed out that the original compositions seemed to blend together well when switching between them. This was a happy accident, not something designed into the music or its playback.

I commented to the students that both of these issues are great examples of the differences between engines. I pointed them to my tutorial video about UE4 behavior trees, which provide a higher level of abstraction for writing AI compared to doing it directly in GDScript. For the audio, I mentioned that UE4 has a very robust audio framework, again with custom authoring tools well beyond what is integrated with Godot Engine. In a follow-up post on Slack, I noted that one can crossfade audio in Godot Engine with an animation player, and I also pointed them to a reference about UE4 Sound Cues, a contextual example of crossfading using them, an amazing presentation of the advanced audio capabilities used by sonic artists, and a useful tutorial about how to write a music stitching system.
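For the curious, the heart of any crossfade is a pair of complementary gain curves; an animation player is just tweening two volume properties along curves like these. Here is a minimal, engine-agnostic sketch in Python of the equal-power version (the function name is mine, purely illustrative, not any engine's API):

```python
import math

def crossfade_gains(t):
    """Equal-power crossfade at position t in [0, 1], from track A to B.

    Returns (gain_a, gain_b). Because gain_a**2 + gain_b**2 == 1,
    perceived loudness stays roughly constant across the fade,
    avoiding the volume dip at the midpoint that a plain linear
    fade (1 - t, t) produces.
    """
    gain_a = math.cos(t * math.pi / 2)
    gain_b = math.sin(t * math.pi / 2)
    return gain_a, gain_b
```

At t = 0 only track A is audible, at t = 1 only track B, and at the midpoint both tracks play at about 0.707 of full volume rather than 0.5, which is why the switch sounds smooth.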

So, if UE4 has all this cool stuff, why not just use it? As I discussed with my students during the meeting, a lot of this great engine content is beyond what a student can get to in a one-semester introduction to game programming. These would be useful things to know now, having wrapped up the third iteration of a project and having already studied the basics, but I've never seen a student in CS315 get far enough to really need advanced audio or behavior trees. For behavior trees in particular, when I have offered some A-level credit for exploring them, the results have always been a bit disappointing because it's the tail wagging the dog: students make a simple behavior tree to show that they can rather than using one to solve an actual problem they have.
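To give a flavor of what the abstraction buys, a behavior tree is built from composite nodes such as sequences and selectors. This is a minimal, engine-agnostic Python sketch of that core idea only (the names and toy guard AI are mine; UE4's actual behavior tree system is far richer, with blackboards, decorators, and services):

```python
SUCCESS, FAILURE = "success", "failure"

def sequence(*children):
    """Tick children in order; fail on the first failure (an 'and')."""
    def tick():
        for child in children:
            if child() == FAILURE:
                return FAILURE
        return SUCCESS
    return tick

def selector(*children):
    """Tick children in order; succeed on the first success (an 'or' of fallbacks)."""
    def tick():
        for child in children:
            if child() == SUCCESS:
                return SUCCESS
        return FAILURE
    return tick

# A toy guard AI: attack if the player is visible, otherwise patrol.
player_visible = lambda: FAILURE   # hypothetical condition node
attack = lambda: SUCCESS           # hypothetical action node
patrol = lambda: SUCCESS           # hypothetical action node
guard_ai = selector(sequence(player_visible, attack), patrol)
```

Ticking guard_ai once evaluates the whole tree: the sequence fails because the player is not visible, so the selector falls through to patrol. The point is that the designer arranges priorities declaratively instead of hand-writing the equivalent tangle of conditionals in GDScript.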

Godot Engine, by contrast, makes it really easy to stand up a prototype. Additionally, as one of my students put it, 3D in Godot Engine is fine, but 2D in Unreal Engine is a pain. A student also suggested that most students in CS315 would rather make 2D games than 3D anyway, especially given the extra complexities inherent in 3D game production. 

I am not yet putting together formal course plans for Fall—that will come later. For now, I am undecided but leaning toward sticking with Godot Engine for another year. I have been approved to run another full-year immersive learning collaboration with Minnetrista: we will make prototypes in the Fall and produce a game in the Spring studio. Although CS315 is not part of this sequence, many CS majors come into CS490 via CS315, so this will be a factor in my decision. I also have two outstanding grant proposals that would impact how I want to devote my own attention next academic year: it's much easier to teach an engine I am currently using than one I am not.

Wednesday, February 17, 2021

User stories vs Sprint Planning in the Spring 2021 Game Studio

I'm excited to be able to teach my Spring Game Studio course this semester, even if it's not ideal. It falls short in a few ways: we cannot use the actual studio space due to physical distancing restrictions; the meeting time was determined by the college rather than the department, and it is terrible; and the class is almost all Computer Science majors, in part because that awful scheduling prevented us from synchronizing with the other majors that regularly send students. That said, I am very grateful to have about a dozen eager and excited students, and I am able to continue to explore how to manage multiple simultaneous teams. When it comes to distributed work, we can be more intentional about it than we were when the Spring 2020 studio had to suddenly change character due to the pandemic.

The Spring 2021 studio started with six different game ideas, and it took us until the fourth week of the semester to narrow them down to just three. This process involved building MVPs, which were valuable both for explaining the games and for building teams around them. I wrote up a methodology for distributed development, based on my notes from last year, and it is currently available on GitHub.

On Friday of last week, all three teams met together so that I could talk about the planning and production process. This was designed to help them bridge from the pre-production work and prototyping they had been doing into the expected rhythm and rigors of the methodology. I showed them how we could use HacknPlan to manage parts of the project, and conveniently, I was able to use the context of one of the abandoned projects: it was something new and unbuilt and yet still familiar, which is much better than talking about re-engineering something that already exists.

This allowed the teams to come to our Monday meetings and plan their projects. Here, however, is where I think I made a mistake. I had hoped that they would be able to articulate their features as user stories, then break them down into tasks, and then assign tasks during the meeting. Even though I knew how hard it is to turn ideas into user stories, I don't think I gave them enough time here. When I am leading an immersive learning team, I will often just do this for them, based on their discussions. The result was that it took well over an hour for each team just to figure out what a few of their stories were, let alone to prioritize them, commit to them, and plan the work required to satisfy them. The meetings went too long, with the concomitant fatigue.

I think I could have handled it differently. One option would have been to schedule separate meetings, one for story identification and one for sprint planning. This also would have given me a chance between meetings to give feedback on story articulation, whereas what I actually had to do on Monday was traipse between three different online meetings. An alternative would have been to allow each team to designate one or more members to devote their time to story identification; that is, they could have served in a Product Owner role, as I have done.

I'm still happy with the work they have done and proud of their progress, but I wanted to make sure that I tracked this idea here on my blog so that next time, I might remember to separate these processes. Also, the methodology (as of this writing) mentions Trello as an option, but I had forgotten that Trello does not allow for convenient separation of story articulation and commitment vs. task articulation and planning. I should simply remove it from the methodology next time, because it does not allow for the precision that I would normally have on the studio's whiteboard. Unfortunately, I learned this lesson because one of the three teams tried to use Trello and then broke down in their planning meeting when they couldn't follow the steps I had shown in the previous meeting.