Thursday, December 14, 2017

Reflecting on the Fall 2017 CS315 Game Programming Class

Several times during the semester, I have wanted to share thoughts about my game programming course here, but time and other obligations have gotten the best of me. Now that the semester is wrapping up, I am making the time to focus on the last few weeks of the course and the roller-coaster of emotion it has been for me.


To set up the story, it had been two or three years since I was last able to teach my game programming course (CS315), and I revised the course design over the summer to center on Unreal Engine 4 and my Collaboration Station project. The students started the semester with a two-week project designed to get them into the UE4 basics. The project was called Angry Whatevers, and students had to create a simple playable game that involved physics and parabolic motion. I started a new playlist on YouTube to support the class during this period, beginning with a pair of videos that give an appropriate framework using UE4's 3D primitives and Paper2D sprite system, respectively.
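The physics behind an Angry Whatevers-style game boils down to textbook projectile motion. As a toy illustration (plain Python, not UE4 code; the speed and gravity values are made up, with gravity chosen to roughly match UE4's centimeter-scale default), here is a sketch of sampling a parabolic trajectory:

```python
import math

def trajectory(speed, angle_deg, gravity=980.0, steps=20):
    """Sample the parabolic path of a projectile launched from the origin.

    Units are arbitrary; gravity defaults to 980 (cm/s^2, roughly UE4's
    default world scale). Returns a list of (x, y) points from launch
    until the projectile returns to its starting height.
    """
    angle = math.radians(angle_deg)
    vx = speed * math.cos(angle)
    vy = speed * math.sin(angle)
    flight_time = 2.0 * vy / gravity  # time until y returns to zero
    points = []
    for i in range(steps + 1):
        t = flight_time * i / steps
        points.append((vx * t, vy * t - 0.5 * gravity * t * t))
    return points

path = trajectory(speed=1000.0, angle_deg=45.0)
print(path[0], path[-1])  # starts and ends at height ~0
```

In the actual projects, of course, students let the engine's physics simulation do this work rather than computing it by hand; the sketch just shows the math the assignment was built around.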

Because I was recording these videos on Windows, I had to learn a new video production pipeline as well. My first attempt used OBS Studio for the screen capture and OpenShot for video editing, in part because OpenShot is what I've been using on Linux for some time. I realized after posting the first edit of the first video that there were significant editing glitches. When I brought this up in class, the students expressed disdain for OpenShot and strongly encouraged me to learn Blender. I didn't even know that Blender had video editing support! It took some time to learn Blender's video editing system, but I ended up becoming fairly productive at doing simple edits with it.

In any case, all the students who did not withdraw from the class did fine in their Angry Whatevers project. Several students withdrew during this period, but none talked to me about their experience, so I have nothing but conjecture about causality here. We had a class meeting after Angry Whatevers during which I gave them the option to either move forward with another tutorial-style assignment or to move into forming teams for a bigger final project. They chose the latter. I prepared and delivered a presentation about Collaboration Station, showing the three minigames that I had not yet implemented, and I encouraged students to choose one of these as their semester project; however, I also gave them the freedom to propose something different, with the cautionary explanation that game design is hard. As you can probably predict, dear reader, all of the teams decided to pursue their own projects.

I had the teams write their pitches in Tim Ryan's game concept format, which I had to approve before they could move forward, and they organized their work as user stories expressed in Trello. More specifically, they used the "front" of each Trello card for the name of a user story in Mike Cohn's format, and they used the "back" to track the conditions of satisfaction, hour estimate, and team member assignment. My intention was to monitor teams' progress via their changes on Trello, but I quickly had to drop this plan as out of scope: I was overwhelmed this semester with the number of different projects I was supervising, and I could not afford this level of oversight. I was surprised when, after the first iteration, the students elected to keep using Trello exactly as I had required. They found value in it, and so they required themselves to show their progress via Trello at each end-of-iteration presentation.

I'll note here that toward the end of the second iteration, the students expressed interest in learning more about the C++ side of UE4. Some students had thought about shifting some of their existing work from Blueprints to C++, but they were unsure how to dig in. I recorded a series of five videos that walked through an example designed for students with no C++ experience, although these were not ready until we were into the third iteration. No students ended up using C++, and avoiding such a thing in their third iteration was wise. However, looking at the video series, the first video has 49 views, the second has only 28, the third has 13, the fourth has 37, and the fifth has 29. If you can make any sense out of that, let me know. For the time being, I interpret it as meaning that students by and large didn't watch the series for their own edification. It took roughly 10 hours to make that video series, so hopefully I can recoup the investment either in a future class or in the long tail of YouTube.

The tumultuous end of the third iteration

I know that was a heck of a wind-up, but now we can get into the story of the last few weeks. At the end of the first iteration, I was pretty lenient in what I considered "satisfactory," since the teams were still getting their feet wet with UE4. In the second iteration, some teams did not have working core gameplay, and so I told them the work was unsatisfactory. What surprised me was that at the end of the third iteration, a majority of the projects had fundamental problems such as broken core gameplay, no audio or visual feedback, or no attention paid to user experience. This made me go look at their version control logs, which pointed to the idea that many had simply not put in adequate effort. Before grading these final iterations, I went back to the course description to see how, specifically, I had defined "satisfactory." To my great embarrassment, I hadn't. I honestly remembered having written it up, but either I wrote it somewhere else and misplaced it, or I simply thought about it and never typed it up. In the absence of a rubric, the best I could do was fall back on the clear dictum: "Note that, because this is a three credit-hour course, you should expect to invest nine hours of attention to it per week." Clearly, some teams did not do this, with gaps of 15-19 days between commits.
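The commit-gap check is easy to automate. As a hedged sketch (assuming the commit dates have already been pulled out of the log, e.g. with `git log --format=%as`; the dates below are hypothetical, not from any actual team's repository):

```python
from datetime import date

def max_gap_days(commit_dates):
    """Return the longest gap, in days, between consecutive commit dates."""
    ordered = sorted(commit_dates)
    return max(
        (later - earlier).days
        for earlier, later in zip(ordered, ordered[1:])
    )

# Hypothetical commit dates for one team's repository:
dates = [date(2017, 10, 2), date(2017, 10, 3), date(2017, 10, 22)]
print(max_gap_days(dates))  # → 19
```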

There's another important piece to understanding my emotional reaction to the third iteration. The teams' presentations were split over two 75-minute class periods, with the day of a team's presentation being essentially random. It turns out that the Tuesday group was the disheartening one: no team presented something that I would call, in any way, "done." This led to an interesting conversation on Slack with a few students who were willing to speak up, though. I expressed my distress that they had not produced shippable games, but some students said that they never wanted to finish a project in the first place—they really just wanted to tinker around. That's interesting for several reasons, one being that they clearly didn't understand what I had tried to teach them about Scrum; another is that they claim to have had a fundamentally different perspective on the course goals all along. One student put it very clearly that he thought he was "learning game programming in order to learn Unreal Engine," which is vastly different from my intention, which was for them to learn Unreal Engine in order to do game programming. Students in the former category would probably have been disappointed at my lack of didactic lectures about specific UE4 features, whereas students in the latter category would appreciate the seminar style of individual exploration and sharing of lessons learned.

After these conversations came the Thursday group, in which two teams really nailed it, showing off games that were truly shippable. In their presentations, they shared specific and interesting tech tips in such a way that everyone in the room learned something. (We called this requirement the Feature Focus™.) I thought about going back to Slack to try to understand how people on the other teams felt about the excellent Thursday presentations, but I did not. I did not want to come across as crass or petty, although I do really want to know what they thought, as I try to understand their perspective—or what they claim their perspective is, anyway.

This isn't even my finals form

When we wrapped up the second iteration, I gave the students the option of either working on their final project through to the end of the semester or ending a week early and doing a different final assessment. The result of our discussion was that the students wanted to be able to choose either a traditional written final exam or a "jam" final. I had used the jam format back in 2011, and they seemed to like the idea when I told them about it.

I tried to make the two equitable, each designed to take four to six hours. I designed the jam format first, using my university's recently-announced new branding campaign, "We Fly," which appears to be beloved of upper administration and generally denounced by students. I put the following constraints and requirements on their submission to this final format:
  • Playable game implemented in Unreal Engine 4.17 or 4.18
  • Implemented by an individual student in CS315
  • The creators of any third-party assets are credited and licenses identified as per licensing requirements
  • Player input is managed through project settings
  • Core gameplay includes interaction of multiple actors in a level
  • Dynamic game state is tracked in the user interface (e.g. score or health trackers)
  • Clear goal and ending conditions
  • Include music and/or sound effects
  • Captures an interpretation of the theme
A casual observer will notice that these include some of the specific features whose absence surprised me in the third iteration.

The students presented their submissions during our university-assigned final exam slot, and I was delighted at the quality of their work. Students whose technical contributions and understanding were unclear to me showed that they had minimum expected competence with UE4, and students who had really pushed themselves showed some impressive jam-quality work. It was fun to see how many students poked gentle fun at the university's marketing efforts, with several games involving the destruction of money, amassing money, or throwing money into bottomless pits in order to create new slogans.

Seven students opted for the written final format, which consisted of three questions. The first had students research and report on binary space partitioning and its role in BSP brushes; we did not address this in class, and so I figured this would be a good assessment of their ability to explore new topics related to game programming. The second question involved deconstructing a classic arcade game and discussing how it would be implemented in UE4, along with a project management plan such as a Scrum product backlog. The third was a post-mortem of their team project in classic GDMag format.

Their responses to the first question were a mixed bag, the biggest problem being that some did not distinguish between BSP and UE4's BSP brushes. In a sense, the assessment "worked" because I was able to see their confusion.
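The distinction matters: binary space partitioning is a general recursive technique for subdividing space with splitting planes, while UE4's BSP brushes are one application of it to level geometry. As a toy illustration of the recursive idea (a deliberately simplified one-dimensional version, so the plane math doesn't obscure the structure), here is a sketch in Python:

```python
def build_bsp(values):
    """Build a toy 1D BSP tree: each node splits the line at a pivot value.

    Real BSP trees partition 2D/3D space with lines or planes (UE4's BSP
    brushes apply the idea to level geometry); one dimension keeps the
    recursive partitioning visible without the plane math.
    """
    if not values:
        return None
    pivot = values[0]
    front = [v for v in values[1:] if v >= pivot]
    back = [v for v in values[1:] if v < pivot]
    return {"split": pivot, "back": build_bsp(back), "front": build_bsp(front)}

def locate(tree, point):
    """Walk the tree to the region containing point, returning the path taken."""
    path = []
    while tree is not None:
        side = "front" if point >= tree["split"] else "back"
        path.append((tree["split"], side))
        tree = tree[side]
    return path

tree = build_bsp([50, 25, 75, 10])
print(locate(tree, 30))  # → [(50, 'back'), (25, 'front')]
```

A student who could articulate this recursive structure, and then explain how the engine uses it for geometry operations, would have nailed the question.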

The problems with the second question were more interesting. Some students had significant trouble analyzing the game to determine its formal components and then arranging those into a series of steps. In retrospect, I had not given them much feedback on the project management portion of the class, and I know from experience with immersive learning teams that this is hard enough by itself. Similarly, these students generally had no experience with game design, and so they lacked some vocabulary and mental models for separating a player experience from formal components. This part of the question may have been "unfair" in that it wasn't something I had really emphasized throughout the semester. There was also variance in the second part of the question, where some students waved their hands and talked about implementation in broad strokes, while others were more specific about what should be actors, what should be pawns, who should have which responsibilities, and so on. If I were to give this type of exercise again, I would have to include a more specific description of what a correct answer would contain, since it was hard to distinguish whether students thought their submissions were adequate or whether they were writing and hoping for points (a phenomenon every instructor is familiar with). In the end, however, everyone who did anything reasonable on the written format got a satisfactory mark, so it's not worth fretting over the details at this point.

Students' Review of the Course

At the end of our finals week meeting, I ran a quick semester retrospective with the students. I wrote four columns on the board: + (for positives), - (for negatives), Δ (for things to change), and ? (for lingering questions). I encouraged students to speak up and share their thoughts, and I recorded them on the board without commentary, except occasionally asking for clarification. I then asked if there was anything on the board that anyone strongly disagreed with, and the only point of contention was that some people seemed to dislike Blueprints, while others thought they were valuable. Finally, I went through each item and asked for a show of hands to get a sense of whether it was a majority issue or a minority issue. I've transcribed the list below, marking majority issues with a capital M and minority issues with a lower-case m; the one item without a marking was instead discussed as being a mix of positive and negative.


  • Two-week intro (M)
  • Choice of project (M)
  • Open-endedness (M)
  • Working off of the same basis for the two-week project (M)
  • Video tutorials for the two-week project (M)
  • Showcases at the end of iteration and the final (M)
  • Feature Focus™ (M)
  • Slack (M)
  • Ambiguous grading scale (M)
  • GitHub
  • Infrequent use of Slack (m)
  • More smaller projects (M)
  • Require participation in game jams (M)
  • More emphasis on scoping and estimating projects (m)
  • Host more class-based jams like the final (as opposed to external jams) (M)
  • Use "jam" format for intro weeks (m)
  • More C++ (m)
  • Smaller projects with specific technology learning outcomes (M)
  • More videos on specific Blueprint features (e.g. variables, conditionals) (M)
  • Encouraging more student-presentations on UE4 technology (M)
  • Theme the entire class, in part to help control scope (m)
  • Encourage use of Slack (M)
  • Less distribution of course content between Canvas and the course site (m)
  • I may have learned more from the final jam than from the big project (M)
  • More jams? (M)
  • Would having more jams early in the semester positively influence the formation of final project teams, their preparation, and their estimation? (m)
  • Blueprints vs conventional text-based code (esp. for things like data structures) (m)
  • Production pipeline, e.g. working between technical artists and gameplay programmers (m)
  • Canvas is better than Blackboard but Canvas is less than ideal (M)
The student who brought up the first item in the questions column spoke for a few minutes about how the scale of the final project contributed to his putting off working on it. By contrast, he said, the final jam had a very tight timeline and very clear requirements, and so he knew he had to dive right into it. He did not address whether or not he could have done the final jam without having worked on the major project.

At the end of this discussion, I shared with them something that had been weighing heavily on my mind for the past two weeks or so. Many of them had experienced problems or complained about the challenges of their final project with respect to scope, estimation, and creative direction. I looked across the room and asked them to think back to when we were first planning the final projects: specifically, how I had laid out for them three minigames from Collaboration Station that were already designed and perfectly scoped for their work during the semester, and how I had warned them that going in a different direction was not advisable for reasons of design and scope. Ah, I wish I could have captured the flood of emotions that I could see on their faces! At first, a moment of shock and surprise, then a look of understanding, and then nervous laughter as all the pieces came together. When I followed this with, "I don't want to be an 'I-told-you-so...'" they really let loose and had a good laugh. It's true, though, and I told them that I hoped this was a good lesson for them in understanding how complex this area of study really is. I think we were all able to leave with high spirits despite having been a bit in the doldrums a week earlier.

Jam in the place where you work

Clearly, the idea of "jams" came up quite a bit in our semester retrospective, in part, I'm sure, because they had just had a generally positive experience working on the final exam jam. As I got to thinking about it, though, I still had some questions about what they were really talking about. A single-person effort isn't really a "jam": the term comes from musicians' jam sessions. The metaphor is also just a sort of fun wrapper around an academic assignment, and isn't it strange that students would say, effectively, "give us more assignments!" Maybe not so strange, given the one student's comment about learning more from the short experience than from the longer one. In any case, I turned back to the class' Slack and posed the problem back to them. I articulated what I thought were the five characteristics of the jam format and asked them to thumb-up the ones they thought were most important or essential. Here's where the votes have stabilized:

  • Individual completion (1)
  • Themed (6)
  • Timeboxed at no more than one week (4)
  • General guidelines of satisfactory completion, as in the final exam jam format (9)
  • Presented in class (0)

I am a little perplexed at the zero votes on the bottom one, given some of the other results of the retrospective, but that one was also last in the list just before a block of text, so maybe people missed it.

What stands out to me is that the things they most wanted were the guidelines, which are really standard fare, and their omission from the final project was an oversight on my part. I've been thinking more about specifications grading and how that might be useful for a class like this. My colleague David Largent has been using specs grading in his classes and has started giving some regional presentations about his efforts. I could see criteria for a future iteration of the course looking something like this:

  • D: I can run the game without crashing.
  • C: The core gameplay is functional and gameplay state is tracked on-screen.
  • B: The game includes audio and visual feedback for core gameplay.
  • A: The game can be packaged and launched.
That's just a sketch, but I think you get the idea. I can more explicitly set up the hurdles that teams have to clear to get the grade they want. Of course, this opens up a problem I had this semester and alluded to above: I had a student come to my office upset with his grade, and I told him the core gameplay didn't work. He asked, "What's the core gameplay?" I would have to be careful in the specifications and future course design to qualify any game design terminology, since clearly not everyone will be familiar with it, even if they are hobbyist gamers.

Wrapping up

The coarse-grained grading scheme that I adopted turned out to work OK, although not without its own struggles. When students mentioned "ambiguous grading scheme," I assumed they meant that I had not rigorously defined "satisfactory," but judging from some of the questions I've received in the last few days, it's possible they were confused by the variable MacGuffin system. This is another area where I wish I knew what the students were actually referencing, but I'm afraid that moment has passed. It's possible each student meant something different, but they generally didn't like ambiguity, so they may as well have voted for that one. However, looking at the grades on my spreadsheet, I think they lined up to fairly represent my intuitive understanding of what students learned during the semester.

There were also opportunities for achievements that I didn't think of until it was too late. Regular readers know that I enjoy painting miniatures, and the last few weeks I decided to listen to a few GDC lectures on YouTube while painting. I realized, in retrospect, that I should have had an achievement for students to do the same. The 2016 Tech Toolbox, for example, would have been a great item for students to study and respond to. The way that I had articulated the Connected achievement gave preference to in-person meetings, but there is such a valuable repository of knowledge distributed on the Internet, I could have used the course as a chance to get students connected to that as well.

That about wraps it up for CS315 in Fall 2017. Word on the street is that we may be able to go back to offering this every Fall, now that we have had some successful faculty searches. I would like to get it back on a regular rotation and hone the structure of it the way that I have CS222. This semester felt a bit choppy, but I had some very talented students who were willing to talk with me honestly about it, and I'm grateful for that.

Thanks for reading!
