Friday, July 21, 2023

Summer Course Planning: CS222 Advanced Programming

Regular readers may have sensed that I was running out of steam when I wrote my reflection on Spring's CS222 class. It wasn't a bad class, but the frustrations around the lack of participation drained my will. It was good to step away from it for a few months. This week, I continued the course planning work that I started last week, and my goal was to determine what I should do with CS222. The results are posted in the public course plan (CC-BY, as usual).

My reflection on Spring's class ends with some ideas about completely overhauling the class. It did not take long for me to realize I would not have time for that, especially not while teaching three classes and leading two independent research groups in the Fall. I did make one major change to the grading policy: final grades are now determined by specifications rather than a weighted average. The major difference is that completing some achievements is now necessary for a C or better grade, whereas they used to be a way to separate high performance from standard performance. I took this trick from David Largent, who regularly teaches this course as well. Many of the "achievements" are in fact remedial for students with bad study habits, so I think it will be beneficial to be able to push students toward ways they can use this system to improve those skills. For example, if someone does badly on an assignment from the reading, I can easily point them to the "Re-reader" achievement (also one of Largent's inventions).

The other major change that I am trying is using EMRF grading rather than triage grading of assignments. Triage grading is elegant, although many students refuse to engage with it. (I blame the brainwashing of the rest of their educational experience, particularly the abuse of tools like Canvas and the administrative state's succumbing to the tyranny of metrics.) For years, students in CS222 have been free to resubmit any assignment from the class, whether they got middling (2/3), low (1/3), or no (0/3) marks on it. However, unless my perceptions are wrong, I have seen a drop-off in the number of students taking advantage of this. To be clear, I don't know exactly why this is, and there are many competing as well as overlapping hypotheses. Suffice it to say, I don't have much to lose by trying something different.

I came across EMRF grading on Robert Talbert's blog. What strikes me about it is that it's not that different philosophically from triage grading. However, because numbers are removed and replaced with letters, students cannot fall back on a quantitative understanding. Whereas a student might look at "2/3 points" and think "66%? That's failing!" rather than "This is halfway good and halfway poor," I don't think I will see the same thing with someone who gets an "M," despite its serving a similar purpose. It's not exactly the same purpose, and I wonder if I will miss triage grading's ability to call something "kind of right, kind of wrong," whereas the output of EMRF grading is essentially Boolean: either you pass or you don't. The choice of "F" for "fragmentary" is the one place where I am sure students will think of it as "failure," and that's an unfortunate impact here: it continues to put a negative rather than a hopeful connotation on the necessary step of getting things wrong on the way to getting things right. I suppose I have time before Fall to invent a variation such as EMRX and replace my link to EMRF grading.

The EMRF system works well with the change to a specifications-based final grade. I took another page from my colleague Dave when I characterized the requirements for an A, B, C, etc. as having all the assignments passed, all save one, all save two, etc. I have not yet decided whether I will use EMRF grading, triage grading, or something else when it comes to the projects. Right now, the final course grade is based on the letter-grade achievement in the final project, and so I can punt on the decision about how exactly that will be graded; that will give me time to see how the first few weeks with these other changes work.
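
To make the mechanics concrete, here is a minimal sketch of how that mapping might look, assuming (as in standard EMRF usage) that E and M count as passing. It ignores the achievement requirements, and the assignment names are made up for illustration.

    # Sketch of specifications-based grading with EMRF marks.
    # Assumes E (excellent) and M (meets expectations) count as "passed";
    # R (revision needed) and F (fragmentary) do not. Illustrative only.

    def final_letter(marks: dict[str, str]) -> str:
        """Map EMRF marks to a letter by how many assignments remain unpassed:
        all passed -> A, all save one -> B, all save two -> C, and so on."""
        unpassed = sum(1 for mark in marks.values() if mark not in ("E", "M"))
        ladder = ["A", "B", "C", "D"]
        return ladder[unpassed] if unpassed < len(ladder) else "F"

    print(final_letter({"Reading 1": "M", "Reading 2": "E", "TDD Kata": "M"}))  # A
    print(final_letter({"Reading 1": "R", "Reading 2": "E", "TDD Kata": "F"}))  # C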

One other structural change to the course is worth mentioning. For years, I've used a structure where we have a three-week ramp-up to the two-week project, then a short break, then nine weeks on the final project, which is completed in three three-week iterations. When I switched to Dart and Flutter, I had to remove some interesting content from those first three weeks so that the students could have time to learn more about the technical details of those technologies. I missed some of the elements I pulled out, though, and wanted to add them back in. As another experiment, I annotated each assignment with an estimate of how long I expect it to take, and this exercise actually helped me see that I needed more time for this introductory portion of the course. There was just no way to fit in all the things that I thought were important to practice before getting into bigger projects. Hence, I added an extra week at the beginning of the semester and shrank the duration of the iterations of the final project. I think this will be a positive change: it will give students more time to practice fundamentals such as TDD and Dart programming, and I think shorter iterations of the final project will reduce sandbagging.

Last semester, an administrator forwarded this blog post by Alanna Gillis to all faculty in my college. To summarize, she frames participation as a skill to be formatively developed rather than summatively graded, managing this with student participation self-reflections at the beginning, middle, and end of the semester. I think this is a great idea, and I tinkered with different ways of incorporating it into this class. In the end, though, I ran into a problem that always strikes me when I think about this course: there's already enough (or too much) going on here. There is a definite overlap between the course's four essential questions and Gillis' participation grades, but I fear that students may not have the wisdom and experience to see this. Similarly, many of the achievements could help students make direct progress in these forms of participation. I do want the students to think about how their participation in the class shapes their learning experience, but I already have them reflect on this regularly, such as at the end of each iteration of the final project and on the final exam, and I fear that I would get less coherent reflection by adding more requirements and nomenclature. In the end, then, I decided to work some of Gillis' ideas into my discourse around participation and to use this to encourage students' engagement with achievements (which, again, are now more "required" than they were before), but not to go so far as to bring in new surveys and additional self-assessments.

I am eager to see how this semester goes. The course is back to Tuesday/Thursday, which I think is much better for the students, but I had to rearrange all my plans to deal with it. Also, it will be directly after my game programming class, so I am a little worried about my own ability to maintain focus. It may be a two-coffee semester.

Monday, July 17, 2023

Summer Course Planning: CS414 Game Studio 1

Last semester, I taught my department's new game preproduction class for the first time. I wrote a few blog posts about it [1,2,3,4,5], but it seems I didn't write a summarizing essay. It was a great experience, and I have been tasked with teaching our new games capstone sequence in the coming academic year. It is a two-course Game Studio sequence. In the future, it will be interdisciplinary, but because my department is one catalog ahead of our collaborators, our vanguard is entirely Computer Science majors.

I was glum after having killed my summer side project about a week and a half ago, and so it was originally with some reluctance that I moved into doing some university work. Once I got into it, though, I quite enjoyed pulling my plans together for the first course of the game studio sequence, CS414. I do enjoy this work, but it was also good to have some time away from it.

I posted my course plan as a GitHub repository rather than following my custom of the last several years of making a web site for the course. I was inspired in part by Robert Talbert's repositories (example), which I came across after reading through some of his alternative grading posts. His presentation of EMRF grading stuck in my head for potential use, although I decided against using it in CS414. It was definitely faster to type up a course plan as a directory of Markdown files, but the result is unarguably less elegant than my usual course sites. Consider my CS315 Game Programming site, where Javascript functions remove the redundancy otherwise required to allow both presentation and download of checklists, or CS222, where custom components and stylesheets make achievements look enticing. In my mind, the jury is still out, and I'm not sure which approach I will take for the third course I need to prep this summer. (Also, an unrelated discussion on the SIGCSE mailing list brought me to a coherent argument to stop using GitHub, and now that's stuck in the back of my head as well.)

As part of my preparatory work, I read the second half of Lemarchand's text that we had used for the preproduction class. I decided that we will continue to follow his process, taking the conventional production approach of dividing the rest of our time into an alpha phase, a beta phase, and postproduction. I'm following his advice on the proportion of time to devote to each of these activities, which has the alpha phase wrapping up about a week before the end of the Fall semester and the beta phase finishing two weeks before Spring Break in the subsequent course. I contemplated instead applying the Scrum-based approach to planning that I have used in my one-semester immersive game studio classes, but this is a good opportunity for us to follow Lemarchand's advice from the trenches and also for me to learn something new.

I spent a lot of time thinking about and tinkering with the best process and tools to recommend to my students. There were two key concepts that kept coming to mind. The first is that I want to gain the benefit of how I use two-week sprints in my usual game studio courses. We do this by planning out a two-week increment and estimating the time required for all the relevant tasks. I manage the backlog grooming, story articulation, and prioritization; the students determine which stories to address, the tasks required, and the estimated hours for each task. The students update their hours after every work session, and this lets me draw a burndown chart that maps steady, ideal progress against actual progress. At the end of each sprint, we talk about how it went, using a retrospective format to update the methodology. This approach gives students ownership over the execution of the project, forces them to communicate with each other, gets them into a mode of reflective practice, and makes them think about estimation. It has the disadvantage of not actually being agile, as many methodologists have pointed out: committing to stories for a sprint is the polar opposite of embracing change. It also has the corresponding issue that students (and sometimes I!) confuse estimates with commitments; this sometimes manifests as students becoming martyrs for the project, but more often it manifests as students not meeting their goals because they haven't managed their time prudently.
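
For readers who haven't drawn one, the arithmetic behind such a burndown chart is simple. Here is a rough sketch with fabricated numbers; in practice it lives in a spreadsheet, not code.

    # Rough sketch of the arithmetic behind a sprint burndown chart.
    # The hour figures are fabricated; the real thing lives in a spreadsheet.

    sprint_days = 10
    total_estimated_hours = 60

    # Ideal line: steady progress from the full estimate down to zero.
    ideal = [total_estimated_hours * (1 - day / sprint_days) for day in range(sprint_days + 1)]

    # Actual remaining hours, re-totaled each day as students update their tasks.
    actual = [60, 58, 55, 55, 49, 44, 40, 36, 30, 22, 12]

    for day, (i, a) in enumerate(zip(ideal, actual)):
        print(f"Day {day:2d}: ideal {i:5.1f} h, actual {a:5.1f} h, gap {a - i:+5.1f} h")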

The second concept that I've wrestled with is that I want to suggest to them an appropriate level of tool support that will help them succeed without getting in their way. This led to a lot of research and consideration. This is not an exhaustive list, but I want to capture a few notes here so that I can find them later.

  • Taiga: Disadvantage: In Scrum mode, forces the use of user stories and story points for tracking work. I do not want to use story points, and I would like to be able to track tasks outside of stories (which is something Scrum permits).
  • HacknPlan: Disadvantage: Enabling hour estimates also enables hour logging, which is something I don't want to use. It has no built-in way to track changing estimates over time: changing the estimate of a task or story is a lossy operation.
  • GitHub Projects: Everything is an "issue," which is not ontologically right, and it has no way to articulate something like a story comprising tasks. (There seems to be such a feature in beta for public repositories, and I think that's being used in Godot's project trackers, but it looks like I do not have access to it.)
  • Lemarchand's Google Sheets Templates: Require manual copying of data across sheets, producing redundancies that any good project management system should eliminate.
  • Yoda: I could not make this work for private repositories.
In the end, I chose HacknPlan. I made a video a few years ago about how to incorporate a manually-updated burndown chart into HacknPlan. Unfortunately, there was a part of the process that I forgot to put into the video: where do the data come from? Since HacknPlan's estimate changes are lossy, one needs to get the daily data from HacknPlan into the spreadsheet. The way this is done is by using the information ("i") button on a HacknPlan board, which will show you the current number of work items and hours remaining. 

Lemarchand's presentation of the production process leaves a gaping hole in moving from putting together a schedule to executing the work. He points out that if you have more than a few weeks of work, it should be broken down into smaller chunks; I am sure he is thinking of something like Scrum sprints here. What, exactly, is the relationship between the original schedule, the macro document, and sprints, though? 

Here's a small example to illustrate how this becomes complicated. For preproduction, one of my teams articulated a task to model a spaceship; it's a top-priority task and is estimated to take four hours. For the purposes of the alpha milestone, however, it doesn't have to be completely finished: it only has to be representative, enough to allow feedback to be collected and taken into account before polishing. What, then, happens to that original work item on the schedule? A four-hour, high-priority modeling task that has to be done before the end of the project is actually at least two different modeling tasks, with two different priorities, potentially happening at different times or even done by different people. One way to manage this is to decompose the task into two that sit in different user stories, each with different conditions of satisfaction: the first could specify that the model be "good enough for alpha," for example.
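
To make the bookkeeping concrete, here is one hypothetical way to represent that split; the field names are mine, not Lemarchand's or HacknPlan's.

    # Hypothetical representation of splitting one scheduled work item into two
    # tasks with different conditions of satisfaction. Names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Task:
        description: str
        estimate_hours: float
        milestone: str                   # e.g. "alpha" or "beta"
        condition_of_satisfaction: str

    original = Task("Model the spaceship", 4.0, "beta", "final, polished model")

    split = [
        Task("Block out the spaceship model", 2.0, "alpha", "good enough for alpha feedback"),
        Task("Polish the spaceship model", 2.0, "beta", "final, polished model"),
    ]
    assert sum(t.estimate_hours for t in split) == original.estimate_hours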

Coincidentally, I recently re-watched Allen Holub's 2015 NoEstimates talk. It's worth watching, and one of the most important points he makes is that projections are more valuable than estimates. He also advocates (elsewhere, I think) eliminating sprints in favor of simply doing the most important thing next. This inspired me to try something new: abandon Scrum-style sprints in favor of a more agile alpha phase. The students will be responsible for slicing the problem into stories and tasks, and we will use their progress early in production to project whether or not they will meet their goal. I created a spreadsheet template to facilitate this, using data about the number of work items remaining and the number of hours remaining to forecast when the work will be complete. Here are two charts that it outputs based on three weeks of completely fabricated data.
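
For the curious, the projection amounts to a simple linear fit of the kind the spreadsheet's FORECAST function performs: fit a line to the remaining hours over time and solve for when it reaches zero. Here is a rough sketch with fabricated numbers.

    # Sketch of the kind of linear projection a spreadsheet FORECAST function
    # performs: fit a line to hours remaining vs. day, then solve for the day
    # it reaches zero. The data below are fabricated, like the charts above.

    days = [0, 7, 14, 21]                   # snapshot days (fabricated)
    hours_remaining = [200, 180, 150, 125]  # totals from the tracker (fabricated)

    n = len(days)
    mean_x = sum(days) / n
    mean_y = sum(hours_remaining) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(days, hours_remaining))
             / sum((x - mean_x) ** 2 for x in days))
    intercept = mean_y - slope * mean_x

    projected_done_day = -intercept / slope  # day on which remaining hours hit zero
    print(f"Burning down {-slope:.1f} hours/day; projected done around day {projected_done_day:.0f}")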


I have scheduled biweekly studio meetings where the teams will be required to interpret these projections and determine what they mean for their project goals. I am eager to see how this compares to two-week Scrum sprints, especially since several of the students have already experienced that approach to game project management. Incidentally, I am happy to receive feedback on my use of the forecast function in the spreadsheet. I've never used it before, and I'm not a statistician.

The final piece of the course design puzzle is grading. My first draft of the course involved some intentional scaffolding and deadlines around things like telemetry and playtesting. As I began to work more toward an agile, student-driven approach, I felt a corresponding desire to remove such requirements and replace them with mentoring. That is, rather than demand a priori that teams have game metrics set up by week four, I will have them read and discuss the book and then consider for themselves when these need to be in place. That may sound like inviting foolishness, but Lemarchand provides some very good, clear checklists around things like metrics and testing. Whereas the portion about managing work is light, these other sections are extremely detailed. Indeed, it's clear from reading the book that perhaps the most important, most stressful role on the project will be formal testing coordinator.

This resulted in a minimalist grading system. Keep in mind that this is a small class, and I have had all of these students in at least one class prior; most I have had in two or three other courses. The alpha milestone is satisfactory if it meets the following criteria, the first of which is a meta-criterion.

  • The project meets the Alpha Milestone criteria described in Lemarchand Chapter 28.
  • At least one formal playtest was conducted, with the results analyzed and presented in a studio meeting, during the alpha phase.
  • Game metrics are incorporated as per the checklist in Lemarchand Chapter 26 (page 275). The integration, analysis, visualization, and use of feedback to refine the game design have been presented to the studio during the alpha phase.
  • An appropriate bug tracking process is established, documented, and put in use.
  • Coding standards are established, documented, and followed.
  • The milestone is presented at the Alpha Milestone Review.
That is most of the semester's work. For individual student grades, I took a specs-based approach:
Meeting this criterion earns a D or better grade:
  • You complete at least one of your one-on-one meetings with the professor (even if your team's alpha milestone does not satisfy its requirements before the final exam period begins).
Meeting the previous criterion as well as all of the following earns a C or better grade:
  • Your team's alpha milestone satisfies its requirements before the final exam period begins.
  • You have completed both of your one-on-one meetings with the professor.
Meeting all of the previous criteria as well as the following earns a B or better grade:
  • Your team's alpha milestone satisfies its requirements by the published deadline.
  • You have successfully started your beta production phase before the start of finals week.
  • You have resolved all issues arising from your one-on-one meetings with the professor.
Meeting all of the previous criteria as well as one of the following earns an A grade:
  • You have served as a playtest coordinator and ensured that all of the requirements and goals of the playtest are met.
  • Another team member gives you a commendation for excellence via email to the professor. Each student may give only one such commendation.
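
Since the tiers are cumulative, here is a hypothetical encoding of the ladder's logic, just to check that the criteria compose; the field names are mine and not part of the course plan.

    # Hypothetical encoding of the individual-grade ladder, just to check that
    # the tiers compose; field names are mine, not part of the course plan.
    from dataclasses import dataclass

    @dataclass
    class StudentRecord:
        milestone_before_finals: bool    # team milestone satisfied before finals begin
        milestone_by_deadline: bool      # team milestone satisfied by the published deadline
        beta_started_before_finals: bool
        one_on_ones_completed: int       # out of two scheduled meetings
        issues_resolved: bool            # issues from the one-on-ones resolved
        playtest_coordinator: bool
        commendation_received: bool

    def letter_grade(r: StudentRecord) -> str:
        if r.one_on_ones_completed < 1:
            return "F"  # below the D tier; treating this as an F is my assumption
        if not (r.milestone_before_finals and r.one_on_ones_completed >= 2):
            return "D"
        if not (r.milestone_by_deadline and r.beta_started_before_finals and r.issues_resolved):
            return "C"
        if not (r.playtest_coordinator or r.commendation_received):
            return "B"
        return "A"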

There are few enough students that one-on-one meetings will not be onerous for me, and they will give an opportunity for honest conversation outside of the studio and out of teammates' hearing. The teams should be able to hit the milestone as long as they are paying attention, since they themselves control the scope of their project. I thought about using something like achievements to reward someone for being a playtest coordinator, but I ended up just slotting it in as one of the two A options. I haven't used a commendation system before, but with such a small group, and where all the work is really transparent to the whole studio, I want to give this a try. I fully expect pairs of students to agree to give each other commendations, and that's fine; they still cannot satisfy the alpha milestone requirements unless more than one student picks up the playtest coordinator role.

That's a good summary of my work last week in preparing for this course. I am not exactly in a rush for the semester to start, but if it started tomorrow, I'd be happy to start working with these students.