Tuesday, December 31, 2019

A happy cat story

I'm cleaning my home office as part of the end-of-the-year activities. I want to share here one of the most curious things I came across. My kids make a lot of crafty things, but this particular one really tickles my fancy.

My youngest son is four, and he recently learned how to play Ticket to Ride: First Journey. After playing this with him, some of the other boys and I got into a game of Clank! The youngest one was flitting around the table and making a bunch of noise, so I tried to think of a good creative challenge to occupy him. First Journey was still on the table, and something about the art caught my eye.

I pointed to the orange cat that is being held by the girl in yellow. Sarcastically, I pointed out how very happy the cat seemed to be.
I mean, look at that face. It's practically a meme in the making.

I suggested that my son create his own drawing of that ever-so-happy cat. He was very excited and ran off to the crafting table. He came back in a few minutes with this:


It's rather faintly drawn in pencil on lined paper, so here's a digitally-enhanced version.


Now, look at that face! That cat is actually happy, and it would love to be held by a girl in yellow on the cover of any train-related board game.

He took a separate sheet of paper, rolled it up, and taped it behind the drawing so that the whole thing would stand up. I believe he had just seen the Mr. Whiskers standee from the Clank! Expeditions: Gold and Silk expansion, which inspired him to make his drawing a freestanding piece as well.


It was back in 2011 that I wrote about my oldest son's being inspired by a game box and recreating the artwork in a drawing, when he was just a little older than the youngest son is now.

I hope you enjoyed this end-of-year story. I expect to return tomorrow with my traditional summary of the year in games. Enjoy the last day of 2019!

Monday, December 30, 2019

Ideas for UE4 video tutorials to teach Computer Science concepts

I'm pleased to announce here that I have received an Epic MegaGrant to create video tutorials designed to teach Computer Science concepts through Unreal Engine 4. For those who don't follow me on YouTube, I have a Game Programming playlist with twenty public videos that I have created for my classes. Several are introductory or cover specific tips about version control, but some of my favorite ones cover more technical Computer Science concepts, such as decoupling modules through interfaces and the Observer design pattern. My proposal to Epic Games was to build upon this style of video, teaching real and interesting Computer Science ideas through their UE4 technology. I am glad that they agreed with me that this was a worthwhile pursuit.

The grant provides me with some extra time in the Spring 2020 semester to devote to making video tutorials. I have the freedom to choose the number, duration, and content of the videos, so I'm starting the project by reviewing my notes from teaching Game Programming using UE4 last semester. A few topics that came up during consulting meetings with students point me toward specific videos, many of which reinforce ideas from earlier classes in the context of game development. Also, since writing my reflective blog post, I have been able to read the student teaching evaluations from last semester. Some of the comments there reinforced one of my observations, which is that students don't see that they can deploy the techniques they have already learned about object-oriented programming in UE4, both in Blueprint and in C++. That is, students who already understand topics from earlier courses did not recognize the affordances to use them to create more interesting or robust game software.

Before the new year and the new semester's classes kick off, then, here is a list of some of the videos that I'm considering developing in the Spring:

  • Type coercion through casting: What it is, why it is necessary in statically-typed languages, and how it manifests in Blueprint.
  • Refactoring Blueprint spaghetti by introducing new abstractions.
  • Comparing two techniques for implementing state machines: using enumerated types vs. the State design pattern (sketched below).
  • Places where Blueprint's expressiveness exceeds that of textual code, such as the Select node.
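
To make the state machine comparison concrete, here is a rough sketch of the two techniques in plain C++, deliberately outside of UE4's reflection machinery and using made-up names (EDoorState, SwitchDoor, StatefulDoor). It is only an illustration of the trade-off, not the code I plan to present in the videos: the enum-and-switch version keeps everything in one place but grows a larger switch with every new state, while the State pattern version moves each state's data and transition logic into its own class.

#include <memory>

// Technique 1: an enumerated type and a switch. Everything lives in one
// class, but every new state means editing this switch, and per-state data
// (like Progress) has to sit in the owning class.
enum class EDoorState { Closed, Opening, Open };

class SwitchDoor {
public:
    void Tick(float DeltaSeconds) {
        switch (State) {
        case EDoorState::Closed:
            break; // waiting for an interaction to set State = Opening
        case EDoorState::Opening:
            Progress += DeltaSeconds;
            if (Progress >= 1.0f) { State = EDoorState::Open; }
            break;
        case EDoorState::Open:
            break;
        }
    }
    EDoorState State = EDoorState::Closed;
    float Progress = 0.0f;
};

// Technique 2: the State design pattern. Each state is an object that updates
// itself and decides what comes next; adding a state means adding a class
// rather than editing a switch, and per-state data stays with the state.
class StatefulDoor;

class DoorState {
public:
    virtual ~DoorState() = default;
    // Returns the next state, or nullptr to stay in the current one.
    virtual std::unique_ptr<DoorState> Tick(StatefulDoor& Door, float DeltaSeconds) = 0;
};

class StatefulDoor {
public:
    explicit StatefulDoor(std::unique_ptr<DoorState> Initial) : Current(std::move(Initial)) {}
    void Tick(float DeltaSeconds) {
        if (auto Next = Current->Tick(*this, DeltaSeconds)) {
            Current = std::move(Next); // the state itself requested the transition
        }
    }
private:
    std::unique_ptr<DoorState> Current;
};

class OpenState : public DoorState {
public:
    std::unique_ptr<DoorState> Tick(StatefulDoor&, float) override { return nullptr; }
};

class OpeningState : public DoorState {
public:
    std::unique_ptr<DoorState> Tick(StatefulDoor&, float DeltaSeconds) override {
        Progress += DeltaSeconds;
        return Progress >= 1.0f ? std::make_unique<OpenState>() : nullptr;
    }
private:
    float Progress = 0.0f; // per-state data lives with the state
};
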
Just after I posted my last video on the playlist, which is about getting started with C++ development, I learned about subsystems through an Inside Unreal livestream. I would like to explore the implications of this feature for software architecture: I want to see how much of what I love about entity system architectures I might be able to bring into UE4 using this technology.
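
For readers who have not seen subsystems yet, here is a minimal sketch of a game-instance-scoped subsystem, using a hypothetical UScoreSubsystem that I made up for illustration; treat it as a sketch of the idea rather than anything from my playlist. The appeal is that the engine creates and destroys the object automatically alongside the game instance, so there is no custom UGameInstance subclass and no hand-rolled singleton.

// ScoreSubsystem.h -- hypothetical, minimal game-instance-scoped subsystem.
#pragma once

#include "CoreMinimal.h"
#include "Subsystems/GameInstanceSubsystem.h"
#include "ScoreSubsystem.generated.h"

UCLASS()
class UScoreSubsystem : public UGameInstanceSubsystem
{
    GENERATED_BODY()

public:
    // Callable from both Blueprint and C++; the engine manages the lifetime,
    // which matches the lifetime of the game instance.
    UFUNCTION(BlueprintCallable, Category = "Score")
    void AddScore(int32 Amount) { Total += Amount; }

    UFUNCTION(BlueprintPure, Category = "Score")
    int32 GetScore() const { return Total; }

private:
    int32 Total = 0;
};

From C++, an actor can reach it with GetGameInstance()->GetSubsystem<UScoreSubsystem>(); that kind of globally-reachable, engine-managed service is the part that reminds me of what I like about entity system architectures.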

What do you think, dear reader? If you have any suggestions for Computer Science concepts that can be explored in UE4 through video tutorials, leave a note in the comments. Thanks for reading!

[Update: Over on the UE4 Developers Facebook Group, there was a suggestion for a discussion of Big-Oh analysis and how it manifests in game programming, as related to performance. This is a great idea for a topic. I am adding it here so that I won't forget it when I come back and start scheduling production.]

[Update 2: A conversation with a friend online made me think that another good entry might be fundamentals of debugging: a quick tutorial on using the integrated debugger for Blueprint and also for C++.]

Thursday, December 26, 2019

Family Painting Clank! Legacy: Acquisitions Incorporated

Last year, I bought Charterstone as a family game for Christmas. Playing through the campaign with my wife and two oldest sons may have been my favorite board gaming experience. We also love Clank!, and so I have been excited for Clank! Legacy since I first heard about it. Once the positive reviews started coming in, I ordered a copy while I could and sat on it for this year's Christmas game.

We are all excited to get started, so last night, we did a "one night paint job" on the four hero miniatures. My sons have always used cheap craft paints for their miniatures, but for Christmas, I got them the Vallejo Model Color Basic Set. They enjoyed working with the new paints, although I think they will appreciate them even more as we move into more relaxed painting sessions. Both commented on how quickly the paints dried compared to the craft paints. Indeed, when I've painted with them using craft paints, the gloopiness and slow dry times were the two things I found most frustrating, compared to doing quick, thinned layers with VMC.

My intention was that we would draft figures in age order, but before I could suggest that, the two boys had already picked theirs. #1 Son (12) wanted to be the "child in the dungeon" figure. This riffs off of his regular figure when we play Thunderstone Quest, since he almost always plays the one that looks like it's just a kid thrown into the battle. #2 Son (9) chose the elf, I think because he likes elves, although he did not explain his choice. This left the tough lady fighter and the shouting dwarf, so I took the dwarf and let my wife take the other.

I used the airbrush to zenithal prime the figures, doing just a little bit of cleaning up of mold lines. Each of us will be playing our traditional board game colors, and we worked those colors into the models. Here's mine, blue:

Blue was a challenging color to put on a dwarf, a figure I tend to think of in muted, earthy tones. At first, the tabard was the only thing I could see to paint blue. As I worked with it, I realized I could do some blue trim on the helmet as well. I finished the model by using the same blue on the base, which I think ties it together.

Although we had a verbal agreement for one-session painting ("No shading, no highlighting"), I couldn't keep myself from doing just a little. I used a darker brown to pin wash the backpack, silver drybrushing to highlight the chainmail, two drybrush highlights on the beard, just a little brown to get more definition on the muscles, and P3 Armor Wash on the hammer. Otherwise, I used thinned paints to let the zenithal priming do a lot of the work.


This is my wife's warrior. She plays yellow, and so it made sense for the big, sweeping cape to take that color. She pointed out that it made the character look more like a superhero than a fighter, but between the cartoonish sculpt and the strange pose, I think it fits.

Although the plan was to use the boys' new Vallejo paints, I also brought down a few secret weapons from my painting arsenal, including the aforementioned P3 Armor Wash. I had really brought it down for this figure, since I figured my wife could do a quick silver paint on the plate mail and then use the armor wash for instant tabletop quality. I admit I was a bit dumbfounded when I saw her working on the orange! However, as she kept working on it, I could see it coming together. The very last step was adding the white trim around the armor plates, which I think really makes it pop. She spent three hours on this one, while I spent two on the dwarf, and the boys spent about 1-1/2 hours on theirs. I think she did a fine job, especially given the constraints.


Here's the dungeon kid—the red character. I recall my son starting with the Flat Flesh color in the basic set and then commenting that he wanted darker skin. I don't know what inspired him to do so, but I think it looks quite good: dark hair, dark skin, and bright blue eyes. He put some thoughtful discoloration into the crate as well, although it's subtle. I believe he called it "a moldy crate". Notice the nice job he did with the flagstone base as well. The whole thing has a subdued palette that really brings out the red and warm browns.


Finally, here's the green player's elf. I think it's pretty solid for a nearly-ten-year-old painter. The robe is a little splotchy, which is an unintended side effect of trying to work with the zenithal priming, like I wrote about in my JiME post: if you do one thin coat, it looks fine, but if you touch it up, you get splotches of higher saturation. He added a little thinned gold to the hair to give it some sparkle, but I'm afraid that is lost in the photo and was also greatly subdued by the varnish. He got a nice color for blonde hair, which is hard to do. He also really nailed the cobblestone base.

If you look carefully, you can also see that he did some weathering on the robes, stippling on a little brown. I have written before about how I tend to lack the courage to dirty up a figure I spent so long painting, but my son and I watch several painters on YouTube who regularly incorporate weathering as a finishing touch. It's neat to see how he was inspired by this.

After the boys were done, and while my wife was finishing up her warrior, I retreated to my study to work on the dragon miniature. Here it is:
I laid down the base colors by wet-blending three colors: a mix of VMC Deep Sky Blue and Grey, a mix of VMC Dark Blue and Black, and a mix of Black with the first mix. These were heavily mixed with Vallejo Glaze Medium to give me lots of open time for wet-blending. I let that set overnight. This morning, I mixed up a wash of roughly 3:1:4 blue, green, and black inks. I used this to pin wash the edges and accent all the scratches, using a second brush to feather out the wash in many places. The last step was to paint the eyes with a mix of white and a touch of green, followed by a glaze of green ink to push them just a little greener. All told, this was also about two hours of painting; unlike with the heroes, though, I think I would paint it much the same way even if I had more time: as an iconic representation of the draconic villain, I think it's a good piece.


Here they are all together, ready for adventure! The plan is to get the game to the table later tonight. That gives me a few hours to come up with a clever name for my dwarf.

Saturday, December 21, 2019

Back in the Saddle: Preparing to teach CS222 in Spring 2020

I taught CS222 for many semesters in a row before getting an unexpected reprieve in Spring 2018. Imagine my surprise when that reprieve extended through Fall 2019! Now, after several semesters away, I'm scheduled to teach CS222 again in Spring 2020. I want to share here some of the changes I've made and what I'm hoping to accomplish. I am excited to teach this class again: it is a formative experience for our majors at an inflection point in the curriculum, and I think it plays to many of my strengths.

There are a few relatively superficial changes in the course plan that I have put online. I added "The Big Idea" section to the main overview page. This was a direct response to doing a routine evaluation of a colleague's course plan. Our committee uses a form to drive the review, and one of the questions on the form asks whether the instructor has any statements about their goals for the course, separate from the catalog description and departmentally-approved learning outcomes. I realized that I frequently talk about such things but did not have them in writing. "The Big Idea" section describes how CS222 is positioned in the curriculum and what I hope students get from it.

Another addition to the course site is the Tips page. This section began as a short collection of writing tips meant, primarily, to help students understand what I mean by the word "essay." It is one of those words that has unfortunately been beaten senseless by the educational establishment. As I worked on this section, it grew to include an excerpt from the 1920 edition of The Elements of Style and some process advice adapted in part from Jordan Peterson. I tacked on a few programming tips that I often share with students. I am still tempted to add more, including tips for how to take notes during meetings and from reading. I realize, however, that if students don't read the course plan, then I'm writing more for me than for them. I expect to keep adding to this page as the semester progresses, monitor whether students reference it in their speech and writing, and ask them about it a few weeks into the class.

I seriously considered dropping the whole Achievements system that I introduced in this course some ten years ago. I love the idea that students have agency in deciding what to pursue, but I don't like the idea that students who are already bad at time management can easily dig themselves into a hole. I decided to keep the system with a few tweaks, although it's hard for me to explain why; I am afraid it is inertia. The major change I made to the achievements system was to formalize the levels of validation into "stars": a student can turn in anything they self-validate for one star, get a peer validation for two stars, or get me to validate it for three stars. The most efficient path, then, is to do something well, get a peer's validation and then mine, and earn three stars in one submission. We shall see if students go this way, or if anyone purposefully hammers out sequential low-quality one-star submissions.

I want to spend more time working with students in class on refactoring exercises, making sure I help them both see the affordances for action and learn the techniques required to perform the refactoring. To this end, I have prepared a series of relatively simple example programs that we will work on in class. This means less of my show-and-tell and more students getting their hands dirty. This should help impediments and confusion rise to the top, where ideally I can act on them. Right now, there are only about twenty students in the class, which is much more manageable than filling the room to its ~35-person capacity.

As I wrote about in a Fall semester reflection, I noticed that my students in upper-division courses do not understand version control. Students in Game Programming talked about version control as if it were just for back-ups, and my HCI students admitted to being terrified of pull requests. I plan to do more careful scaffolding around git and version control this semester, with more structured exercises both in and out of class. I have not designed these interventions yet.

I would like to keep the schedule where the first three weeks introduce the major topics, the next two are spent in a rigorous, well-defined project, and the last nine weeks are spent in three three-week iterations of an open-ended project. I am still not sure how to align this goal with the more structured activities I want to add except, perhaps, to make the two-week project much more tightly connected to in-class activities. That is, I can make it almost more like a lab than a project. For example, on a given day, I could introduce the idea of a merge conflict, and then we could actually make one in our projects. I have not set aside the time to plan this part of the course yet, following the design dictum that one should put off design decisions to the last responsible moment: if I can get to know the class a bit, then I can put together the two-week project as we need it, once I have a sense of how they are responding to the other material. If we need to cut a week or two from the major project, I am not opposed to that either: we can always cut it to three two-week iterations, for example.

Many years ago, I requested to teach this course only in 75-minute blocks, which means it is offered Tuesdays and Thursdays instead of Mondays, Wednesdays, and Fridays. I have taught it in 50-minute blocks before, and I found that we would always get interrupted in the middle of a complex activity. Unfortunately, the administrative staff in charge of scheduling forgot about this request and gave me the MWF schedule. It will be convenient in some ways, since my other class is MWF mornings and this will be MWF afternoons, but I remain concerned about the level of depth we will be able to get into in any given class meeting. I hope that my targeted exercises and careful planning will give us tight learning loops rather than interrupted longer loops.

Thanks for reading. Feel free to check out the course plan and let me know if you have any thoughts, feedback, or suggestions.

Tuesday, December 17, 2019

Reflecting on the Fall 2019 CS439 Game Design seminar

One of the great joys this past semester was teaching my game design course, which, as I wrote about over the summer, was offered as a Computer Science department seminar (CS439) rather than an immersive-oriented Honors College colloquium. I had a few students in the course who had worked closely with me before. Of course, we had good rapport from day one, because otherwise they would not have signed up for an elective with me. I think this helped raise morale for everyone, or if nothing else, at least it was a friendly environment for me. For example, rather than my trying to force a conversation on a student who is staring into a smartphone before class, I was always able to have a legitimate, on-topic conversation.

I dropped the traditional prerequisites down to just CS120—our introductory programming class—and despite my attempts to recruit more students, the course still ended up predominantly Computer Science majors and minors. I kept the CS120 prerequisite because I intended to draw more parallels between systems thinking and programming than I actually did, and if I am able to teach the course again, I will drop that prerequisite as well.

I followed a similar structure as I have used for several years, still relying on Ian Schreiber's excellent online readings despite their examples being a little long in the tooth. Also, even though we did not have an educational-games mission, I still assigned a reading from Klopfer et al.'s "Moving Learning Games Forward," since I find their identification of principles to be such an intriguing bit of design research.

During the five weeks of production, students were required to complete one design cycle each week, meaning that they had to identify a problem, build a solution into their prototype, and test that solution. My original plan for their presentations, then, was what I had done in the past: students give a brief status report each week to keep the class up on their progress and solicit feedback. As always, the students asked whether they could also play each other's games in class, and I, as always, described how I had never been able to come up with an equitable model for this when people are pursuing independent projects: different games require different amounts of time and different numbers of players. One of my students came up with a showcase model that privileged playing parts of each other's games over player- or playing-equity, and the class agreed to try it. Taken as a whole, it was a great success. We kept the division between "Group A" presenting on Tuesday and "Group B" presenting on Thursday. However, instead of oral presentations, each designer set up their prototype and gave a two-minute oral summary of what had changed. Then, students were free to roam the room and check out each other's work, guided in large part by those oral summaries. After trying this for one week and reflecting on it, the only change we made was to add a repeating ten-minute timer, just so that students could keep track of elapsed time. The only real problem with this approach was the one that I feared, tried to get out in front of, and, as predicted, was ignored about: many students did not or could not distinguish between the showcase-style review of each other's work and the playtesting required for the weekly iteration. I know I said it many times in class, but when it came time to write their progress reports, it was clear that they had the wrong model: they thought of the meeting as a testing session rather than a showcase. The reason it cannot be the former is, again, that issue of equity: it gives an unreasonable benefit to someone who designs a game that can be played in ten minutes. I will need to make this distinction crystal clear in future courses.

The students' projects were quite good overall, given the context. It's the first time in years that I've taught game design without enforcing any particular theme on the class, and so it was an opportunity for me to see what students are intrinsically motivated to complete. While a few games were clearly examples of "I will choose something unambitious so that I can get it done," all of the designers of such games were also pretty bored with their work and, I think, regretted the choice. Many of the games became party games, even if they started as strategy games; the party games that were made were broadly in the Apples to Apples category, and that's fine. None were direct reskins of existing games, and all had a unique appeal. Almost every other game was themed as direct conflict: battles for territory, opposing sides, reducing hit points. It struck me how overt and, if I may, banal the conflict was, but we did not have the opportunity to discuss this as a class. Perhaps it is necessary for someone deep in video game culture to make a hit-point-based game before they can make one about Portuguese tiles—or, perhaps, this says something more about the subcultures of the slice of students who happened to take the course.

The final essays from the class echoed some of the conversations I had with the students, and I am happy with the outcomes of the course. Students came to recognize how hard it is to design a game that will be fun for other people, but also how very rewarding it is to hit the mark. I think some are inspired to continue their games or pursue new opportunities. I hope that some of them might show up at Global Game Jam or other such events to keep stretching these muscles. I did not ask the students to write explicitly about how to tie game design concepts into their majors, in part because the class was so overwhelmingly Computer Science majors that it would have turned into inside baseball too quickly.

Finally, I want to mention that one of the interesting new spins for the class was that I had two community members audit it, one of whom is also a university employee. I really think that having them involved raised the bar for the whole class. Because one of the auditors was noticeably older than the students, we had to explain ideas and movements in gaming that would otherwise have been left implicit and, hence, subjective or ambiguous. Both of the auditors also brought real, serious interest into the student side of the class and, in a way, became role models for undergraduates who are just learning how to "adult," as they say.

Incidentally, the class was also shadowed by a graduate assistant from the College of Sciences and Humanities, who was assigned to help promote some hidden gems of the college. He wrote a flattering blog post just a few days ago, and so if you haven't seen it already, check it out.

Thursday, December 12, 2019

Reflecting on the Fall 2019 CS445/545 HCI Course

I have to start by saying that this was one of the strangest classes I have ever taught. We were continuing my series of collaborations with the David Owsley Museum of Art, and as I wrote about in July, I had a great meeting with them to set up some tighter constraints around how we work with the students. There were only nine students in the class, which I thought would be really exciting: small team, a couple of graduate students, working with a partner on exploratory software to enhance the visitor experience. It seems like the recipe for an excellent learning experience, but in truth, this was one of the most frustrating classes I have ever taught.

I have never had a class where I had to "pull" so hard to get them to do, well, anything. On the very first day of class, there were about ten people in a room that seats over thirty. When I walked in, they were all sitting apart from each other, staring into their phones. I commented on how quiet it was, and I encouraged them to move in toward the front. Nobody moved. We repeated this ritual basically every class meeting. It became something of a joke after a while—a sad, sad joke. On a few occasions, I forced them to rearrange the furniture so that we could sit in a circle for discussion, and these meetings were always better, but they didn't seem to have any impact on the de facto standards. One time, I came into class, and two or three students were talking to each other. I heaped praise on them in hopes of some positive reinforcement. I got a few smiles, but again, no real change.

When we moved into working as one big team, I let them follow their own path for the first two-week sprint. After a structured reflection, I told them that I would be scaffolding their improvement by providing a methodology. I used one based on my immersive learning teams, which have been roughly the same size. One of the rules of the methodology is to work in pairs whenever possible. They ostensibly read the methodology and we got to work... everybody working silently at their own laptops. I paused for a minute or two, then I interrupted, pointing out that they were all in violation of the methodology and that they should pair up. Some of them still did not. At that point, I felt like I just had to throw up my hands.

With only nine people, attendance irregularities are easy to notice, and many people missed many class meetings. It's not like it didn't hurt their grade either: they had work to complete and then discuss almost every class meeting. I think it's fair to say that it's disheartening for everybody in the room to look around and see that only five or six out of nine people are there. In the latter part of the semester, we worked as one consultancy, and there was work for everyone to do; even here, people missed critical planning and reflection meetings.

The point of this writing is not just to complain, though. We did have some real stand-out meetings. In one of them, we talked honestly about their past team experiences. They acknowledged that none of them had ever really been on a non-dysfunctional student team before, and also that they didn't really know what a successful team looks like. This is invaluable to me as an educator, because it makes me realize that it's not enough just to give them guidance: I think we have to work harder to show them examples of successful teamwork to model. I am still not sure how to do that, except maybe by filming one of my high-functioning immersive learning teams. Another important part of this discussion was their acknowledgement that in the prerequisite course (CS222), they learned to fear feature branches, pull requests, and, generally, GitHub. That is, they saw these tools as impediments to their success rather than what they are: critical parts of a healthy and productive work environment. This again points to some specific actions, to make sure that the prerequisite course is not accidentally teaching counter to its purposes. Conveniently, I've been assigned to teach CS222 in the Spring, so I will be able to pilot a few interventions there.

One of the frustrating outcomes of this semester is that I am not really sure whether the students learned anything or not. We studied some of my favorite theories of HCI during the semester, always embedded within the context of our collaboration with the art museum. My plan was, near the end, to return to those theories and frame our work within them. We ended up having to declare a failed sprint a few weeks from the end of the semester in order to produce a barely-testable digital prototype, and this ate up the time that I was hoping to use to close those loops. Looking at the work that the students did on the project at the end of the semester, I saw technical competence but no consideration of any of the theories we discussed earlier in the semester, like Don Norman's action cycle or Gestalt principles of visual perception. Instead, I saw what they likely would have produced if they had never taken the course. This is disheartening.

In our final meeting, our museum partner was kind enough to point out that the prototype these students created was of higher quality than any that students had made in previous semesters. I generally agree, and this is in large part because we had one group and one consistent series of conversations. From a teaching point of view, it's a completely different thing to have one team of students to engage with than it is to say, "Get into your groups and talk about this." Also, because we all worked together, I could give more direct guidance on some of the technical issues of the implementation, which prevented them from getting caught in amateur dead-ends. For example, there's always someone who thinks copying and pasting code will do no harm, and then you end up with an unmaintainable mess that collapses under its own weight; I was able to work with students to refactor such solutions, teaching both process and techniques for refactoring along the way.

Although our partner was positive and right to be so, I remain concerned at how it seemed the students didn't really think about what they were saying or doing—they were not critical. For example, they liked to mention that they used journey maps, but they didn't do this well. I graded all the journey maps and provided feedback about the parts that were good and bad, but in their presentation and final essays, they wrote about their journey maps as strictly virtuous. I mentioned Falk's theories of museum visitor motivation in class, and the students latched on to parts of this; in the presentation, though, they made it sound like they had actually studied and applied these theories. In fact, they had done the equivalent of a standard contemporary undergraduate practice: hear of something, Google it, put some buzzwords in, and call it satisfactory. Truly, their presentation of their knowledge of Falk's model bordered on lying to the client, and I just wasn't prepared for how to react.

If you got this far into this essay, you can understand why this class that seemed like it should be so good would actually be so frustrating. I'm still not sure about the root cause, but I'll share the best thought I have. I don't know why the department administration thought this was a good idea, but we've offered HCI as an upper-level elective for something like five straight semesters—including summers—when it used to be a biannual course. The number of students in my classes has gone down each time, and with only nine completing this time, I have to think that a majority of this small group didn't really want to be there. I don't think they had any motivation to either take HCI or to take a class with me. (Not to toot my own horn, but there are some students who just want to study with me, regardless of the course.) In the absence of motivation, the course is perceived like the methodology I wrote up for them: a hurdle to be cleared rather than an idea to explore. I suspect I could have just given them readings and assignments, and they still would have skipped a bunch of classes and earned their C and B grades, and I would have been able to sleep at night. But you know, that's not how I roll.

I had one day where I was feeling particularly frustrated as well as low on physical and mental energy. I don't remember where we were on the project, but I asked the class to give an update on their progress. Nobody in attendance had made any. It was one of those times when I had to seriously think about just leaving the room, just walking away and letting them do whatever it was they thought they should be doing instead of contributing to the class. I even said out loud, "I have done more work on this project than you have collectively," which is probably not exactly true with a class of nine people, but I think it was dangerously close. I took a minute or two to collect my thoughts and, with some grace, turned it into a discussion of how to move forward. After this, one of the students—one with whom I had some struggles earlier in the semester but with whom I eventually came to share a mutual respect—started occasionally thanking me for my work. He would say something like, "Thanks for taking the time to put this collaboration together," and he really meant it. This may seem like a small thing, but it wasn't. It's possible that some of the students still see faculty generally as some kind of automata. I think this student saw that I cared, and because I cared, I hurt. We talk about helping students build empathy, and here's a case where it actually worked, not in some abstract social justice sense, but in a very concrete, local-community sense.

This post is a bit long, but I have had a strong desire to try to capture these stories. In truth, I'm not even entirely sure why, because in conclusion, when I consider what I would do differently next time, I honestly don't know. One thing I would do is absolutely put my foot down on the "nine people spread around a classroom" on day one, though. That was just ridiculous.

Thanks for reading. Feel free to share your thoughts and suggestions. The next one will be more cheerful, I promise.

Wednesday, December 11, 2019

CS315 Reflection Addendum: Misunderstanding Version Control as Back-up

I just finished reading the final exam responses from the CS315 Game Programming class that I wrote about yesterday. One of the questions invited the students to write about their understanding of depots, changelists, and workspace mapping in Perforce Helix. A surprisingly large minority of responses equated version control to backing-up files. That is, students said that the main purpose of version control was, essentially, to have a back-up of your project in case you need it. This strikes me as a particularly naïve perspective. In every case, I provided written feedback encouraging the student to think about the project in the depot as being the real one whereas anything that is checked out is just a shadow of that.

I wanted to mention this here on the blog in part because I am getting back into teaching CS222 in the Spring. I have had some hallway conversations with the professor who is assigned to teach the other section, and we've been talking about how to improve students' understanding of version control as a critical piece of contemporary software development workflow. One of my experiences this semester that inspired me to do this was the realization—confirmed in an honest conversation—that the students in my upper-division HCI class were terrified of pull requests. Their experience in CS222 had inadvertently taught them to avoid contemporary best practices of version control rather than to depend on them. This could be related to the misunderstanding that I saw in these CS315 exams, where students see version control as an awkward back-up system rather than, well, version control.

Tuesday, December 10, 2019

Reflecting on the Fall 2019 CS315 Game Programming course

My students are currently taking their final exam, so this seems like a good time to start my end-of-semester blog post about CS315 Game Programming. This was my third Fall semester teaching this upper-level elective course using Unreal Engine 4. The semester ended up consisting of four "mini-projects", each about two weeks, and one six-week final project, which was completed in two iterations. By and large, I am happy with how the semester went: students learned how to work with some contemporary tools, including Perforce Helix for centralized version control, and they made interesting final projects. What I want to document here are some of the struggles, because it will do me more good when planning next year's class than focusing on the successes.

It turns out that almost all of my frustrations from the semester stem from my decision to use Specifications Grading again. For people who are not familiar, you can hop over to the course plan's projects page to see what the specifications look like. Briefly, I laid out, ahead of time, all the criteria against which student work would be evaluated, and like last year, I asked students to submit self-evaluations in which they graded their own work.

This leads quickly into the first problem: students did not seem to understand how to use checklists. It feels so strange to even type that, but it's true. As part of their submission, the students had to complete a checklist, and then, based on what was satisfied, they could know—and had to say—what their grade would be. However, more often than not, I would read through a student's submission and have to point out that they didn't actually satisfy some criteria. I built in a little leniency for students who legitimately did not understand a criterion or two, but what I didn't expect was that several students made the same mistakes again and again and again. I forced the students to rotate partners during the Mini-Projects, thinking that this would ensure that mistakes would be caught by the partner; instead, what I saw was that the misunderstanding (not the understanding!) spread to new partners.

I suspect that a major reason for the checklist problem is that students are so deeply brainwashed into the "turn this in and hope for points" model that they cannot conceive of an alternative. Certainly, in my years of teaching, I've had plenty of push-back on unconventional things I do. (I continue to do unconventional things, partially because I want students to learn to question conventions.) I can work on clarifying the language around the specifications themselves of course, but I feel like this is treating a symptom rather than a cause.

There is one place where my instantiation of specifications grading contrasts, as I recall, with the presentation in Nilson's well-known work. She describes making a choice between more hurdles and higher hurdles, but my version of specifications grading has both more hurdles and higher hurdles: students have to do more and better work to earn higher grades. This is sensible to me, but I wanted to mention it here because it is a lever that I could pull in an experimental assignment or section.

Another problem I encountered with the specifications grading this semester was that a minority of students were able to follow the specifications I provided to earn relatively high marks but, in my professional opinion, without really meeting the learning objectives. For example, I had a B-level criterion which was, basically, that the project should have all the parts of a conventional video game: a title screen, gameplay, an ending, and the ability to play again. An alarming number of teams did not handle mouse input controls properly, so that once you click in the game, the mouse is captured and the cursor made invisible. This means that technically you can still navigate a UI menu, but without being able to see the cursor, it's awfully difficult. Their conventional solution seemed to be to use Ctrl-F1 to release the cursor so they could see it again: an editor kludge for a runtime problem. Did such teams satisfy the criterion? Well, yes, but also no. I liberally allowed it, leaving notes in my review that they should fix this, which almost nobody did. I could, of course, add text to the already-wordy criterion to say "If you are developing a desktop application, and you are using mouse navigation, make sure etc." That's just one special case of a particular environment, though. What I think I'm really running into is the problem of specifications grading in the face of creative, wide-open projects.
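
For what it's worth, the underlying fix is small. Here is roughly what it looks like in C++ (equivalent Blueprint nodes exist as well); this is my own sketch with made-up function names, not code from any student submission:

#include "GameFramework/PlayerController.h"

// Sketch: when opening a menu, switch the input mode and show the OS cursor
// so UI navigation is actually visible; reverse it when returning to play.
void ShowMenuCursor(APlayerController* PC)
{
    if (!PC) return;
    FInputModeGameAndUI InputMode; // or FInputModeUIOnly for a pure menu screen
    InputMode.SetLockMouseToViewportBehavior(EMouseLockMode::DoNotLock);
    PC->SetInputMode(InputMode);
    PC->bShowMouseCursor = true;   // without this, the cursor stays hidden
}

void ReturnToGameplay(APlayerController* PC)
{
    if (!PC) return;
    PC->SetInputMode(FInputModeGameOnly());
    PC->bShowMouseCursor = false;
}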

Several students took my examples wholesale, brought them into their projects, and then submitted them as satisfying the relevant criteria. For example, I showed C++ code to count how many shots a character fired; student teams put this into their game and then checked the box saying that they included C++ code. Technically yes, but without any semblance of understanding. Another student took my dynamic material instance example wholesale and put it into his final project. Again, no indication of understanding the pieces, just copying and pasting it into his project and claiming that he included dynamic material instances. Yes, they're in there; no, there's no evidence of understanding. Some of this could, in theory, be cleaned up by changing the specifications, but then it gets into the same kind of problem as measuring productivity in programming. Exactly how different from my example does a student's work have to be to demonstrate that they understand the concepts? "Exactly" is the key word here if the specifications are going to be objective.

I'm left with this sinking feeling that specifications grading is not worth the effort and that I should return to my tried-and-true minority opinion on grading: use triage grading for everything. This allows me the freedom to say something like, "Use dynamic material instances in your project in a way that shows you understand them." Then, I can fall back on saying that a student's work either clearly shows this (3/3 points), clearly doesn't (1/3 points), or is somewhere in between (2/3 points). This clear and coarse-grained numeric feedback can be combined with precise and crystal-clear written feedback to show students where they need more work, which appeals to me much more than my grimacing at the student's submitted checklist, then at the student's code, and then saying, "Yeah, I guess."

Sunday, December 1, 2019

Typesetting music with mup after a 20-year hiatus

Back around 1999-2002, I was composing and performing music pretty regularly as stress relief from graduate studies. This got me looking into creating printer-friendly versions of a few of my songs. I came across mup—a tool that takes plain-text sheet music descriptions and creates snazzy PostScript output. Mup supported Linux, where I was doing all of my serious work, and it allowed for a LaTeX-style separation of document content from document format. It was not free software, but I was happy to pay for a license to this excellent tool.

Fast-forward to today, when I was struck by the desire to typeset "Istanbul (Not Constantinople)" for my kids. I introduced them to this song via the classic video some time in November, and of course they loved it. I picked out the chords on the piano, and it's become a fun song for me to sit at the piano and sing with them. Three of my boys are taking piano lessons, and so I thought it might be fun for them to see it written out. It's much more syncopated than what they are playing in their lessons, but I thought the oldest in particular might enjoy the rhythmic challenge.

I did a little Googling and found what appeared to be viable options, but I stopped when I saw that mup is still around. Not only that: the authors made it free software back in 2012! Right on, gentlemen!

It was fun to re-learn mup's syntax after twenty years or so. Here's a quick example from the bridge:

1: 4c#;;;8;8~;
rom chord above 1: 1 "A";
lyrics 1: "Why they changed it, I_";
bar

1: 8c;8~;8;8~;2;
lyrics 1: "can't say.";
bar

The lines starting with "1:" are specifying the notes to go on the first staff—in my case, the first and only one. The "4c#;" means quarter note C#, and each empty semicolon after means to repeat that note. The "8" switches to an eighth note without changing the pitch, and the tilde is a tie to the next measure. The "rom chord above" is placing a chord above the staff at that location. The real killer feature, in my opinion, is the typesetting of the lyrics: the lyrics are automatically bound to the rhythm of the line.

The resulting typeset music looks like this:
I've put my whole arrangement up as a gist on GitHub in case you want to see all the source, and I've put the resulting PDF online as well.

Thanks to John and Bill at Arkkra Enterprises for this amazing piece of software. On the project's main page, they introduce themselves as "musicians and computer programmers," and mup is a great example of how computational thinking can lead to productive tools. It's a shade of the same point I made in my reflection about my November game design project, and it echoes my recent frustrations at work with having to use Box, Microsoft Word, and emailing files around for collaboration when LaTeX and GitHub would have been the perfect tools. In any case, if you're a programmer and a musician, make sure you at least take a look at mup.