I'm cleaning my home office as part of the end-of-the-year activities. I want to share here one of the most curious things I came across. My kids make a lot of crafty things, but this particular one really tickles my fancy.
My youngest son is four, and he recently learned how to play Ticket to Ride: First Journey. After playing this with him, some of the other boys and I got into a game of Clank! The youngest one was flitting around the table and making a bunch of noise, so I tried to think of a good creative challenge to occupy him. First Journey was still on the table, and something about the art caught my eye.
I pointed to the orange cat that is being held by the girl in yellow. Sarcastically, I pointed out how very happy the cat seemed to be.
I mean, look at that face. It's practically a meme in the making.
I suggested that my son create his own drawing of that ever-so-happy cat. He was very excited and ran off to the crafting table. He came back in a few minutes with this:
It's rather faintly drawn in pencil on lined paper, so here's a digitally-enhanced version.
Now, look at that face! That cat is actually happy, and it would love to be held by a girl in yellow on the cover of any train-related board game.
He took a separate sheet of paper, rolled it up, and taped it behind the drawing so that the whole thing would stand up. I believe he had just seen the Mr. Whiskers standee from the Clank! Expeditions: Gold and Silk expansion, and this inspired him to make his drawing a freestanding piece as well.
It was back in 2011 that I wrote about my oldest son's being inspired by a game box and recreating the artwork in a drawing, when he was just a little older than the youngest son is now.
I hope you enjoyed this end-of-year story. I expect to return tomorrow with my traditional summary of the year in games. Enjoy the last day of 2019!
Tuesday, December 31, 2019
Monday, December 30, 2019
Ideas for UE4 video tutorials to teach Computer Science concepts
I'm pleased to announce here that I have received an Epic MegaGrant to create video tutorials designed to teach Computer Science concepts through Unreal Engine 4. For those who don't follow me on YouTube, I have a Game Programming playlist with twenty public videos that I have created for my classes. Several are introductory or cover specific tips about version control, but some of my favorite ones cover more technical Computer Science concepts, such as decoupling modules through interfaces and the Observer design pattern. My proposal to Epic Games was to build upon this style of video, teaching real and interesting Computer Science ideas through their UE4 technology. I am glad that they agreed with me that this was a worthwhile pursuit.
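For readers who haven't watched those videos, here is roughly the flavor of example I have in mind when I talk about decoupling through interfaces and the Observer pattern. It is a minimal, engine-agnostic C++ sketch written just for this post; the names (Player, IHealthObserver, HealthBarWidget) are invented here, not lifted from the videos.

```cpp
#include <iostream>
#include <vector>

// The interface that decouples the subject from its observers.
class IHealthObserver {
public:
    virtual ~IHealthObserver() = default;
    virtual void OnHealthChanged(int newHealth) = 0;
};

// The subject only knows about the interface, never about any UI class.
class Player {
public:
    void AddObserver(IHealthObserver* observer) { observers.push_back(observer); }
    void TakeDamage(int amount) {
        health -= amount;
        for (IHealthObserver* o : observers) {
            o->OnHealthChanged(health);  // notify everyone who registered
        }
    }
private:
    int health = 100;
    std::vector<IHealthObserver*> observers;
};

// A concrete observer; Player never references this type.
class HealthBarWidget : public IHealthObserver {
public:
    void OnHealthChanged(int newHealth) override {
        std::cout << "Health bar now shows " << newHealth << "\n";
    }
};

int main() {
    Player player;
    HealthBarWidget healthBar;
    player.AddObserver(&healthBar);
    player.TakeDamage(30);  // prints: Health bar now shows 70
}
```

The point of the interface is that Player never names HealthBarWidget, so either side can be changed or tested independently of the other.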
The grant provides me with some extra time in the Spring 2020 semester to devote to making video tutorials. I have the freedom to choose the number, duration, and content of the videos, so I'm starting the project by reviewing my notes from teaching Game Programming using UE4 last semester. A few topics came up during consulting meetings with students that point me toward specific videos, many of which reinforce ideas from earlier classes in the context of game development. Also, since writing my reflective blog post, I have been able to read the student teaching evaluations from last semester. Some of the comments there reinforced one of my observations from last semester: students don't see that they can deploy the object-oriented programming techniques they have already learned in UE4, both in Blueprint and in C++. That is, students who already understand topics from earlier courses did not recognize the affordances to use them to create more interesting or robust game software.
Before the new year and the new semester's classes kick off, then, here is a list of some of the videos that I'm considering developing in the Spring:
- Type coercion through casting: What it is, why it is necessary in statically-typed languages, and how it manifests in Blueprint.
- Refactoring Blueprint spaghetti by introducing new abstractions.
- Comparing two techniques of implementing state machines: using enumerated types vs. the State design pattern. (A minimal sketch comparing the two follows this list.)
- Places where Blueprint's expressiveness exceeds that of textual code, such as the Select node.
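To make the state machine item above concrete, here is a minimal sketch of the two techniques side by side, written in plain C++ rather than UE4 classes; the names (GuardState, IGuardState, Guard) are invented for illustration, and the video itself would build something like this in Blueprint and UE4 C++.

```cpp
#include <memory>

// --- Technique 1: an enumerated type and a switch statement ---
enum class GuardState { Patrolling, Chasing };

void TickGuard(GuardState& state, bool canSeePlayer) {
    switch (state) {
    case GuardState::Patrolling:
        if (canSeePlayer) state = GuardState::Chasing;
        break;
    case GuardState::Chasing:
        if (!canSeePlayer) state = GuardState::Patrolling;
        break;
    }
}

// --- Technique 2: the State design pattern, where each state is an object ---
class IGuardState {
public:
    virtual ~IGuardState() = default;
    // Returns the next state, or nullptr to remain in the current one.
    virtual std::unique_ptr<IGuardState> Tick(bool canSeePlayer) = 0;
};

class PatrollingState : public IGuardState {
public:
    std::unique_ptr<IGuardState> Tick(bool canSeePlayer) override;
};

class ChasingState : public IGuardState {
public:
    std::unique_ptr<IGuardState> Tick(bool canSeePlayer) override {
        if (!canSeePlayer) return std::make_unique<PatrollingState>();
        return nullptr;
    }
};

std::unique_ptr<IGuardState> PatrollingState::Tick(bool canSeePlayer) {
    if (canSeePlayer) return std::make_unique<ChasingState>();
    return nullptr;
}

// The context delegates to its current state object.
class Guard {
public:
    void Tick(bool canSeePlayer) {
        if (auto next = state->Tick(canSeePlayer)) {
            state = std::move(next);
        }
    }
private:
    std::unique_ptr<IGuardState> state = std::make_unique<PatrollingState>();
};
```

The enumerated type is shorter and perfectly fine for two or three states; the State pattern starts to pay off when each state carries its own data and behavior.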
Just after I posted my last video on the playlist, which is about getting started with C++ development, I learned about subsystems through an Inside Unreal livestream. I would like to explore the implications of this feature for software architecture. I want to see how much of what I love about entity system architectures I might be able to bring into UE4 using this technology.
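For the curious, here is a minimal sketch of what a game-instance-scoped subsystem looks like in C++. Only UGameInstanceSubsystem and the Initialize override come from the engine; UScoreSubsystem and its members are hypothetical names I made up for this post.

```cpp
// ScoreSubsystem.h -- a hypothetical example, not engine code.
#pragma once

#include "CoreMinimal.h"
#include "Subsystems/GameInstanceSubsystem.h"
#include "ScoreSubsystem.generated.h"

// The engine creates and destroys a subsystem alongside its owner
// (here, the game instance), so there is no singleton boilerplate to write.
UCLASS()
class UScoreSubsystem : public UGameInstanceSubsystem
{
    GENERATED_BODY()

public:
    virtual void Initialize(FSubsystemCollectionBase& Collection) override
    {
        Super::Initialize(Collection);
        Score = 0;  // set up state when the game instance comes up
    }

    UFUNCTION(BlueprintCallable, Category = "Score")
    void AddScore(int32 Amount) { Score += Amount; }

    UFUNCTION(BlueprintPure, Category = "Score")
    int32 GetScore() const { return Score; }

private:
    int32 Score = 0;
};
```

Any actor can then reach it with GetGameInstance()->GetSubsystem<UScoreSubsystem>(), which is part of what makes subsystems feel like a lighter-weight alternative to hand-rolled singletons or manually managed manager actors.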
What do you think, dear reader? If you have any suggestions for Computer Science concepts that can be explored in UE4 through video tutorials, leave a note in the comments. Thanks for reading!
[Update: Over on the UE4 Developers Facebook Group, there was a suggestion for a discussion of Big-Oh analysis and how it manifests in game programming, as related to performance. This is a great idea for a topic. I am adding it here so that I won't forget it when I come back and start scheduling production.]
[Update 2: A conversation with a friend online made me think that another good entry might be fundamentals of debugging: a quick tutorial on using the integrated debugger for Blueprint and also for C++.]
Thursday, December 26, 2019
Family Painting Clank! Legacy: Acquisitions Incorporated
Last year, I bought Charterstone as a family game for Christmas. Playing through the campaign with my wife and two oldest sons may have been my favorite board gaming experience. We also love Clank!, and so I have been excited for Clank! Legacy since I first heard about it. Once the positive reviews started coming in, I ordered a copy while I could and sat on it for this year's Christmas game.
We are all excited to get started, so last night, we did a "one night paint job" on the four hero miniatures. My sons have always used cheap craft paints for their miniatures, but for Christmas, I got them the Vallejo Model Color Basic Set. They enjoyed working with the new paints, although I think they will appreciate them even more as we move into more relaxed painting sessions. Both commented on how quickly the paints dried compared to the craft paints. Indeed, when I've painted with them using craft paints, the gloopiness and slow dry times are two things I found most frustrating, compared to doing quick, thinned layers with VMC.
My intention was that we would draft figures in age order, but before I could suggest that, the two boys had already picked theirs. #1 Son (12) wanted to be the "child in the dungeon" figure. This riffs off of his regular figure when we play Thunderstone Quest, since he almost always plays the one that looks like it's just a kid thrown into the battle. #2 Son (9) chose the elf, I think because he likes elves, although he did not explain his choice. This left the tough lady fighter and the shouting dwarf, so I took the dwarf and let my wife take the other.
I used the airbrush to zenithal prime the figures, doing just a little bit of cleaning up of mold lines. Each of us will be playing our traditional board game colors, and we worked those colors into the models. Here's mine, blue:
Blue was a challenging color to put on a dwarf, which I tend to think of in muted and earthy tones. The only thing I could see to make blue at first was the tabard. As I worked with it, I realized I could do some blue trim on the helmet as well. I finished the model by using the same blue on the base, which I think ties it together.
Although we had a verbal agreement for one-session painting, "No shading, no highlighting," I couldn't stop myself from doing just a little. I used a darker brown to pin wash the backpack, silver drybrushing to highlight the chainmail, two drybrush highlights on the beard, just a little brown to get more definition on the muscles, and P3 Armor Wash on the hammer. Otherwise, I used thinned paints to let the zenithal priming do a lot of the work.
This is my wife's warrior. She plays yellow, and so it made sense for the big, sweeping cape to take that color. She pointed out that it made the character look more like a superhero than a fighter, but between the cartoonish sculpt and the strange pose, I think it fits.
Although the plan was to use the boys' new Vallejo paints, I also brought down a few secret weapons from my painting arsenal, including the aforementioned P3 Armor Wash. I had really brought it down for this figure, since I figured my wife could do a quick coat of silver on the plate mail and then use the armor wash for instant tabletop quality. I admit I was a bit dumbfounded when I saw her working on the orange! However, as she kept working on it, I could see it coming together. The very last step was adding the white trim around the armor plates, which I think really makes it pop. She spent three hours on this one, while I spent two on the dwarf, and the boys spent about 1-1/2 hours on theirs. I think she did a fine job, especially given the constraints.
Here's the dungeon kid—the red character. I recall my son starting with the Flat Flesh color in the basic set and then commenting that he wanted darker skin. I don't know what inspired him to do so, but I think it looks quite good: dark hair, dark skin, and bright blue eyes. He put some thoughtful discoloration into the crate as well, although it's subtle. I believe he called it "a moldy crate". Notice the nice job he did with the flagstone base as well. The whole thing has a subdued palette that really brings out the red and warm browns.
Finally, here's the green player's elf. I think it's pretty solid for a nearly-ten-year-old painter. The robe is a little splotchy, which is an unintended side effect of trying to work with the zenithal priming, like I wrote about in my JiME post: if you do one thin coat, it looks fine, but if you touch it up, you get splotches of higher saturation. He added a little thinned gold to the hair to give it some sparkle, but I'm afraid that is lost in the photo and was also greatly subdued by the varnish. He got a nice color for blonde hair, which is hard to do. He also really nailed the cobblestone base.
If you look carefully, you can also see that he did some weathering on the robes, stippling on a little brown. I have written before about how I tend to lack the courage to dirty up a figure I spent so long painting, but my son and I do watch several painters on YouTube who regularly incorporate weathering as a finishing touch. It's neat to see how he was inspired by this.
After the boys were done, and while my wife was finishing up her warrior, I retreated to my study to work on the dragon miniature. Here it is:
I laid down the base colors by wet-blending three colors: a mix of VMC Deep Sky Blue and Grey, a mix of VMC Dark Blue and Black, and a mix of Black with the first mix. These were heavily mixed with Vallejo Glaze Medium to give me lots of open time for wet-blending. I let that set overnight. This morning, I mixed up a wash of roughly 3:1:4 blue, green, and black inks. I used this to pin wash the edges and accent all the scratches, using a second brush to feather out the wash in many places. The last step was to paint the eyes with a mix of white and a touch of green, followed by a glaze of green ink to get just a little more green. All told, this was also just about two hours of painting. Unlike with the heroes, though, even if I had more time, I would probably do it just like this anyway: as an iconic representation of the draconic villain, I think it's a good piece.
Here they are all together, ready for adventure! The plan is to get the game to the table later tonight. That gives me a few hours to come up with a clever name for my dwarf.
Saturday, December 21, 2019
Back in the Saddle: Preparing to teach CS222 in Spring 2020
I taught CS222 for many semesters in a row, and then I got a semester's reprieve in Spring 2018. Imagine my surprise when this reprieve extended through Fall 2019! Now, after several semesters away, I'm scheduled to teach CS222 again in Spring 2020. I want to share here some of the changes I've made and what I'm hoping to accomplish. I am excited to teach this class again: it is a formative experience for our majors at an inflection point in the curriculum, and I think it plays to many of my strengths.
There are a few relatively superficial changes in the course plan that I have put online. I added "The Big Idea" section to the main overview page. This was a direct response to doing a routine evaluation of a colleague's course plan. Our committee uses a form to drive the review, and one of the questions on the form asks whether the instructor has any statements about their goals for the course, separate from the catalog description and departmentally-approved learning outcomes. I realized that I frequently talk about such things but did not have them in writing. "The Big Idea" section describes how CS222 is positioned in the curriculum and what I hope students get from it.
Another addition to the course site is the Tips page. This section began as a short collection of writing tips meant, primarily, to help students understand what I mean by the word "essay." It is one of those words that has unfortunately been beaten senseless by the educational establishment. As I worked on this section, it grew to include an excerpt from the 1920 edition of The Elements of Style and some process advice adapted in part from Jordan Peterson. I tacked on a few programming tips that I often share with students. I am still tempted to add more, including tips for how to take notes during meetings and from reading. I realize, however, that if students don't read the course plan, then I'm writing more for me than for them. I expect to keep adding to this page as the semester progresses, monitor whether students reference it in their speech and writing, and ask them about it a few weeks into the class.
I seriously considered dropping the whole Achievements system that I introduced in this course some ten years ago. I love the idea that students have agency in deciding what to pursue, but I don't like that students who are already bad at time management can easily dig themselves into a hole. I decided to keep the system with a few tweaks, although it's hard for me to explain why; I am afraid it is inertia. The major change I made to the achievements system was to formalize the levels of validation into "stars": a student can turn in anything they self-validate for one star, they can get a peer validation for two stars, or they can get me to validate it for three stars. The most efficient path, then, is to do something well, get a peer's validation and then mine, and turn in three stars in one submission. We shall see if students go this way, or if anyone purposefully hammers out sequential low-quality one-star submissions.
I want to spend more time working with students in class on refactoring exercises, making sure I help them both see the affordances for action and learn the techniques required to perform the refactoring. To this end, I have prepared a series of relatively simple example programs that we will work on in class. This means less of my show-and-tell and more of students getting their hands dirty. This should help impediments and confusion rise to the top, where ideally I can act on them. Right now, there are only about twenty students in the class, which is much more manageable than filling the room to its ~35-person capacity.
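To give a sense of the scale I have in mind for those exercises, here is the kind of before-and-after that an Extract Method refactoring produces. This is a made-up sketch in C++, not one of the actual in-class programs, and the course itself may well use a different language.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Before: one function that mixes computation and presentation.
void PrintReportBefore(const std::string& name, const std::vector<int>& scores) {
    int total = 0;
    for (int score : scores) total += score;
    double average = scores.empty() ? 0.0 : static_cast<double>(total) / scores.size();
    std::cout << name << " averaged " << average << " points\n";
}

// After Extract Method: the computation has a name and can be tested on its own.
double AverageScore(const std::vector<int>& scores) {
    int total = 0;
    for (int score : scores) total += score;
    return scores.empty() ? 0.0 : static_cast<double>(total) / scores.size();
}

void PrintReport(const std::string& name, const std::vector<int>& scores) {
    std::cout << name << " averaged " << AverageScore(scores) << " points\n";
}

int main() {
    std::vector<int> scores{90, 85, 70};
    PrintReportBefore("Avery", scores);  // same output either way
    PrintReport("Avery", scores);
}
```

The behavior is unchanged; the gain is that the calculation is now separated from the printing, which is exactly the kind of affordance I want students to learn to notice.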
As I wrote about in a Fall semester reflection, I noticed that my students in upper-division courses do not understand version control. Students in Game Programming talked about version control as if it were just for backups, and my HCI students admitted to being terrified of pull requests. I plan to do more careful scaffolding around git and version control this semester, with more structured exercises both in and out of class. I have not designed these interventions yet.
I would like to keep the schedule where the first three weeks introduce the major topics, the next two are spent in a rigorous, well-defined project, and the last nine weeks are spent in three three-week iterations of an open-ended project. I am still not sure how to align this goal with the more structured activities I want to add except, perhaps, to make the two-week project much more tightly connected to in-class activities. That is, I can make it almost more like a lab than a project. For example, on a given day, I could introduce the idea of a merge conflict, and then we could actually make one in our projects. I have not set aside the time to plan this part of the course yet, following the design dictum that one should put off design decisions to the last responsible moment: if I can get to know the class a bit, then I can put together the two-week project as we need it, once I have a sense of how they are responding to the other material. If we need to cut a week or two from the major project, I am not opposed to that either: we can always cut it to three two-week iterations, for example.
Many years ago, I requested to teach this course only in 75-minute blocks. This means it would be offered Tuesdays and Thursdays instead of Mondays, Wednesdays, and Fridays. I have taught it in 50-minute blocks before, and I found that we would always get interrupted in the middle of a complex activity. Unfortunately, the administrative staff in charge of scheduling forgot about this request and gave me the MWF schedule. It will be convenient in some ways, since my other class is MWF mornings and this will be MWF afternoons, but I remain concerned about the level of depth we will be able to reach in any given class meeting. I hope that my targeted exercises and careful planning will give us tight learning loops rather than interrupted longer ones.
Thanks for reading. Feel free to check out the course plan and let me know if you have any thoughts, feedback, or suggestions.
Tuesday, December 17, 2019
Reflecting on the Fall 2019 CS439 Game Design seminar
One of the great joys this past semester was teaching my game design course, which, as I wrote about over the summer, was offered as a Computer Science department seminar (CS439) rather than an immersive-oriented Honors College colloquium. I had a few students in the course who had worked closely with me before. Of course, we had good rapport from day one, because otherwise they would not have signed up for an elective with me. I think this helped raise morale for everyone, or if nothing else, at least it was a friendly environment for me. For example, rather than my trying to force a conversation on a student who is staring into a smartphone before class, I was always able to have a legitimate, on-topic conversation.
I dropped the traditional prerequisites down to just CS120—our introductory programming class—and despite my attempts to recruit more students, the course still ended up predominantly Computer Science majors and minors. I kept the CS120 prerequisite because I intended to draw more parallels with systems thinking and programming than I actually did, and if I am able to teach the course again, I will drop that prerequisite as well.
I followed a similar structure to what I have done for several years, still relying on Ian Schreiber's excellent online readings despite their examples being a little long in the tooth. Also, even though we did not have an educational-games mission, I still assigned a reading from Klopfer et al.'s "Moving Learning Games Forward," since I find their identification of principles to be such an intriguing bit of design research.
During the five weeks of production, students were required to complete one design cycle each week, meaning that they had to identify a problem, build a solution into their prototype, and test that solution. My original plan for their presentations, then, was the same as in past offerings: students give a brief status report each week to keep the class up on their progress and solicit feedback. As always, the students asked whether they could also play each other's games in class, and I, as always, described how I had never been able to come up with an equitable model for this when people are pursuing independent projects: different games require different amounts of time and different numbers of players. One of my students came up with a showcase model that privileged playing parts of each other's games over player- or playing-equity, and the class agreed to try it. It was, taken as a whole, a great success. We kept the division between "Group A" presenting on Tuesday and "Group B" presenting on Thursday. However, instead of oral presentations, each designer set up their prototype and gave a two-minute oral summary of what had changed. Then, students were free to roam the room and check out each other's work, guided in large part by those summaries. After trying this for one week and reflecting on it, the only change we made was to add a repeating 10-minute timer, just so that students could keep track of the time elapsing. The only real problem with this approach was the one that I feared, tried to get ahead of, and, as predicted, was ignored about: many students did not or could not distinguish between the showcase-style review of each other's work and the playtesting required for the weekly iteration. I know I said it many times in class, but when it came to writing their progress reports, it was clear that they had the wrong model: they thought of the meeting as a testing session rather than a showcase. The reason it cannot be the former is, again, that issue of equity: it gives an unreasonable benefit to someone who designs a game that can be played in ten minutes. I will need to make this distinction crystal clear in future courses.
The students' projects were quite good overall, given the context. It's the first time in years that I've taught game design without enforcing any particular theme on the class, and so it was an opportunity for me to see what students are intrinsically motivated to complete. While a few games were clearly examples of "I will choose something unambitious so that I can get it done," all of the designers of such games were also pretty bored with their work and, I think, regretted the choice. Many of the games became party games, even if they started as strategy games; the party games that were made were broadly in the Apples to Apples category, and that's fine. None were direct reskins of existing games, and all had a unique appeal. Almost every other game was themed as direct conflict: battles for territory, opposing sides, reducing hit points. It struck me how overt and, if I may, banal the conflict was, but we did not have the opportunity to discuss this as a class. Perhaps it is necessary for someone deep in video game culture to make a hit-point-based game before they can make one about Portuguese tiles—or, perhaps, this says something more about the subcultures of the slice of students who happened to take the course.
The final essays from the class echoed some of the conversations I had with the students, and I am happy with the outcomes of the course. Students came to recognize how hard it is to design a game that will be fun for other people, but also how very rewarding it is to hit the mark. I think some are inspired to continue their games or pursue new opportunities. I hope that some of them might show up at Global Game Jam or other such events to keep stretching these muscles. I did not ask the students to write explicitly about how to tie game design concepts into their majors, in part because the class was so overwhelmingly Computer Science majors that it would have turned into inside baseball too quickly.
Finally, I want to mention that one of the interesting new spins this semester was having two community members audit the class, one of whom is also a university employee. I really think that having them involved raised the bar for the whole class. Because one of the auditors was noticeably older than the other students, this forced us to explain ideas and movements in gaming that would otherwise have been left implicit and, hence, subjective or ambiguous. Both of the auditors also brought some real, serious interest to the student side of the class and, in a way, became role models for undergraduates who are just learning how to "adult," as they say.
Incidentally, the class was also shadowed by a graduate assistant from the College of Sciences and Humanities, who was assigned to help promote some hidden gems of the college. He wrote a flattering blog post just a few days ago, and so if you haven't seen it already, check it out.
Thursday, December 12, 2019
Reflecting on the Fall 2019 CS445/545 HCI Course
I have to start by saying that this was one of the strangest classes I have ever taught. We were continuing my series of collaborations with the David Owsley Museum of Art, and as I wrote about in July, I had a great meeting with them to set up some tighter constraints around how we work with the students. There were only nine students in the class, which I thought would be really exciting: small team, a couple of graduate students, working with a partner on exploratory software to enhance the visitor experience. It seems like the recipe for an excellent learning experience, but in truth, this was one of the most frustrating classes I have ever taught.
I have never had a class where I had to "pull" so hard to get them to do, well, anything. On the very first day of class, there were about ten people in a room that seats over thirty. When I walked in, they were all sitting apart from each other, staring into their phones. I commented on how quiet it was, and I encouraged them to move in toward the front. Nobody moved. We repeated this ritual basically every class meeting. It became something of a joke after a while—a sad, sad joke. On a few occasions, I forced them to rearrange the furniture so that we could sit in a circle for discussion, and these meetings were always better, but they didn't seem to have any impact on the de facto standards. One time, I came into class, and two or three students were talking to each other. I heaped praise on them in hopes of some positive reinforcement. I got a few smiles, but again, no real change.
When we moved into working as one big team, I let them follow their own path for the first two-week sprint. After a structured reflection, I told them that I would be scaffolding their improvement by providing a methodology. I used one based on my immersive learning teams, which have been roughly the same size. One of the rules of the methodology is to work in pairs whenever possible. They ostensibly read the methodology and we got to work... everybody working silently at their own laptops. I paused for a minute or two, then I interrupted, pointing out that they were all in violation of the methodology and that they should pair up. Some of them still did not. At that point, I felt like I just had to throw up my hands.
With only nine people, attendance irregularities are easy to notice, and many people missed many class meetings. It's not like it didn't hurt their grade either: they had work to complete and then discuss almost every class meeting. I think it's fair to say that it's disheartening for everybody in the room to look around and see that only five or six out of nine people are there. In the latter part of the semester, we worked as one consultancy, and there was work for everyone to do; even here, people missed critical planning and reflection meetings.
The point of this writing is not just to complain, though. We did have some real stand-out meetings. In one of them, we talked honestly about their past team experiences. They acknowledged that none of them had ever really been on a non-dysfunctional student team before, and also that they didn't really know what a successful team looks like. This is invaluable to me as an educator, because it makes me realize that it's not enough just to give them guidance: I think we have to work harder to show them examples of successful teamwork to model. I am still not sure how to do that, except maybe by filming one of my high-functioning immersive learning teams. Another important part of this discussion was their acknowledgement that in the prerequisite course (CS222), they learned to fear feature branches, pull requests, and, generally, GitHub. That is, they saw these tools as impediments to their success rather than what they are: critical parts of a healthy and productive work environment. This again points to some specific actions, to make sure that the prerequisite course is not accidentally teaching counter to its purposes. Conveniently, I've been assigned to teach CS222 in the Spring, so I will be able to pilot a few interventions there.
One of the frustrating outcomes of this semester is that I am not really sure whether the students learned anything or not. We studied some of my favorite theories of HCI during the semester, always embedded within the context of our collaboration with the art museum. My plan was, near the end, to return to those theories and frame our work within them. We ended up having to declare a failed sprint a few weeks from the end of the semester in order to produce a barely-testable digital prototype. This ate up the time that I was hoping to use to close the loops. Looking at the work that the students did on the project at the end of the semester, while it was technically competent, I didn't see any consideration of the theories we discussed earlier in the semester, like Don Norman's action cycle or Gestalt vision principles. Instead, I saw what looked like the work they would have done if they had never taken the course. This is disheartening.
In our final meeting, our museum partner was kind enough to point out that the prototype these students created was the highest quality of any that students have made in previous semesters. I generally agree, and this is in large part because we had one group and one consistent series of conversations. From a teaching point of view, it's a completely different thing to have one team of students to engage with than it is to say, "Get into your groups and talk about this." Also, because we all worked together, I could give more direct guidance on some of the technical issues of the implementation, which prevented them from getting caught in amateurs' dead-ends. For example, there's always someone who thinks copying and pasting code will do no harm, and then you end up with an unmaintainable mess that collapses under its own weight; I was able to work with students to refactor such solutions, teaching both process and techniques for refactoring along the way.
Although our partner was positive and right to be so, I remain concerned by how it seemed the students didn't really think about what they were saying or doing—they were not critical. For example, they liked to mention that they used journey maps, but they didn't do this well. I graded all the journey maps and provided feedback about the parts that were good and bad, but in their presentation and final essays, they wrote about their journey maps as strictly virtuous. I mentioned Falk's theories of museum visitor motivation in class, and the students latched on to parts of this; in the presentation, though, they made it sound like they had actually studied and applied these theories. In fact, they had done the equivalent of a standard contemporary undergraduate practice: hear of something, Google it, put some buzzwords in, and call it satisfactory. Truly, their presentation of their knowledge of Falk's model bordered on lying to the client, and I just wasn't prepared for how to react.
If you got this far into this essay, you can understand why this class that seemed like it should be so good would actually be so frustrating. I'm still not sure about the root cause, but I'll share the best thought I have. I don't know why the department administration thought this was a good idea, but we've offered HCI as an upper-level elective for something like five straight semesters—including summers—when it used to be a biannual course. The number of students in my classes has gone down each time, and with only nine completing this time, I have to think that a majority of this small group didn't really want to be there. I don't think they had any motivation to either take HCI or to take a class with me. (Not to toot my own horn, but there are some students who just want to study with me, regardless of the course.) In the absence of motivation, the course is perceived like the methodology I wrote up for them: a hurdle to be cleared rather than an idea to explore. I suspect I could have just given them readings and assignments, and they still would have skipped a bunch of classes and earned their C and B grades, and I would have been able to sleep at night. But you know, that's not how I roll.
I had one day where I was feeling particularly frustrated as well as low on physical and mental energy. I don't remember where we were on the project, but I asked the class to give an update on their progress. Nobody in attendance had made any. It was one of those times when I had to seriously think about just leaving the room, just walking away and letting them do whatever it was they thought they should be doing instead of contributing to the class. I even said out loud, "I have done more work on this project than you have collectively," which is probably not exactly true with a class of nine people, but I think it was dangerously close. I took a minute or two to collect my thoughts and, with some grace, turned it into a discussion of how to move forward. After this, one of the students—one with whom I had some struggles earlier in the semester but with whom I had grown into mutual respect—started occasionally thanking me for my work. He would say something like, "Thanks for taking the time to put this collaboration together," and he really meant it. This may seem like a small thing, but it wasn't. It's possible that some of the students still see faculty generally as some kind of automata. I think this student saw that I cared, and because I cared, I hurt. We talk about helping students build empathy, and here's a case where it actually worked, not in some abstract social justice sense, but in a very concrete, local-community sense.
This post is a bit long, but I have had a strong desire to try to capture these stories. In truth, I'm not even entirely sure why, because in conclusion, when I consider what I would do differently next time, I honestly don't know. One thing I would do, though, is absolutely put my foot down about the "nine people spread around a classroom" arrangement on day one. That was just ridiculous.
Thanks for reading. Feel free to share your thoughts and suggestions. The next one will be more cheerful, I promise.
I have never had a class where I have to "pull" so hard to get them to do, well, anything. On the very first day of class, there were about ten people in a room that seats over thirty. When I walked in, they were all sitting apart from each other, staring into their phones. I commented on how quiet it was, and I encouraged them to move in toward the front. Nobody moved. We repeated this ritual basically every class meeting. It became something of a joke after a while—a sad, sad joke. On a few occasions, I forced them to rearrange the furniture so that we could sit in a circle for discussion, and these meetings were always better, but they didn't seem to have any impact on the de facto standards. One time, I came into class, and two or three students were talking to each other. I heaped praise on them in hopes of some positive reinforcement. I got a few smiles, but again, no real change.
When we moved into working as one big team, I let them follow their own path for the first two week sprint. After a structured reflection, I told them that I would be scaffolding their improvement by providing a methodology. I used one based on my immersive learning teams, which have been roughly the same size. One of the rules of the methodology is to work in pairs whenever possible. They ostensibly read the methodology and we got to work... everybody working silently at their own laptops. I paused for a minute or two, then I interrupted, pointing out that they were all in violation of the methodology, and that they should pair up. Some of them still did not. At that point, I feel like I just have to throw up my hands.
With only nine people, attendance irregularities are easy to notice, and many people missed many class meetings. It's not like it didn't hurt their grade either: they had work to complete and then discuss almost every class meeting. I think it's fair to say that it's disheartening for everybody in the room to look around and see that only five or six out of nine people are there. In the latter part of the semester, we worked as one consultancy, and there was work for everyone to do; even here, people missed critical planning and reflection meetings.
The point of this writing is not just to complain, though. We did have some real stand-out meetings. In one of them, we talked honestly about their past team experiences. They acknowledged that none of them had ever really been on a non-dysfunctional student team before, and also that they didn't really know what a successful team looks like. This is invaluable to me as an educator, because it makes me realize that it's not just enough to give them guidance: I think we have to work harder to show them examples of successful teamwork to model. I am still not sure how to do that, except maybe by filming one of my high-functioning immersive learning teams. Another important part of this discussion was their acknowledgement that in the prerequisite course (CS222), they learned to fear feature branches and pull requests and, generally, GitHub. That is, they saw these tools as impediments to their success rather than what they are: critical parts of a healthy and productive work environment. This again points to some specific actions, to make sure that the prerequisite course is not accidentally teaching counter to its purposes. Conveniently, I've been assigned to teach CS222 in the Spring, so I will be able to pilot a few interventions here.
One of the frustrating outcomes of this semester is that I am not really sure whether the students learned anything or not. We studied some of my favorite theories of HCI during the semester, always embedded within the context of our collaboration with the art museum. My plan was, near the end, to return to those theories and frame our work within them. We ended up having to declare a failed sprint a few weeks from the end of the semester in order to produce a barely-testable digital prototype. This ate up the time that I was hoping to use to close the loops. Looking at the work that the students did on the project at the end of the semester, I saw that, while it was technically competent, it showed no consideration of any of the theories we discussed earlier in the semester, like Don Norman's action cycle or Gestalt vision principles. Instead, I saw what they likely would have produced if they had never taken the course. This is disheartening.
In our final meeting, our museum partner was kind enough to point out that the prototype these students created was the highest quality of any that students made in the previous semesters. I generally agree, and this is in large part because we had one group and one consistent series of conversations. From a teaching point of view, it's a completely different thing to have one team of students to engage with than it is to say, "Get into your groups and talk about this." Also, because we all worked together, I could give more direct guidance on some of the technical issues of the implementation as well, which prevented them from getting caught in amateurs' dead ends. For example, there's always someone who thinks copying and pasting code will do no harm, and then you end up with an unmaintainable mess that collapses under its own weight; I was able to work with students to refactor such solutions, teaching both process and techniques for refactoring along the way.
Although our partner was positive and right to be so, I remain concerned at how it seemed the students didn't really think about what they were saying or doing—they were not critical. For example, they liked to mention that they used journey maps, but they didn't do this well. I graded all the journey maps and provided feedback about the parts that were good and bad, but in their presentation and final essays, they wrote about their journey maps as strictly virtuous. I mentioned Falk's theories of museum visitor motivation in class, and the students latched on to parts of this; in the presentation, though, they made it sound like they had actually studied and applied these theories. In fact, they had done the equivalent of a standard contemporary undergraduate practice: hear of something, Google it, put some buzzwords in, and call it satisfactory. Truly, their presentation of their knowledge of Falk's model bordered on lying to the client, and I just wasn't prepared for how to react.
If you got this far into this essay, you can understand why this class that seemed like it should be so good would actually be so frustrating. I'm still not sure about the root cause, but I'll share the best thought I have. I don't know why the department administration thought this was a good idea, but we've offered HCI as an upper-level elective for something like five straight semesters—including summers—when it used to be a biennial course. The number of students in my classes has gone down each time, and with only nine completing this time, I have to think that a majority of this small group didn't really want to be there. I don't think they had any motivation either to take HCI or to take a class with me. (Not to toot my own horn, but there are some students who just want to study with me, regardless of the course.) In the absence of motivation, the course is perceived, like the methodology I wrote up for them, as a hurdle to be cleared rather than an idea to explore. I suspect I could have just given them readings and assignments, and they still would have skipped a bunch of classes and earned their C and B grades, and I would have been able to sleep at night. But you know, that's not how I roll.
I had one day where I was feeling particularly frustrated as well as low on physical and mental energy. I don't remember where we were on the project, but I asked the class to give an update on their progress. Nobody in attendance had made any. It was one of those times when I had to seriously think about just leaving the room, just walking away and letting them do whatever it was they thought they should be doing instead of contributing to the class. I even said out loud, "I have done more work on this project than you have collectively," which is probably not exactly true with a class of nine people, but I think it was dangerously close. I took a minute or two to collect my thoughts and, with some grace, turned it into a discussion of how to move forward. After this, one of the students—one with whom I had some struggles earlier in the semester but with whom I grew into a relationship of mutual respect—started occasionally thanking me for my work. He would say something like, "Thanks for taking the time to put this collaboration together," and he really meant it. This may seem like a small thing, but it wasn't. It's possible that some of the students still see faculty generally as some kind of automata. I think this student saw that I cared, and because I cared, I hurt. We talk about helping students build empathy, and here's a case where it actually worked, not in some abstract social justice sense, but in a very concrete, local-community sense.
This post is a bit long, but I have had a strong desire to try to capture these stories. In truth, I'm not even entirely sure why, because when I consider what I would do differently next time, I honestly don't know. One thing I would absolutely do, though, is put my foot down about the "nine people spread around a classroom" arrangement on day one. That was just ridiculous.
Thanks for reading. Feel free to share your thoughts and suggestions. The next one will be more cheerful, I promise.
Wednesday, December 11, 2019
CS315 Reflection Addendum: Misunderstanding Version Control as Back-up
I just finished reading the final exam responses from the CS315 Game Programming class that I wrote about yesterday. One of the questions invited the students to write about their understanding of depots, changelists, and workspace mapping in Perforce Helix. A surprisingly large minority of responses equated version control with backing up files. That is, students said that the main purpose of version control was, essentially, to have a back-up of your project in case you need it. This strikes me as a particularly naïve perspective. In every case, I provided written feedback encouraging the student to think about the project in the depot as being the real one whereas anything that is checked out is just a shadow of that.
I wanted to mention this here on the blog in part because I am getting back into teaching CS222 in the Spring. I have had some hallway conversations with the professor who is assigned to teach the other section, and we've been talking about how to improve students' understanding of version control as a critical piece of contemporary software development workflow. One of my experiences this semester that inspired me to do this was the realization—confirmed in an honest conversation—that the students in my upper-division HCI class were terrified of pull requests. Their experience in CS222 had inadvertently taught them to avoid contemporary best practices of version control rather than to depend on them. This could be related to the misunderstanding that I saw in these CS315 exams, where students see version control as an awkward back-up system rather than, well, version control.
Tuesday, December 10, 2019
Reflecting on the Fall 2019 CS315 Game Programming course
My students are currently taking their final exam, so this seems like a good time to start my end-of-semester blog post about CS315 Game Programming. This was my third Fall semester teaching this upper-level elective course using Unreal Engine 4. The semester ended up consisting of four "mini-projects", each about two weeks, and one six-week final project, which was completed in two iterations. By and large, I am happy with how the semester went: students learned how to work with some contemporary tools, including Perforce Helix for centralized version control, and they made interesting final projects. What I want to document here are some of the struggles, because it will do me more good when planning next year's class than focusing on the successes.
It turns out that almost all of my frustrations from the semester stem from my decision to use Specifications Grading again. For people who are not familiar, you can easily hop over to the course plan's projects page to see the specifications. Briefly, I laid out ahead of time all the criteria by which student work would be evaluated, and like last year, I asked students to submit self-evaluations in which they graded their own work.
This leads quickly into the first problem: students did not seem to understand how to use checklists. It feels so strange to even type that, but it's true. As part of their submission, the students had to complete a checklist, and then based on what was satisfied, they could know—and had to say—what their grade would be. However, more often than not, I would read through the student's submission and have to point out that they didn't actually satisfy some criteria. I built in a little leniency for students who legitimately did not understand a criterion or two, but what I didn't expect was that several students made the same mistakes again and again and again. I forced the students to rotate partners during the Mini-Projects, thinking that this would ensure that mistakes would be caught by the partner; instead, what I saw was that the misunderstanding (not the understanding!) spread to new partners.
I suspect that a major reason for the checklist problem is that students are so deeply brainwashed into the "turn this in and hope for points" model that they cannot conceive of an alternative. Certainly, in my years of teaching, I've had plenty of push-back on unconventional things I do. (I continue to do unconventional things, partially because I want students to learn to question conventions.) I can work on clarifying the language around the specifications themselves of course, but I feel like this is treating a symptom rather than a cause.
There is one place where my instantiation of specifications grading contrasts, as I recall, with the presentation in Nilson's well-known work. She describes making a choice between more hurdles and higher hurdles, but my version of specifications grading is both more hurdles and higher hurdles: students have to do more and better work to earn higher grades. This is sensible to me, but I wanted to mention it here because it is a lever that I could pull in an experimental assignment or section.
Another problem I encountered with the specifications grading this semester was that a minority of students were able to follow the specifications I provided to earn relatively high marks but, in my professional opinion, without really meeting the learning objectives. For example, I had a B-level criterion which was, basically, that the project should have all the parts of a conventional video game: a title screen, gameplay, an ending, and the ability to play again. An alarming number of teams did not handle mouse input controls properly, so that once you click in the game, the mouse is captured and the cursor made invisible. This means that technically you can still navigate a UI menu, but without being able to see the cursor, it's awfully difficult. Their conventional solution seemed to be to use Ctrl-F1 to release the cursor so they could see it again: an editor kludge for a runtime problem. Did such teams satisfy the criterion? Well, yes, but also no. I liberally allowed it, leaving notes in my review that they should fix this, which almost nobody did. I could, of course, add text to the already-wordy criterion to say "If you are developing for a Desktop application, and you are using mouse navigation, make sure etc." That's just one special case of a particular environment, though. What I think I'm really running into is the problem of specifications grading in the face of creative, wide-open projects.
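For the record, the runtime fix is not much code. Here is a minimal sketch, assuming a standard APlayerController and a UMG menu already added to the viewport; the free function names are purely my own illustration. It toggles the cursor when a menu opens and closes:
#include "CoreMinimal.h"
#include "GameFramework/PlayerController.h"

// Sketch: call this when a UMG menu is added to the viewport.
void ShowMenuCursor(APlayerController* PlayerController)
{
    PlayerController->bShowMouseCursor = true;    // make the cursor visible again
    FInputModeGameAndUI InputMode;
    InputMode.SetHideCursorDuringCapture(false);  // don't re-hide it when the player clicks
    PlayerController->SetInputMode(InputMode);
}

// Sketch: call this when the menu closes and gameplay resumes.
void HideMenuCursor(APlayerController* PlayerController)
{
    PlayerController->bShowMouseCursor = false;
    PlayerController->SetInputMode(FInputModeGameOnly());
}
The point is that cursor visibility is a game-state decision that belongs in the project itself, not something to paper over with editor shortcuts.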
Several students took my examples wholesale, brought them into their projects, and then submitted them as satisfying the relevant criteria. For example, I showed C++ code to count how many shots a character fired; student teams put this into their game and then checked the box saying that they included C++ code. Technically yes, but without any semblance of understanding. Another student took my dynamic material instance example wholesale and put it into his final project. Again, no indication of understanding the pieces, just copying and pasting it into his project and claiming that he included dynamic material instances. Yes, they're in there; no, there's no evidence of understanding. Some of this could, in theory, be cleaned up by changing the specifications, but then it gets into the same kind of problem as measuring productivity in programming. Exactly how different from my example does a student's work have to be to demonstrate that they understand the concepts? "Exactly" is the key word here if the specifications are going to be objective.
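For readers who have not used them, here is a representative sketch of what a dynamic material instance looks like in C++. It is not my classroom example, just the general shape of the API, and it assumes a static mesh component whose base material exposes a vector parameter named "BaseColor"; the parameter name and the function name are placeholders of my own:
#include "CoreMinimal.h"
#include "Components/StaticMeshComponent.h"
#include "Materials/MaterialInstanceDynamic.h"

// Sketch: tint a mesh at runtime by driving a material parameter.
void ApplyTint(UStaticMeshComponent* Mesh, const FLinearColor& Tint)
{
    // Create a dynamic instance of whatever material is in slot 0.
    UMaterialInstanceDynamic* DynamicMaterial = Mesh->CreateDynamicMaterialInstance(0);
    if (DynamicMaterial)
    {
        // "BaseColor" must match a parameter actually exposed by the base material.
        DynamicMaterial->SetVectorParameterValue(TEXT("BaseColor"), Tint);
    }
}
Demonstrating understanding would mean, for example, driving a parameter that matters to the state of one's own game rather than dropping something like this in verbatim.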
I'm left with this sinking feeling that specifications grading is not worth the effort and that I should return to my tried and true minority opinion on grading: use triage grading for everything. This allows me freedom to say something like this: "Use dynamic material instances in your project in a way that shows you understand them." Then, I can fall back on saying that a student's work either clearly shows this (3/3 points), clearly doesn't (1/3 points), or is somewhere in between (2/3 points). This clear and coarse-grained numeric feedback can be combined with precise and crystal-clear written feedback to show students where they need more work, which appeals to me much more than my grimacing at the student's submitted checklist, then at the student's code, and then saying, "Yeah, I guess."
Sunday, December 1, 2019
Typesetting music with mup after a 20-year hiatus
Back around 1999-2002, I was composing and performing music pretty regularly as a stress-relief from graduate studies. This got me looking into creating printer-friendly versions of a few of my songs. I came across mup—a tool that takes plain text sheet music descriptions and creates snazzy PostScript output. Mup supported Linux, where I was doing all of my serious work, and it allowed for a LaTeX-style separation of document content from document format. It was not free software, but I was happy to pay for a license to this excellent tool.
Fast-forward to today, when I was struck by the desire to typeset "Istanbul (Not Constantinople)" for my kids. I introduced them to this song via the classic video some time in November, and of course they loved it. I picked out the chords on the piano, and it's become a fun song for me to sit at the piano and sing with them. Three of my boys are taking piano lessons, and so I thought it might be fun for them to see it written out. It's much more syncopated than what they are playing in their lessons, but I thought the oldest in particular might enjoy the rhythmic challenge.
I did a little Googling and found what appeared to be viable options, but I stopped when I saw that mup is still around. Not only that: the authors made it free software back in 2012! Right on, gentlemen!
It was fun to re-learn mup's syntax after twenty years or so. Here's a quick example from the bridge:
1: 4c#;;;8;8~;
rom chord above 1: 1 "A";
lyrics 1: "Why they changed it, I_";
bar
1: 8c;8~;8;8~;2;
lyrics 1: "can't say.";
bar
The lines starting with "1:" are specifying the notes to go on the first staff—in my case, the first and only one. The "4c#;" means quarter note C#, and each empty semicolon after means to repeat that note. The "8" switches to an eighth note without changing the pitch, and the tilde is a tie to the next measure. The "rom chord above" is placing a chord above the staff at that location. The real killer feature, in my opinion, is the typesetting of the lyrics: the lyrics are automatically bound to the rhythm of the line.
The resulting typeset music looks like this:
I've put my whole arrangement up as a gist on GitHub in case you want to see all the source, and I've put the resulting PDF online as well.
Thanks to John and Bill at Arkkra Enterprises for this amazing piece of software. On the project's main page, they introduce themselves as "musicians and computer programmers," and mup is a great example of how computational thinking can lead to productive tools. It's a shade of the same point I made in my reflection about my November game design project, and it echoes my recent frustrations at work with having to use Box, Microsoft Word, and emailing files around for collaboration when LaTeX and GitHub would have been the perfect tools. In any case, if you're a programmer and a musician, make sure you at least take a look at mup.
Wednesday, November 27, 2019
My sons' reflective essays for NaGaDeMon 2019
Yesterday, I wrote about my experience participating in this year's National Game Design Month (NaGaDeMon), but I was not the only one in my household to participate. My 12-year-old and 9-year-old sons each created their own projects as well. Following the Create-Play-Talk structure of NaGaDeMon, I am pleased to share their reflective essays as guest posts here on my blog.
Both of their games can be played online: #1 and #2.
Here is the reflective essay from #1 Son:
For NaGaDeMon, I made a dungeon exploration game with Construct 2. I used Construct because it is the platform I am most familiar with. Construct is a drag-and-drop system, and events are Construct’s way of structuring actions.
I am glad that I was able to add an avatar creation system where you click on arrows to cycle through different colors of clothes and hair. To do that I had to learn how to use the modulus operator to cycle through a list. Modulus is when you divide a number and take the remainder, so one divided by five has a remainder of one. The cool part about this is that when you divide six by five, the remainder is still one! You can use that to go through a list using only one click. The way I did it before was using a double-click for when you reached the last item in the list so that it wouldn’t loop around and skip a color.
I am quite happy with the outcome, although the free version of Construct that I am using has a 100-event limit. Originally, I had the player unlock a new weapon with each level, which worked well. The biggest problem I had was getting all of the enemies and bosses in, because the free version has no way to group all of the enemies into one class, so I had to make a different event for each weapon and enemy. Since that took me way over 100 events, I had to take that feature out and replace it with one weapon that got more powerful with each dungeon. I still wasn’t able to add everything that I wanted, but I was able to get all the different enemies in.
Here is the reflective essay from #2 Son:
I have always liked programming. My first programming system was Kodu. I made this in Construct 2 because in Construct 2 I can draw my own sprites.
I was inspired by my brother’s game about a wizard exploring dungeons. To make it I had to learn how to make more than one layout. I feel like I spent more time than necessary drawing, but with all the left over time I think it was worth it.
I think I am happy with how it went. I accomplished what I wanted, but what I wanted could have been put in a better game.
It was fun to have these guys participate in the fun of NaGaDeMon. I gave them a little coaching in how to write a reflective essay, showing them one of my favorite forms that I use with my college students: What went well? What did you learn? What would you do differently? What still puzzles you? I offered to let them read my reflection as well, which #2 Son did after writing a rough draft of his own. I sat individually with each of them to give them feedback on their handwritten drafts, and we reviewed them again after they typed them up. Changes were made both times, and I think the results are quite strong. With #1 Son, I encouraged him to expand why the modulus was important to him and how the 100-event technical limitation affected his process. With #2 Son, I encouraged him to expand on why he chose Construct and how to articulate a bit better what "I think I am happy with it" means.
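If you are curious what the modulus trick #1 Son describes looks like in conventional code, here is a minimal sketch of my own (plain C++ rather than his Construct 2 events) that cycles through a list of colors with the remainder operator, wrapping back to the start with no special case for the last item:
#include <cstdio>

int main()
{
    const char* hairColors[] = { "black", "brown", "red", "blue", "green" };
    const int count = sizeof(hairColors) / sizeof(hairColors[0]);

    int index = 0;
    // Each simulated "click" advances by one; the % operator wraps the
    // index back to 0 after the last color, just as he describes.
    for (int click = 1; click <= 8; ++click)
    {
        std::printf("Click %d -> %s\n", click, hairColors[index]);
        index = (index + 1) % count;
    }
    return 0;
}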
Thanks for reading! I'll be happy to relay any comments you have to the boys, whether they are about the games or the reflections.
Tuesday, November 26, 2019
KAPOW! NaGaDeMon 2019 Project Reflection
As I mentioned in my previous post, my 2019 National Game Design Month (NaGaDeMon) project is KAPOW! The Campy Superhero Role-Playing Game. Earlier today, I published the rules and all the sources on GitHub. The game is free to download, play, and modify, with all the resources provided under the CC BY-NC-SA 4.0 International license. NaGaDeMon has simple rules: Create, Play, and Talk. I created the game and was able to play two full sessions, and this post is crafted to satisfy the third rule.
Inspiration
In my formative years, I used to play a lot of Dungeons and Dragons as well as some Shadowrun. I also crafted and playtested some of my own systems, the notes and sites for which have long since disappeared. For the curious, the two that I actually got as far as testing included a time-traveling system in which you roll boatloads of dice and, on the opposite end of the spectrum, a completely diceless fantasy system. These days, I read many more rulebooks than I have a chance to play. The Clay That Woke and Apocalypse World are two amazing systems that I supported on Kickstarter and for which I have read the rules, but which I have never actually played.
One of my scholarly activities in Spring 2019 that I have not mentioned before is that I had two students in a CS499 Independent Study experience. We each were working on our own game design projects, mine being Race to the Moon. One of the students was Austin Tinkel—whom I will mention by name here because I want to link to his work—who developed a pulp action tabletop RPG called That Belongs in a Museum! He gave an excellent presentation about this at the 2019 Symposium on Games, where he talked about how it transitioned from an abstract idea about action momentum into an Indiana Jones-style adventure with PbtA mechanisms. (That is, it drew upon design ideas established in Apocalypse World.)
Working with Austin inspired me to re-read Apocalypse World and reinvigorated my interest in narrative-first tabletop role-playing. I don't know what made me connect this to 1960s Batman, but my family and I have been slowly working through the series since I got myself the box set for Christmas in 2017. We're on the final season, and in fact, we just watched the famous "Bat Shark Repellent" episode, which I have to say—as a fan of campy superhero stories—is one of the worst pieces of television I have ever seen. In any case, the idea tickled my fancy to try to combine campy superhero stories with a PbtA ruleset. This seemed like a good hook for trying to get my head into the PbtA space.
I sketched no more than a page of ideas in the intervening months until around September or October, when I started thinking about whether I should participate in NaGaDeMon again. I had some professional and personal obligations looming (including the aforementioned Symposium), so I decided ahead of time not to pursue a programming project: the odds of getting caught in a debugging death spiral were too high. My mind drifted back to the idea of marrying 1960s Batman sensibilities with ideas and systems from Apocalypse World. This seemed like the kind of thing that would fit into the time I could afford, and I'm glad I made this choice.
The Tech Stack
I started by writing my notes on paper, and then I transcribed them roughly into Markdown. However, I knew that I didn't want to compose the whole thing in Markdown because of its lack of expressiveness for structure. Consider the concrete case of defining the basic moves. At the time I was writing, I wasn't exactly sure what all the basic moves would be, but I knew that I didn't want to have to redundantly define them in a rulebook and in a handout. This would be a DRY violation—anathema to my personal idiom.
Choosing the best method to compose my rules required me to make a decision about how to disseminate the final product. On one hand, I considered a Web-first writing approach using lit-html. I have significant experience with lit-html since I have used it for my course sites such as this semester's CS315 class as well as some side projects such as my Call to Adventure scorecard. I knew that I could do some tricks with JSON-formatted data and html templates in order to separate, for example, the basic moves' definitions from their representation in the rulebook or in handouts. On the other hand, I considered using LaTeX for a print-first writing approach. This would let me easily generate PDFs and niceties like a table of contents and an index. However, I have never done anything like a "model-view separation" in order to keep my writing DRY in LaTeX; it has simply never come up.
I decided to go Web-first, which seemed like a very well-considered and contemporary choice. I wrote this way for really only a few hours before I ran into a significant problem: how exactly was I going to produce the kinds of handouts that players would want to have at the table? Playbooks are a standard technique in PbtA games, and lists of basic moves and narrator moves seemed like a good idea. I started tinkering with using different CSS styles for screen and print media, and very quickly realized that this was basically an instance of the "debugging death spiral" that I had wanted to avoid by making a tabletop game.
I turned instead to LaTeX and immediately started to consider how I could separate my basic moves' definition from their explanation in the rulebook and their summary on a handout. It dawned on me that I could do something akin to a concrete data structure like JSON by having different documents interpret environments in different ways. The narrator moves provide a good example. The project has a file narratormoves.tex which contains several narratormove definitions, like this:
\begin{narratormove}{Separate them}
{
Use this move to separate the members of the team or to separate
the team from their objective.
}
\end{narratormove}
Notice that the custom narratormove environment has two arguments, the first of which is the move's name and the second of which is the explanation. Then, in the rulebook, I can define the narratormove environment before importing the source file like this, which will print each move as an unnumbered subsection with its explanation:
\newenvironment{narratormove}[2]
{\subsection*{#1} #2}
{}
\input{narratormoves}
Meanwhile, in the Narrator handout, I can define it in such a way that it produces only the moves' names in an itemized list:
\newenvironment{narratormove}[2]
{\item #1}
{}
\begin{itemize}\itemsep0px
\input{narratormoves}
\end{itemize}
Not too shabby! I'm happy with this approach, which is used throughout the project. It reminds me that, somewhere in my courses, I should be showing my students this kind of thing to break them from their trained dependence on Microsoft Word—to show them how thinking like a Computer Scientist lets you solve complex problems in more interesting ways.
The Unexpected
There were a few unexpected twists in the design of this game. I wrote about several of them in my playtesting notes post, and the most important of these was really that I was coming in green to the PbtA scene. I had read Apocalypse World several times and recently also read Dungeon World, but the only exposure I had to playing a PbtA game was Austin's That Belongs in a Museum—and that was pretty early in testing. I have been playing ICRPG again with my boys using a traditional fantasy setting, but the kind of DungeonMastering advice that I listen to on YouTube is very much aligned with D&D-style systems, not PbtA. That said, I do feel like this project met my design goal of helping me put my head into the PbtA space a little better. Each time I tried to either talk through a scenario myself or run a session with playtesters, I got a little better at thinking within the balance of hero moves and Narrator moves.
The 1960s Batman stories, and my dad's 1960s CCA-approved superhero comics that I grew up reading, are almost always mysteries: the hero needs to figure out the who, what, why, and where of the problem before the Villain gets away with it. This stands in stark contrast to the character-driven harsh open world of Apocalypse World or high-school melodrama of Monsterhearts. Dungeon World is somewhere in between, in that it tries to take the traditional D&D approach and wrap it up in PbtA clothing. KAPOW ends up looking more like Dungeon World than Monsterhearts, where parts of the system really sing while other parts are strung loosely together.
The principles section of the rulebook was one of the last things that I wrote, and it represents my current thoughts about how to reconcile these different pieces. I challenged myself to think about the Narrator's ground rules: what would help them creatively tie together the Villain's scheme with the player's agency? I think I did a fair job of articulating these, but I think it's the sort of thing that can only really be tested by another gamesmaster trying to run the game from what I have articulated. Unfortunately, that was out of scope for this November, but I'm eager to hear from anyone who tries. After all, with a CC BY-NC-SA license and GitHub's collaboration tools, there's always an opportunity for improvement.
Conclusions
This was a great creative exercise. It allowed me to explore an idea that has been on my mind in a rigorous, timeboxed way, with the additional help of a supportive community. I think the resulting game rules provide a fun setting, and I legitimately enjoyed all of my playtesting sessions. If other people can also get some joy out of it, then that's all the better for it. In the meantime, I feel like I was able to strengthen my writing muscles while learning some new perspectives on design.
Given unbounded resources, I know of a few places where I would shore up the current design. Obviously, the rulebook and the handouts would be greatly enhanced with some genre-appropriate art and graphic design. A brilliant idea from Apocalypse World is providing sample answers to open-ended questions in the playbooks. For KAPOW, I would like to give players a list of choices for hero name, real name, appearance, occupation, and contacts, but this was both too onerous to implement and would have broken the one-page design of the character creation sheets, such as it is. Having such options would greatly speed up play, especially in my testing sessions: I was able to confirm that many players enjoyed the open-ended questions and creativity of making characters, but this was also fundamentally individual-creativity time rather than collective storytelling time. Finally, I think it would be helpful to new Narrators to have an appendix of sample villains and schemes. Many of the examples that are provided in the game come from the simple designs I put together for playtesting, but an easier on-ramp could be provided for new players by giving them something more clearly canned.
Thanks for reading! Feel free to post here if you have any questions, and feel free to share the link to KAPOW to any of your tabletop roleplaying or campy superhero friends.
Saturday, November 16, 2019
Playtesting notes for my NaGaDeMon 2019 project: KAPOW!
Last year was the first time I participated in National Game Design Month (NaGaDeMon). I used the month to chase down a game design idea that had been tickling my curiosity for a long time, and the result was Heroic Uncertainty. This year, I decided to participate again but with a very different mode of game design this time around. Since last year, I have been intrigued by the "Powered by the Apocalypse" movement in tabletop role-playing game design. I have also wondered whether this kind of narrative-forward game design would work well with something else my boys and I have been enjoying: the 1960s Batman television show. Hence, I'm glad to announce here—publicly for the first time—that my 2019 NaGaDeMon project is KAPOW: The Campy Superhero RPG. The project is on GitHub, but I have not made it public yet because I have not yet written the introductory fluff to establish exactly what the game is about: I don't want someone to trip across it and misunderstand the design space.
Yesterday, I was able to gather five volunteers for my first full playtesting session. My primary objective for the session was to test whether the systems were adequately supporting the "caper" tropes that come up in every Batman episode: a Villain sets up an unnecessarily complicated plot for fame or fortune, and the heroes need to unravel it to lead up to an epic showdown. Everyone had a great time, and I was overall pleased with how the story and systems worked together.
I know that for you, dear readers, the rest of this post will appear like I have my cart in front of my horse, because what I wanted to write up this morning are some of my concrete playtesting notes. I want to capture my main observations and provide a little bit of context while they are fresh in my head. It will also give me something concrete to reference when I write up my summary post at the end of November.
Without further ado, then, here are some notes and action items for myself.
- Each hero has a contact, but the articulation of how they know each other is not clear. Also, I did not include in the playbooks that the contacts had to be introduced, but during the introduction phase, the Narrator really needs those names. Perhaps one way to deal with this is for each playbook to also have a "secrets" sheet that is filled out and shared only with the Narrator.
- Having stock options for real names, occupations, and contact options would speed up character creation, especially for those who are new to or uncomfortable with tabletop RPGs. This may take a significant amount of my time to assemble, but could be worth it, especially for one-offs.
- A player pointed out that secret identity names should be alliterative. Yes, they should, unless they are both first names. This should go into the rules.
- The Enigma playbook could instruct them not to share their real name at all to make them a mystery to the players, or it could acknowledge that the camera knows their real name, and so the players do also, even though their characters do not.
- I definitely do need some kind of "undercover" rules for exposing secret identities, since this was one of the first disagreements the players had. This would keep with the Batman-style genre. An alternative is to let the players choose this in the Team intro: do they have secret identities (Batman) or are they simply known as who they are (Johnny Quest)? I need to be careful here that the number of rules does not snowball out of control, since this is a one-month design project!
- At the last minute, I changed Nimble to Focused, and this change worked well. However, the earlier change from Charming to Amazing didn't go so well. The players wondered why Amazing was used for influencing people and why it was not used for a heroic feat. "Amazing" is too metaphorical; I should return to something like "Charming" that implies social savvy. Hm, "Savvy" is a nice word.
- The Second Wind ability on the Tough may be redundant since he already has so much Endurance.
- At the end of the session, they pointed out that the Investigate move, although described as being for "situations or locations", is really more for locations. We discussed having a selection such as "What is out of the ordinary here?" or "What is an important clue?" They really wanted an option like "Where should we go next?" but the problem I had with this—which they understood when I explained—is that this could too easily remove the mystery of the villain's caper. I need to consider the balance here; there may be a role for Contacts.
- Most players did not use their Contacts, but one player used the Contact almost too much, to the point where the Contact was basically a member of the team. I should consider making the Contact option only usable once per session, or risk losing them as a contact.
- I forgot for half the game to tell them to mark Experience when failing a roll, so I probably need some kind of visible reminder on the character sheet about this.
- Similar to the above, I should add reminders on the character sheet about which attribute goes with which basic action, like Apocalypse World does, so that they don't have to scan the whole Basic Moves sheet so often.
- Also, I forgot to tell them about the new "use the scenery" and "onomatopoeia" options for brawling, which need to be on the basic moves overview sheet and not just in the rulebook. However, brawling was also one of the least interesting parts of the game, although this could have been because of the lack of those rules. (Also, we went over time, so the big battle at the end turned slightly perfunctory.)
- One of the impediments to the story was my own discomfort with PbtA-style gamesmastering. I had never done this before, so I was learning along with them. I struggled with the idea of what Apocalypse World calls "announcing off-screen badness." In fact, I think one of the reasons Apocalypse World caught my attention is that this concept is something I didn't understand when I DMed regularly (decades ago) and still struggle with occasionally in my handful of gamesmastering opportunities per year. One of my players was very familiar with PbtA, and he encouraged thinking of it like a movie, quickly describing a scene for the players that their characters know nothing about, but that moves the action forward, such as "A car door slams and the car speeds away from the hotel." I do not know if I need to be more prescriptive in the rules about this, practice it myself, or both.
And now, hopefully, a quiet Saturday with lots of work on the rulebook.
Monday, November 11, 2019
Department Vision and the Reflecting, Brainstorming and Imagining Worksheet
My department recently concluded a self-study that included a visit from two external evaluators. We are now beginning discussions about what to do in light of our self-study and the evaluators' recommendations. This comes at a time of organizational change within the university, since we have relatively new administration, a new strategic plan, and a new financial model.
In preparation for an upcoming departmental meeting, the department chair emailed the faculty a "Reflecting, Brainstorming, Imagining Worksheet" that was provided by the office in charge of institutional assessment. I took some time to write up answers to the five questions on the worksheet, and I am sharing them below. I am not sure that these are my best answers, but I've used up the timebox that I gave this exercise. I'm happy to take any feedback or questions about the responses, as I expect to refine them later.
1. What is the purpose of Computer Science to you? To you, what does Computer Science endeavor to accomplish?
Computer Science is the study of the social and technical processes around computing systems, including their inception, development, and maintenance. It is a "science" in the sense of developing falsifiable theories that are supported by principles and rigor. It is distinct from information technology and information systems, which applies extant systems to problem domains, and computer engineering, which seeks to more efficiently or effectively manufacture hardware. "Software engineering" is a common application of Computer Science.
Computer Science drives us to a better understanding of computing systems which, in turn, leads to the development of improvements in those systems.
2. What is the purpose of BSU Computer Science? What is at the core of our work?
The purpose of the department is, primarily, to educate the next generation of practicing computer scientists. This implies a focus on contemporary methods of software development, as incorporated into the broader goals of a liberal education.
At the core of our work is a shared desire for our students to live a good life: to be successful, to be productive, to contribute to their communities, and to reflect on what is good and beautiful.
3. What are three strengths of the BSU Department of Computer Science?
- Small class sizes allow students to work closely with faculty.
- Students have significant opportunities for high-impact educational experiences, including the capstone, immersive learning, community-engaged projects, and research.
- Faculty are engaged with research that informs and strengthens the courses they teach.
4. What are three weaknesses of the BSU Department of Computer Science?
- Little shared vision about program outcomes and, hence, course and curriculum design.
- Little sense of community among the faculty, students, and alumni.
- No clear communication channels to reach students or alumni.
5. When students (undergraduate and graduate) graduate from our department, what do we want them to know? What do we want students to be able to do? What do we want students to value?
We want them to know:
- Fundamental concepts of programming, including: sequencing, selection, and iteration; data structures such as lists and hash tables, with an intuition for their implications on performance (see the short sketch after this list); integration of systems such as clients, networks, and databases
- How to work effectively on a team, including: articulating measurable goals; giving status reports; and suggesting improvements.
We want them to be able to:
- Think critically
- Act respectfully
- Ask clear questions
- Engage in reflective practice
- Learn to use new programming languages, computing systems, or APIs
- Work on a team to design, develop, improve, or maintain computing systems.
We want them to value:
- The responsibility they have to their team, their employer, and their community
- The dignity of the individual
- Lifetime learning
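As a small aside on the performance intuition mentioned in the first "know" bullet above, here is a minimal C++ sketch contrasting a linear scan of a std::vector with a hash lookup in a std::unordered_map. It is only an illustration, not part of the worksheet; the container size and the crude timing approach are arbitrary choices of mine.

```cpp
#include <algorithm>
#include <chrono>
#include <iostream>
#include <unordered_map>
#include <vector>

int main() {
    // Illustrative only: the size is an arbitrary choice for demonstration.
    const int count = 1000000;

    // Build a list and a hash table holding the same data.
    std::vector<int> list;
    std::unordered_map<int, int> table;
    for (int i = 0; i < count; ++i) {
        list.push_back(i);
        table[i] = i;
    }

    const int target = count - 1;  // worst case for the linear scan

    // O(n) scan through the vector.
    auto start = std::chrono::steady_clock::now();
    const bool foundInList = std::find(list.begin(), list.end(), target) != list.end();
    const auto listTime = std::chrono::steady_clock::now() - start;

    // Expected O(1) lookup in the hash table.
    start = std::chrono::steady_clock::now();
    const bool foundInTable = table.find(target) != table.end();
    const auto tableTime = std::chrono::steady_clock::now() - start;

    std::cout << std::boolalpha
              << "vector scan found " << foundInList << " in "
              << std::chrono::duration_cast<std::chrono::microseconds>(listTime).count() << " us\n"
              << "hash table lookup found " << foundInTable << " in "
              << std::chrono::duration_cast<std::chrono::microseconds>(tableTime).count() << " us\n";
    return 0;
}
```

Finding the last element of the list means visiting every element, while the expected cost of the same lookup in the hash table does not grow with the number of entries; that is the kind of intuition I want graduates to carry with them.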
Wednesday, October 30, 2019
Lenses in game design and curriculum design
Yesterday, my game design students completed an assignment that had three optional paths. One path involved reading Richard Bartle's excellent summary of the Hero's Journey (1, 2), which includes not just an overview of the concept but also helpful pointers about what students often get wrong. During the class meeting yesterday, a student gave a masterful presentation showing how The Last of Us maps to the Hero's Journey. Before his presentation, I gave a short lecture on what I've learned about lenses. My goal was to get out in front of issues around the masculine and feminine roles in the Hero's Journey, and I think the students got the point.
One of the examples I like to give in this kind of discussion is the ludology vs. narratology wars in game design, which were dying down right around the time I joined the games scholarship community. As I understand it, scholars like Henry Jenkins applied a literary analysis lens to games and, from there, concluded that games are stories. My criticism is that if you hold up any lens, the thing you look at looks like the lens. Using the lens of literary analysis to look at games will always make them look like stories, in the same way that using a Marxist lens to analyze games makes them look like class struggle, or a systems analysis lens makes them look like systems. I made a little joke here and pointed out that we need lenses—if I take mine off, the students turn blurry—but we have to recognize their strengths and weaknesses.
Knowing that my class is mostly Computer Science students, I pointed out that Computer Science curricula still suffer from the fact that many of the discipline's founders were mathematicians. They looked at this new idea through the lens of mathematics and determined that, of course, Computer Science is basically mathematics, and that mathematics is the way to understand this thing we now call "Computer Science." Why, I asked, do we require calculus—which we almost never use in practice—and not philosophy or psychology, which we use multiple times a day?
In truth, I meant it as more of a good-natured jab, but that observation has been haunting me for the last 24 hours. The lens of mathematics has undoubtedly done good things for the discipline, as lenses can often do, but it also makes the subject look like the lens. My own department is in the "natural sciences" division of the College of Sciences and Humanities, and that forces the administration to look at us through a particular lens as well.
The old joke goes like this: ask five Computer Scientists to define "Computer Science" and you get seven different answers. I see the rise of interest in both computer science and programming for K-12 education, but there's also infighting between the two camps. I heard at a conference the other day, "Logic is the science of Computer Science!" Meanwhile, adults who go to coding bootcamps are taking the jobs that my graduates would otherwise go for: why hire someone green and immature when you can get someone hungry for a challenge and capable of "adulting"? Why, when I try to push the old lens out of the way, do my colleagues desperately reach out and pull it back like a comfortable blanket on a cold day?
Tuesday, October 15, 2019
The Fantasy Item Shop Trope
Two games I have recently been enjoying are Bargain Quest and Lord of the Rings: Journeys in Middle Earth (JiME). Juxtaposing these in a semester when I'm teaching game design got me thinking about the role of tropes in game design. In particular, I've been thinking about the ubiquity of the item shop.
Bargain Quest goes all in on the trope of the fantasy item shop. In this game, you are the proprietor of such a shop. You cleverly play your item cards to lure fantasy heroes into your shop so that you can sell them gear. Then, the heroes go off on their adventure. You earn victory points if they are successful, which raises the reputation of the shop that outfitted them. On the other hand, you also earn victory points by making them spend as much money as they can—even if you're selling them overpriced garbage. Once you've lured them to your shop, you can make them buy as much as you like. This clever spin on the item shop is infused throughout the design: Bargain Quest is entirely about the idea that you run a fantasy item shop, and this idea is executed superbly.
JiME follows in the design steps of Descent and Star Wars: Imperial Assault, all being tabletop adventure games that have app-supported cooperative campaigns. I have played both Descent and Imperial Assault, and they have a familiar rhythm: play a scenario, gain treasure, spend treasure at an item shop, repeat until victorious. This pattern is ubiquitous in CRPGs and has a strong presence in popular tabletop RPGs. The system is implied by the oldest Dungeons & Dragons rulebooks that I have seen: given that killing monsters gets you gold and that items in the shop have a gold cost, the feedback loop practically designs itself. The item shop is merely the narrative mechanism by which gold becomes power.
Last night, two of my sons and I finished the fourth scenario in JiME. One of them, being familiar with both Descent and Imperial Assault, said, "I wonder when will we get to buy new items." I was the one who had read the rules, set up the game, and been responsible for understanding the basics, so I had already noticed in passing that items did not have gold costs, but I had not pointed this out to my boys. When he mentioned his expectation of an item shop, I pointed out to him that there must not be one because the game had no currency and the items had no costs. This got me thinking about it more critically. Why isn't there an item shop?
It is worth returning to the source material—the inspiration for the game, Lord of the Rings. Tolkien's stories of Middle Earth don't mention item shops. There must certainly have been places where Bilbo could buy a new waistcoat, but that's hardly the stuff of legend. When a character in the novels does gain an item, it's crucial to the story, whether it's cram or lembas, Sting or the One Ring itself. Gaining items is an interesting part of the story; one is never so crude as to try to purchase an elven cloak. While Dungeons & Dragons drew clear inspiration from Tolkien's world, in some ways it threw away his storytelling in favor of maintaining the quantified spirit of tabletop wargaming: a mithril coat could not be part of the game without having a weight, an armor class, and a cost. That's practical and simulationist, but it's not particularly Tolkienian.
Kudos to the designers of JiME then, for eschewing the trope in favor of designing within the source material. They were not only fighting the trope but also inertia, given that the same company produced the Descent and Imperial Assault systems. I would have liked to be in the design meetings when these decisions were made. Was it more a desire to be true to the source material or a dissatisfaction with their existing systems that led to this interesting design decision?
(By the way, here are links to my painting posts for games mentioned above, in case you want to check them out: Descent 1, 2, 3, 4; Imperial Assault 1, 2, 3; JiME 1.)
Monday, October 7, 2019
Reflecting on my activities at CCSC:MW 2019
This past weekend, I attended the annual conference of the Consortium for Computing Sciences in Colleges, Midwest. I have been involved with this organization for many years, and I wanted to take a moment to share an overview of what I did at this year's conference.
Student Showcase
I am in charge of the Student Showcase. Many years ago, it was a Student Poster Competition, but it felt to me that this format gave inordinate status to the scholarship of discovery. When I took it over, I revised the format to have two tracks inspired by Boyer's Scholarship model: traditional "research" goes into the Discovery track, and interesting applications of computing go into the Applications track. This year, we had six Applications track presentations and three Discovery track presentations.
There are practically always Ball State students in the Showcase, so rather than judge the event myself, I organize volunteer judges. The students are told ahead of time that they will be judged on the same six categories recommended by Glassick et al.; specifically, they are told they will be evaluated on the following:
- Clear Goals (“What is the goal of this work? What problem are you solving?”)
- Adequate Preparation (“How did you get ready to do this work?”)
- Appropriate Methods (“How did you solve your problem? Why did you approach the problem in this way?”)
- Significant Results (“What was the result of this work? Who is affected by this work?”)
- Effective Presentation (“How well does this poster or demonstration communicate what is important about this work?”)
- Reflective Critique (“What would you do differently? What does this mean for you and your career?”)
To determine the winners, I simply take the medians across the six categories and sum them. This led to a clear winner in both the Discovery and Applications tracks as well as a clear Honorable Mention (the third-highest overall score), so those are the prizes we awarded. I am proud that my own Canning Heroes team presented and won the Applications track award this year.
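For the curious, here is a minimal sketch of that scoring scheme in C++. It is only an illustration, not the spreadsheet I actually use: the function names, the data layout, and the sample ratings are my own invention, and it assumes every judge rates every category on the same numeric scale.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// Hypothetical data layout: one judge's ratings, one score per category.
using Ratings = std::vector<double>;

// Median of one category's scores across all judges.
double median(std::vector<double> scores) {
    std::sort(scores.begin(), scores.end());
    const size_t n = scores.size();
    return (n % 2 == 1) ? scores[n / 2]
                        : (scores[n / 2 - 1] + scores[n / 2]) / 2.0;
}

// Sum of per-category medians: the overall score used to rank an entry.
double overallScore(const std::vector<Ratings>& judges, size_t numCategories) {
    double total = 0.0;
    for (size_t c = 0; c < numCategories; ++c) {
        std::vector<double> categoryScores;
        for (const Ratings& judge : judges) {
            categoryScores.push_back(judge[c]);
        }
        total += median(categoryScores);
    }
    return total;
}

int main() {
    // Three hypothetical judges rating one entry on the six categories.
    std::vector<Ratings> judges = {
        {4, 3, 5, 4, 4, 3},
        {5, 4, 4, 3, 5, 4},
        {4, 4, 5, 4, 4, 4},
    };
    std::cout << "Overall score: " << overallScore(judges, 6) << "\n";
    return 0;
}
```

One nice property of summing per-category medians is that a single outlier judge cannot swing any one category very far.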
Tutorial: Unreal Engine 4 for Computer Scientists
Since getting into Unreal Engine 4 a few years ago, I have noticed many interesting manifestations of Computer Science concepts. I decided to run a tutorial session this year at CCSC:MW to show these to the attendees. However, it did not go as well as I had hoped. I did not have machines that could run UE4, nor could I expect attendees to have adequate laptops, so I designed the tutorial as a sort of show-and-tell. Then I was scheduled for 8:30 AM on Saturday, which meant that attendees would be tired and very few students would be there. The room we were given was really awkward: it was a lab, which meant everybody was behind monitors where I could not make eye contact. To make it worse, there was no station where I could stand and work at my laptop, so I was seated and, because of HDMI cabling, facing away from the attendees. To top it all off, I was traveling with family, and my son got sick that morning at the hotel, so I was distracted and unfocused.
Suffice it to say, I would not be surprised if the session were poorly reviewed. Heck, I would review it poorly myself. I would like to do something like this again, but in a more controlled environment. Perhaps I will move forward with writing up an actual paper about some of the interesting manifestations, and then be able to give a shorter show-and-tell in a future year.
On a positive note, preparing for the tutorial gave me several new ideas for video tutorials. In fact, I could take my tutorial outline, chop it up, and have a pretty good series. Now, it's a matter of determining which ideas have sufficient weight to merit the time required to do the video. (For those who don't know, I have a YouTube playlist of game programming tutorial videos. In fact, I have written up this blog post while rendering my latest video in Blender.)
WIP: Mapping Game Design Learning Outcomes to CS2013
I presented in the Works-in-Progress session what I have shared here on my blog about mapping game design learning outcomes to the ACM/IEEE CS Body of Knowledge. I think it was well received. I have made the slides available online for anyone who wishes to see them, but as usual, my slides do not make a lot of sense without the stories to go with them.
I think the audience assembled for the WIP session was happy to hear my story, and they seemed to understand my frustration. I think they appreciated seeing how one becomes more critical of CS2013 as one digs deeper into the recommendations. One attendee mentioned that his institution had given up on CS2013 and simply used instructor consensus. Ball State is the largest school in the region that regularly participates in CCSC:MW, though, and most of the attendees were from very small departments at private liberal arts schools.