Friday, February 15, 2019

Counting plays with a specific player on Board Game Geek

I started logging all my board game plays on BoardGameGeek in January 2016. In writing my end-of-year game reports (2016, 2017, 2018), I have wondered how many games I played with specific players during the year, but this proved to be a difficult number to find. The unofficial BoardGameGeek app that I use tells me how many games I have played with players in total, but this cannot be filtered by date. (Incidentally, I just discovered, when attempting to link to the app, that it is currently missing from the Google Play Store but that the author is working on getting it updated.) I asked a few friends over the years, but nobody knew of a way to get the data I wanted back out of BoardGameGeek. However, I am working on a scholarly piece for which this is important information, and so I decided it was worth some effort to figure it out.

BoardGameGeek provides an XML API for programmatic access to its content, including play data. Let's say for the sake of example that there is a user named "sample", and I want to get all their plays between January 1, 2016 and December 31, 2018. This query will provide a good start:

  https://boardgamegeek.com/xmlapi2/plays?username=sample&mindate=2016-01-01&maxdate=2018-12-31

The results are limited to 100 per page, but there is no way to ask how many entries exist. Hence, to get all the data, one has to go page by page until the page is empty. Accessing a particular page of data is a simple matter of adding a page=N parameter to the end of the query.

I'm not interested in all plays, though; I am only interested in plays with a particular player. In my case, I am looking for matches by a given name and not a BoardGameGeek username, since this is how players are most easily added through my app. If I wanted to get all the plays with, say, Norm, then I need to dig into the XML and only count those where there is a "player" element with the value "Norm". For this, I can use XPath via xmllint.
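To make that XPath concrete, here is a minimal sketch against a hand-written XML fragment shaped like the API's plays response (the fragment is my own stand-in, not real BoardGameGeek output):

  # Count plays that include a player named "Norm".
  # The XML is a hand-written stand-in shaped like the BGG plays response.
  xml='<plays>
    <play id="1"><players><player name="Norm"/><player name="Sample"/></players></play>
    <play id="2"><players><player name="Pat"/></players></play>
  </plays>'
  xmllint --xpath 'count(/plays/play/players/player[@name="Norm"])' - <<< "$xml"
  # Prints 1: only the first play includes Norm.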

After refreshing myself on some Bash fundamentals and finding this beautiful, idiolectic way of creating a do loop, I ended up with this script, which counts plays between BoardGameGeek fake user "sample" and his imaginary friend "Norm" from January 1, 2016 through December 31, 2018.
  #!/bin/bash
  # Count plays with a player named "Norm" for BGG user "sample",
  # January 1, 2016 through December 31, 2018.
  COUNT=0
  PAGE=1
  # The while/do/done arrangement below behaves like a do-while loop:
  # the last command in the condition list decides whether to loop again.
  while
    echo "Processing page $PAGE"
    xml=$(curl -s "https://boardgamegeek.com/xmlapi2/plays?username=sample&mindate=2016-01-01&maxdate=2018-12-31&page=$PAGE")
    plays=$(xmllint --xpath 'count(/plays/play)' - <<< "$xml")
    pagecount=$(xmllint --xpath 'count(/plays/play/players/player[@name="Norm"])' - <<< "$xml")
    COUNT=$(($COUNT + $pagecount))
    echo "Page $PAGE result is $pagecount out of $plays, so total is $COUNT"
    PAGE=$(($PAGE + 1))
    # Stop once we fetch an empty page
    [ "$plays" -gt 0 ]
  do
    :
  done

I learned a few interesting things writing this script and sharing it with friends. Script-wizard Ben Dean helped me revise the script so that it would not generate temporary files on the filesystem, and in pursuing this goal, I learned about bash herestrings. I had also never scripted with XPath via xmllint before, having only used XPath within larger applications. This script got me what I needed to know: I logged 937 plays with my eldest son across the three full years I have been keeping track, not counting the plays since January 1 of this year.
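In case herestrings are new to you as well: a herestring (<<<) feeds a string directly to a command's standard input, which is what lets the script above skip the filesystem entirely. A minimal illustration:

  xml='<plays><play/><play/></plays>'
  # Without a herestring, you might round-trip through a temporary file:
  #   echo "$xml" > /tmp/plays.xml
  #   xmllint --xpath 'count(/plays/play)' /tmp/plays.xml
  # With a herestring, the variable goes straight to stdin:
  xmllint --xpath 'count(/plays/play)' - <<< "$xml"   # prints 2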

Hopefully this script will be useful to you. If nothing else, it will be useful to me next time I need to ask this question!

Tuesday, January 29, 2019

Global Game Jam 2019 and Kaiju Homecoming

This past weekend was Global Game Jam 2019, and I was the site organizer for Ball State's location. It was my third time serving as site organizer here. I want to capture a few thoughts about the weekend's events here before they get too far away.

We had 21 people register, almost all of whom showed up. There were several people who came on Friday but then didn't return, unfortunately. I have seen this each year, and I think it tends to be people who are interested novices. They don't know how to move from idea to implementation, and they are not familiar with the little failures that are to be expected along the way. I take a laissez-faire approach to site organization: I don't manage teams or themes, but instead I encourage people to use the whiteboards around the room to gather interested people. I've thought about whether I should include more didactic interventions, but the truth is that teaching is already my job: the jam is about jamming after all. I would never kick an amateur musician out of a jam session if they wanted to participate, but I would also not be surprised if they left. Perhaps it sounds a bit harsh when I write it out, which makes me think I need to work on better packaging for my pragmatic philosophy.

This relates to something I want to share about this year's keynote. I had a sense that none of the four speakers were actually talking to my audience: mostly novice jammers. The thoughts they shared might resonate with people who already know what they are doing, but that's not helpful if you have no grounding. Let me pick on one particular example. Rami Ismail was the most on-point of the four speakers, but he made a claim in his presentation that you cannot do a game jam wrong. I firmly disagree: there are many, many ways to do things wrong. Here are several: being an unpleasant team member, for example by refusing to compromise on your ideas or by not keeping your commitments; focusing on accidentals such as title and credits screens rather than the essence of the game; taking too long to get to a minimally playable state; thinking that ideas have value rather than implementations; not considering packaging or deployment until just before the deadline. These are the kinds of mistakes that I have seen jammers make and, more to my point above, that I see novices make in my game design and game programming classes. In fact, I think it's much easier to do it wrong than to do it right—regardless of the quality of the final product. I think a rhetoric that says you cannot do things wrong sets novices up for mistakes and frustrations.

At the end of the jam, we had four playable projects. There were two more that were "done" but not uploaded: one because the jammer disregarded my and others' advice to stop trying to add features and instead to figure out how to package and upload his work, and the other because its author could not make it on Sunday and didn't get around to uploading his game.

Last year, my oldest son came and participated in the jam, making two games of his own in the time it took me to break one. This year, I encouraged him to come again, but I told him that I really wanted us to work together on something. He actually started working on his own anyway right after the theme announcement, but once I reminded him that I really wanted to collaborate, he was up for it.

We struggled to turn the theme ("What home means to you") into a game idea. One of our best sketches was of a game where the family stands around the counter, waiting for Mom to look away, reaching out and grabbing tidbits to eat before the siblings can. I imagined we might even use a polka soundtrack and be able to name it after the Shmenge hit, "Who stole the cabbage roll?" As we kept talking, though, we somehow pulled inspiration from Terror in Meeple City and the idea that kaiju have a home as well... and there are people in it, and we don't want those people there. This was the idea that became Kaiju Homecoming.

We built a minimally playable version in Unreal Engine just to make sure the pieces would fit, just dropping a plane onto some cylindrical columns and throwing a ball at it. We laughed from the get-go. I asked my friend Emma if she could make up some digital meeples, and so she and her friend Jessica provided these models. The next day, my son worked on some more original art for the floors and he did all the level design. We had to tweak the level a few times because we couldn't get the floors to stack nicely on the columns in the UE4 editor. Playing the game, you can see the physics solver create wobbles and even occasional collapses as a result. There are probably snapping or simulation sleeping settings that we could use to fix that, but I am really happy with the look and feel, given our constraints. My son was also in charge of all the audio direction except for the soundtrack: I found the music on Kevin MacLeod's site, and he reviewed sound effects using the free weekend access to SoundSnap. We had the game mostly finished on Saturday, and because of other family obligations, we only had a few hours to work on Sunday, right before the deadline. We got as much polish in as we could, and the game was complete.

The game runs best as a Windows 64-bit executable, and you can download the project for Windows from the Global Game Jam site. We also produced a Web build that I've hosted on GitHub: it's more accessible, but lacks the performance of the native version. All the source code and assets are in a repository on GitHub as well, although we actually used Perforce Helix for version control during development. Finally, for those who want the quick overview, I recorded this short gameplay video:


It was a fun weekend jamming with my son and seeing what the other jammers put together. I've started some conversations that might lead us to a different location for the 2020 jam, but that's too far off to plan for yet. Now, I need to return to all the tasks that I put in the "I'll take care of this after Global Game Jam" bin.

Tuesday, January 15, 2019

What do you make?

I've dabbled with different approaches to having students introduce themselves. In my game design classes, I've generally asked students to share their last great play experiences. This allows us to share some lighthearted stories and start discussing the relationship between "play" and "game." In my HCI class the last two semesters, I've asked students what they want to learn. Last semester's experience with this was not very interesting, since most students said some variation on "how to design interfaces." It's not a bad answer, but it's also not surprising.

This semester, I decided to take a hint from my good friend Easel Monster, as explained in the first minute of this video:
Easel recommends that when you meet someone, you should ask what they make. I did this in both of my regular courses this semester. In the game studio course, which includes multidisciplinary undergraduate teams, a lot of students gave games-related answers: video games, stories, fictional settings for tabletop roleplaying games, music. This was about what I expected since I recruited these students specifically to produce a video game: of course they would be makers at heart.

My other class is Human-Computer Interaction, an elective for Computer Science majors and minors. I have been out of teaching the prerequisite course (CS222) for several semesters. Where I used to recognize half or more of the students coming into my upper-level elective courses, this time I only knew a handful. That means only a handful knew me as well, although I do wonder if they thought they knew me through my reputation. In any case, on the first day, I took my customary mugshots, having each student hold up a sheet of paper with their name written on it. Having these photos makes it much easier for me to learn which names and faces go together. As they stood in front of the room, I asked them to give their name, where they are from (as broadly as they wish to answer), and to answer the question, "What do you make?" Some of them answered that they made Web pages or software, one in particular referencing software made at his job. Others said they made stories or, again, fictional worlds for tabletop roleplaying. Three specific answers jumped out at me as being especially interesting. One student answered that he makes sandwiches. That's a great thing to make! Someone in the class asked if he made special sandwiches, and he said he made a mean PB&J. Another answered that he made friends. You could practically hear the smiles break out among his classmates—is there anything better to make? Finally, one student said, "I don't know what I make, but I'm trying to figure that out." There's a curious one. On one hand, I say he's in the right place, so higher education can help him sort it all out. On the other hand, I wonder if it should be an entry exam to ask, "What do you make?" to help students think about it.

Thanks, Easel Monster!

Monday, December 31, 2018

The Games of 2018

With just a few hours left in 2018, I am going to go ahead and write up my "Games of 2018" post. Should anything change before midnight tonight, I'll quietly come in and edit the details. This is the third post in this series, the other two being 2016 and 2017.

Let's start with the numbers for the year. In 2018, I played 103 different games across a total of 548 plays. That's significantly more plays than last year (505) over about the same number of games (104). While my scholarly h-index barely crept up from 12 to 13, my games h-index rocketed from 15 to 20. My h-index for the year was 11, and here are the 11 games that I have played 11 or more times in 2018 (a quick sketch for computing an h-index from play counts follows the list):

  • Gloomhaven (55)
  • Stuffed Fables (23)
  • Thunderstone Quest (22)
  • Go Nuts for Donuts (20)
  • Rhino Hero Super Battle (19)
  • Bärenpark (16)
  • Camel Up (16)
  • Champions of Midgard (12)
  • Carcassonne (11)
  • Clank! (11)
  • Rising Sun (11)
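
As an aside, computing an h-index from a list of play counts is easy enough to script. Here is a minimal sketch, assuming a hypothetical file plays.txt with one play count per line (this is not part of my original workflow, just an illustration):

  # h-index: the largest h such that h games have at least h plays each.
  # Sort the counts descending; the game at rank NR qualifies if its count >= NR.
  sort -rn plays.txt | awk 'BEGIN { h = 0 } $1 >= NR { h = NR } END { print h }'

Feeding it the counts above (55, 23, 22, 20, 19, 16, 16, 12, 11, 11, 11) prints 11.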

Summer 2018 was the Summer of Gloomhaven. My oldest son and I had a great time playing this BoardGameGeek chart-topper. I shared my painted base characters back in March, and I have also painted almost all the rest of the characters—all those we have unlocked. I'll make a spoilerful post about those once they are complete. The 55 plays are not all full-length games: some were short defeats. I remember one of the earlier scenarios was a terrible match for our characters, and we had several false starts on it. The last time we played was over the Summer break, and we haven't had the game to the table since. I was hoping we might finish the campaign over the Winter Break, but other activities have taken precedence; I have favored games that include more people rather than just the two of us playing Gloomhaven.

Stuffed Fables was a family Christmas gift in 2017. I shared my painted minis in June, and this is one I have played with my three older boys. The third son is particularly tickled to be involved in this kind of "big kids' game," I think. We have really enjoyed it, and several times the writing has made us laugh aloud. We have one story left to go before finishing the book. Tracking plays of Stuffed Fables is actually a bit of a tricky business. I decided to track each page as a play, following the idea that each attempt at a Gloomhaven scenario counts as a play. One might argue that I should have counted "sessions" of Stuffed Fables. Yet, only once has one of our sessions been an entire story; usually we do a few pages and then store our stuff in baggies for another day.

Thunderstone Quest was a recent arrival via their second Kickstarter, as I described in my post about the painted minis. We have enjoyed this immensely, and I have taught it to a few friends. It and Clank! are easily my two favorite games in this genre, which combines Dominion-style deckbuilding with spatial puzzles. The number of plays of this game and of Champions of Midgard corresponds to my second son's rising into the next tier of complexity; these are joined by Runebound, which he only recently learned but which we have played a lot these past weeks. He still struggles with some of the more complex interactions—and even more so with sitting still for longer games—but overall I've been surprised with how well he manages the games. Like any 8-year-old, he will sometimes get tunnel vision on a plan and not roll with the punches, but I think this is something that games will help him learn to do better.

Rhino Hero Super Battle, Carcassonne, and Camel Up are joined by Go Nuts for Donuts as games that anyone in the family can play, and so most of my plays of these are with the younger two boys.

The notable thing about Rising Sun getting to the table eleven times is that many of those were with friends rather than family. Almost all my gaming is with my family, and I love playing games with them. Having them here probably makes me a bit lazy about reaching out to my friends to have them over. However, I had a small group of friends who really caught on to Rising Sun and came over for several game nights this summer. I feel really good about that and need to keep that up.

A few notable games of 2018 did not make the cut into the top 11. I bought Charterstone as a family Christmas gift, and my two older sons, my wife, and I are four games into the campaign. We have been enjoying that, and I look forward to seeing where the game goes next. My third son received Ticket to Ride: First Journey last year for Christmas and we played that quite a bit. A few weeks ago, he graduated to Ticket to Ride, which we played five times together, before I taught him Ticket to Ride: Europe, which I think is the far superior game. We have now played Europe five times as well, and he asks pretty much every day to play it again. The kid loves trains.

Looking at the list of games I only played once this past year, it makes me wonder if I should be even more aggressive about getting rid of games. I remember my wife sharing a story with me about a collector who got rid of all but ten games, and that he was happier with the ten he really loved than the many he rarely played. My brother also recently tried a thought experiment of what games library he would build with just $250. To me, the answer is clear: two copies of Mage Knight: The Board Game, Ultimate Edition.

Each year, I've written a little about tabletop RPGs in this post, and once again I was able to do a little bit of RPG gaming, but not much. I ran two sessions of Index Card RPG during the year. One was a game with my three older boys, themed around the "Magic Sword" fantasy realm that my second son spent many years imagining. I actually haven't heard him say much about it, even in the months leading up to our summertime session, but for a long time all of his imaginary play revolved around a world of knights, dark magic, and dragons. My third son particularly enjoyed the session, I think, and he regularly asks to play Magic Sword, but I haven't made the time to spin up new adventures for them. The other session was a challenging design for a big family vacation. With some help from the ICRPG community, I designed an adventure that would scale to a variable number of players. As it turned out, only my two older boys and their older cousins were interested, so I had a manageable table of four. I think the session was a great success in many ways, and it was a good way to spend some time with my niece and nephew.

As has been my custom, let me wrap up by looking at the games that comprise my overall h-index of 20:

  • Gloomhaven (55)
  • Animal Upon Animal (54)
  • Crokinole (47)
  • Carcassonne (36)
  • Camel Up (35)
  • Rhino Hero: Super Battle (35)
  • Labyrinth (31)
  • Clank! (29)
  • Terror in Meeple City (29)
  • Runebound (Third Edition) (28)
  • Dumpster Diver (23)
  • Race for the Galaxy (23)
  • Red7 (23)
  • Reiner Knizia's Amazing Flea Circus (23)
  • Stuffed Fables (23)
  • Thunderstone Quest (22)
  • 4 First Games (21)
  • Flash Duel (21)
  • Go Nuts for Donuts (20)
  • Samurai Spirit (20)

This was the year that Gloomhaven overtook Animal Upon Animal. I feel like that has to be a milestone in the growth of my family, that a heavy fantasy strategy game overtakes a light HABA game.

Thanks for reading. I hope 2018 was a good year of gaming for you as well. Here's to a happy and playful 2019!

My Notes on "Make It Stick: The Science of Successful Learning"

Several weeks ago, I finished reading Make It Stick by Brown, Roediger, and McDaniel. It was recommended to me by a good friend with a heart for improving education. The book aims to explain what we know about learning from cognitive science and how this can impact the practices of teaching and learning. I found the book to be inspirational, and I mentioned the book in several recent essays and presentations. I happened to meet local cognitive science and student motivation expert Serena Shim this semester, and she affirmed the findings and value of the text as well.

One of the most important findings that came up throughout the book is that spaced practice is better than massed practice. I think we all recognize this as true: of course studying throughout the semester is more effective than cramming. However, the science is more nuanced. Massed practice is actually better for short-term recall than spaced practice, but spaced practice is better for long-term recall. This has a fascinating corollary: if our courses contain high-stakes tests, then it is a good tactical decision for students to cram.

This implies to me then that we instructors have to make a real choice between "I want students to pass this test" and "I want students to remember this a year from now." I have a rule of thumb that I have only recently had to articulate, which is that I only want to teach content that I think students should know in five years. My general pedagogic approach favors spaced practice, but perhaps I can do more to support this. However, recent conversations made me realize that this perspective is not universal. I was involved in a somewhat heated discussion about a master syllabus revision with a colleague. The particular syllabus had, in my opinion, too many low-level learning outcomes. I argued that students don't learn these items, and he argued that they do. As evidence, I cited that they could not repeat their achievements a year after taking the class; as evidence, he cited that they passed the final exam. Here are the loggerheads of higher education: we both believed the other to not just be wrong, but to be holding the wrong value system.

I was reminded by the text of the value of testing as retrieval practice. I had read this before but tried to dismiss it; however, the presentation by Brown et al. makes it hard to ignore. Learning improves through retrieval practice, and testing is perhaps the simplest way to practice retrieval. I mostly gave up on using tests many years ago, favoring instead continuous authentic work. However, I also see my students not remembering to apply fundamental lessons early in the semester into their work later in the semester. I need to review my use of quizzes and tests, as well as how I prepare students to do their own self-testing.

Another theme of the book that knocked my proverbial socks off was that immediate feedback is not always better than delayed feedback. I think that in the educational games community, it is taken for granted that feedback is simply good, and that quicker feedback is better feedback. As the argument goes, if feedback is good for learning, and games are feedback machines, then games can be good for learning. This is not wrong, but it is also superficial. Not all feedback is created equal. The authors cite studies that show that delayed feedback can lead to better learning. As I understand it, the actual reason for this is not understood, but the prevailing hypothesis is that immediate feedback makes the feedback indistinguishable from the task itself; this leads to a result where, when the feedback is not present, knowledge of the task breaks down. This sounds an awful lot like "I can do it in the game, but I cannot do it outside the game." I wonder how many empirical educational game research projects have investigated feedback delay as an independent variable, and if not, how one would construct such a study. After all, a player expects that if they press 'A', Mario should jump right away.

Reading the section on delayed vs. immediate feedback made me think of two other salient examples where immediate feedback may be causing problems. The endemic one is automatic spelling and grammar checking: we all know that students do not learn to spell or write by letting their word processor do the work; it just builds a reliance on the word processor. The other, related example is IDEs for novice programmers. As with automatic spellcheck, the IDE will add red squiggles to invalid code, and students can right-click on the squiggles and change the code to whatever the IDE wants—often without regard for whether it is what they should want.

Chapter 8 of the book provides a series of helpful summaries that are organized for different reader demographics. It's a valuable chapter, and so I will spend a bit of time on it here describing what caught my attention and where I think it should take me. In the section for teachers, they recommend explaining to students how learning works. The following quotation is a good overview:
  • Some kinds of difficulties during learning help to make the learning stronger and better remembered
  • When learning is easy, it is often superficial and soon forgotten
  • Not all of our intellectual abilities are hardwired. In fact, when learning is effortful, it changes the brain, making new connections and increasing intellectual ability
  • You learn better when you wrestle with new problems before being shown the solution, rather than the other way around
  • To achieve excellence in any sphere, you must strive to surpass your current level of ability
  • Striving, by its nature, often results in setbacks, and setbacks are often what provide the essential information needed to adjust strategies to achieve mastery
Another tip for teachers is to teach students how to study. This has been on my mind quite a bit, along with the question, "Where does the buck stop?" I teach primarily junior and senior undergraduates, and I estimate that 5% of them have any real study tools. Indeed, I think a good description of the Ball State demographic is, "Students who are smart enough to have gotten this far without having developed study skills." Including direct instruction on study habits is an investment in their future learning, but I doubt I would be able to reap it in my own courses, so it's taking away from time on topic. More frustratingly, I have seen for years that I can teach good processes for learning and software development in a course like CS222, only to see that a year later, the students have never touched any of those techniques because other faculty do not expect them to. For example, I can teach the value of pair programming or test-driven development, present the students with research evidence that these increase productivity, and require them to deploy these techniques; but a year later, when I ask them to do these in a follow-up course, they say that they have not used these since CS222. Why should I be more optimistic about study skills, when inertia is powerful and habits are so hard to override?

The section of tips for teachers returns to the theme of "desirable difficulties" that came up throughout the book. Here are some specific desirable difficulties that they recommend:
  • Frequent quizzing. Students find it more acceptable when it is predictable and the individual stakes are low.
  • Study tools to incorporate retrieval practice: exercises with new kinds of problems before solutions are taught; practice tests; writing exercises reflecting on past material and relating it to aspects of their lives; exercises generating short statements that summarize key ideas of recent material from text or lecture.
  • Quizzing and practice count toward course grade.
  • Quizzing and exercises reach back to concepts and learning covered earlier in the term.
Again, this is a valuable summary. Each of these items is covered in the text with explanation and citation. It's clear what actions can come from this list as well, and it makes me look at opportunities in my upcoming HCI class in a new way. I also recognize in it the value of several things I already do in the class, such as having students connect readings to their experience and writing reflections on development experiences. Given that I tend to divide semesters into a content-oriented first half and a project-oriented back half, I need to be more conscientious about designing assignments and quizzes that reach back to the early part of the first half; this should help students deploy these ideas more readily in the second half.

The final bit of advice in Chapter 8 is to be transparent with students about incorporating desirable difficulties into the class. I have always been a fan of white-box pedagogy, although it's not every semester that I see students take interest in why I am teaching the course the way that I am. Student teaching evaluations often reveal quite mistaken models about my intentions as well. Sometimes I get these excellent reflection sessions as I described in Fall's HCI class, but the irony here is that they generally come after students have completed their evaluations.

I highly recommend Make It Stick. It is written clearly and precisely and organized in a way that emphasizes the important points. Crucially, it avoids educational fads in favor of empirical research. Chapter 8, as I have said, provides a great synopsis that turns the ideas of the book into potential action items for practice.

Thursday, December 27, 2018

Planning CS445 Human-Computer Interaction for Spring 2019

Around the Christmas celebrations, I have spent many hours the past two weeks planning my Human-Computer Interaction course for Spring 2019. I wrote my reflection about the Fall semester back on December 8, and I just put the finishing touches on the Spring section's course plan before lunch today. Now, I would like to write up a few notes about some of the interesting changes I have in place.

First, though, a funny caveat. Since I originally designed the HCI class for CS majors, it has been CS345. Due to administrative busybodies, the course will now be numbered CS445 instead. Which label should I use for my blog posts? I think I'll switch over to "cs445" now, and I'll have to remember to use both codes when I'm searching for my old notes.

Canvas Grading Caveat
Prompted in part by my frustrating experience at the end of the Fall 2018 Game Design class, I have a more explicit statement on the course plan telling students to ignore Canvas' computed grade reports. I would always say this in class, but I did not have it explicitly in the course plan before. Also, I found out that I could mark assignments as not contributing to the final grade, in which case students will be able to see their assignment grade but not a false report of their "current" final grade, so I need to remember to mark all the assignments that way. Also also, please take a moment to consider the epistemological tragedy that is the concept, "current final grade."

Writing
I was surprised in the Fall when one of my more talented HCI teams brought up their project report, highlighted a place where I pointed out grammatical and spelling errors, and asked if they had lost points because of it. There are two things wrong with this question. The first is that it assumes the students had some points to lose in the first place, which is simply not true. I don't take away points from anyone; instead, I award points for demonstrating competence. You cannot take away something that someone doesn't have. The second, more pragmatic problem is that the students also had significant conceptual and descriptive problems in their report, but they seemed more concerned about the spelling and grammatical errors.

Last semester, I included a link to the public domain 1920 version of Strunk's Elements of Style, along with advice to read it. This time, I've made my expectations more explicit. On the evaluation page of the course plan, I have explained briefly the importance of writing and the fact that Elements of Style provides my expected standards. I also explain there that I expect to give feedback on both conceptual problems and spelling or grammar problems, along with a primer about how to interpret that feedback. I thought about making an assignment around Elements of Style, but I decided against it, partially because I did not want to shift my early-semester plans ahead by a day. My professional opinion is that the book should be remedial to anyone who has a high school diploma, but I am also a realist about the variable quality of writing instruction these students may have received.

Software Architecture
It was a little disappointing to see so few teams really engaging with principles of quality software construction last semester. I have written about this before, and the students are aware that the culture of other classes is one that values only software output rather than software quality. I have carved out some design-related topics from the HCI class to make more time to work through examples of refactoring toward better architectures. I'm still working on the exact nature of these assignments, but I have a few notes to draw from. The schedule I have online right now actually goes right up to the point where I want to switch gears from design theory to software architecture practice.

Specifications Grading
After a positive experience in last semester's game programming class, I have converted my HCI class project grading scheme to specifications grading. I laid out my expectations for each level of poor (D), average (C), good (B), and excellent (A) grades. This was an interesting exercise for me, especially around the source code quality issues. Last semester, students could earn credit for following various rules of Clean Code, and a mixed grade simply meant that they got some and not others. Now, I have put all of these rules at the B level, to reflect the fact that "good" software is expected to follow such standards. For the A level, I've included demonstrated mastery of the software architectural issues mentioned above.

I had some fun with the Polymer-powered course site as well. My new custom element for presenting specification grading criteria uses lit-html to concisely express how they are presented. It took a bit for me to wrap my head around lit-html, but I think I have a good sense of it now. The other fun new feature I added was the ability to download the specifications as Markdown. The specifications are internally represented in a Javascript object, and that object is transformed into the Web view. Of course, with this model-view separation, it's reasonable to provide other views as well, such as Markdown. I used this StackOverflow answer to write some functions that convert the JSON object to downloadable Markdown. I hope that this makes it easy for students to write their self-evaluations in Markdown. I did not use checkboxes on the Web view, as I did for game programming last semester, because they don't copy and paste well. I hope that having the Markdown version available removes the need for students to manually copy over each criterion into their self-evaluation.
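The JSON-to-Markdown conversion itself is nothing exotic. My version lives in client-side JavaScript, but the same model-to-view idea can be sketched at the command line. Here is a rough analogue using jq, with a hypothetical specs.json holding the criteria (to be clear, this is not my actual implementation, just an illustration of the separation):

  # Hypothetical specs.json:
  #   { "criteria": [ { "level": "B", "text": "Source follows the Clean Code rules" },
  #                   { "level": "A", "text": "Architecture demonstrates mastery" } ] }
  # Render the same model as a Markdown list:
  jq -r '.criteria[] | "- (\(.level)) \(.text)"' specs.json > specs.md

The point is only that once the specifications live in a data object rather than in markup, any number of views (Web, Markdown, plain text) can be generated from the same source.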

More Time on the Final Project
In addition to switching the evaluation scheme, I am giving more time to the final project. I decided to keep the short project as a warm-up, since it also provides a safe-fail environment as students pull together ideas such as personas, journey maps, user stories, and software quality into a coherent process. Some of my dates for the final project aren't quite sorted out yet, as I'm debating whether to take a purely agile approach or a milestone-driven one. The advantage of the latter is that I can specify exactly which design artifacts I want to see at each stage, and the level of fidelity I expect. However, I expect that I will follow the more iterative and incremental approach, but I've put off the final decision until the semester gets underway and I can get to know these students a bit.


We are continuing our relationship with the David Owsley Museum of Art, who have been a great partner. I look forward to working with them and seeing what my students can develop during the semester.

Friday, December 21, 2018

After the Evals: Reflecting on Fall 2018 Serious Game Design Colloquium

I wasn't going to write up this post this morning—plenty of other writing and course planning activities to do. However, the Fall 2018 student teaching evaluations were just published, and I was kind of blindsided by the feedback from my Honors Colloquium on Serious Game Design. I'm engaged in some discussions on the Facebook about it right now, and I am grateful to those who are talking through some of the issues with me.

I want to give a high-level view of the course first. The students pursued a great variety of projects, and both my colleague at Minnetrista and I agreed that the overall quality of the projects improved over last year's. I believe this is in large part because we already had the experience of last year. As I wrote about in September, I think having Fairy Trails as a lens helped the students see more opportunities than when we were greenfielding in 2017.

This was the first time I had Architecture majors in the class, due to a change in the time I taught it. I told many people throughout the semester how one of the great blessings here was that they were used to the idea of giving feedback. Some classes struggle to get the first idea out when giving feedback on students' work, but this is just part of the process for CAP majors. They were almost always the first to give feedback, which then got the ball rolling for the rest. By and large, they also took peer feedback well, which created a good environment.

One thing I was aware of during the semester was how I got very few questions from this class. During the semester, only one student came to my office unsolicited to seek my advice on their classwork. There was one student with whom I had email exchanges about his project as he had a hard time getting his feet under him, but we sorted that out and his project turned out quite nicely. Aside from that, it was quiet. The problem is, quiet can signify both diligent focused work and gross misunderstanding of expectations.

All that said, this group of students gave me the worst teaching evaluations I've ever had—by far. It's patently clear in reading them that a group of students regularly conversed about me but never spoke with me about the course. Their feedback all covers the same ground, which also makes it clear that this group were my Architecture majors, or at least a vocal minority of them. It seems they harbored grudges and misconceptions all semester. Most of these issues could have been addressed by reviewing the course plan or, without a doubt, by talking with me.

In the Facebook discussion, a colleague posted this excellent quotation from Eckhart Tolle: "When you complain, you make yourself a victim. Leave the situation, change the situation, or accept it. All else is madness." That's a nice sentiment, and I certainly don't want to use my blog as a place to complain. (Well, maybe just a little bit about how students don't read course plans and don't take notes.) Moving forward, I want to point out a few areas of friction that these students brought up, what I think actually happened, and what I can do about it in the future.

Accounting for Grades
The students' comments made it clear that, despite the course plan clearly laying out the evaluation scheme, many students were relying on Canvas to report their grades to them. I told them in the first week not to do this, but that didn't stop them. (Hm, not reading, not taking notes.) Unfortunately, they also never said anything during the semester that would have revealed this error; that is, I had no evidence until now that they were internalizing their grades incorrectly.

Canvas' grading system is rudimentary at best. In fact, it's naive to the point of being dangerous: I think it's one of those design decisions that pushes faculty toward bad teaching practice, but I digress. What I need to do is remember to mark all assignments as not counting toward a final course grade. I just verified, and if I did this consistently, students would see their grades as 0 out of 0 points, with "N/A" in the letter grade column. That sounds perfect. It would serve as a reminder to the students not to trust the automated system.

Consulting Grades
In the course plan, students could earn a few points for consulting with me or another game designer on their final project. Specifically, it says that this has to be done "during the production period." The schedule for the final project included two rounds of pitches, five status reports, a practice final presentation, and an actual final presentation to the community partner. To me, it was obvious that "the production period" would be the time from the pitches to the end of Status Report 5. After that, production should be done, and we are giving presentations.

It seems that students did not see it that way. I received roughly four requests from students in the week leading up to the practice final presentations. I responded to these emails that I would be happy to meet with them, but I also pointed out that the production period had passed. Three of them were then no longer interested in meeting; one of them still met with me, and we had a productive discussion. This is important: the students contacted me not to get my feedback on their projects, but only to get points. The points were, of course, supposed to incentivize them to get feedback, but it seems they were the end in themselves.

In week 2 or 3 or so of production, I reminded them in class that they could earn Consulting Points. It was not like I wanted to trick them out of it. I did note, though, that as I made that reminder, no one wrote it down. (Hm, not reading, not taking notes.)

What I can do in the future is to lay out the calendar more clearly with named periods. This will not require them to actually think about the process, but instead to match keywords. I don't mean that in a snarky way: thinking is hard, pattern-matching is what our brains do automatically. It's an easy way to reduce friction.

Also, a few students either did not understand or completely dismissed the notion of "practice final presentations." I figured the language was clear, but a few students had not actually finished their work by this date. I can make it more clear in the future that "practice final presentation" means that you should actually have your work done and be focused on practicing your presentation. How could it mean anything else? The devil's advocate position is, I suppose, that students don't understand that you play like you practice so you should practice like you play. That is, they see "practice" and think "not real" instead of "preparation for excellence."

CAP Trip
It seems that the students in the College of Architecture and Planning have a week during the semester when they go on a field study. I gather that, during this time, no CAP faculty have any expectations that they will do any work for their other classes. That's fine for CAP, but of course, my class doesn't stop because a subset of students is going on a trip. Indeed, I think the students don't realize that I can hear them, because in the weeks leading up to the trip, I heard one tell another what a great vacation this trip is: walk around a historic city until lunchtime with your class, and then have the rest of the day to yourself.

When the students gave me their travel forms, I filed them away and told them it would not be a problem. These are university forms that say quite clearly that the students recognize they are not excused from class responsibilities. All my assignment deadlines were posted from the first day of the semester, and I figured the students would either travel with their laptops to get their other classes' work done, or just get everything done before they left. Leaving this assumption unstated was a mistake, I gather, since in the course evaluations, the students interpreted it differently. In their minds, they gave me the forms, and I did not tell them what to do until just before they left, when they asked what they should do. Again, I thought it would be obvious: they should do what everyone else is doing. After some emails back and forth, where one student insisted that they are forbidden from bringing expensive items like laptops on their trips, I extended the deadline for these students. And they complained about this in the student teaching evaluations. It still boggles my mind: I made an assumption, it turned out to not match theirs, and I gave them an extension. And I'm the bad guy... because I expected them to do work? Because they had to do something that required being online? I really don't know. I wonder about other faculty who regularly have CAP students in class: do they actually waive requirements, the way that the other CAP faculty seem to?

This is one where it's not clear to me what I could do differently in the future, except perhaps to go back to the normal time I taught the class, a time that happened to prevent CAP students from enrolling.

Donuts and, more generally, Food (and, more generally yet, Culture)
There was one day that I was late to class. On the next class meeting, I brought donuts by way of apology. No one mentioned that in the course evaluations.

There was a girl in the class who, every morning, brought her breakfast into the classroom and ate it before we began. This would make the whole room smell like pancakes and syrup, which I found annoying. I asked her if she would please eat her breakfast elsewhere so as not to make the room smell like food. She decided to eat her breakfast sitting on the floor of the hallway. She could have eaten it wherever she bought it, but she chose the floor outside the room. She never complained about it. In the course evaluations, a few students pointed to this as my showing favorites, that I would not let this poor, hungry student who had been in classes since 8AM eat her breakfast in the classroom. They pointed to my not wanting the room to smell like food as flippant. Now, I didn't point out to them at the time that there is actually a university rule against anyone having food in the classroom. I thought about bringing this up the day I asked the student not to eat in the classroom, but I didn't want to be "that guy" who throws the rulebook, when instead I figured I could just appeal to a sense of shared space and community. Well, that clearly backfired.

What can I do differently? Again, it's not clear. The students' interpretation of my decision as biased or callous happened entirely in discussions where I was not involved. They were convinced they understood my rationale, and so there was no reason for them to ask or question it. It's the dark side of human tribalism: I was the enemy, and they were the valiant underdog heroes. Indeed, this fits exactly into what Lukianoff and Haidt talk about in The Coddling of the American Mind as one of the great untruths we are teaching students, that the world is made up of good guys and bad guys—homogeneous tribal thinking.

I had not thought of that before, but this also leans into what I thought was an even better treatment of these themes by Campbell and Manning in The Rise of Victimhood Culture. The students did not engage with me on any of the topics they found challenging, even those that I chose as intentionally provocative, such as Harry Potter, to which people have a kind of religious devotion. Instead of engaging in dialectic with me, they wrote lengthy arguments to administrators in course evaluations, wrote about how I hurt their feelings, and threatened to write to the dean. That's exactly in keeping with Campbell and Manning's description of Victimhood Culture: easily offended and seeking justice by appeal to authority. Yikes. As I dig into why I am so upset by these evaluations, perhaps I have found my answer here.

Biased Participation Grading
Another common theme in the negative evaluations was that I was biased in my grading, that I would grade people I liked better than people I didn't. Again, I struggle to understand how they saw it this way. The way that participation grades worked is that students could earn up to three points in a day for participation. If someone said anything on topic, I wrote their name down in my notes, and they got three points. If they didn't, but they showed up, they would get one of the three points, in keeping with my usual triage grading scheme—3 meaning "essentially correct" and 1 meaning "essentially incorrect". Whenever I gave less than full credit, I included a note along the lines of this: "I have no record of your participating in class today." Frequently, I would add, "My records aren't always perfect, though, so let me know if you think I missed something."

As I said, I struggle to understand their comments here. They claimed that I would give students I "liked" more praise in class for their comments and that I would push back on those I didn't like. To some extent, that may be true, since I tend to like students who are prepared and give thoughtful feedback. However, it had no impact on the grading. I do not know where the miscommunication here was.

I will note that there was no time during the semester when any student contacted me and appealed their participation grade. No student said, "Actually, I mentioned in Zach's presentation that he should have less randomness." In truth, I would have believed anything they sent me, but they sent me nothing. How could they, when they took no notes? (Hm, not reading, not taking notes.) Even when I reminded them that the final exam would include questions about each others' projects, no students took notes from the others' presentations. It was rare that a student would even take notes of feedback during their own presentations.

What's to be done? One thing that I am very hesitant to do, but student behavior seems to be driving me toward, is grading students' in-class notes. I believe they have no idea what the value of notes is, and further, I think most of them have no idea how to effectively study. I have a blog post in draft about my reading of Make It Stick earlier this semester, and that (combined with Dorothy Sayers' "The Lost Tools of Learning") really got my wheels turning about how little students are prepared for lifetime learning. Where should the buck stop?

Book?
Another bit of the student conspiracy was a set of complaints that I had based my course on the Game Design Concepts course taught by my friend Ian Schreiber. I told them at the start of the semester that his online materials are as good as or better than any books I have read on game design, so we would use them as the basis for our readings. The cabal of angry students turned this into complaints that all I had done was use his material. They did not mention the additional readings, nor the fact that I had saved them a few bucks by using a free online text. I chalk this one up to another complete disconnect from reality: in their echo chamber of complaint, they had not realized that "basing a course off of someone else's work" is exactly what the whole textbook industry is based on. It would be interesting if we could all only teach in areas where we had written the textbook, but that clearly doesn't scale, not like amazing free online content prepared by well-respected instructors.

Project Grading
The final complaint that I want to address was one that I graded the projects in a biased manner. This is really fascinating because I did not grade the projects as such at all. The course plan is very clear about this, that I only graded their process, not their products. It is a point I emphasize in a few places in the course plan and regularly in my presentations. For each status report, students had to update a design log and address four questions: what was the primary design goal you pursued since the last status report? How did you prototype your ideas? How did you evaluate your prototype? What conclusions did you draw, and what do they imply for your next steps?

The design log was a new step for me. I added it as a way to have more evidence about whether students were actually following the process. A few students hit some rough spots in understanding the role of the design log in the process, and I could have added more guidance here. That said, the logs also worked, in that they showed me who was not really doing anything between status reports. I had one student essentially lie to me about her progress, since she did not know that I could check the document history in Google Docs. She was smart, though: she was careful not to explicitly lie in her email to me, but instead to imply something that was untrue. I called her out on this in my feedback, and she never responded to it, electronically or in person. I wonder if, in retrospect, I should have pursued an academic dishonesty case; maybe she would have gone away of her own accord. As you would expect, a liar is not going to be a great contributor to the course atmosphere and goals.

Talking with my friends on The Facebook, I got to wondering if the CAP majors in particular struggled with the tight iterations required of game design. They study design, ostensibly, and they believe they are good at it. However, their loops are very long, and their feedback does not come from end user evaluation, but from discussions around models. In game design, as in software design, we work in tight loops. It's possible that, because they believed they already understood design thinking and iteration, they were incapable of seeing that they were not following the processes I required, even though we talked about the importance of iteration. If not incapable, then perhaps at least unprepared. I don't have direct evidence of this; it's merely a hypothesis. It's hard to believe that I could have been any more clear in my course plan, in my grading scheme, and in my feedback that I expected short iterations complete with evaluations and reflection; yet, this set of students complained about my grading their "projects" rather than helping shape their process.

Fun?
This one goes a bit sideways from the others, because it was some honest feedback from a student outside the Angry Gang. This student commented that they wished we had spent more time on how to make fun games. I wanted to add this here, because I've gotten this kind of feedback in my game design courses before, and it's really fascinating. It's like the student is just laying down their cards and saying, "I believe you can just tell us how to make something fun, and you didn't. I really did not learn anything this semester at all." This is another area where I'm not sure how to make it more explicit. We talk about this a lot, and the course plan, as mentioned above, is all about the process of game design for exactly these reasons. That said, I understand how someone who only goes through the motions of the course could come out without understanding this. That is, if you see a course as just being a series of work items and don't actually think about them, then you could come out saying that I didn't teach how to make something fun; to realize that this is impossible requires thinking, listening, and perhaps even reading and taking notes. (Hm.)

Conclusions
I love teaching these courses on game design. Reflecting on them, they do actually produce some of the worst course evaluations I receive, and I think there are a few reasons for that. One is that many colloquia are blow-off courses. Many alumni have told me as much: students expect colloquia to require minimum effort, a bit of BS, and then you walk away with an A. Mine is designed to be a rigorous challenge, and this perhaps sets us up for conflict. Another is that the Honors students generally, and some subcultures like CAP in particular, are fed a line by the university that they are elite and special. My class is something none of them have ever done before, and so they make mistakes, and they harbor misconceptions... and I tell them. Cognitive dissonance kicks in as follows: "If I am special and smart and successful, but this guy is saying that I am wrong, then he clearly must be wrong." Add homogeneous tribal cultures, and you have a recipe for disaster. I've never before had a sizeable minority in my class all from one major, since these colloquia are open to all, but looking back on this, a lot of factors point to an inevitable conclusion that the students would circle the wagons rather than engage.

I think I have done a decent job in this essay of identifying where I made mistakes, where I might have known better, and what I can do differently. I'm still trying to sort out my own reaction to their feedback. There is certainly an element of pride, but there's also a sense of treachery, that these students had harbored these grudges and misconceptions all semester, all while smiling at me and responding politely to my questions. Writing this helped me identify elements of victimhood culture, and this helped in two ways: I can step back from the phenomena and understand them in an empirical, research-oriented way; and I can understand why it would upset me so deeply, as reading those books I referenced also did. They lead to a sort of deep, existential dread around education and society.

Incidentally, the evaluations on my other courses were par for the course, echoing some of the thoughts I've shared about strengths and weaknesses already. The one pleasant surprise there was that one student commented positively about my framing of the required statement on diversity (discussed here), the only comment that I received on that all semester. I'll keep it in there.

Thoughts? Comments? Suggestions? Criticisms? Perspectives? Share them below. No need to let Zuckerberg hold all our conversations. Thanks for reading!