Monday, December 31, 2018

The Games of 2018

With just a few hours left in 2018, I am going to go ahead and write up my "Games of 2018" post. Should anything change before midnight tonight, I'll quietly come in and edit the details. This is the third post in this series, the other two being 2016 and 2017.

Let's start with the numbers for the year. In 2018, I played 103 different games across a total of 548 plays. That's significantly more plays than last year (505) over about the same number of games (104). While my scholarly h-index barely crept up from 12 to 13, my games h-index rocketed from 15 to 20. My h-index for the year was 11, and here are the 11 games that I played 11 or more times in 2018 (a small sketch of how this number is computed follows the list):

  • Gloomhaven (55)
  • Stuffed Fables (23)
  • Thunderstone Quest (22)
  • Go Nuts for Donuts (20)
  • Rhino Hero Super Battle (19)
  • Bärenpark (16)
  • Camel Up (16)
  • Champions of Midgard (12)
  • Carcassonne (11)
  • Clank! (11)
  • Rising Sun (11)
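
For anyone unfamiliar with the borrowed metric: an h-index of N means N items each occurring at least N times. Here is a minimal sketch of the computation in JavaScript, using the play counts from the list above; since these are the only games that reached eleven plays, the top of the list suffices to compute the year's value.

    // Compute an h-index from a list of play counts: the largest h such
    // that h games were each played at least h times.
    function hIndex(playCounts) {
      const sorted = [...playCounts].sort((a, b) => b - a);
      let h = 0;
      while (h < sorted.length && sorted[h] >= h + 1) {
        h += 1;
      }
      return h;
    }

    const plays2018 = [55, 23, 22, 20, 19, 16, 16, 12, 11, 11, 11];
    console.log(hIndex(plays2018)); // 11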

Summer 2018 was the Summer of Gloomhaven. My oldest son and I had a great time playing this BoardGameGeek chart-topper. I shared my painted base characters back in March, and I have also painted almost all the rest of the characters—all those we have unlocked. I'll make a spoilerful post about those once they are complete. The 55 plays are not all full-length games: some were short defeats. I remember one of the earlier scenarios was a terrible match for our characters, and we had several false starts on it. The last time we played was over the Summer break, and we haven't had the game to the table since. I was hoping we might finish the campaign over the Winter Break, but other activities have taken precedence; I have favored playing games that incorporate more people rather than just the two of us with Gloomhaven.

Stuffed Fables was a family Christmas gift in 2017. I shared my painted minis in June, and this is one I have played with my three older boys. My third son is particularly tickled, I think, to be involved in this kind of "big kids' game." We have really enjoyed it, and several times the writing has made us laugh aloud. We have one story left to go before finishing the book. Tracking plays of Stuffed Fables is actually a bit of a tricky business. I decided to track each page as a play, following the idea that each attempt at a Gloomhaven scenario counts as a play. One might argue that I should have counted "sessions" of Stuffed Fables. Yet, only once has one of our sessions been an entire story; usually we do a few pages and then store our stuff in baggies for another day.

Thunderstone Quest was a recent arrival via its second Kickstarter, as I described in my post about the painted minis. We have enjoyed this immensely, and I have taught it to a few friends. It and Clank! are easily my two favorite games in this genre, which combines Dominion-style deckbuilding with spatial puzzles. The number of plays of this game and of Champions of Midgard corresponds to my second son's rise into the next tier of complexity; these are joined by Runebound, which he only recently learned but which we have played a lot these past weeks. He still struggles with some of the more complex interactions—and even more so with sitting still for longer games—but overall I've been surprised by how well he manages the games. Like any 8-year-old, he will sometimes get tunnel vision on a plan and not roll with the punches, but I think this is something that games will help him learn to do better.

Rhino Hero Super Battle, Carcassonne, and Camel Up are joined by Go Nuts for Donuts as games that anyone in the family can play, and so most of my plays of these are with the younger two boys.

The notable thing about Rising Sun getting to the table eleven times is that many of those were with friends rather than family. Almost all my gaming is with my family, and I love playing games with them. Having them here probably makes me a bit lazy about reaching out to my friends to have them over. However, I had a small group of friends who really caught on to Rising Sun and came over for several game nights this summer. I feel really good about that and need to keep that up.

A few notable games of 2018 did not make the cut into the top 11. I bought Charterstone as a family Christmas gift, and my two older sons, my wife, and I are four games into the campaign. We have been enjoying that, and I look forward to seeing where the game goes next. My third son received Ticket to Ride: First Journey last year for Christmas and we played that quite a bit. A few weeks ago, he graduated to Ticket to Ride, which we played five times together, before I taught him Ticket to Ride: Europe, which I think is the far superior game. We have now played Europe five times as well, and he asks pretty much every day to play it again. The kid loves trains.

Looking at the list of games I only played once this past year makes me wonder whether I should be even more aggressive about getting rid of games. I remember my wife sharing a story with me about a collector who got rid of all but ten games and was happier with the ten he really loved than with the many he rarely played. My brother also recently tried a thought experiment of what games library he would build with just $250. To me, the answer is clear: two copies of Mage Knight: The Board Game, Ultimate Edition.

Each year, I've written a little about tabletop RPGs in this post, and once again, I was able to do a very little bit of RPG gaming, but not much. I ran two sessions of Index Card RPG during the year. One was a game with my three older boys, themed around the "Magic Sword" fantasy realm that my second son spent many years imagining. I actually haven't heard him say much about it, even in the months leading up to our summertime session, but for a long time all of his imaginary play revolved around a world of knights, dark magic, and dragons. My third son particularly enjoyed the session, I think, and he regularly asks to play Magic Sword, but I haven't made the time to spin up new adventures for them. The other session was a challenging design exercise for a big family vacation. With some help from the ICRPG community, I designed an adventure that would scale to a variable number of players. As it turned out, only my two older boys and their older cousins were interested, so I had a manageable table of four. I think the session was a great success in many ways, and it was a good way to spend some time with my niece and nephew.

As has been my custom, let me wrap up by looking at the games that make up my overall h-index of 20:

  • Gloomhaven (55)
  • Animal Upon Animal (54)
  • Crokinole (47)
  • Carcassonne (36)
  • Camel Up (35)
  • Rhino Hero: Super Battle (35)
  • Labyrinth (31)
  • Clank! (29)
  • Terror in Meeple City (29)
  • Runebound (Third Edition) (28)
  • Dumpster Diver (23)
  • Race for the Galaxy (23)
  • Red7 (23)
  • Reiner Knizia's Amazing Flea Circus (23)
  • Stuffed Fables (23)
  • Thunderstone Quest (22)
  • 4 First Games (21)
  • Flash Duel (21)
  • Go Nuts for Donuts (20)
  • Samurai Spirit (20)
This was the year that Gloomhaven overtook Animal Upon Animal. I feel like this has to be a milestone in the growth of my family, that a heavy fantasy strategy game overtakes a light HABA game.

Thanks for reading. I hope 2018 was a good year of gaming for you as well. Here's to a happy and playful 2019!

My Notes on "Make It Stick: The Science of Successful Learning"

Several weeks ago, I finished reading Make It Stick by Brown, Roediger, and McDaniel. It was recommended to me by a good friend with a heart for improving education. The book aims to explain what we know about learning from cognitive science and how this can impact the practices of teaching and learning. I found the book to be inspirational, and I mentioned the book in several recent essays and presentations. I happened to meet local cognitive science and student motivation expert Serena Shim this semester, and she affirmed the findings and value of the text as well.

One of the most important findings that came up throughout the book is that spaced practice is better than massed practice. I think we all recognize that it is true: of course studying throughout the semester is more effective than cramming. However, the science is more nuanced. Massed practice is actually better for short-term recall than spaced practice, but spaced practice is better for long-term recall. This has a fascinating corollary: if our courses contain high-stakes tests, then it is a good tactical decision for students to cram.

This implies to me that we instructors have to make a real choice between "I want students to pass this test" and "I want students to remember this a year from now." I have a rule of thumb that I have only recently had to articulate: I only want to teach content that I think students should know in five years. My general pedagogic approach favors spaced practice, but perhaps I can do more to support this. However, recent conversations made me realize that this perspective is not universal. I was involved in a somewhat heated discussion about a master syllabus revision with a colleague. The particular syllabus had, in my opinion, too many low-level learning outcomes. I argued that students don't learn these items, and he argued that they do. As evidence, I cited that they could not repeat their achievements a year after taking the class; he cited that they passed the final exam. Here is higher education at loggerheads: we each believed the other not just to be wrong, but to be holding the wrong value system.

I was reminded by the text of the value of testing as retrieval practice. I had read this before but tried to dismiss it; however, the presentation by Brown et al. makes it hard to ignore. Learning improves through retrieval practice, and testing is perhaps the simplest way to practice retrieval. I mostly gave up on using tests many years ago, favoring instead continuous authentic work. However, I also see my students not remembering to apply fundamental lessons early in the semester into their work later in the semester. I need to review my use of quizzes and tests, as well as how I prepare students to do their own self-testing.

Another theme of the book that knocked my proverbial socks off was that immediate feedback is not always better than delayed feedback. I think that in the educational games community, it is taken for granted that feedback is simply good, and that quicker feedback is better feedback. As the argument goes, if feedback is good for learning, and games are feedback machines, then games can be good for learning. This is not wrong, but it is superficial: not all feedback is created equal. The authors cite studies showing that delayed feedback can lead to better learning. As I understand it, the actual reason for this is not understood, but the prevailing hypothesis is that immediate feedback becomes indistinguishable from the task itself; then, when the feedback is absent, knowledge of the task breaks down. This sounds an awful lot like "I can do it in the game, but I cannot do it outside the game." I wonder how many empirical educational game research projects have investigated feedback delay as an independent variable, and if none have, how one would construct such a study. After all, a player expects that if they press 'A', Mario should jump right away.

Reading the section on delayed vs. immediate feedback made me think of two other salient examples where immediate feedback may be causing problems. The endemic one is automatic spellcheck and grammar check: we all know that students do not learn to spell or write by letting their word processor do the work; it just builds a reliance on the word processor. The other, related example is the IDE for novice programmers. As with automatic spellcheck, the IDE will add red squiggles to invalid code, and students can right-click and change it to whatever the IDE suggests—often without regard for whether it is what they should want.

Chapter 8 of the book provides a series of helpful summaries that are organized for different reader demographics. It's a valuable chapter, and so I will spend a bit of time on it here describing what caught my attention and where I think it should take me. In the section for teachers, they recommend explaining to students how learning works. The following quotation is a good overview:
  • Some kinds of difficulties during learning help to make the learning stronger and better remembered
  • When learning is easy, it is often superficial and soon forgotten
  • Not all of our intellectual abilities are hardwired. In fact, when learning is effortful, it changes the brain, making new connections and increasing intellectual ability
  • You learn better when you wrestle with new problems before being shown the solution, rather than the other way around
  • To achieve excellence in any sphere, you must strive to surpass your current level of ability
  • Striving, by its nature, often results in setbacks, and setbacks are often what provide the essential information needed to adjust strategies to achieve mastery
Another tip for teachers is to teach students how to study. This has been on my mind quite a bit, along with the question, "Where does the buck stop?" I teach primarily junior and senior undergraduates, and I estimate that 5% of them have any real study tools. Indeed, I think a good description of the Ball State demographic is, "Students who are smart enough to have gotten this far without having developed study skills." Including direct instruction on study habits is an investment in their future learning, but I doubt I would be able to reap the benefits in my own courses, so it takes time away from the course topics. More frustratingly, I have seen for years that I can teach good processes for learning and software development in a course like CS222, only to find that a year later, the students have not touched any of those techniques because other faculty do not expect them to. For example, I can teach the value of pair programming or test-driven development, present the students with research evidence that these increase productivity, and require them to deploy these techniques; but a year later, when I ask them to use these in a follow-up course, they say that they have not used them since CS222. Why should I be more optimistic about study skills, when inertia is powerful and habits are so hard to override?

The section of tips for teachers returns to the theme of "desirable difficulties" that came up throughout the book. Here are some specific desirable difficulties that they recommend:
  • Frequent quizzing. Students find it more acceptable when it is predictable and the individual stakes are low.
  • Study tools that incorporate retrieval practice: exercises with new kinds of problems before solutions are taught; practice tests; writing exercises that reflect on past material and relate it to aspects of the students' lives; exercises generating short statements that summarize key ideas of recent material from text or lecture.
  • Quizzing and practice count toward course grade.
  • Quizzing and exercises reach back to concepts and learning covered earlier in the term.
Again, this is a valuable summary. Each of these items is covered in the text with explanation and citation. It's clear what actions can come from this list as well, and it makes me look at opportunities in my upcoming HCI class in a new way. I also recognize in it the value of several things I already do in the class, such as having students connect readings to their experience and write reflections on their development experiences. Given that I tend to divide semesters into a content-oriented first half and a project-oriented back half, I need to be more conscientious about designing assignments and quizzes that reach back to the early part of the first half; this should help students deploy these ideas more readily in the second half.

The final bit of advice in Chapter 8 is to be transparent with students about incorporating desirable difficulties into the class. I have always been a fan of white-box pedagogy, although it's not every semester that I see students take interest in why I am teaching the course the way that I am. Student teaching evaluations often reveal quite mistaken models about my intentions as well. Sometimes I get these excellent reflection sessions as I described in Fall's HCI class, but the irony here is that they generally come after students have completed their evaluations.

I highly recommend Make It Stick. It is written clearly and precisely and is organized in a way that emphasizes the important points. Crucially, it avoids educational fads in favor of empirical research. Chapter 8, as I have said, provides a great synopsis that turns the ideas of the book into potential action items for practice.

Thursday, December 27, 2018

Planning CS445 Human-Computer Interaction for Spring 2019

Around the Christmas celebrations, I have spent many hours the past two weeks planning my Human-Computer Interaction course for Spring 2019. I wrote my reflection about the Fall semester back on December 8, and I just put the finishing touches on the Spring section's course plan before lunch today. Now, I would like to write up a few notes about some of the interesting changes I have in place.

First, though, a funny caveat. Since I originally designed the HCI class for CS majors, it has been CS345. Due to administrative busybodies, the course will now be numbered CS445 instead. Which label should I use for my blog posts? I think I'll switch over to "cs445" now, and I'll have to remember to use both codes when I'm searching for my old notes.

Canvas Grading Caveat
Prompted in part by my frustrating experience at the end of the Fall 2018 Game Design class, I have a more explicit statement on the course plan telling students to ignore Canvas' computed grade reports. I would always say this in class, but I did not have it explicitly in the course plan before. Also, I found out that I could mark assignments as not contributing to the final grade, in which case students will be able to see their assignment grade but not a false report of their "current" final grade, so I need to remember to mark all the assignments that way. Also also, please take a moment to consider the epistemological tragedy that is the concept, "current final grade."

Writing
I was surprised in the Fall when one of my more talented HCI teams brought up their project report, highlighted a place where I pointed out grammatical and spelling errors, and asked if they had lost points because of it. There are two things wrong with this question. The first is that it assumes the students had some points to lose in the first place, which is simply not true. I don't take away points from anyone; instead, I award points for demonstrating competence. You cannot take away something that someone doesn't have. The second, more pragmatic problem is that the students also had significant conceptual and descriptive problems in their report, but they seemed more concerned about the spelling and grammatical errors.

Last semester, I included a link to the public domain 1920 version of Strunk's Elements of Style, along with advice to read it. This time, I've made my expectations more explicit. On the evaluation page of the course plan, I have briefly explained the importance of writing and the fact that Elements of Style provides my expected standards. I also explain there that I expect to give feedback on both conceptual problems and spelling or grammar problems, along with a primer about how to interpret that feedback. I thought about making an assignment around Elements of Style, but I decided against it, partially because I did not want to shift my early-semester plans ahead by a day. My professional opinion is that the book should be remedial for anyone who has a high school diploma, but I am also a realist about the variable quality of writing instruction these students may have received.

Software Architecture
It was a little disappointing to see so few teams really engaging with principles of quality software construction last semester. I have written about this before, and the students are aware that the culture of other classes is one that values only software output rather than software quality. I have carved out some design-related topics from the HCI class to make more time to work through examples of refactoring toward better architectures. I'm still working on the exact nature of these assignments, but I have a few notes to draw from. The schedule I have online right now actually goes right up to the point where I want to switch gears from design theory to software architecture practice.

Specifications Grading
After a positive experience in last semester's game programming class, I have converted my HCI class project grading scheme to specifications grading. I laid out my expectations for each level of poor (D), average (C), good (B), and excellent (A) grades. This was an interesting exercise for me, especially around the source code quality issues. Last semester, students could earn credit for following various rules of Clean Code, and a mixed grade simply meant that they got some and not others. Now, I have put all of these rules at the B level, to reflect the fact that "good" software is expected to follow such standards. For the A level, I've included demonstrated mastery of the software architectural issues mentioned above.

I had some fun with the Polymer-powered course site as well. My new custom element for specification grading criteria uses lit-html to express their presentation concisely. It took a bit for me to wrap my head around lit-html, but I think I have a good sense of it now. The other fun new feature I added was the ability to download the specifications as Markdown. The specifications are internally represented in a Javascript object, and that object is transformed into the Web view. Of course, with this model-view separation, it's reasonable to provide other views as well, such as Markdown. I used this StackOverflow answer to write some functions that convert the JSON object to downloadable Markdown. I hope that this makes it easy for students to write their self-evaluations in Markdown. I did not use checkboxes on the Web view, as I did for game programming last semester, because they don't copy and paste well. I hope that having the Markdown version available removes the need for students to manually copy each criterion into their self-evaluation.
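
To illustrate the model-view idea, here is a minimal sketch of a Markdown view, assuming a hypothetical shape for the specifications object; this is not the code from my course site, just an illustration of how a plain JavaScript object can be rendered as Markdown and offered as a browser download.

    // Hypothetical shape for a specifications object; the real site's
    // representation differs in its details.
    const specs = {
      title: 'Final Project',
      levels: [
        { grade: 'C', criteria: ['Builds and runs without errors'] },
        { grade: 'B', criteria: ['Satisfies all C-level criteria', 'Follows the Clean Code rules'] },
      ],
    };

    // Render the object as Markdown, one checkbox per criterion.
    function toMarkdown({ title, levels }) {
      const sections = levels.map(({ grade, criteria }) =>
        `## ${grade}\n${criteria.map((c) => `- [ ] ${c}`).join('\n')}`);
      return `# ${title}\n\n${sections.join('\n\n')}`;
    }

    // Offer the Markdown as a file download in the browser.
    function download(filename, text) {
      const blob = new Blob([text], { type: 'text/markdown' });
      const url = URL.createObjectURL(blob);
      const anchor = document.createElement('a');
      anchor.href = url;
      anchor.download = filename;
      document.body.appendChild(anchor);
      anchor.click();
      anchor.remove();
      URL.revokeObjectURL(url);
    }

    download('final-project.md', toMarkdown(specs));

A pleasant property of the checkbox syntax is that "- [ ]" survives copy and paste, which is exactly what the Web view's checkboxes lacked.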

More Time on the Final Project
In addition to switching the evaluation scheme, I am giving more time to the final project. I decided to keep the short project as a warm-up, since it also provides a safe-fail environment as students pull together ideas such as personas, journey maps, user stories, and software quality into a coherent process. Some of my dates for the final project aren't quite sorted out yet, as I'm debating whether to take a purely agile approach or a milestone-driven one. The advantage of the latter is that I can specify exactly which design artifacts I want to see at each stage, and the level of fidelity I expect. However, I expect that I will follow the more iterative and incremental approach, but I've put off the final decision until the semester gets underway and I can get to know these students a bit.

We are continuing our relationship with the David Owsley Museum of Art, who have been a great partner. I look forward to working with them and seeing what my students can develop during the semester.

Friday, December 21, 2018

After the Evals: Reflecting on Fall 2018 Serious Game Design Colloquium

I wasn't going to write up this post this morning—plenty of other writing and course planning activities to do. However, the Fall 2018 student teaching evaluations were just published, and I was kind of blindsided by the feedback from my Honors Colloquium on Serious Game Design. I'm engaged in some discussions on the Facebook about it right now, and I am grateful to those who are talking through some of the issues with me.

I want to give a high-level view of the course first. The students pursued a great variety of projects, and my colleague at Minnetrista and I both agreed that the overall quality of projects improved over last year's. I believe this is in large part because we already had the experience of last year. As I wrote about in September, I think having Fairy Trails as a lens helped the students see more opportunities than when we were greenfielding in 2017.

This was the first time I had Architecture majors in the class, due to a change in the time I taught it. I told many people throughout the semester how one of the great blessings here was that they were used to the idea of giving feedback. Some classes struggle to get the first idea out when giving feedback on students' work, but this is just part of the process for CAP majors. They were almost always the first to give feedback, which got the ball rolling for the rest. By and large, they also took peer feedback well, which helped create a good environment.

One thing I was aware of during the semester was how few questions I got from this class. During the semester, only one student came to my office unsolicited to seek my advice on their classwork. There was one student with whom I had email exchanges about his project as he had a hard time getting his feet under him, but we sorted that out and his project turned out quite nicely. Aside from that, it was quiet. The problem is, quiet can signify either diligent, focused work or gross misunderstanding of expectations.

All that said, this group of students gave me the worst teaching evaluations I've ever had--by far. It's patently clear in reading them that a group of students regularly conversed about me but never spoke with me about the course. Their feedback all covers the same ground, which also makes it clear that this group was my Architecture majors, or at least a vocal minority of them. It seems they harbored grudges and misconceptions all semester. Most of these issues could have been addressed by reviewing the course plan or, without a doubt, by talking with me.

In the Facebook discussion, a colleague posted this excellent quotation from Eckhart Tolle: "When you complain, you make yourself a victim. Leave the situation, change the situation, or accept it. All else is madness." That's a nice sentiment, and I certainly don't want to use my blog as a place to complain. (Well, maybe just a little bit about how students don't read course plans and don't take notes.) Moving forward, I want to point out a few areas of friction that these students brought up, what I think actually happened, and what I can do about it in the future.

Accounting for Grades
The students' comments made it clear that, despite the course plan clearly laying out the evaluation scheme, many students were relying on Canvas to report their grades to them. I told them in the first week not to do this, but that didn't stop them. (Hm, not reading, not taking notes.) Unfortunately, they also never talked to me during the semester in a way that would have surfaced this error; that is, I had no evidence until now that they were internalizing their grades incorrectly.

Canvas' grading system is rudimentary at best. In fact, it's naive to the point of being dangerous: I think it's one of those design decisions that pushes faculty toward bad teaching practice, but I digress. What I need to do is remember to mark all assignments as not counting toward a final course grade. I just verified, and if I did this consistently, students would see their grades as 0 out of 0 points, with "N/A" in the letter grade column. That sounds perfect. It would serve as a reminder to the students not to trust the automated system.

Consulting Grades
In the course plan, students could earn a few points for consulting with me or another game designer on their final project; it says this has to be done "during the production period." The schedule for the final project included two rounds of pitches, five status reports, a practice final presentation, and an actual final presentation to the community partner. To me, it was obvious that "the production period" meant the time from the pitches to the end of Status Report 5. After that, production should be done, and we are giving presentations.

It seems that students did not see it that way. I received four or so requests from students in the week leading up to the practice final presentations. I responded to these emails that I would be happy to meet with them, but I also pointed out that the production period had passed. Three of them were then no longer interested in meeting; one still met with me, and we had a productive discussion. This is important: the students contacted me not to get my feedback on their projects, but only to get points. The points were, of course, supposed to incentivize them to get feedback, but it seems the points became the end in themselves.

In week 2 or 3 or so of production, I reminded them in class that they could earn Consulting Points. It was not like I wanted to trick them out of it. I did note, though, that as I made that reminder, no one wrote it down. (Hm, not reading, not taking notes.)

What I can do in the future is to lay out the calendar more clearly with named periods. This will not require them to actually think about the process, but instead to match keywords. I don't mean that in a snarky way: thinking is hard, pattern-matching is what our brains do automatically. It's an easy way to reduce friction.

Also, a few students either did not understand or completely dismissed the notion of "practice final presentations." I figured the language was clear, but a few students had not actually finished their work by this date. I can make it more clear in the future that "practice final presentation" means that you should actually have your work done and be focused on practicing your presentation. How could it mean anything else? The devil's advocate position is, I suppose, that students don't understand that you play like you practice so you should practice like you play. That is, they see "practice" and think "not real" instead of "preparation for excellence."

CAP Trip
It seems that the students in the College of Architecture and Planning have a week during the semester when they go on a field study. I gather that, during this time, no CAP faculty have any expectations that the students will do any work for their other classes. That's fine for CAP, but of course, my class doesn't stop because a subset of students is going on a trip. Indeed, I think the students don't recognize that I can hear them, because in the weeks leading up to the trip, I heard one tell another what a great vacation this trip is: walk around a historic city until lunchtime with your class, and then have the rest of the day to yourself.

When the students gave me their travel forms, I filed them away and told them it would not be a problem. These are university forms that say quite clearly that the students recognize they are not excused from class responsibilities. All my assignment deadlines were posted from the first day of the semester, and I figured the students would either travel with their laptops to get their other classes' work done or get everything done before they left. Leaving this assumption unstated was a mistake, I gather, since the students interpreted it differently in the course evaluations. In their minds, they gave me the forms, and I did not tell them what to do until just before they left, when they asked what they should do. Again, I thought it would be obvious: they should do what everyone else is doing. After some emails back and forth, in which one student insisted that they are forbidden from bringing expensive items like laptops on their trips, I extended the deadline for these students. And they complained about this in the student teaching evaluations. It still boggles my mind: I made an assumption, it turned out not to match theirs, and I gave them an extension. And I'm the bad guy... because I expected them to do work? Because they had to do something that required being online? I really don't know. I wonder about other faculty who regularly have CAP students in class: do they actually waive requirements or something, the way that the other CAP faculty seem to?

This is one where it's not clear to me what I could do differently in the future, except perhaps to go back to the time I normally taught the class, which prevented CAP students from enrolling.

Donuts and, more generally, Food (and, more generally yet, Culture)
There was one day that I was late to class. On the next class meeting, I brought donuts by way of apology. No one mentioned that in the course evaluations.

There was a girl in the class who, every morning, brought her breakfast into the classroom and ate it before we began. This would make the whole room smell like pancakes and syrup, which I found annoying. I asked her if she would please eat her breakfast elsewhere so as not to make the room smell like food. She decided to eat her breakfast sitting on the floor of the hallway. She could have eaten it wherever she bought it, but she chose the floor outside the room. She never complained about it. In the course evaluations, a few students pointed to this as my showing favoritism, that I would not let this poor, hungry student who had been in classes since 8AM eat her breakfast in the classroom. They characterized my not wanting the room to smell like food as flippant. Now, I didn't point out to them at the time that there is actually a university rule against anyone having food in the classroom. I thought about bringing this up the day I asked the student not to eat in the classroom, but I didn't want to be "that guy" who throws the rulebook, when instead I figured I could just appeal to a sense of shared space and community. Well, that clearly backfired.

What can I do differently? Again, it's not clear. The students' interpretation of my decision as bias or callousness happened entirely in discussions where I was not involved. They were convinced they understood my rationale, and so there was no reason for them to ask about or question it. It's the dark side of human tribalism: I was the enemy, and they were the valiant underdog heroes. Indeed, this fits exactly into what Lukianoff and Haidt describe in The Coddling of the American Mind as one of the great untruths we are teaching students: that the world is made up of good guys and bad guys—homogeneous tribal thinking.

I had not thought of that before, but this also leans into what I thought was an even better treatment of these themes by Campbell and Manning in The Rise of Victimhood Culture. The students did not engage with me on any of the topics they found challenging, even those that I chose as intentionally provocative, such as Harry Potter, to which people have a kind of religious devotion. Instead of engaging in dialectic with me, they wrote lengthy arguments to administrators in course evaluations, wrote about how I hurt their feelings, and threatened to write to the dean. That's exactly in keeping with Campbell and Manning's description of victimhood culture: easily offended and seeking justice by appeal to authority. Yikes. As I dig into why I am so upset by these evaluations, perhaps I have found my answer here.

Biased Participation Grading
Another common theme in the negative evaluations was that I was biased in my grading, that I would grade people I liked better than people I didn't. Again, I struggle to understand how they saw it this way. The way that participation grades worked is that students could earn up to three points in a day for participation. If someone said anything on topic, I wrote their name down in my notes, and they got three points. If they didn't, but they showed up, they would get one of the three points, in keeping with my usual triage grading scheme—3 meaning "essentially correct" and 1 meaning "essentially incorrect". Whenever I gave less than full credit, I included a note along the lines of this: "I have no record of your participating in class today." Frequently, I would add, "My records aren't always perfect, though, so let me know if you think I missed something."

As I said, I struggle to understand their comments here. They claimed that I would give students I "liked" more praise in class for their comments and that I would push back on those I didn't like. To some extent, that may be true, since I tend to like students who are prepared and give thoughtful feedback. However, it had no impact on the grading. I do not know where the miscommunication here was.

I will note that there was no time during the semester when any student contacted me and appealed their participation grade. No student said, "Actually, I mentioned in Zach's presentation that he should have less randomness." In truth, I would have believed anything they sent me, but they sent me nothing. How could they, when they took no notes? (Hm, not reading, not taking notes.) Even when I reminded them that the final exam would include questions about each others' projects, no students took notes from the others' presentations. It was rare that a student would even take notes of feedback during their own presentations.

What's to be done? One thing that I am very hesitant to do, but student behavior seems to be driving me toward, is grading students' in-class notes. I believe they have no idea what the value of notes is, and further, I think most of them have no idea how to effectively study. I have a blog post in draft about my reading of Make It Stick earlier this semester, and that (combined with Dorothy Sayers' "The Lost Tools of Learning") really got my wheels turning about how little students are prepared for lifetime learning. Where should the buck stop?

Book?
Another bit of the student conspiracy was a set of complaints that I had based my course on the Game Design Concepts course taught by my friend Ian Schreiber. I told them at the start of the semester that his online materials are as good as or better than any books I have read on game design, so we would use them as the basis for our readings. The cabal of angry students turned this into complaints that all I had done was use his material. They did not mention the additional readings, nor the fact that I had saved them a few bucks by using a free online text. I chalk this one up to another complete disconnect from reality: in their echo chamber of complaint, they had not realized that basing a course on someone else's work is exactly what the whole textbook industry is built on. It would be interesting if we could all only teach in areas where we had written the textbook, but that clearly doesn't scale, not like amazing free online content prepared by well-respected instructors.

Project Grading
The final complaint that I want to address was one that I graded the projects in a biased manner. This is really fascinating because I did not grade the projects as such at all. The course plan is very clear about this: I only graded their process, not their products. It is a point I emphasize in a few places in the course plan and regularly in my presentations. For each status report, students had to update a design log and address four questions: What was the primary design goal you pursued since the last status report? How did you prototype your ideas? How did you evaluate your prototype? What conclusions did you draw, and what do they imply for your next steps?

The design log was a new step for me. I added it as a way to have more evidence about whether students were actually following the process. A few students hit some rough spots in understanding the role of the design log in the process, and I could have added more guidance here. That said, the logs also worked, in that they showed me who was not really doing anything between status reports. I had one student essentially lie to me about her progress, since she did not know that I could check the document history in Google Docs. She was smart, though: she was careful not to explicitly lie in her email to me, but instead to imply something that was untrue. I called her out on this in my feedback, and she never responded to it, electronically or in person. I wonder, in retrospect, if I should have pursued an academic dishonesty case; maybe she would have gone away of her own accord. As you would expect, a liar is not going to be a great contributor to the course atmosphere and goals.

Talking with my friends on The Facebook, I got to wondering if the CAP majors in particular struggled with the tight iterations required of game design. They study design, ostensibly, and they believe they are good at it. However, their loops are very long, and their feedback does not come from end-user evaluation but from discussions around models. In game design, as in software design, we work in tight loops. It's possible that, because they believed they already understood design thinking and iteration, they were incapable of seeing that they were not following the processes I required, even though we talked about the importance of iteration. If not incapable, then perhaps at least unprepared. I don't have direct evidence of this; it's merely a hypothesis. It's hard to believe that I could have been any clearer in my course plan, in my grading scheme, and in my feedback that I expected short iterations complete with evaluations and reflection; yet this set of students complained about my grading their "projects" rather than helping shape their process.

Fun?
This one goes a bit sideways from the others, because it was some honest feedback from a student outside the Angry Gang. This student commented that they wished we had spent more time on how to make fun games. I wanted to add this here, because I've gotten this kind of feedback in my game design courses before, and it's really fascinating. It's like the student is just laying down their cards and saying, "I believe you can just tell us how to make something fun, and you didn't. I really did not learn anything this semester at all." This is another area where I'm not sure how to make it more explicit. We talk about this a lot, and the course plan, as mentioned above, is all about the process of game design for exactly these reasons. That said, I understand how someone who only goes through the motions of the course could come out without understanding this. That is, if you see a course as just being a series of work items and don't actually think about them, then you could come out saying that I didn't teach how to make something fun; to realize that this is impossible requires thinking, listening, and perhaps even reading and taking notes. (Hm.)

Conclusions
I love teaching these courses on game design. Reflecting on them, they do actually produce some of the worst course evaluations I receive, and I think there are a few reasons for that. One is that many colloquia are blow-off courses. Many alumni have told me as much: students expect colloquia to require minimal effort and a bit of BS, and then you walk away with an A. Mine is designed to be a rigorous challenge, and this perhaps sets us up for conflict. Another is that Honors students generally, and some subcultures like CAP in particular, are fed a line by the university that they are elite and special. My class is something none of them have ever done before, and so they make mistakes, and they harbor misconceptions... and I tell them. Cognitive dissonance kicks in as follows: "If I am special and smart and successful, but this guy is saying that I am wrong, then he clearly must be wrong." Add homogeneous tribal cultures, and you have a recipe for disaster. Since these colloquia are open to all, I had never before had a sizeable minority of the class all from the same major; looking back on this, a lot of factors point to an inevitable conclusion that the students would circle the wagons rather than engage.

I think I have done a decent job in this essay of identifying where I made mistakes, where I might have known better, and what I can do differently. I'm still trying to sort out my own reaction to their feedback. There is certainly an element of pride, but there's also a sense of treachery, that these students had harbored these grudges and misconceptions all semester, all while smiling at me and responding politely to my questions. Writing this helped me identify elements of victimhood culture, and this helped in two ways: I can step back from the phenomena and understand them in an empirical, research-oriented way; and I can understand why it would upset me so deeply, as reading those books I referenced also did. They lead to a sort of deep, existential dread around education and society.

Incidentally, the evaluations on my other courses were par for the course, echoing some of the thoughts I've shared about strengths and weaknesses already. The one pleasant surprise there was that one student commented positively about my framing of the required statement on diversity (discussed here), the only comment that I received on that all semester. I'll keep it in there.

Thoughts? Comments? Suggestions? Criticisms? Perspectives? Share them below. No need to let Zuckerberg hold all our conversations. Thanks for reading!

Wednesday, December 19, 2018

Reflecting on the Fall 2018 CS315 Game Programming Course

Before the Fall semester fades away into memory, I wanted to capture a few of my thoughts about its CS315 Game Programming course. Interested readers can examine the course plan as well as my summertime blog post describing the plan's construction. What I will do here is start by addressing the major points that I brought up in the other post.

More Small, Structured Projects
This went well, having four two-week projects that increased in technical complexity. I prevented people from choosing the same partners for the first three, which I think was also successful in both building community and sharing knowledge. Curiously, it was not entirely successful in helping some students avoid unproductive team members in the final project, but at that point, the onus was on them, and so it was their learning opportunity.

Some students wanted even more fine-grained, closed-form assignments. An example might be to take a skeletal project and add a new actor or component to produce a desired behavior. This suggestion seemed to resonate with students who had middling performance, which makes it worth considering. Those students who are still on the road to learning how to learn may need more fine-grained scaffolding. On the other hand, I was always available to help them move forward on the projects, so it's not completely obvious that they weren't simply being irresponsible with their time.

Specifications Grading
This was, by and large, a great success. The students knew what they had to do to earn the grade they wanted, and there were very few questions about ambiguous criteria. I did end up giving each student a "Quick Save" which allowed them to fix one criterion and resubmit, addressing the concerns I expressed in my September post. I regret that I didn't think to call them "Save Points", since that's what they do.

A good addition to the specifications grading approach was to require a project report in which students told me what grade they earned. This made my evaluation into a verification rather than a bug-hunt. I like how this forced the students to go back and look at what they had done and follow a checklist of critical ideas. Some students had what I consider unreasonable trouble with Markdown, but at least they learned a new skill along the way. I am thinking of incorporating specifications grading into my HCI class in the Spring, and I will definitely take this approach to fostering students' reflections on their work.

During one of our class meetings after the first iteration of the final project, I asked the students what they thought about the specifications of the final project, and in particular whether they would make any additions to the A- or B-level criteria. The one A-level addition that came up was profiling—a great idea that I had not considered at all. One of the teams used a Post-Processing Volume, and we agreed that this could be a good B-level criterion. A more insightful and critical student pointed out that it would be useful to distinguish between "bare use" and "proficient use." A good example here is Behavior Trees: the grade criterion was simply to "use Behavior Trees," but this could be satisfied with a one-node tree that just awkwardly recreates what could be done with a simple call to UE4's AI Move To. This is an area that I can try to clean up in the future.

Another area that was surprising to me was how many students had trouble with the criterion to document the licenses under which they used third-party assets. I know this is something that I bring up when I teach CS222, although few people there use third-party assets. In this class, everybody used some textures, models, sound effects, or music that was created by somebody else. I'm not sure if I need to have more documentation about this, plan an early-semester lecture about it, or give more examples, such as the one that follows. Very few students did it right the first time, and many more made mistakes more than once, which is a bit mind-boggling to me. That is to say, I do not understand which part of the process they do not understand.
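
To make concrete the kind of documentation I expect, here is a small, invented attribution entry; the asset, author, and URL are hypothetical, included only to show the shape of a complete record.

    Third-Party Assets
    - "Forest Footsteps" sound pack by Jane Doe
      Source: https://example.com/forest-footsteps
      License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
      Modifications: trimmed and converted to WAV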

Perforce Helix
A lot of students struggled with Perforce Helix, as I expected. However, it did serve its pedagogic and practical purposes: the students got to see a different model of version control from GitHub, and it ensured that all the projects were in one place. I have little sympathy for the one or two students who still could not use P4V at the end of the semester: at some point, they must have given up trying, because everyone else got used to it. I need to review my video tutorials on this topic before next Fall, because I think I may be able to tighten them or get more targeted at the kinds of problems students had.

Simple Achievements
I was surprised that so few students pursued the achievements I set before them. I really thought that there was low-hanging fruit here for course credit: learn basically anything about UE4, talk about it for five minutes in class, get credit. There certainly were interesting things learned that I had not planned on, including post-processing volumes and custom animations via bone rotation in Blend Spaces. Perhaps this extra step was not necessary, since whenever teams demonstrated something interesting in their work, their classmates would ask, "How did you do that?"

I don't have much to say about the Decorum and Diversity portions of my original planning essay, so I will move on to...

Conclusions
I think it was a great semester. I was really impressed with the students' projects. I expect to teach the class again next Fall, and I will make some minor modifications. If we are able to replace the lab machines with UE4-capable rigs, that would be even better, since then we could do more "lab" exercises. I also learned a lot this semester, manifest in my new video tutorials and my Heroic Uncertainty National Game Design Month project. My department has been on this strange pattern of offering HCI every semester (including summers!) when it used to be a once-a-year course, and as much as I like that course, this one is much more where my mind is these days. Perhaps I need to talk to the department about trying a Spring section of this course next year.

Tuesday, December 18, 2018

Two Paths to Systematizing Introductory Game Design Coursework

I have a decision to make, and there are two paths I can see before me. Seems like a good opportunity for a short blog post to lay out some arguments for each.

I have been teaching game design for several years thanks to the generosity of the university's Immersive Learning program. They have provided the funding that has allowed me to teach an honors colloquium on Game Design through Ball State's Honors College. This gets me a somewhat captive audience, since each student has to complete six credit-hours of colloquia to graduate, and the students tend to be high achievers. I wouldn't mind continuing this line of work indefinitely, but it relies upon both internal funding and political goodwill from many different areas. Briefly, it is a fragile plan, and I've been thinking especially the last two years about how to systematize this work.

The university recently announced its call for 2019-2020 Creative Teaching Proposals, and this got my wheels turning about whether this program would be appropriate to bootstrap my revisions. The proposals are not due until the end of February, but the break between semesters is the best time for me to get my thoughts and writings together.

The first path I am considering is to take my Honors Colloquium on Serious Game Design and make it into a low-level elective for the Computer Science department. Specifically, I am thinking of making CS120 (Computer Science 1) its only prerequisite, which would mean that each student would have some expected level of computing literacy. This would enable interesting perspectives on game design from a systems approach, including potentially formal modeling of game systems (e.g. state machines or Machinations), introductions to end-user games programming environments such as Construct, and important metaphorical understanding of rulebooks as programs for human execution. Also, I suspect such a course could drive increased enrollments in CS120, which I think is good for everyone: more people learning fundamentals of Computer Science can only be good, whether or not they choose to continue into a degree program in Computer Science. One of the changes that would be required in transitioning my existing course is that I would have to go from an Honors College-imposed cap of 15 students to a more likely cap of 30-ish. This is actually a significant change in the course, including a switch away from individual projects toward team projects.
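
As a small illustration of the "rulebooks as programs" idea, here is a toy finite state machine for an invented card game's turn structure, written in JavaScript; the states and events are hypothetical, chosen only to show the kind of formalism a CS120 graduate could apply to a rulebook.

    // A turn structure modeled as a finite state machine: each state maps
    // the events legal in that state to the state that follows.
    const turnMachine = {
      initial: 'draw',
      states: {
        draw: { CARD_DRAWN: 'play' },
        play: { CARD_PLAYED: 'discard', PASS: 'discard' },
        discard: { HAND_LEGAL: 'endOfTurn' },
        endOfTurn: { NEXT_PLAYER: 'draw', LAST_ROUND: 'gameOver' },
        gameOver: {},
      },
    };

    // Given a state and an event, return the next state, or throw if the
    // rulebook forbids that event right now.
    function transition(machine, state, event) {
      const next = machine.states[state][event];
      if (next === undefined) {
        throw new Error(`Illegal move: ${event} during ${state}`);
      }
      return next;
    }

    let state = turnMachine.initial;
    state = transition(turnMachine, state, 'CARD_DRAWN');  // 'play'
    state = transition(turnMachine, state, 'CARD_PLAYED'); // 'discard'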

The disadvantage of this approach is that, if the course is as popular as I imagine it would be, I would not be able to teach enough sections of it. I am the only member of my department who is engaged in games-related research, and I do not expect other tenure-track faculty to be interested in picking this up. We do have a vibrant collection of adjunct faculty who I think could do a good job with this course, or even local community members who might be interested in adjuncting such a course, but they would need significant support. This is where the Creative Teaching Grant proposal comes in: I could propose to create a teacher's guide to go with the course, a primer for interested but untrained people to be able to effectively teach introductory game design.

The alternative path is inspired by some research and conversations I've done around how other people in my shoes have solved this problem: Computer Science faculty with an interest in both game design and game programming, at a university without an existing program in game design. I've seen some people combine these two together into a sequence specifically for CS majors. That is, rather than have two separate courses on game design and game programming (which would be part of the path described above), they roll the two together into a two-course sequence on Game Design and Development. There's a real elegance to this, as concepts of design can be discussed alongside the technical skills required to build such systems. It also addresses a problem that I have had with the Game Programming class, which is that some of the student-designed projects are understandably terrible since we don't spend any class time on game design. The clear disadvantage of this approach is that it restricts participation to those students who have already completed five CS courses as prerequisites. I do believe that a Computer Scientist can design good games, but I also believe that a multidisciplinary approach will almost always yield better games.

Right now, I am leaning heavily toward the first path. Even if the course proves very popular but we only offer one section a year taught by me, that is still a systematization of what is now ad hoc. I suspect that CS majors and minors who plan to take Game Programming would take this hypothetical Introduction to Game Design course anyway, so that could still yield improvements in the Game Programming teams' projects.

I think either of these paths could help lead toward another direction I have been considering for my work. For the past several years, again thanks to the Immersive Learning Program, I have been able to engage in some high-impact projects with undergraduate student teams. I've written extensively about these here on the blog, so I won't say much more about them now. However, I have been thinking about the value of having more, lower-stakes projects.

This was inspired in part by this year's recruiting for the 2019 Spring Studio team. This was the first time that I advertised more broadly across campus in addition to talking to my colleagues in complementary departments. It wasn't very aggressive recruiting, just a few campus-wide emails sent by the Immersive Learning office. However, it resulted in many more applicants than I have had in the past: 38 applications for roughly 11 slots by the first deadline. From these applications, I had to craft what I hope is the team with the best chance of success. That means there were at least 27 other students who were excited and eager to join the group but won't have the opportunity. One way to provide opportunities for these students is to offer more low-stakes projects, where several multidisciplinary student teams work on passion projects rather than a smaller, closely-mentored team working on a community-engaged project. In fact, we already have a curricular structure in place for this in CS490: Software Production Studio. This is the course that I created as a framework for the high-stakes projects, but I intentionally kept it broadly defined to allow for other project structures as well. In a sense, all that would be needed is the right kind of recruiting and the department's blessing to offer a section of CS490 in the coming academic years in which student teams could form and pursue their interests. This fits with one of my general philosophies: we should incentivize, legitimize, and give credit for the immense amount of work that some students are willing to put into creating original games.

Thanks for reading. As always, let me know in the comments if you have feedback on any of these ideas.

Tuesday, December 11, 2018

Painting Runebound: Unbreakable Bonds (with Special Guest)

Runebound remains one of the most-played games in my library, although it had sat unplayed for almost a year. A few weeks ago, I decided to see if my second son would enjoy it. Turns out, he loves it, so we've played four or five times together now—the two boys and I. The Unbreakable Bonds expansion had been on my radar for some time because it added a cooperative mode, but it came out around the time our interest was waning. With the resurgence of Runebound here, I picked up the expansion and set to painting the figures. We have played with some of the extras it adds to the base game, but we have not yet tried the cooperative mode, preferring instead to let my second son experience the competitive scenarios we already had.

There are only two heroes in the expansion, and first up is Tatianna.


Her character is distinguished by the aqua sashes and a variety of tattoos. The tattoos on the card art were too finely detailed for me to reproduce, but I think I captured the circle and dot motif. The bands around her legs are not shown in the illustration, which stops at the knees, so I decided to bring in some more of the bright red as well as the blue-purple of the left shoulder. Although it's a bit loud for leg decorations compared to the plain black bands on her arms, I think it provides a good triangular arrangement of both colors.


The other hero is Eliam, whose hair is frankly ridiculous. I am still not sure if the shade in the hair is too dark, or if it accurately captures the shadows in his pale locks. This guy had some nasty mold lines cutting across his arms and clothes, but I was able to clean most of them up. I think I'm most happy with the blue-grey outfit. This is not the most exciting part of the figure, but I think the highlighting sells the idea that it's a lit dark color rather than a shaded light color. The blades are kind of uninteresting, but they are this way in the card art as well.

I tried something different on these bases. I had picked up Vallejo Brown Earth texture paint after seeing Sorastro use it in a few videos, particularly his Star Wars Legion series. The first time I put this on a base—honestly, I cannot remember when that was—I was kind of disappointed with the tone of it, which appeared much more red than I had hoped. It was Black Magic Craft's quick review of the product line that got me thinking I could just apply it before priming and then paint it whatever color I wanted. I thought I had taken a photo of these two after priming but before painting, but I must have forgotten. The texture paste leveled out more than I expected, leaving just a bare layer of mostly fine grit. It was quick and easy, but I think the effect of the superglue technique I used with my Thunderstone Quest miniatures is more appealing. However, I knew I wanted to apply a lot of flock to these figures, so it served more as an undercoat of texture anyway. For the record, I added chunks of cork to Eliam's base as rocks, wet blending a few different colors to get a reddish-brown tone, and then flocked with pale green and, primarily, rooibos tea. Tatianna has a mix of green flocks and green and black tea, but as her character abilities clearly indicate a comfort with the wilderness, I went heavy on the static grass for her.

And now, the Special Guest. Last year, my son received the Gilded Blade expansion, which includes the hero Red Scorpion. He painted her many months ago, but as I was painting Eliam and Tatianna, he decided to touch up the paint job. Here she is, painted by my 11-year-old son:

Red Scorpion is, of course, red, and red is a tricky one to paint. I think he did a nice job with the highlights on both the sweeping red cloak and the hair. The facial highlights are a bit stark, but he'll get better with time. Also, he's using cheap craft paints, which I find to be very difficult to work with for fine detail and thin layers. I may have to graduate him to better paints and brushes. It's hard to believe it was over three years ago that he, my wife, and I painted minis for the Pathfinder Adventure Card Game.

Interested readers can find my other painted Runebound miniatures here, here, and here.

Saturday, December 8, 2018

Reflecting with the Fall 2018 Human-Computer Interaction Class

The truth is that I had been a bit down on my HCI class. I set up what I thought would be a wonderfully inspirational collaboration with the David Owsley Museum of Art, and I gave the students the challenge of prototyping real solutions to some of DOMA's problems. As with my Spring 2018 HCI class, I wanted these solutions to be firmly grounded both in human-centered research methods and in the general design theories we studied in the first half of the semester by reading Don Norman's The Design of Everyday Things. However, as we moved through the stages of the project, I was not seeing teams do either of these things. In the first iteration reports that they submitted before Thanksgiving, it was obvious that their designs were not really rooted in their research, nor were their designs intentionally applying the principles we had discussed. This left a heaviness in my heart, both because I doubted the efficacy of the class and because we were going to be showing our results to DOMA at the end of the semester.

Our primary contact at DOMA was their Director of Education, Tania Said, who has been a gracious and kind partner in this endeavor. Due to the schedule of docent training, we scheduled our final presentations for Tuesday of last week, which was the penultimate day of class. I was impressed by how well my students presented their work and the effort they had put into revisions in the 1.5 weeks since Thanksgiving. It helped that Ms. Said has a grace and wisdom that made her questions uplifting to the students: her questions were fair and honest, yet always supportive and encouraging.

I think we would all call this meeting a success, yet there was one more meeting left in the semester—Thursday, December 6, which was also the deadline for their projects. I had originally planned to have them present their final projects to each other, but this clearly would have been redundant after the meeting on Tuesday. I decided that my goal for the meeting would be to wrap up some of the loose ends, to share with them some of my joys and frustrations from the semester, and to prime their reflections in preparation for the final exam. To this end, I decided to present them with three questions and an activity. The questions: What went well this semester? What did not go well this semester? What still puzzles you? The activity: write final exam questions.

We pushed all the tables against the walls so we could sit in something like a circle, doing the best the room permitted. I opened the class with a few remarks about my perspective, and then I put them into groups of three or four students to answer the first question. Here is a transcription of the notes from the board after we openly shared our results, slightly expanded from my blackboard shorthand:

  • Project pride & satisfaction
  • Presentation to Tania Said
  • Getting class feedback
    • Using the Click-Share system from our seats promotes discussion 
  • Second round of paper prototyping helped clarify the difference between sketch and prototype
  • Museum visits
  • Connecting with a real place / real partner
  • Two-week warm-up project provided a trial run before working with the real partner
  • Relationship between design principles and Clean Code
  • Reading Design of Everyday Things
It gave me satisfaction to see them bring up many of the items that I, too, thought were our greatest successes of the semester. Of course, we didn't vote on or rank these, so some may only be important to one or two students, but that's fine for our purposes. One of the biggest surprises came from the comment about the second round of prototyping. This was a particular exercise where everyone dropped the ball, so we agreed to reschedule literally the rest of the semester so that they could do the homework exercise again. At least for the student who spoke up, this gave her the opportunity to really understand it, which is infinitely more valuable than just plowing forward. I was also glad to see students reflect positively on the two-week warm-up project rather than frame it as wasted time and effort.

The next question to cover was, "What did not go well this semester?" I had them get up and move around the room to sit with different people for this conversation—something I had warned them I would do in my introductory remarks. For many of the items, I asked a follow-up question: did the conversation include ideas about how to address the things that didn't go well? I used two markers for this: black for the original item, and green for the suggestions. Again, my notes are below, with the green parts offset in square brackets.
  • Project reports
    • Balancing the need for functionality against the practical application of design principles
      • [Emphasize making high-fidelity executable paper prototypes]
    • Justifying designs as meeting the principles rather than intentionally applying them
    • Struggling against "just make it work" vs. a good report
      • The former is enforced by department culture
      • One commented that I am the only faculty who grades on anything besides whether the software "works"
      • [Focus on design (look, feel, operation) over implementation]
      • [More exercises to practice implementation]
    • "Two-class" phenomenon, where first half of the class focuses on DOET and second half on the project, without a clear bridge.
      • [Again, high-fidelity executable paper prototypes might help here]
      • There were more conversations in the first half than the second
      • [Sitting in a circle would have increased conversation quality]
      • [Use an Interactive Learning Space for this class]
  • Test-Driven Development
    • Still confused about types of testing (unit, integration), test-friendly architectures, and how to test first.
  • Time management and problem slicing
    • Other faculty do not emphasize Agile slicing approaches
  • Continuous communication with DOMA
    • We had to work around their schedules, which held us back on our tight timeline
  • The collection database we used was inconvenient
I will quickly add comments about those last two points. Regarding working around their schedules, I explained that this phenomenon was one of the reasons why the granddaddy agile methodology Extreme Programming calls for an on-site representative of the client, so that they are always there to answer questions and work alongside you. Regarding the database we had, I pointed out that our data was a smaller copy of a live system in use by the University Libraries. That is, it showed what a "real" database looks like, rather than a fabricated one for a class example. 

It was delightful for me to hear students sharing so candidly about their struggles during the semester. Again, they identified many of the elements that had been on my mind as well, but it was more powerful to hear it come from them. I was particularly pleased at their comments about how the architecture of the room was an impediment to our activity.

At this point, we only had about twenty minutes left in the meeting. I told them that I was cutting the third question but still wondered if there was anything that still puzzled them. One student posed a really interesting epistemological question: how do we know if Don Norman's rules are the right ones? This got us into a quick conversation about how his rules in DOET are like Robert Martin's rules in Clean Code: they are not universal rules, but they are frameworks that help us consider what the truth might look like. In retrospect, I should have spoken about shuhari here, but I was trying to push ahead and it didn't come to me in time.

I moved on to the last question, again shuffling the groups. Here are their ideas of what might be a good final exam question for the course.
  • What does good design mean to you?
  • What are five of the seven Universal Principles described in DOET?
  • What are the four stages of the Double Diamond?
  • What went well or what did not go well?
  • If you were to take this class again, what would you do differently?
  • Give examples and applications of the seven Universal Principles
  • How would you change the organization of this course?
  • Reflect on something valuable you learned and how you will use it in the future.
  • Reflect on the outcomes of the course.
When the first item was shared, I asked how the student would assess responses. I think they didn't expect this question, but I assured them it was a good question, and that I was trying to understand the nature of it. A few students suggested that it would be enough to see if the hypothetical test-taker could answer it coherently, to show that they had thought about it. 

The second and third questions really surprised me. Those are so unlike the kinds of questions I give. I wish I had noted at the time whether these were students who had ever taken a class with me before, but I didn't. By contrast, that fourth item is exactly the kind of question I like to ask on a final exam, and the student who suggested it pointed to our notes on the other boards and said, basically, "Just ask those. Those are good questions." 

I had been feeling a bit down in the dumps on Thursday morning for a variety of reasons. The way I put it on Facebook was, "I feel emotionally dead inside, and now I have coffee. I can be apathetic FASTER." In the middle of the day, though, my Game Programming students showed their final projects, and most of them did a great job. That lifted my spirits, but after this meaningful, honest conversation with my HCI students, I felt almost giddy. Of course, maybe it was sleep deprivation, but I'll give them the benefit of the doubt. Maybe I let myself grumble too much about those students earlier in the semester when I was feeling uncertain. They were really a good group, maybe an uncommon mix, but they were paying attention and they were learning.

My original plan was to work with DOMA to choose a project they really wanted to see polished up, and then use that in the premiere of my graduate-level course on Software Engineering. As I recently reported, however, those plans have gone up in smoke. Instead, I will be teaching another section of Human-Computer Interaction. This gives me a chance to incorporate some changes right away, although it has to be done in the all-too-short winter break. But first, I had better actually write my final exams. Thanks for reading!

Friday, November 30, 2018

Heroic Uncertainty: A Project for 2018 National Game Design Month #NaGaDeMon

TL;DR: Heroic Uncertainty source and Windows 64-bit builds are on GitHub.

I've had a game design idea tickling my brain for several years. There is an established pattern in fantasy games and stories in which a hero receives quests from the imperiled people of the kingdom, and the hero's mission is to go overcome those obstacles. What if the information about the quest could not be trusted? That is, what if, like a game of Telephone, information decayed by the time it got to the hero, who then had to decide not only whether to face the peril, but also how much of the rumor to believe? Earlier this year, I happened to mention this idea at a meeting of the Ball State University Serious Games Knowledge Group, and everyone around the table thought it was an intriguing concept. This inspired me to keep my eyes open for an opportunity to build a playable prototype.

November 2018 may be better known for National Novel Writing Month (NaNoWriMo), but it is also National Game Design Month (NaGaDeMon). I have been aware of this lesser-known online creative movement for some time, but it was only this past year that I tracked down Nathan Russell's partially active website about it and, from there, joined the Facebook group. This game concept struck me as having too much complexity to finish in a weekend jam, but a month-long effort seemed a better fit. I knew that in November, my three classes would have transitioned or be transitioning into group project mode, which shifts my role toward mentoring rather than direct instruction. I figured this would be the ideal time to see if I could get this idea playable—and I did.

I spent a bit of time early in the month considering whether I could build a paper prototype of the core mechanisms, but my mind kept turning to more interesting computational models. I decided to build a proof-of-concept in Unreal Engine 4.20, furthering my own understanding of that tool and its potential for helping me move from concept to prototype.

A Distraction-Free Title Screen!
Heroic Uncertainty is a game where you control three heroes who protect the kingdom from peril. Every few days, a creature shows up somewhere on the map, which triggers a person from that region to start heading toward the capital in the middle of the kingdom. When the person reaches the capital, they report what they know about the peril, which can include its type, size, location, relative power, and whether it requires magical skills to defeat. Each of these pieces of information has a chance of being incorrect, so the player has to choose how best to deploy the heroes against uncertain enemies. If a peril goes unaddressed for too long, it will defile its region, and the game ends when three regions have been defiled.

For most of development, the game was called Split the Party, playing off the classic bad advice from tabletop roleplaying. I was intrigued by the idea of having multiple heroes rather than one, so that a player has to tactically deploy heroes with different skills to different areas. The three roles became Knight, Bard, and Sage. The only one with a truly unique ability is the Bard, who can determine what part of a rumor is false. I was hoping to add special talents to the other classes as well, but as time ran out, the ability most closely tied to the core mechanism seemed the best one to explore.

Instructions in Thrilling Text!
The more information I added to the reported rumors, the more opportunities there were for information degradation. This is a clear area where more work could be done, should someone wish to continue the project. The current relationships between items are mostly random: for example, a peasant has an equal chance of misreporting a bandit as a demon or as a wolf. I tinkered with similarity graphs, but this quickly got out of scope for getting the game up and running.
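To make that concrete, here is a minimal sketch of the degradation idea in plain C++. The field names, rates, and monster list are invented for this example rather than taken from the actual source.

    #include <cstdlib>
    #include <string>
    #include <vector>

    // What a witness believes about a peril; each field may decay in transit.
    struct Rumor {
        std::string perilType;   // e.g. "Bandit"
        int size;                // number of creatures
        int hexDistance;         // reported distance from the capital
        bool requiresMagic;
    };

    // Corrupt each field independently with some probability.
    Rumor degrade(Rumor rumor, double corruptionChance,
                  const std::vector<std::string>& allPerilTypes) {
        auto roll = [] { return std::rand() / (RAND_MAX + 1.0); };
        if (roll() < corruptionChance)  // uniform substitution: a bandit is as
            rumor.perilType = allPerilTypes[std::rand() % allPerilTypes.size()];
        if (roll() < corruptionChance)  // likely to become a demon as a wolf
            rumor.size += (std::rand() % 5) - 2;
        if (roll() < corruptionChance)
            rumor.hexDistance += (std::rand() % 3) - 1;
        if (roll() < corruptionChance)
            rumor.requiresMagic = !rumor.requiresMagic;
        return rumor;
    }

    int main() {
        std::srand(42);
        Rumor truth{"Bandit", 4, 3, false};
        Rumor heard = degrade(truth, 0.25, {"Bandit", "Demon", "Wolf", "Orc"});
        // 'heard' is what reaches the capital; any field may now be wrong.
        return 0;
    }

A similarity graph would replace the uniform substitution with a weighted choice, so that a wolf gets misreported as a dire wolf far more often than as a demon.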

Do you believe Chauncy the Thatcher? I wouldn't.
I've been showing builds of the game to my Game Programming students. It's funny to me that each time I show it, someone suggests highlighting the areas where perils appear or showing how long they have until they are overrun. The point of the experimental game is, of course, to see what happens if exactly that information is hidden. There are only two places where I bent this rule. When a region is overrun by a peril, it changes to green, no matter where the heroes are. Given full graphic production, I imagine that this would be shown as the region being set ablaze or shrouded in mystical darkness—something visible from far off. The other exception is that the heroes receive word about rumors when a person reaches the capital, whether they are at the capital or not. If the heroes know where each other are on the map, they must have some kind of magical communication network with each other, so why not with the monarch as well? This could be folded into a narrative more convincingly, but I think what I have is good enough for this experiment.

Combat! Perils!
What's the verdict? Honestly, a week ago, I would have just said that it is not fun. As I added more polish and, particularly, more information that could be degraded, I did begin to enjoy my internal testing more. The game has not really been played aside from my mostly-technical testing. However, I have been talking to people about it during the month, and their responses are revealing. One of them responded, "Oh, you're making Fake News: The Game." Hm, yes I suppose there is some sense of art-imitates-life here. Another response was that I was trying to gamify bureaucracy. Yikes, maybe so. All in all, I'm left with an idea that my wife and I discussed many months ago as I was talking about this concept, which is that it might work as a subsystem in a larger game, but that larger game had better be really good to overcome the frustrations of this subsystem. I do not plan on continuing work on Heroic Uncertainty, but I am glad I built it. I can put the idea to rest and have learned a bit in the process.

Let me turn, then, to some of the specific interesting things I learned by building this game. I've talked some about the innovative game systems above, but here I want to focus on some technical issues.

First and foremost, I've never programmed a game on a hex grid before, and I really didn't know how to start. I came across Red Blob Games' amazing series about hexagonal grids, which both helped me understand the mathematics and gave me a basis for building my implementation. If nothing else, make sure you check out their animation from cube coordinates to hexagons.
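As a taste of why cube coordinates are so pleasant, here is the standard distance formula from that guide, sketched in plain C++ rather than copied from my UE4 code:

    #include <cassert>
    #include <cstdlib>

    // Cube coordinates for a hex grid; the invariant is q + r + s == 0.
    struct CubeHex { int q, r, s; };

    // Hex distance is half the Manhattan distance in cube space.
    int hexDistance(const CubeHex& a, const CubeHex& b) {
        return (std::abs(a.q - b.q) + std::abs(a.r - b.r) + std::abs(a.s - b.s)) / 2;
    }

    int main() {
        assert(hexDistance({0, 0, 0}, {2, -1, -1}) == 2);  // two steps away
        return 0;
    }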

The implementation makes use of UE4's Data Tables, which I had never used before. It's a technique I had read about and been curious to try; indeed, I even put it on the list of possible A-grade distinctions for my Game Programming class's final project. I ended up using CSV files to store tables of possible peasant names and their occupations, as well as the statistics of the various monster types. To add more of any of these, one simply has to edit the CSV and reimport the data table. Smooth.
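For anyone who has not used Data Tables: the row type is a USTRUCT deriving from FTableRowBase, and the first column of the CSV supplies the row names. My monster table looked something like the sketch below; the fields here are representative rather than copied from the actual source.

    #pragma once

    #include "CoreMinimal.h"
    #include "Engine/DataTable.h"
    #include "MonsterRow.generated.h"

    // Sketch of a Data Table row type. A matching CSV might read:
    //
    //   Name,Power,bRequiresMagic
    //   Orc,3,false
    //   Demon,5,true
    USTRUCT(BlueprintType)
    struct FMonsterRow : public FTableRowBase
    {
        GENERATED_BODY()

        FMonsterRow() : Power(0), bRequiresMagic(false) {}

        UPROPERTY(EditAnywhere, BlueprintReadOnly)
        int32 Power;

        UPROPERTY(EditAnywhere, BlueprintReadOnly)
        bool bRequiresMagic;
    };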

The 4.20 release of Unreal Engine included support for Rich Text, something that was sorely missing in the past. Indeed, it would have been an excellent addition for Fairy Trails. It takes a few steps to get it up and running, but the documentation is clear.
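The short version, as I remember the 4.20 workflow: define text styles in a Data Table whose rows are of type FRichTextStyleRow, assign that table to a Rich Text Block widget, and then tag substrings with style names. Here is a hypothetical example, where Peril is an invented style name (and note that SetText on URichTextBlock may require a slightly newer engine version; in 4.20 the text can also be set in the Designer):

    #include "Components/RichTextBlock.h"

    // Hypothetical helper: wrap the peril's name in rich-text markup so it
    // renders in the "Peril" style from the widget's style-set Data Table.
    void ShowRumor(URichTextBlock* RumorText, const FString& PerilName)
    {
        const FString Line = FString::Printf(
            TEXT("Chauncy swears he saw a <Peril>%s</> two days east."), *PerilName);
        RumorText->SetText(FText::FromString(Line));
    }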

A majority of the implementation of Heroic Uncertainty is in C++, particularly the formal game systems, with Blueprints being used for mostly visual and interactive elements. Building games in UE4 with a hybrid of C++ and Blueprints reminds me of how I feel about Lisp. Sometimes, I will work in Lisp for a while, and all the pieces make sense, but those insights seem to fall away as time goes on. It's almost like a peasant with a rumor about an invading orc who cannot quite remember the details when he reaches the capital.

In the original implementation of hex selection, I had a custom dynamic material instance that used a pixel offset to depress the clicked hex. This looked good, but it did not allow for multiple actions on each selected hex. I'm quite pleased with both the UI and the implementation of my solution, which is to highlight the available actions on the hexes where they are supported. This is done with a custom blueprint that I called an ActionDecal. It consists of a decal that is displayed on top of the hex, tinted in the color of the current hero. I had never used decals before, and I had some trouble with them until I realized they render in curious ways if rotated away from axis alignment. Each decal also has a cube with the same dimensions; the cube is not rendered, but it is hoverable and clickable. This is how I detect the mouse moving over a decal and then, using timelines, highlight the hovered one or register a selection.
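The project does this in Blueprint, but the idea translates into a rough C++ sketch like the following. The details are reconstructed from the description above, not lifted from the source, and the player controller would also need its bEnableMouseOverEvents and bEnableClickEvents flags set.

    #pragma once

    #include "CoreMinimal.h"
    #include "GameFramework/Actor.h"
    #include "Components/BoxComponent.h"
    #include "Components/DecalComponent.h"
    #include "ActionDecal.generated.h"

    // Sketch of the ActionDecal idea: a decal for the visuals plus an
    // invisible box that actually receives the cursor events.
    UCLASS()
    class AActionDecal : public AActor
    {
        GENERATED_BODY()

    public:
        AActionDecal()
        {
            Box = CreateDefaultSubobject<UBoxComponent>(TEXT("HoverBox"));
            RootComponent = Box;

            Decal = CreateDefaultSubobject<UDecalComponent>(TEXT("Decal"));
            Decal->SetupAttachment(Box);
        }

    protected:
        virtual void BeginPlay() override
        {
            Super::BeginPlay();
            // The box is never rendered; it exists only for hit tests.
            Box->OnBeginCursorOver.AddDynamic(this, &AActionDecal::HandleCursorOver);
            Box->OnClicked.AddDynamic(this, &AActionDecal::HandleClicked);
        }

    private:
        UFUNCTION()
        void HandleCursorOver(UPrimitiveComponent* TouchedComponent)
        {
            // The real project plays a timeline here to highlight the decal.
        }

        UFUNCTION()
        void HandleClicked(UPrimitiveComponent* TouchedComponent, FKey ButtonPressed)
        {
            // Dispatch the selected action for this hex.
        }

        UPROPERTY()
        UDecalComponent* Decal;

        UPROPERTY()
        UBoxComponent* Box;
    };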

You may have noticed that the heroes and rumor markers are simply the cylinders and cones that come packaged with UE4. Did you take a close look at those hexagons, though? I modeled those myself. Yes, that's right, I made my own 3D model and imported it into UE4. Pretty nice. I also explored UV unwrapping and used that to make custom textures for the hexes before I settled on the ActionDecal approach to action selection. This content, like a little experiment with spline-based animation, did not end up in the release, but it was still a fun area for me to learn about.

In Stunning 3D!
I had hoped to build the game for HTML5 and put it online, but there's a defect in UE4 that is preventing that build from working. The closest information I can find is this thread about 4.15. The author reports that the trouble seems to come from UMG, but I have not made time to try detaching different widgets to see if one of them is the issue. I do wonder if the problem is that I am building my own inheritance hierarchy of UMG dialog boxes so that I get consistent behavior from them. I built the Windows 64-bit version and deployed that to GitHub, the only trick being that I had to use a console command within the game mode blueprint to get a window of an appropriate size. Designing the game for multiple window resolutions was definitely out of scope!
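For the record, the Blueprint "Execute Console Command" node has a direct C++ equivalent. The command was something like r.SetRes, though I did not note the exact resolution string, so treat this as a reconstruction:

    #include "Kismet/KismetSystemLibrary.h"

    // Rough equivalent of the Blueprint "Execute Console Command" node.
    // The exact resolution used in the release build may have differed.
    void SetStartupWindowSize(UObject* WorldContext)
    {
        UKismetSystemLibrary::ExecuteConsoleCommand(
            WorldContext, TEXT("r.SetRes 1280x720w"));
    }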

I did not keep track of how many hours went into this project, but I think 60 is probably a good ballpark estimate. I worked on it on several of my non-teaching days, as well as on some nights and weekends. The worst times were when I spent several hours on systems that had to be discarded because they were ill-conceived. Indeed, over consecutive days, I started building similarity graphs of enemy features before really having a place to test them: a bottom-up design failure that ended up throwing away some six hours of programming. However, even here I got a good story to share with my students, as they were at the point of planning their own final projects.

You can download all the UE4 source files or a Windows 64-bit build of the game from GitHub. Feel free to check it out and leave a comment below. Thanks for reading, and keep on making games!