Sunday, May 26, 2019

Building an online randomizer app for Thunderstone Quest, and what was learned along the way

I have enjoyed playing Thunderstone Quest and am glad I painted the minis for it. For some time I have had an itch to make a tool to assist in generating random adventures. There is one out there already, but it has some frustrating defects, such as the selection of heroes not always matching the constraint that there's one per class. I talked to the developer about it and tried sorting through the implementation, but neither of us made any headway.

The other day, I received my To the Barricades expansion, the final component of my Kickstarter pledge. We've only gotten it to the table once, and we enjoyed it despite the inevitable rockiness of first plays. This really got my wheels turning about building a new online randomizer—so I did.

Thunderstone Quest Randomizer:

Source Code:

I worked on this from Thursday night through Sunday morning. It's not going to win any graphic design awards, but it has all the functionality that I wanted from it. The rest of this blog post contains a few things I learned while building it. I will focus on two things: jq and lit-element.

A friend mentioned jq to me several weeks ago as a powerful command-line tool for manipulating JSON. The other randomizer already included a transcription of most of the Thunderstone Quest cards, but it was not in a format that I wanted. This is exactly the kind of problem jq is built for. I don't know exactly what I expected, but it wasn't what I found: jq's system of generators and filters was mind-blowing. It took me a while to wrap my head around it, and even after several hours I feel a bit uncertain of my grasp of it. I ended up crafting a filter like this to do the transformation:

 jq '[to_entries[] | .value | to_entries[] | {"Name": .key} + (.value | del(.Cost,.Types,.Keywords,.Races,.Summary,.Alert,.Light,.Battle,.Spoils,.Special,."After Battle",.Reward,.Entry)) | select(.Category)]' cards.json   

The whole command is wrapped in square brackets because I want the result as an array. The next few filters let me wade through the deeply-nested structure of the original format. Then, I push each object's key into the rest of the object as a new "Name" field, deleting the fields from the original source that I did not need. Finally, I keep only the cards that have a Category specifier, dropping the handful from the original source data that lack one. Writing this command took several hours of pushing and prodding, but I'm glad I got it figured out.
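For readers who don't speak jq, here is roughly the same reshaping expressed in Python. This is only a sketch of the logic, not the tool I used: it assumes the original file nests card objects two levels deep, with card names as the inner keys, and the drop list matches the del(...) call above.

```python
# Fields to discard, mirroring the del(...) list in the jq filter above.
DROP = {"Cost", "Types", "Keywords", "Races", "Summary", "Alert", "Light",
        "Battle", "Spoils", "Special", "After Battle", "Reward", "Entry"}

def flatten(nested):
    """Mimic the jq pipeline: walk two levels of nesting, hoist each inner
    key into a "Name" field, drop unwanted fields, and keep only cards
    that carry a Category (like select(.Category))."""
    result = []
    for group in nested.values():            # to_entries[] | .value
        for name, card in group.items():     # inner to_entries[]
            entry = {"Name": name}
            entry.update({k: v for k, v in card.items() if k not in DROP})
            if entry.get("Category"):        # select(.Category)
                result.append(entry)
    return result
```

Feeding this the parsed contents of cards.json would produce the same flat array the jq pipeline emits.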

Although I had pulled out much of the bad structure of the original format, I still had a lot of redundancy. I decided to grin and bear it until I reached a point where I wanted to add metadata about quests. In my first pass, quests were only represented as fields in objects: each card had a "Quest" key with a value listing the name of its quest. I wanted to pull that out so that I could describe a Quest more systematically, as having a code (such as "Q2") and as being part of a set (such as the Champion-level Kickstarter pledge). To do so, I needed to pull out each of the types of cards into their own lists and put these under the corresponding Quest. Unfortunately, I could not figure out how to automate this process for an arbitrary number of categories, and so I did some nasty copy-pasting of commands, manually entering what I would rather have parameterized. Get ready to cringe, because here it is:

 jq '[group_by(.Quest) [] | {"Quest": .[0].Quest, "Heroes": [.[] | select(.Category=="Heroes") | del(.Quest,.Category)], "Items": [.[] | select(.Category=="Items") | del(.Quest,.Category)], "Spells": [.[] | select(.Category=="Spells") | del(.Quest,.Category)], "Weapons": [.[] | select(.Category=="Weapons") | del(.Quest,.Category)], "Guardians": [.[] | select(.Category=="Guardians") | del(.Quest,.Category)], "Dungeon Rooms": [.[] | select(.Category=="Dungeon Rooms") | del(.Quest,.Category)], "Monsters": [.[] | select(.Category=="Monsters") | del(.Quest,.Category)]}]' cards.json  

If any readers know jq better than I do and see how to parameterize the management of each of those card categories, I'd love to know. Fortunately, this was a one-time transformation: after running it, I manually entered the data for the new Barricades-level quests, so there should be no reason for me to ever run this particular transformation again.
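For comparison, the shape I was after is straightforward to parameterize in a general-purpose language. The following Python sketch (not the jq I used; category names taken from the command above) shows the grouping I wanted without any copy-pasting:

```python
# The card categories that were copy-pasted in the jq command above.
CATEGORIES = ["Heroes", "Items", "Spells", "Weapons",
              "Guardians", "Dungeon Rooms", "Monsters"]

def group_by_quest(cards):
    """Group flat card records by Quest, splitting each quest's cards into
    per-category lists and stripping the now-redundant fields."""
    quests = {}
    for card in cards:
        quest = quests.setdefault(
            card["Quest"],
            {"Quest": card["Quest"], **{c: [] for c in CATEGORIES}})
        trimmed = {k: v for k, v in card.items()
                   if k not in ("Quest", "Category")}
        quest[card["Category"]].append(trimmed)
    return list(quests.values())
```

The parameterization lives entirely in the CATEGORIES list, which is exactly what I could not figure out how to do in jq.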

I have had an itch to learn Flutter, and I was excited to hear that there will be support for compiling Flutter apps for the Web. However, that's still in a technology preview state, so I decided instead to build my randomizer using Polymer. I've been using Polymer for a few years now, and it's a fascinating system, allowing the development of custom components with two-way data binding. At some point in the last year, I came across LitElement and lit-html, which provide very useful and terse expressions for binding values and also for iterating over arrays—syntactically much nicer than Polymer's dom-repeat. I decided I'd spend some more time with lit-element in my Polymer-based solution.

I ran into a problem in writing an element to filter quests, so that users could choose to get cards only from the sets they own. Making the element show all the quests with checkboxes was fine, and I could track their states from within the element, but the values were not reflecting back to the main app that held the filter element. This puzzled me for some time, until I ended up just moving that logic from a nested element into the top-level app: that's not a particularly good design decision, but it is a pragmatic one. At some point later, I was looking for help on lit-element when I came across this excellent post by James Garbutt on the relationship between Polymer 3 and lit-element. This particular portion blindsided me:
Bindings are one-way in Lit, seeing as they are simply passed down in JavaScript expressions. There is no concept of two-way binding or pushing new values upwards in the tree.
I had been conceiving lit-element as just a way to use some cool lit-html features within a conventional Polymer ecosystem. Garbutt goes on to helpfully add, "Instead of two-way bindings, you can now make use of events or a state store." Events are familiar, of course, but a real part of Polymer's appeal to me was that I could get elegant two-way binding instead of tedious event plumbing.

Now, though, I have to show some of my ignorance, because I had to ask, "What is a state store?" With this term in hand, I discovered the work-in-progress PWA Starter Kit, which combines lit-element with Redux. "Redux" has the shape of a buzzword I would have overheard somewhere on the Internet, but I couldn't have told you what it is. I downloaded the PWA Starter Kit and started fiddling around with it, keeping the docs open beside it. This looked really interesting and exciting as a way to start building off of what I've learned using Polymer... but at this point I was hip deep in a project that I wanted to get done before the weekend was over. I put Redux and the PWA Starter Kit out of my mind, armed myself with the knowledge that lit-element is not what I thought it was, and I went back to the randomizer. I did end up keeping the card filter logic in the top-level application element, where it clutters things up but still works.

I jumped into this project with a relatively unstructured goal. I had an intuition of what I wanted, but I didn't do any sketches or write any specifications. I had a few regression defects along the way that made me wonder if I should have used TDD, but I think the whole project was really in the "experimental programming" mode. Making it free and online means that other people can gain some benefit from my work, but if I were to rebuild it from scratch, I would certainly be more careful about it. Indeed, it's tempting to rebuild it using Redux for state management, but this was already a four-day distraction from my main summer side project.

Incidentally, the randomizer itself is a progressive web app published on GitHub Pages. I was able to repurpose the approach I took for Elixer, my scoring application for Call to Adventure, which I don't think I ever wrote about here. That one is also an open source project hosted on GitHub. It took me a lot more time to make that a full-fledged PWA, but the second time around with the Thunderstone Quest randomizer was much faster.

Thursday, May 23, 2019

Importing Blender animations into UE4

Last Fall, I worked out how to create simple animations in Blender and import them into UE4, using separate files for the mesh and the animations. I intended to make a tutorial video about it, in part so that I would remember the steps. Alas, I postponed that video for long enough that I forgot all the tricks, and so this morning, I had to sort it all out again. I'm going to jot my notes here on the blog in case I forget again between now and creating the video.

The steps assume you already have a properly rigged mesh with an animation action created and selected in the dope sheet. Make sure you rename the armature from "Armature" to something else, otherwise the scale of the animations will be wonky, as described in TooManyDemons' answer here.

To export the mesh: from the FBX exporter, choose to export the Armature and the Mesh, but make sure nothing is selected in the Animation tab. Export that to a file named something like model.fbx.

Exporting the animation is a bit trickier. I found good advice here. Make sure the desired animation is the only action shown in the NLA Editor, and push it down onto the stack. From the FBX exporter, select only Armature, and in the Animation tab, select everything except All Actions. Deselect all the options under Armature as well. Export this to something like model_boogie.fbx.

This allows you to import the model and its animations independently within UE4, although they can still be in the same .blend file.
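In principle, the same two exports can be scripted with Blender's Python API, which would make the process repeatable without clicking through the exporter dialogs each time. The following is an untested sketch that only runs inside Blender's bundled interpreter; the parameter names are from memory of the FBX exporter's operator options, so check them against your Blender version:

```python
import bpy  # Only available inside Blender's bundled Python.

# Mesh + armature, no animation (the "model.fbx" step above).
bpy.ops.export_scene.fbx(
    filepath="model.fbx",
    object_types={'ARMATURE', 'MESH'},
    bake_anim=False)

# Armature only, NLA strips but not All Actions (the "model_boogie.fbx" step).
bpy.ops.export_scene.fbx(
    filepath="model_boogie.fbx",
    object_types={'ARMATURE'},
    bake_anim=True,
    bake_anim_use_all_actions=False,
    bake_anim_use_nla_strips=True)
```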

Other notes that I will likely forget:

  • When adding new actions in Blender, tap the 'F' button to save the animation even if it has no users.
  • To delete an action, hold Shift while tapping the 'X'. This marks it with a zero in the popup but doesn't actually remove it unless the file is reopened.
Now I just have to remember next Fall that I wrote this note on my blog...

Saturday, May 4, 2019

Reflecting on the Spring 2019 CS445 Human-Computer Interaction Class

Regular readers may recall that I was given the Spring 2019 HCI course to teach on rather short notice, so I only made a few structural changes between it and the Fall 2018 section. The most relevant to this post are the increased attention to software architecture and the switch to specifications grading. I also gave the teams nominally more time for their final project, but not enough that it was noticeable from my point of view. We retained our collaboration with the David Owsley Museum of Art (DOMA) and the overall theme that student teams would identify and address real problems they face. Yesterday, I shared my sending-forth message to the students, and today, I would like to share my reflection on the semester's experience. Feel free to reference the course page, which provides the policies, procedures, assignments, and assessments for the semester.

I used a similar approach to specifications grading as I did in the Fall 2018 Game Programming course, in which there were discrete criteria for each level of grade. I added a separate category of criteria for the project reports, which were designed to provide the process documentation that corresponded to the technical artifacts produced. As before, students had to submit a self-assessment along with their source code and report, the self-assessment consisting of a checklist of criteria that were met. Unlike the game programming class, where there was rarely disagreement between the students and me about whether a criterion was met, there was a lot of friction this semester. This was especially the case on the final project's two iterations. As I wrote to several students in my formal feedback, I have serious doubts that many of the teams honestly conducted a self-evaluation at all. Consider, for example, one of the most frequently missed criteria, which asked students to explain how their projects manifested particular design principles from Don Norman's The Design of Everyday Things. Teams submitted a list of examples, with no explanations of them. It seems to me that if a team sat together and worked through the checklist, as I expected them to, someone would have said, "Have we explained this?" I don't think they had anything approximating such a discussion: I think they surveyed the checklist, said "Good enough," and marked the box. That is, I think they defaulted to the "hope for points" model rather than the "ensure success through unambiguous choices" model. Of course, the idea of self-assessment is not to save me grading time but to foster reflective practice and remove ambiguity. When students do it honestly, it does save me grading time; when it is done dishonestly, I suspect students learn little about the importance of self-reflection. I need to think about what I might change in future uses of specifications grading to get around this.

An honest student approached me near the end of the semester, in part to share what he claimed was a voice of many students who were frustrated with the specifications grading system. I explained to him that the goal of the system is to remove ambiguity, both for students and for assessors. I honestly never got a good explanation of what precisely he or other students did not like about it, except that there were standards at all. I think the status quo is that students believe they start a project with full credit, and then I take away points for mistakes. One of the things I like about specifications grading is that it follows my contrary philosophy, which is that students start with nothing and must earn their credit. I think it is this idea, not specifications grading in particular, that students are upset about, because it holds them accountable for demonstrating understanding to earn credit. The fact that I get complaints about grading regardless of the scheme I use is probably testament to students' pushing back against having expectations rather than the particulars of the system. However, foreshadowing some of what is to come below, part of why a subset of students complains is likely that I actually draw upon knowledge they should have from prerequisite courses—knowledge that they may not actually have.

In the first half of the semester, I used a running demo project ("archdemo") to demonstrate some ideas of how to separate the layers of a user-facing software system. In the previous semester, I had done something similar, but using a context separate from our class collaboration with DOMA. Many students that semester did badly with the "warm-up" project, and so in order to help with on-boarding and consistency, archdemo showed a sample use of the DOMA data via the ContentDM database. The resulting application was called "Naïve Search," named thus because it didn't really solve any reasonable search problem: it just showed how to separate the layers of a system. While this worked in the short term, I think it also caused problems as students perceived more value in the example than it was meant to have. It was never intended as a template, but only an example of very specific course concepts.

One of the changes I made from Fall 2018 was that I required final project teams to use a subset copy of the ContentDM database in their projects. My intention here was that each team would have to demonstrate that they could separate the layers of a user-facing software system, regardless of what creative direction they wanted to take the project. The result, however, was that nearly every project looked a lot like archdemo with an added bell or whistle. Last semester, we had a broad range of concepts on a plethora of platforms; this semester, it was dullsville, as the teams just added some minor idea to archdemo. One team even consistently referred to their solution as "an improvement over Naïve Search," despite my repeatedly telling them in their formal feedback that this was not even close to our goal. I have no doubt that our partners at DOMA were uninspired by this semester's projects, although we have not had our wrap-up meeting yet. I would be remiss not to mention one exception, which was a clever interactive map that tied into the database in interesting ways despite the tight project timeline; those guys really nailed it, so if you're on that team and reading this, kudos to you.

Throughout the semester, we returned to five principles of design brought up in Don Norman's The Design of Everyday Things: affordances, signifiers, mapping, feedback, and conceptual models. Despite this being a theme of the class, there is scant evidence that students understood or applied these principles. Instead, my professor's eye tells me that they designed whatever they wanted, and then they tried to shoehorn those designs into these principles, or to justify their work after the fact. Although they had several assignments and much formative feedback about these principles, students continued to show misunderstandings through the final exam.

What was missing? I believe a big part of it was that they didn't follow my advice. This is exemplified with one key example: taking notes. The course plan says explicitly that students should always have their notebooks available for taking notes and jotting questions, and furthermore, that they should not have their distraction machines (laptops and phones) in their way during class discussions. Taking a friend's advice, I even made a first-day quiz in which students had to answer questions about this aspect of the class. Yet, very quickly (and for some, instantaneously), their old habits took over, and I would stand in front of class looking at the backs of open laptops rather than faces. Almost no students took any notes on any of our discussions, and as if to drive the final nail into the coffin, some of them only got out their pens when I wrote something on the board. Even if they had a glimmer of understanding about affordances and signifiers during class discussion, there is no way that they held on to this fifteen minutes after class unless they actually expended the effort to do so.

I wrote on Facebook the other day about how I was feeling conflicting emotions about this class. On one hand, I am unsympathetic that they did not learn the material because they chose not to follow my advice on how to do so. The advice is not complicated: it primarily involves reading and taking notes. However, at the same time, I pity the students, because I think a large majority of them—if not all of them—know neither how to read for understanding nor how to take notes while reading or discussing. To me it raises the question, "Where does the buck stop?" If I get undergraduates in my upper-division elective Computer Science courses who lack these skills, is it my responsibility to teach them or just to assess what is in the master syllabus?

There is a related puzzle, which was foreshadowed in my sending-forth message to the class. An uncomfortably large proportion of the class showed very little proficiency in fundamental programming skills. When I brought this up in honest, private conversation with trusted undergraduates, they showed no surprise: they said that it was fairly easy, and common, for students to "cheat" on assignments. This manifests in two ways: either copying the work of a peer and submitting it as their own, or stringing together bits and pieces of code found online. Neither approach forces the learner to confront the useful struggles required to build firm understanding. It reminds me of the advice from Make it Stick that I wrote about last December, and that in its absence, students really don't know how or what it means to learn.

Going a little further, I witnessed a curious phenomenon several times during a guest presentation by a CS alumnus and successful professional. The speaker is currently in a position with a lot of creative flexibility, and he has his own team of programmers to implement parts of what he designs. However, many students seemed to misunderstand his story, thinking that this meant one could just "have ideas" and tell others to program them. They missed the part where he worked on rather tedious programming tasks for ten years to prove his capability, vision, and leadership. Instead, they rejoiced, saying things like, "I cannot program, but I want to tell programmers what to do, so now I see that there's a job for me!" This sentiment was shared primarily among CS minors despite their having taken at least three programming courses in the prerequisite chain to this course.

These students don't seem to see a connection between the HCI goals of the class and the fundamental skills of software development. It appeared to me that these students were not being tripped up by the accidental complexity of software development (such as the placement of braces or the quirks of a UI framework) but rather by its essential characteristics, which include precision and sequential reasoning. How I frame HCI as a Computer Scientist is essentially that it is user-centered precision and sequential reasoning. What happened on the students' final project teams seemed to be that those who had programming skill were relegated to doing persistence- and model-layer data manipulation tasks—required tasks for the program to work at all—while those who could not program worked on the UI. The result is that the UIs were badly conceived and executed, because those working on them couldn't conceive of the problem as requiring precision and sequential reasoning. Part of my evidence for this manifestation is the difference between teams' paper prototypes and their final products. Every team decided upon a paper prototype that was developed from a user-centered design process, but practically no team's final product looks at all like their prototype. Instead, their products looked like the archdemo sample, but with a few more widgets added via SceneBuilder. One could say they did what they could rather than what they wanted, or more pointedly, they decided to fail conservatively rather than succeed differently—which was exactly one of the human failure modes we discussed in class.

A student confided in me that, when he signed up for the course, he expected he would learn how to design a good user interface. By this, he meant that there would be some thing I could teach him that would suddenly make him good at it—a silver bullet. He pointed out that some students seemed to think that it was all in the tools: if they learned the tools, they would be able to make good UIs. I am grateful that he took the time to share this with me. I asked him if, after studying this topic for fifteen weeks, he understood why I could not meet his desires, why I could not dump ideas into his head that would suddenly make him good at UI design. He indicated that he did understand it, but he also sounded disappointed.

As I wrote about yesterday, I had forgotten about the emotionally powerful reflection session I had with my students at the end of the Fall 2018 HCI course. I didn't schedule one this semester, and so it didn't happen. I think it would have helped all the students to frame their difficulties and challenges within their authentic context: yes, they struggled, but they did so because what we are doing is legitimately hard.

The puzzles I face in considering how to change the course for Fall 2019 are significant. I expect this week to be able to meet with representatives from DOMA to talk about their take on the experience. Our primary contact is their Director of Education, Tania Said, and she has intimated that she would like to see the class work together to produce something with more staying power rather than a series of prototypes. This makes me a bit nervous, given my previous experiences having entire classes taking on a project, but there may be a possibility to set it up like competing consultancies rather than trying to do a whole-class team. One of the reasons I wrote this up now is that it has helped me serialize and articulate some of my thoughts in preparation for meeting with her. I hope that conversation will help me turn some of these reflections into actionable course plans for Fall. As always, I expect I will be able to share my summer planning activity in a blog post in the coming months. Until then, I think it may be time to spin up my summer project.

Friday, May 3, 2019

Sending forth the Spring 2019 HCI class

I spent the morning computing final grades for my Spring HCI class. Last night, I came across a post I wrote at the end of the Fall 2018 section that I had forgotten about: my post about how powerful our final reflection session was. This, along with other thoughts and desire for closure, inspired me to compose the following final announcement for this semester's class. I have another post I am working on which is a reflection about the semester, but I also wanted to share this as an open letter here. If I didn't, it would be trapped away on Canvas where I wouldn't be able to reference or reflect on it later. What follows is the announcement in its entirety, except for the first paragraph, which simply dealt with the final grading formula and reporting mechanisms.

I regret that we didn't schedule a day for a semester wrap-up discussion. Because of the problems scheduling our final presentations with Tania Said, we went right from working on the final project, to technical presentations, to formal presentations... and now time is up. Even though this is a one-dimensional stream, I would like to share a few of my final observations, in hopes it is inspirational to you as you move forward in your studies and career.

A conversation just fifteen minutes before the final exam made me think of the perfect question for the final exam. Of course, it was too late to make any changes at that point, but I did post it up on Facebook for some of my friends and alumni to comment upon. It looks like this:
Many of you signed up for this course expecting that I could pour some special knowledge into your heads that would make you good at designing user-interfaces. Now that we've studied design methods for 15 weeks, explain why what you wanted is not possible.
The resulting conversation on social media has been interesting. A colleague who also teaches undergraduate HCI mentioned how he regularly gets students in his class who conflate visual design with HCI. A successful alumnus from about five years ago posted, "It's a field without universal truths except for, 'Know thy user, and thy user is not you,'" which I think is a brilliant summation. Another alumnus who does a lot of user-facing development pointed out that there are always accessibility issues in any design, responding to a nested conversation about how accessibility is important but also platform- and application-dependent. A friend who teaches game design in Michigan simply said, "People are broken in interesting ways," which also ties in to many of our discussions of design processes, working in teams, and confronting our human biases.

I share this with you to help you compare what you know now to what you knew before the semester started. Undoubtedly, there is something you can do to take the next step of improvement in your HCI skills, and my hope is that this course helped you identify what that is. After all, the goal of a liberal education is not prosaic job training but helping people understand how much there is yet to learn. Successful professionals are those who engage in reflective practice: thinking carefully and critically about their work and continually improving. Keep close to the heart of agile.

It would be irresponsible of me not to share with you an observation I made from working with you on the final projects. For many of you, the thing that is stopping you from creating innovative and useful systems is programming. I was disappointed to see so many in this class struggle with fundamental topics from the prerequisite courses, such as variable scope (CS120), parsing a one-dimensional data structure (CS121), and naming variables and methods appropriately as verbs and nouns (CS222). There are prerequisites because a solid background is required to succeed at the level of discourse appropriate for a 400-level course. How a student may have gotten to this level of the curriculum without such knowledge is a question for their own personal reflection; prudence demands considering what to do next. What ought one do who wishes to improve their software development skills? As I shared with you from Brown et al. Make It Stick, "To achieve excellence in any sphere, you must strive to surpass your current level of ability."

A creative mind may come up with no end of possible hobby projects; for such a person, the problem can be identifying a reasonable scope. You can always look up the kinds of questions that are asked on technical interviews, participate in the mathematical challenges on sites like Project Euler, or join a community of practice around a passion area such as game development or humanitarian open source projects. If you're not sure what else to do, I suggest working through this semester's projects again. The course plan will stay online. You can start again on the short project, but do it from scratch: don't fall into the trap that every team did for the final project and assume that my naive search architectural demo had any implicit value. Build up something for which you understand every part. Hold yourself accountable to best practices, which you can refresh yourself on by re-reading Clean Code. It's possible that our image server will be taken down some time this summer, but that doesn't have to stop you: you can find another data source to draw from, as the first part of the final exam implied. In fact, the university is moving to a cloud-hosted ContentDM database, and I'll be talking with Ms. Said and Mr. Bradley about how future students might take advantage of that.

A. J. Liebling wrote, "Freedom of the press is guaranteed only to those who own one." Andy Harris said, "You want to be a game designer? You write the code, or you write the check." Steve Jobs said, "To me, ideas are worth nothing unless executed. They are just a multiplier. Execution is worth millions." Kyle Parker told us that he was proving himself in the trenches for a decade before he was given a position of creative authority within the university. We are Computer Scientists—whether majors or minors!—and we are the ones who create value in the 21st century.

I hope that you have had a formative experience in this class. I wish you the best in future endeavors and will pray for your success.

Tuesday, April 30, 2019

Happy Accident Studio and Canning Heroes: Reflecting on the Spring 2019 Immersive Learning Project

It was a great semester in my Spring Game Studio class. I had more applicants than ever before, thanks in part to recruiting assistance from the Office of Immersive Learning. It made it harder to select a team, but it also demonstrated the demand for this kind of experience. I ended up accepting six Computer Science majors, one Music Media Production major, two Animation majors, one Journalism Graphics major, and one English major with a Computer Science minor.

This class worked with Minnetrista in the second semester of a two-semester sequence. They inherited a paper prototype that was selected by me and our primary contact at Minnetrista, George Buss, the Director of Experience and Education. The specific prototype was inspired by Cooking Mama and involved players preparing and cooking foods related to Minnetrista's herb garden and Farmer's Market. The Spring class was able to take this project in connection with the renovation of the historic Oakhurst Home, which is where the original recipes of the Ball Blue Book were developed.

The end result of this work is Canning Heroes, which is a cooperative game for 1-4 players. It is designed to be played on a large touch screen, and our partners at Minnetrista are currently working on the logistics of installation in the Oakhurst home. The source code is available on GitHub, and the executable can be downloaded as well. This is my first time releasing a game this way, and I was pleased with how easy it was to post the game and add student collaborators to the project.

The students dubbed themselves "Happy Accident Studio," which is an homage to Internet-culture icon Bob Ross, who recorded many of his famous painting shows in Muncie right on Minnetrista's campus. I am very pleased with the work done by Happy Accident Studio. Like the best of my immersive learning teams, they were able to learn quickly how to work together and hit stride about ten weeks into the semester. This allowed us to make the necessary revisions in the tail end of the semester to really make the game shine. I think Canning Heroes is the most polished product to come out of my immersive learning classes, and I am hopeful that it will be installed at Minnetrista for public consumption.

Happy Accident Studio was a good team by any metric. They had their flaws, as any team will, but overall, they really came to understand what multidisciplinary collaboration means and why it is necessary for this kind of risky and creative work. They struggled with accountability, as student teams often do. They had great eureka moments, particularly around playtesting and code quality. It turns out that, except for two seniors, the team was all juniors. This means they will be around next year, and it makes me wonder if we can make some hay while the sun shines.

This was the second immersive learning team I have mentored who have used Unreal Engine 4 for their projects, the previous one being last Spring's Fairy Trails by Guy Falls Down Studio. There was one team member from Spring 2018 who was able to join again this semester, and it was good to have another voice with some experience in the room. He and two others had done some work with UE4 before, and they were enough to seed the learning experience for everyone else. Once again, I was generally happy with how quickly students were able to learn the tools of the engine, particularly Blueprint programming. The team got up and running quickly, and the mistakes they made—such as poor functional decomposition or bad naming practices—were general code quality problems that dog all novices. That is, there was nothing lost by using Blueprint, and much to be gained. Also, whereas Fairy Trails was written entirely in UMG with an enormous amount of effort going toward platform support, Canning Heroes makes better use of the UE4 platform, even though it's entirely "flat" using Paper2D.

Reflections on Student Assessment

Two stories from this semester have me thinking about whether I want to change my student evaluation strategy in future immersive learning projects. This team experienced many of the common successes and failures during the semester, but these two stories stand out in my memory.

First, there was an artist who wanted to create tutorial animations for the game. Her instinct was to create them in After Effects so that we could then import them into the engine. I suggested that, instead, she learn to create the animations in-engine. The tutorial was already using assets that we had in the game, and if the animations were done using keyframes and tweening in-engine, then anyone could tweak them as necessary, for example if the assets or timing needed to be changed (both of which happened, of course). By contrast, if she had created the animations using her familiar tools, any change would require going back to After Effects and repeating the render-and-import process, shrinking the number of people who could make the change and lowering the bus factor. To this student's credit, she jumped right in, and within a few hours was as adept with creating in-engine animations of 2D assets as anyone else who had been using UE4 for months. In our final retrospective meeting, she brought this up as a great learning outcome of the course: that there are times to use familiar tools and times to learn new tools, and not to be afraid of either.

The second story is not quite so positive. Our music major was de facto audio lead for the team, being in charge of both original compositions and sound effects. Several times during the semester, I encouraged her to do integration work, following the value of interdisciplinary collaboration and multidisciplinary pairing that is explicit in the methodology. However, even at the end of the semester, her approach was to upload files to the server and then step aside: she could not complete the loop of creating a new sound, putting it in the game, and evaluating it. A related story is that throughout development, many team members wanted to randomize the sound effects. Again, I encouraged the team generally to look into UE4's support for this, but it was never pursued. It turns out that just after the game was complete, I watched an archived UE4 livestream that prompted me to look more into the audio system myself, and I was crestfallen to see that what the team wanted could be done with a single Sound Cue node. I am certain that, had this student shown the ambition of the artist from the first story, we would have a much better aural experience in the game today.

Reflecting on these stories made me wonder whether there was some kind of more formalized incentive that I could use to nudge the students toward manifesting the attitudes and behaviors I desire for them. Several years ago I looked into KPIs as an industry practice that I could adopt in the studio to maintain project authenticity, but my investigations at the time led me to conclude that the timeframe of KPIs doesn't match conventional course structure. Regardless of my pedagogic choices, I am still locked in a three credit-hour class that meets three times a week for fifteen weeks, which doesn't seem to have the breadth required for Google-style KPIs. However, I have also recently been tinkering with specifications grading, particularly toward the goal that grading criteria become unambiguous. At least in theory, removing ambiguity from the grading scheme should help students plan their actions prudently.

The alternative idea that crossed my mind, which I share here in part as an excuse to explore it further, is paths for team members. "Paths" at least is the word that came to mind, informed I suspect by my anticipation of the new Pathfinder Adventure Card Game Core Set, but alternatives might be "roles" or "quests." Keep in mind that I manage the team using a philosophical model of team ownership inspired by agile practices. That means everyone owns everything: anyone can work on art, sound, story, programming, UI, and so on. Occasionally I have teams where one aspect of the game requires a lead who approves work in that area, but that's always in response to dysfunctional team members. Most of the time, I can get students on board with the idea of total team ownership and its concomitant responsibilities.

Paths might offer a concrete way to inform students about expected behaviors, potentially annotating them with incentives (that is, grades). We might start with something like Path of the Novice, which would include incentives for reading the team methodology document, getting accounts set up on Slack and whatnot, and putting something in version control. From there, maybe one could branch out into:

  • Path of the Artist: create an asset, put it in version control, and integrate it into the game.
  • Path of Code: conduct a code review of someone else's work, and have your own work subject to code review.
  • Path of Quality: run playtesting sessions for the game and document the results.
  • Path of the Robot: automate the game build using continuous integration.
Obviously, these are just off the top of my head. Perhaps students could choose a path each two-week iteration and provide evidence at the end about whether they met their goals. Perhaps it really needs levels of achievement. The clear problem that I can see is that it might constrain people in a way that total team ownership is designed not to: it may trick a team member into not contributing what the team needs at some critical point because it's "not their job" (i.e. not on their path). Maybe, then, the result of this reflection is that all I really need is a simpler rule: everyone needs to be able to integrate and test their work. Unfortunately, that really devolves into the existing rule, "Follow the methodology," which is missed by a subset of team members. 

This whole discussion of how to manage future projects has to be framed by Price's Law. I learned a year or two ago that there's a general rule that half the work is done by the square root of the number of team members. It seems that this has strong implications for a group like my immersive learning class: which disciplines are represented under the square-root limit can strongly influence the direction the project can go. Yet, at the end of the semester, I still have to assign grades, and they should be assigned as fairly as possible. The goal, then, is not to ensure that everyone produces equal outcomes, but rather that their work is fairly assessed.
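To make the rule concrete, here is a quick back-of-the-envelope check in Python. The eleven-person team size comes from this semester's roster; the 50% share is Price's Law itself, not measured data:

```python
# Price's Law: roughly half of a group's output comes from the square
# root of the number of contributors. Applied to an eleven-student team:
import math

team_size = 11                       # this semester's studio roster
core_contributors = math.sqrt(team_size)   # about 3.3 people

# So roughly 3 of the 11 students would account for half the project's work.
```

Which three disciplines those contributors represent is exactly the variable the paragraph above worries about.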

What's Next

I've been doing the two-course sequence of immersive learning game development projects since Fall 2012, but I'm taking a little detour in 2019-2020. I received approval from my department to offer my game design class as a special topics class for CS majors and minors in the Fall, so I'm going to try that instead of teaching it as an Honors Colloquium. What I would like to do, as I wrote about two weeks ago, is try to transform my game design course into a regular CS course, which would make it much more sustainable. Fall's plan is set, then, but Spring is still up in the air. As I mentioned above, I would like to take advantage of the fact that I will have more students than ever who have experience with game design and development, but it will be a little while before I can determine what shape that will take.

Monday, April 29, 2019

The studio helps students succeed in other classes

I had the final formal meeting with my Spring 2019 Game Studio class today, and we conducted a final retrospective in a format similar to the one from Spring 2016: we annotated a timeline with details about what was learned and how we felt about it.

One of the most unexpected comments in the conversation was about how the studio class helped students succeed in their other classes. Many students agreed that they came to campus MWF for our 9 o'clock studio meetings, and since they were already on campus, they attended their other classes. They were quite explicit that if they did not have the studio, they would have skipped class much more often. I don't remember this phenomenon ever coming up in the studio before.

However, they also mentioned that their work in other classes often took a back seat to their motivated focus on the studio project, but hey, at least I got them to show up.

Saturday, April 27, 2019

Two missing specifications in HCI

I finished grading my students' final projects for the Spring 2019 HCI class (CS445, formerly CS345). Before the start of the semester, I wrote about how I would try specifications grading in the course. After an afternoon of grading, I realized that I had missed two important specifications. I will share them here so that I have a better chance of remembering them when planning Fall's class, since I've been assigned to teach the course again.

I should have had a specification requiring all non-trivial processing to be done off of the event thread. This is, of course, a requisite for any kind of multi-threaded UI programming. I specifically chose a data source that would require handling slow load times and long processing times so that my students could practice this technique. I developed a sample project in the first half of the semester based around this common practice, and I explained to them why it was important. However, I neglected to have a specification about it. Three hours before the final project was due, I had a student ask for some last-minute troubleshooting. He said that he added a spinner while some images loaded, but it wasn't showing up. Of course, it wasn't showing up because he was loading the image on the event thread. I showed him (again) the example from earlier in the semester and explained (again) why this pattern was necessary. From their final technical presentations, it was clear that he was the only person in the class of roughly twenty students who understood this crucial point. I believe this is an instance of the old standard motto: if it's important, make it worth points. I simply missed it in my specifications.
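The pattern that missing specification should have required can be sketched outside of JavaFX. Below is a minimal illustration in Python (all names invented): the slow load runs on a worker thread while the "event thread" stays free to animate a spinner. In JavaFX itself, `javafx.concurrent.Task` plays the worker's role and marshals results back to the FX application thread.

```python
# Sketch of "no non-trivial processing on the event thread": the slow
# load happens on a worker thread; the main thread keeps servicing events.
import threading
import time

def load_image_slowly():
    # Stand-in for a slow image load over the network (hypothetical).
    time.sleep(0.2)
    return "image-bytes"

result = {}
worker = threading.Thread(target=lambda: result.update(image=load_image_slowly()))
worker.start()

# Meanwhile, the "event thread" remains responsive; each iteration here
# stands in for repainting a spinner frame.
frames = 0
while worker.is_alive():
    frames += 1
    time.sleep(0.01)

worker.join()
```

Had the student's spinner been driven this way, it would have animated during the load; drawn on the same thread as the load, it could never be repainted.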

The other specification deals with acceptance testing. There are two relevant specifications in the evaluation plan, one at B-level and one at A-level. Specification B.5.R says that the final report "describes the methods by which the solution was evaluated," and A.2.R says that "The documented solution evaluation includes both quantitative and qualitative elements that explicitly align with this semester's readings." The B-level specification is designed to be broad: you can earn a B on the project by doing any kind of acceptance testing. The A-level specification is designed to be more focused: do a mixed-methods evaluation based on a theory we studied this semester. None of the five teams explicitly aligned their evaluations with the semester's readings. This didn't stop two of the groups from marking that specification as complete in their respective checklists, casting serious doubt on the implicit claim that they had conducted the required self-assessment of which the checklist is the result. (Perhaps, then, I need to add more rigor to the self-assessment itself, requiring them to link their claims to the artifacts.)

The problem with the acceptance testing actually goes much deeper than dishonest claims of completion. Among those who conducted any kind of acceptance testing, there was no evidence of their having learned anything from the assigned readings and exercises relating to Steve Krug's and Jakob Nielsen's theories. Instead, they followed ad hoc approaches that were poorly designed and yielded unreliable results. They did actually use quantitative and qualitative approaches, in keeping with A.2.R, but they did not do these well. For example, many groups asked questions like "What did you think of the application?" and then reported "3/6 users say they liked it." I pointed out in my feedback that 50% of users claiming they liked it is different from 50% of users liking it. More importantly, "liking" the application was not one of our design goals: we were designing systems to be used. Yet only one of the groups conducted a task-based evaluation, where the user was asked to accomplish a goal using the system. Task-based evaluation is what I expected, and task-based evaluation is what I wanted. However, I wanted the students to realize that this kind of evaluation was the right choice, so I left the specification open to other options. The other options were demonstrably worse. Hence, in the future, and particularly in this introductory HCI course, I should just require them to follow the best practice rather than give them the chance to shoot themselves in the proverbial feet.

I have to wonder whether the students would have spontaneously met these criteria if they had taken notes during our discussions.

Thursday, April 25, 2019

A student's Eureka Moment with game state analysis

I was in the studio the other day with a student working on redesigning part of the gameplay implementation on this semester's immersive learning project. Incidentally, it's been an amazing semester for this team, but other writing tasks have eaten up the time I would otherwise have spent blogging about it. In any case, he told me that he had a solution in mind that would require using two booleans. I stopped him and pointed out that once we have more than one boolean, we need clearly enumerated states. This is good practice generally, and it is also a rule in the Gamemakin UE4 Style Guide that we're following.

I took the student over to the whiteboard and we began analyzing the interactions involved, resulting in this diagram:
We talked through a few scenarios. When I was convinced that he understood the analysis, we went back to the workstation and I showed him how to create an enumerated type with three values. Then, we went into Blueprint and looked at the events. I demonstrated how we could switch on the new enumeration that we had defined and use that to encode the logic from the diagram on the whiteboard. At first he was kind of quiet and seemed to be thinking about the problem at the source code level. Then, around the time we added the logic of the second trigger, he spun around and proclaimed something like, "Oh! That diagram becomes the code!" It was a wonderful example of that eureka moment when the pieces click and, as a result, he became a more powerful developer. Now he has a new tool in his toolbelt, and I can work with him to understand how to recognize when the problem affords using such a tool.
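In text form (Python rather than Blueprint, with state names invented for illustration), the refactoring looks something like this: two booleans encode four combinations, some of them meaningless, whereas an explicit enumeration names exactly the three states from the whiteboard analysis, and each event handler becomes a switch on that state.

```python
from enum import Enum, auto

class PuzzleState(Enum):
    # One value per state in the whiteboard diagram (names are hypothetical).
    IDLE = auto()
    ARMED = auto()
    COMPLETE = auto()

def on_first_trigger(state: PuzzleState) -> PuzzleState:
    # Each handler dispatches on the enumeration, mirroring the diagram:
    # the first trigger only matters when the puzzle is idle.
    return PuzzleState.ARMED if state is PuzzleState.IDLE else state

def on_second_trigger(state: PuzzleState) -> PuzzleState:
    # The second trigger only matters once the first has armed the puzzle.
    return PuzzleState.COMPLETE if state is PuzzleState.ARMED else state
```

This is the sense in which "that diagram becomes the code": every arrow on the whiteboard is one line of one handler, and illegal boolean combinations simply cannot be represented.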

Wednesday, April 17, 2019

"Who Would YOU Rather Work With?" A classroom game show for my HCI class

On Tuesday last week, my HCI class gave their informal reports for the first iteration of their final projects. I neglected to post any real guidelines about the presentation, and so it was ad hoc. We did have two guests in the room, though, one being an expert user for the projects and the other being a software development professional. There were several points in the students' presentations when I cringed at how they presented their status and ideas. I considered for a while how to address this, and I ended up coming up with a classroom game show called Who would YOU rather work with?

This is a rhetoric game along the lines of The Metagame and CARD-tamen. There are two teams, each of which sends up a player for the round. The two contestants are shown a pair of fictional quotations and, based on these, each must argue for the person they would rather hire onto a team. We alternated which team went first, and each player had 30 seconds to make their case. The winner was determined by popular vote. Even though a team could just vote for its own representative, I think the students understood that it should be a fair vote.

The original idea for the game was Is it User-Centered... or Not?, but I realized as I started working on the game systems that user-centeredness was only part of the equation. More of the problems were with students' rhetoric than with their technical or procedural approaches, and hence my choice to make this a rhetoric game.

We played the game on Thursday of last week. I had the students count off by twos in order to shuffle them around, so they would not be on a team with the person they usually sat with. The two teams named themselves The One Before Two and Los Dos. (Important note: The One Before Two insisted that the short name of their team was TOBT, pronounced "tot" with a silent "B".)

Here are the prompts that I gave them:

"The images take a long time to load, so the user just has to learn to wait."
"We will show a spinner to signify to the user that the system is still working."

"JavaFX is a pain. It's really fiddly."
"We thought we knew the JavaFX API better than we actually did. We talked about it and identified our main misconceptions, and we are working to overcome them."

"We were all really busy so we did not work on that feature."
"We documented that feature as a user story to consider for the next iteration."
[or, in a bonus round, "That feature is out of scope."]

"Git kept on destroying our files. It doesn't do what it is supposed to do."
"We had problems managing version control, and one of our impediments is our own understanding. We have prioritized finding the root cause so that it doesn't happen again."

"We are making this application using Java and JavaFX because Dr. Gestwicki provided an example with these."
"We have analyzed our target population's needs and have chosen a development platform that allows us to deploy a solution that meets their needs."

"We did a lot of work that you cannot see exposed through the UI right now, but it will be incorporated into the next iteration."
"Our source code provides the simplest possible solution for this iteration that maintains our standards of quality."

"We are not sure if this is an SRP [Single Responsibility Principle] violation or not."
"We have evaluated our code for compliance with SRP and, where we were unsure, we consulted with the professor to evaluate the structure."

"We know that what we have planned for iteration 2 is going to solve our problem."
"We learned from iteration 1 to help us improve our product and processes for iteration 2."

"We based our work on Dr. Gestwicki's example."
"We built a solution for this problem based on our users' specific needs."

The last one is quite a bit like #5, but that was in part to handle the case where everyone actually showed up to class and would get a turn. Of course, not everyone showed up to class, so we had enough questions that some people got to answer two.

One of the students hypothesized out loud, in round three or four, that the bottom answers were always "right." I pointed out that this was a rhetoric game: there was no objectively right answer, but rather a challenge to argue your position persuasively. Once the game was over, I explained to them that there was indeed a pattern, of course. The top item was more or less what I heard people say during their presentations, but the bottom is what I would have liked to hear instead. This caused a murmur in the crowd as they considered what this meant. It gave us a good excuse to talk about how you present yourself when applying for a job or internship: the applications will be sorted into two piles, and you want to make sure you end up in the "Let's talk to this person" pile.

For the most part, students argued that they would rather work with the second person on each slide, using the kinds of arguments you would expect. The statements on the bottom tend to show capacity for reflection, eagerness to learn from mistakes, honesty, professionalism, accountability, and a user-centered rather than developer-centered orientation. However, it was not the case that everyone argued for the bottom. On #3, a student argued that the person on top was simply being honest, not really making excuses, while the person on the bottom was being political to the point of dishonesty.

In the end, it was a close game, with TOBT winning 5 to 4 over Los Dos. I think the students appreciated the interactivity and how everyone got a chance to speak their piece. When it was all over, I walked through the slides again to talk through some of the issues that came up, so I guess they didn't really get a reprieve from hearing me lecture to them. However, I think they felt like they had some skin in the game now, since they had already shared their thoughts about each issue.

Tuesday, April 16, 2019

Mapping Introductory Game Design to CS2013

Regular readers may be wondering why I was looking at the learning objectives of CS2013, the ACM/IEEE recommendations for Computer Science curricula. The answer is that I am in the process of putting together a proposal for a new Introduction to Game Design elective in my department.

For the past several years, I have been teaching an Honors Colloquium on Game Design in the Fall semester. This has been very rewarding, but it is only made possible by internal funding from the Immersive Learning program and the good will of both the Honors College and the Computer Science Department. I believe it is time to formalize this class and bring it into my home department. This has a few implications, one being that I have to explain to my colleagues the relationship between Game Design and Computer Science. I suspect that they, like many outside the discipline, have never carefully considered the distinction between Game Design and Game Programming—the latter of which is an elective that I teach here. If I can get their approval for the course, it will allow me to build a more reliable pipeline of prepared students, which means we can explore more ambitious game development projects.

One of the ways I am planning to make the course appropriate as a departmental elective is to give it a one-semester programming prerequisite. While it's true that this will allow us to do a little programming in the game design course (such as in Twine), primarily I want to be able to use programming as a lens for considering system analysis. One of the real values of learning to program is that it teaches you to think through complex problems and serialize the steps of a solution. This is, essentially, writing the rules of a game. It's something that I've seen some of my past non-CS Honors students struggle with, and it's a skill that I see good programmers take for granted.

That's sufficient background for me to get back to the CS2013 mapping. Below is a list of the knowledge areas and topics that I see as being covered in an Introductory Game Design course that would be housed in a Department of Computer Science. With each topic, I have indicated whether the ACM/IEEE recommendations include minimum contact hours and at which tier.

  • Human-Computer Interaction:
    • HCI/Foundations [4 Core-Tier1 hours]
    • HCI/Designing Interactions [4 Core-Tier2 hours]
    • HCI/User-Centered Design and Testing [Elective]
    • HCI/Collaboration and Communication [Elective]
    • HCI/Design-Oriented HCI [Elective]
  • Platform-Based Development:
    • PBD/Introduction [Elective]
  • Software Engineering:
    • SE/Software Project Management [2 Core-Tier2 hours]
  • Systems Fundamentals
    • SF/State and State Machines [6 Core-Tier1 hours]
  • Social Issues and Professional Practice
    • SP/Professional Ethics [2 Core-Tier1 hours, 2 Core-Tier2 hours]
    • SP/Intellectual Property [2 Core-Tier1 hours]
    • SP/Professional Communication [1 Core-Tier1 hour]
For most of these, the course could cover all the core hours. For SE/Software Project Management, which I wrote about last week, team-based projects in introductory game design could hit the objectives as well as anything else, despite the products being analog rather than digital. The only place where the proposed course would clearly not fully satisfy the recommendations is SF/State and State Machines. These provide useful tools for describing the behavior of games, but we would not use a rigorous form of them and probably not spend more than an hour on their explicit treatment. It may be worth noting that some past offerings of my game design class provided an optional exercise with the formal design tool Machinations, but no one ever took me up on that.

It may be worth noting that all the core hours of those objectives above are covered elsewhere in the curriculum, although not all in required courses. The HCI/Designing Interactions topic is met only in the HCI elective course, a deficiency that was noted in my department's recently-written self-study. The SE topic is ostensibly covered in the senior capstone, and the SP topics in our required one credit-hour professionalism class. However, I think it's worth repeating the aphorism that it doesn't matter what we cover in the curriculum: it matters what our students uncover.

Of course, an introductory game design course is primarily about teaching game design. This mapping exercise is to help me frame the work for any potential skeptics in the department; the result belongs in the Course Rationale portion of the master syllabus. My next step is to define the learning objectives and assessments for the course proposal, which will define more clearly what the course is about. Here, I can focus more on fundamental principles of design, which I was disappointed to see are not directly addressed in CS2013.

Saturday, April 13, 2019

Questioning the value of CS2013

I recently had cause to re-read the ACM/IEEE CS2013 curriculum recommendations. This document is created by a large committee of content experts to provide guidance to Computer Science undergraduate programs. On one hand, it's a useful articulation of the body of knowledge that constitutes "Computer Science". On the other hand, well, let's take a closer look.

The CS2013 document breaks down Computer Science into knowledge areas such as Discrete Structures, Human-Computer Interaction, Operating Systems, and Software Engineering. Within the Software Engineering (SE) topic is a section called SE/Software Project Management. The section contains "2 core Tier2 hours", which means there are two recommended minimum contact hours on the topic, and they should be in a non-introductory course. For those who don't know the higher education jargon, "contact hours" means "students and faculty together in a formal meeting" such as a lecture. According to US financial aid legislation, and therefore according to practically every domestic institution of higher education, a student is expected to engage in two additional effort hours for each contact hour. For example, a student is expected to read, study, or work on homework for two hours for each one-hour lecture.

The core topics contained within SE/Software Project Management are as follows:
  • Team participation
    • Team processes including responsibilities for tasks, meeting structure, and work schedule
    • Roles and responsibilities in a software team
    • Team conflict resolution
    • Risks associated with virtual teams (communication, perception, structure)
  • Effort Estimation (at the personal level)
  • Risk (cross reference IAS/Secure Software Engineering)
    • The role of risk in the lifecycle
    • Risk categories including security, safety, market, financial, technology, people, quality, structure and process
That's a lot of topics to cover in two hours of lecture and four hours of studying, but I could see it being all combined into a breakneck chapter of a Software Engineering textbook. Note that there are additional elective topics within SE/Software Project Management, but we won't consider those for this discussion.

Part of the ostensible value of CS2013 is that it doesn't just provide a topic list for each section of a knowledge area: it also provides learning outcomes. These are classified in a system inspired by Bloom's Taxonomy of the Cognitive Domain, so each outcome can be at the level of Familiarity, Usage, or Assessment. These are defined on page 34 of CS2013, and we can summarize them as basic awareness, ability to deploy knowledge concretely, and being able to consider from multiple viewpoints—as one would infer from the nomenclature. These, then, are the learning outcomes for SE/Software Project Management, as listed on page 177 of CS2013:
  1. Discuss common behaviors that contribute to the effective functioning of a team. [Familiarity] 
  2. Create and follow an agenda for a team meeting. [Usage]
  3. Identify and justify necessary roles in a software development team. [Usage]
  4. Understand the sources, hazards, and potential benefits of team conflict. [Usage]
  5. Apply a conflict resolution strategy in a team setting. [Usage]
  6. Use an ad hoc method to estimate software development effort (e.g., time) and compare to actual effort required. [Usage]
  7. List several examples of software risks. [Familiarity]
  8. Describe the impact of risk in a software development lifecycle. [Familiarity]
  9. Describe different categories of risk in software systems. [Familiarity]
It's pretty clear to me that it's impossible to meet those nine objectives in two contact hours and four out-of-class work hours. Even if we assume that each student is perfectly capable of independent learning and studying, the time required to build any meaningful knowledge of these items is certainly greater than the recommended hours. I recognize that I am making assertions here and not arguments. In an earlier draft of this post, I tried a more rigorous argument, but I realized that a true skeptic could always counterclaim that they could meet these goals within the timeline. The real difference would not be the time required but our mutually exclusive definitions of what it means to meet a learning objective.
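As a back-of-the-envelope sketch (assuming, purely for illustration, that the recommended hours were spread evenly across the outcomes):

```python
# CS2013 allots 2 lecture hours + 4 out-of-class hours to
# SE/Software Project Management, which lists 9 learning outcomes.
lecture_hours = 2
study_hours = 4
outcomes = 9

minutes_per_outcome = (lecture_hours + study_hours) * 60 / outcomes
print(f"{minutes_per_outcome:.0f} minutes per outcome")
# prints: 40 minutes per outcome
```

Forty minutes per outcome is the entire budget, and five of the nine outcomes are at the Usage level.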

It seems to me, then, that there are only two ways this could be approved by a committee. The first is that the committee has no functional understanding of contemporary learning theory. If that were the case, then why should anyone take their recommendations for curricula seriously? The second is negligence: the committee as a whole knew better but did not verify that their recommended hours and the learning objectives actually align. If that's the case, then why should anyone take their recommendations seriously?

Saturday, March 23, 2019

Rolling 2d6 in PDQ and Apocalypse World: Learning the Numbers

I am supervising two students in independent study projects this semester. One of them is working on a tabletop RPG, and I want to share here a topic that came up on our group's online chat.

I have fiddled with Chad Underkoffler's PDQ a few times over the years, running some short-lived campaigns with my boys. I wrote about one of my favorite aspects of PDQ several years ago, how it models the psychology of learning in a more realistic way than many mass-market games. You can find the PDQ Master Chart in this free PDF, and I'm reproducing it here for convenience:
I remember that every time I played a PDQ session, I would have this simple chart in front of me. I never got to the point where I internalized it, although I'm sure I would have with time. Let's look at some of the structural properties that make it learnable. First, "average" quality is 0, which is simple: a regular person has no bonuses or penalties. Going up or down in quality happens in steps of two, which I like from a mathematics point of view because a two-point step on 2d6 is statistically meaningful in a way that, say, a +1 on a d20 is not; a +1 there is almost insignificant. For the Difficulty Ranks, there is a sensible centering on 7 as "a straightforward task." Anyone who picks up the rulebook is going to know that 7 is the most likely result on 2d6, so this gives a memorable default. Difficulty Ranks also move up and down in steps of two, which again I think is a much better choice than fine-grained quanta, especially since this is supposed to be a narrative-heavy game rather than a simulationist one. (PDQ stands for "Prose Descriptive Qualities," after all.) I never had problems with the centers of these scales, but I did find it hard to remember, say, the difference between a Difficulty Rank of 11 and a 13. Also, for reasons I cannot quite explain, I find -2, 0, +2, +4, +6 much easier to remember than 5, 7, 9, 11, 13, even though they are the result of the same function applied to different inputs.
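A quick enumeration shows why those two-point steps matter. This is just a sketch, assuming the usual PDQ-style roll of 2d6 plus a quality modifier against a Difficulty Rank:

```python
from itertools import product

def p_success(modifier, rank):
    """Probability that 2d6 + modifier meets or beats the Difficulty Rank."""
    rolls = [a + b for a, b in product(range(1, 7), repeat=2)]
    return sum(r + modifier >= rank for r in rolls) / len(rolls)

# An average (+0) character against a "straightforward" (rank 7) task:
print(p_success(0, 7))  # 21/36, a bit better than even odds

# Each two-point step in rank shifts the odds substantially:
for rank in (5, 7, 9, 11, 13):
    print(rank, p_success(0, rank))
```

The odds drop steeply with each rank step, which is exactly the kind of difference a +1 on a d20 (a flat 5% shift) does not give you.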

My student who is exploring tabletop roleplaying game design has been inspired by the Powered by the Apocalypse movement, which, briefly, describes a family of designs that are inspired by Vincent and Meguey Baker's Apocalypse World system. I backed the Kickstarter for the second edition of Apocalypse World a few years ago on a forgotten recommendation from a respected designer: I think it was someone like Robin Laws or Richard Bartle who posted about the campaign, saying that anyone serious about game design should support it. It's a riot to read, but it's certainly not the kind of thing my kids are quite ready for yet!

The mechanism of Apocalypse World that I want to focus on here, though, is the resolution of moves by rolling 2d6. These are the only dice used in the system, and it works like this: less than 7 is a failure; 7 through 9 is a partial success; and 10 or higher is a complete success. That's it. This is truly elegant for a system designed for narrativist gameplay.

A funny thing happened when my student was testing his RPG with the research group. In order to make the game learnable to new players, he put this dice resolution system into a chart and set it in the middle of the table. He himself referenced the chart a few times while we played the game, despite his being the designer and a regular player of Blades in the Dark, another PbtA game. Curiously, the Bakers don't actually have a chart in Apocalypse World second edition at all; they just describe it in prose.

This got me thinking about the specific values used by Apocalypse World, similar to how I analyzed the PDQ Master Chart. Once again, we have 7 as a prominent number: it's the most likely value on 2d6, and it sits at the threshold between failure and partial success. It is, in a way, the easiest number to remember as being significant on 2d6. The next threshold value is 10, which marks the lowest value that represents a full success. Ten is the first double-digit number possible on two dice. It's also, for most people, the number of fingers you have. It's the base of our counting system. A "perfect 10" cannot be beat. 7. 10. Got it. Two numbers, that's the whole system in a very small amount of memory.
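The elegance holds up under brute force, too. Enumerating all 36 equally likely results of 2d6 (a quick sketch):

```python
from itertools import product

# Bucket every 2d6 result by the Apocalypse World thresholds:
# miss (<7), partial success (7-9), full success (10+).
sums = [a + b for a, b in product(range(1, 7), repeat=2)]
miss    = sum(s <= 6 for s in sums) / 36
partial = sum(7 <= s <= 9 for s in sums) / 36
full    = sum(s >= 10 for s in sums) / 36

print(f"miss {miss:.3f}, partial {partial:.3f}, full {full:.3f}")
# miss 0.417, partial 0.417, full 0.167
```

On an unmodified roll, a miss and a partial success are exactly equally likely (15/36 each), with a full success coming up one time in six.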

Just to be clear, I think this is brilliant. I don't think you could pick two different numbers that would be better for breaking down 2d6 in a more memorable way. I wonder if the Bakers chose these numbers because of their interesting properties or if it was done with intuition and luck.

We had a brief discussion in my research group meeting last week about adding 2d6 vs. counting successes on variable numbers of d6. For example, the question was raised, can most players more quickly add 2d6 or count how many of an arbitrary number of dice have rolled above 4? I argued that for anyone who has spent any appreciable time playing tabletop games, reading 2d6 can be done purely with pattern matching: I don't add the five pips and the three pips, I just see a representation of the quantity "8". I mention this here because I think it's a good games research (or perceptual learning) question, but I haven't checked to see if anyone's explored this yet.
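Whatever the perceptual answer is, the dice-pool variant is at least easy to characterize mathematically. Here is a sketch, assuming "above 4" means showing a 5 or 6 (so each die succeeds with probability 1/3):

```python
from itertools import product

def success_counts(n):
    """Exact distribution of the number of dice showing 5+ in a pool of n d6."""
    counts = {}
    for roll in product(range(1, 7), repeat=n):
        k = sum(d > 4 for d in roll)
        counts[k] = counts.get(k, 0) + 1
    total = 6 ** n
    return {k: c / total for k, c in sorted(counts.items())}

# With three dice, this matches Binomial(3, 1/3):
print(success_counts(3))
```

So the pool variant trades the familiar triangular 2d6 curve for a binomial one, which is a real design difference quite apart from the question of reading speed.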

Finally, I'll mention that, with the little bit of testing we've done on my student's game, I've grown quite fond of the "partial success" idea. My favorite expression of this as a story device is that you succeed, but at a cost.

In case you were wondering about the dearth of posts here: I've been meaning to write more about my research group activities this semester, since it's been very rewarding. I hope to write up some kind of summary at the end of the semester at least. However, the past few weeks, my attention has been pulled away into a departmental self-study report. I think the sections I have been spearheading are strong, but it's had a significant impact on my research time as well as my reflective writing time.

Saturday, March 16, 2019

Painting Middle Earth Quest

Here's a project that was years in the making: painting the miniatures from Middle Earth Quest.
Middle Earth Quest is an asymmetric one-vs-all game, with one player as Sauron and the rest as heroes of the Free Peoples of Middle Earth. The setting is between The Hobbit and The Lord of the Rings, as Sauron is establishing his foothold while the Free Peoples are trying to rally to face him. This is a game that I remember being fun, although I haven't played it in years. How do I know that? Well, if you go way back to a post from December 2014, I mentioned that I had painted Argalad, one of the heroes, and I know I primed and based them all in one go. I have another post from July 2015 in which I described how I painted the Witch King. Some time after that, I painted Gothmog and started the Ringwraiths, and then the figures sat around for years, first on my painting table and then in a box. Some time in between, Fantasy Flight Games removed all reference to the game from their site, so it looks like there won't be any more printings or expansions; it must not have sold well.

I actually had Argalad, the Witch King, and Gothmog sitting on my desk the whole time, at first because I liked how they looked and wanted to inspire myself to continue the set, and then because of inertia. A few months ago, #2 Son finished reading The Hobbit, which got me thinking about this game again. Would he enjoy playing this game, now that he's older and knows a little bit of the LotR lore? Maybe it was time to finally get back to the Ringwraiths and finish this set.

Without further ado, here's a quick summary of each of the figures from this game. I'll start with the ones I painted some years ago and then move through them in the order I addressed them.

Argalad is the token elf. Or is that Tolkien elf? I remember being quite pleased with him at the time, and I spent quite a while on him. It's worth noting that all these figures were primed in black, because that's how I was working at the time, when I was less than a year back into the hobby. I'm sure it was all done with layering and some washes, but the only part that really sticks in my memory was the silver embroidery on the cape. That was done by mixing silver paint with Future Floor Polish, which flowed really well into the engraved pattern on the cloak.

There's a whole post about the Witch King of Angmar, so I'll just leave you to check the link if you want to read about that.

Here is Gothmog, who I painted some time between winter 2015 and now, but certainly nearer then than now. I know I gave him quite a bit of time, but I don't remember much else about the process. He's pretty dark, but I did want all the villains to have clearly dark color schemes. I remember that, playing the game, it was sometimes hard to tell at a glance which figures were heroes and which were villains. This is particularly true for the two upcoming horse-riding figures. As you'll see later, I think I did meet the goal of having the "sides" of the game clearly contrast.

Now we move into the figures that I have painted in the last few weeks. It was interesting, but also frustrating, to come back to these figures—especially the Ringwraiths. At the time I started this project, I thought it was a good idea to prime in black (as Dr. Faust does) and also to work on the bases first. All of these figures were in jet black primer but with painted bases. If I were to start the project today, I would build the bases, use zenithal priming over the whole figure (as I started with Massive Darkness), paint the bases, and then paint the figures. I thought about re-priming the figures, but I decided it would be fun to try to keep them as I started them, as a sort of artistic archaeology project.

Getting into the Ringwraiths reminded me why I stopped this project. Horses are the worst. These figures are pre-assembled, so there are lots of hard-to-reach areas on the miniature. The sculpts are not very good, with flat-faced hooded men. It's also black-robed guys on black horses. It's hard to think of anything that could make this a worse project! I had indeed started painting the horses some four years ago, but it was hardly noticeable: I had done very faint layered grey highlights and then given up.

I recently watched Dr. Faust's episode on highlighting black, in which he mixed a few different tones of black onto one figure. (Ironically, that figure is not primed in black!) He suggests that in most cases you wouldn't mix blacks on one figure, but looking at the Ringwraiths, I thought this would be a good exercise if nothing else. I mixed a warm black by adding some VMC Flat Brown, which I used for the horses, and then a cool black by adding VMC Deep Sky Blue, which I used for the robes. I overhighlighted the robes in the first pass and then used a black ink glaze over the whole thing to bring it down. Unfortunately, I used my Liquitex Glaze Medium for this, which left the robes super glossy. I knew my matte varnish would take that away, but it made it really hard to compare the blacks of the robes with the blacks of the horses because the glossiness contrast dominated the viewing.

These guys sat on my desk while I painted the rest of the series too, but in anything but the best light, they really just looked like a black blob. I did go in and touch up the highlights on the robes, even before varnishing, and I also punched the leather parts way up. I had been going with a dark leather look, because wouldn't you put dark leather on your black horse if you were a black rider? Yes, you would, but it wouldn't help your miniature any.

In the end, they still look like a bit of a black lump, but at least they're done, which is better than they have been in years.

Here's the Black Serpent. The last thing I wanted to paint after the Ringwraiths was more horses, but I decided that getting the horses done was actually the best thing for me to do. I think it turned out fairly well, although it was while working on this one that I made a conscious decision that the rest of them would only be "good enough." I was much more interested in getting this set done than in making any showpieces. Also, these sculpts are not very good. The game is from 2009, before these sorts of miniatures became centerpieces of marketing strategies a la Kickstarter. The frustration was exacerbated by the fact that I didn't use to spend as much time cleaning mold lines and flash off of the minis, so some have awkward tags and creases.

Let me mention a funny thing about the capes. All the caped figures have designs on the capes, but these designs are actually etched into the cape. This makes them look decent unpainted, but it's not actually how a design on cloth "works." Also, this is a place where the detail is not very clear, so it's hard to see the real edges. Argalad's cape turned out pretty good, and I thought about doing something similar with the Black Serpent. However, as I worked on the cape, I realized that if he's truly "The Black Serpent," then he really should have a black serpent on him, shouldn't he? I'm glad I used the etched design as a border here rather than anything else, since it provided convenient lines to paint in, and the paint job makes the actual etching invisible.

The last of the villains is the Mouth of Sauron, who I remember being one of my favorite cards in Middle Earth: The Wizards, which of course remains the grandest Middle Earth game adaptation ever created. In any case, in keeping with the "make the villains wear black" theme, I decided to go with a cool black and warm maroon trim. This provides a nice contrast in both temperature and hue while keeping relatively similar saturation. I think the metallic and bone provide more visual interest to the figure as well, so although he's only a few colors and relatively simple, the result is nice.

Which hero to start with? The one on the horse, of course. This is Eomer, clearly from Rohan. He took the most time out of any of them, in part for his size and in part because of his detail. There are a lot of greens, browns, greys, yellows, and golds on the card art that don't exactly match those of the sculpt, but it comes close. I also think I did a decent job on the horse, which was done almost entirely with layering.

The cape is the weakest link. I tried to reproduce my approach from Argalad and make it look like there is gold embroidery on his cape. The details on this cape were too ambitious for the casting, however, and it's kind of hard to tell what's going on there, unlike the simple border of Argalad's. I decided to keep it rather than repaint it. Good enough, I say, and most importantly, no more horses.

Next up is the figure that I have long considered the worst of the bunch: Thálin. He's basically a lump of plastic. All right, maybe he's not all bad, but if he stood up and let his arms rest, they would reach past his knees. Also, his back is clearly sculpted to be a natural material of some kind, but who would want to leave their back so unprotected? It's an odd piece.

Whereas the cloaked heroes have etched designs on their cloaks, Thálin's are in his ... is it a tabard? A loincloth? I'm not sure. Anyway, on the card art there's a clear yellow-on-red pattern here, and I copied that idea onto my painting. This part is actually really strong, adding visual interest to an otherwise leather-and-metal warrior.

Berevor is a ranger who provided a chance to play with shades of green. I think the different greens look good together, with enough difference to be visually interesting, and the brown leather bits give her a good earthy tone. The slightly bluish color also adds a bit of cool contrast to the more dominant warm greens.

Last up is Eleanor from the White City. She has really sharp contrasts between her dominant colors, which I think came out strongly in the painting. I originally had the tree on her chest painted in white also, but looking again at the card art, it was clearly silver. I painted that over in silver, while the rear is simply white. I think hers is another case where a fairly limited palette makes her visually distinct and interesting, without needing lots of bells and whistles.

I'm glad to have this set complete, and I look forward to getting these to the table some time soon. Re-reading the rulebook, I was reminded about how it is a bit fiddly, which makes me question whether #2 Son would find it interesting or frustrating. We'll see. A nice thing about games is that they don't go bad, so I can always put these guys back in the box they were in for so long, and revisit them when the kids are at the age and interest to want to try it.

As seems to be customary for many of my painting posts, I want to close with a comment about the photography. My first two rounds of photos did not turn out well for the usual reasons: using the default camera app, I was getting light and dark lines from my lamp's frequency, and using Open Camera to adjust the shutter speed, I was getting poor color temperatures and washed out images. I ended up doing a different physical layout, as shown in the photo below:
I tried to make sure the lamp was pointing at the figure, which leaves the background slightly in shadow. If you look again at the photos, you see a slight gradient, which I think is fine though not artistically intended. Using Android's default camera app, I was able to eliminate the lamp-lines by raising the exposure to 0.7. Doing so manually meant that I could not refocus without it dropping back to 0.0, so I tried shooting all the figures without refocusing. This worked on some but not others, the latter of which I had to shoot again. If I were to do it again this way, I would have just focused each time and then adjusted the exposure, so as not to waste time taking out-of-focus shots.

Thanks for reading this tale of my multi-year project to paint Middle Earth Quest! As always, feel free to leave a comment.