Sunday, October 14, 2018

Notes from Meaningful Play 2018

One of many effective learning techniques that my students don't use is transcribing their notes. I was reminded of this when I took many pages of notes at the 2018 Meaningful Play conference. I decided it might be fun to share my list of odds and ends here. Perhaps there will be something here that the interested reader might learn from, or at the very least, it gives me a chance to create a digital archive that I can return to.

Tracy Fullerton mentioned a book called Wanderlust: A History of Walking. That could be interesting.

She also mentioned Situational Game Design, whose abstract claims that it is about analyzing games as player experiences rather than as systems. This could also be good, and it makes me wonder how it compares to Chris Bateman's writings on games as constituted of player practices, and to the systems approaches that I tend to like from folks like Koster, Cook, and Burgun.

Fullerton was making a case that games should be about interesting situations, and that this should include meditative play. She gave an example of a meditation-reward system in Walden: having Thoreau pause and look over a scenic setting rewarded the player with a little narration. To me, this raises the question: who is meditating, the player or the character? She gave several examples of games that included this kind of mechanism, but there weren't any compelling counterexamples given. It left me with a sense that maybe I didn't understand her, or maybe she assumed the listeners were already on the same wavelength. A particular call was made to move "beyond mechanics, beyond systems" and toward "a series of meaningful situations" in game design. The problem I have with this is that having Thoreau narrate his meditative experiences is system design. I didn't have these thoughts fully assembled in time for the Q&A, and I did not have a chance to talk with her later in the conference. I appreciate her sharing her work-in-progress ideas about where to take game design. That is, after all, a lot of what I do here.

Well Played has a CFP about intergenerational gameplay, and the idea is for the intergenerational players to also coauthor an article about the experience. I'm thinking of trying this with my sons, probably the oldest one.

Ann Arbor is known as a fairy town, with little hidden fairy doors throughout the city. People geocache with these doors too. I need to tell my colleagues at Minnetrista about this, in case they don't know about it already.

The city of Brussels has a tour based on comic art. I need to send this to Easel Monster.

Is reading Finnegans Wake worth it? Paul Darvasi says so, and he's one of the coolest scholars I know. He also said you can't "read" it, but rather you "study" it, time and time again. It sounds fascinating; maybe it's like the English literature version of Gödel, Escher, Bach.

I heard someone refer to "gamification" as "punish by reward", referring to how it uses extrinsic motivations to diminish actual learning results. I need to keep that in mind for this workshop I've been asked to run at BSU about game-based learning.

Eric Zimmerman seemed to me to be separating games, systems, and play in his keynote. Part of what gave this away was a slide with those three words on it, and that he talked about them as separate. I had a hard time with this, since I see them as not just interconnected, but essentially the same—at least when done well. I asked Darvasi about this and he pointed me to Zimmerman's Ludic Century manifesto, which covers the same ground and, fortunately, is shorter than Finnegans Wake. As with my earlier notes, I don't have all my ducks quite in a row here yet, but let me share the gist of it. This is informed, without a doubt, by Cockburn's Heart of Agile philosophy. There are systems in nature that are studied by natural scientists. Other systems arise from human behavior, and the intentional ones are studied by the sciences of the artificial. A good designer must recognize that the system they design fits into a bigger system, and that people have experiences before and after, and sort of parallel to, their systems. These systems are like software: we design them as static specifications, but they have dynamic behaviors, and we hope the behaviors that emerge are the ones we want. To design intentionally for humans requires accounting for human nature, which is playful; put another way, if you design for humans and you don't account for their playfulness, you are not doing good design. I think this is the right idea, but I need to work on the articulation. I am very grateful to Andrew Peterson, who listened to me as I tried to sort out my thoughts about this and constructively disagreed with me, and to Mars Ashton, who said essentially, "Yes, it's all just design."

One of the best presentations I saw was by Sandra Danilovic. She organized a game jam for people with a variety of disabilities and welcomed them to make games about their disabilities if they desired, and she conducted a qualitative study around the event. One of her findings she called logopoiesis, which was essentially about the healing power of computational thinking (as I understand it from the presentation). I asked her whether this particular factor was tied to the fact that these people were making games versus, say, film or poetry. Turns out she herself had a background in a variety of arts, and she said two particular cases in her study dealt with the challenges of problem solving through programming and with the meditative act of arranging pixel art. She found in her analysis that these were separable from the other characteristics. I thought this was fascinating: there is a lot of rhetoric about the power of computational thinking (more rhetoric than empiricism, but maybe that's unavoidable), and I have said for years that computing is really a new liberal art. This was the first time, though, that I have come across the therapeutic value of it, as related to its poesis (the making of a thing that did not exist before—a definition I had to check because it's not in my usual lexicon).

I learned about the game Night in the Woods, which sounded quite interesting. The more I heard about it, though, the more nihilistic it started to sound. I'm not sure if it should go on my to-play list or not. The speaker also recommended Wandersong, which I also didn't know much about. I seem to be behind the times on trendy indie games. Heck, I just finished Bard's Tale IV, a sequel to a game from thirty years ago, so "behind the times" may actually be generous.

Someone recommended I check out Mark Rosewater's Ten Things Every Game Needs. They said it was a good summary of ideas, well articulated though not groundbreaking. I did not write down who made this recommendation and cannot remember now. I just scrolled through, and it looks reasonable.

I had a great conversation with Andrew Peterson during a "dinner break" when neither of us felt like going and getting dinner. He shared with me several interesting ideas he uses in his game design class. First, he has completely "flipped" the class. They do readings and preparation on their own, and class sessions are almost entirely devoted to teams working on their prototypes. The teams are randomly assigned themes from a brainstormed list. His students have a week-12 ship date, at which point they have to have their materials sent off to Game Crafter. The physical prototype that arrives is what he then grades. This front-loads the work into the earlier part of the semester and allows for more slack time for the students in the last three weeks, when their other classes tend to build to fever pitch. Peterson also mentioned that as an instructional designer and game designer, what he likes to do is ask faculty what they don't like to teach in a course, identify the variables, and determine which can be turned into a game. This also might be useful for me as I start prepping for my own campus presentation on games in learning.

I met Chris Totten, who said the most perfect and quotable thing over breakfast: Games should be good. He is clearly a like-minded individual.

Kate Edwards supposedly has an excellent talk about imposter syndrome. I think this must be it. I was at a table with some very talented people, all of whom had stories about how they themselves had been touched by imposter syndrome and how they knew some of their heroes also did. This is interesting by itself, but one also mentioned how he teaches his game design students about imposter syndrome and the Dunning-Kruger effect. He has them do a short jam of sorts to show off their skills to their cohort, after which it is easy for people to feel outclassed. He introduces these topics then, and he reminded our table that students generally are unaware of these phenomena. It made me think, I should do something like this in many of my classes as well.

Henry Petroski has a book about the pencil. That sounds amazing.

The Death and Life of Great American Cities sounds like a great book about urban planning. Some colleagues were very excited about this work, enough to make me think it could be worth looking into.

A friend told me about a talk that Jesse Schell gave at GDC about grant funding in which he set a $50 bill ablaze. I was able to reach out to him for the link, and I look forward to watching it later.

A speaker happened to mention that he uses one-page design documents successfully in an upper-division game design and development class for Computer Science majors. This surprised me, since the times I've tried it, I've found the assignment confounded by my students' lack of visual communication and document design skills. It sounds like they are doing the designs on whiteboards, but they're also iterating on them during the semester. I sent out an email asking for more information last night, and this morning—as I continue this lengthy post—I have already received a response. They have their students start with whiteboard designs and make them progressively more refined during the semester. I also see that we were using the same term for different things. They are drawing upon David Osorio's style, where a "one-page" is more of a pitch document, whereas I use the term drawing upon Stone Librande's, where it replaces a design document. What they're doing in Osorio's style, which incorporates images and two-dimensional design, I have my students do using Tim Ryan's concept document format, which privileges text.

One of the best talks I saw, both in terms of content and form, was by Jessica Hammer. Her work was inspired by research findings that students don't get better at giving feedback during the semester if the skill is not taught. Hammer presented her EOTA model for helping students give feedback to their peers after playtesting. "EOTA" is an acronym that walks through the kinds of feedback that should be given in sequence, the expansion being: Experience (I thought..., I felt...), Observations (I saw...), Theories (Therefore...), and Advice (You might...). She described a learning experience where the designer had to remain silent during the whole feedback process, although she amended that to include that the designer could say "Thank you." I look forward to reading her whole paper once the proceedings are up, because I think this is the kind of thing I can bring into many of my classes to help students learn how to give and receive feedback.

Another wonderful pedagogic structure she described was to have students write down their favorite snacks on a survey at the start of the semester. Then, when a team does really well, she can bring in those students' favorite treats in honor of their accomplishments. This is very clever, since it doesn't put anyone on the spot: the students who did well know it was them, and everyone gets to enjoy the treats. I need to keep this in mind as I'm planning next semester's courses, although maybe this will have to wait until Fall 2019 just due to the teaching I expect to be assigned for Spring.

A keynote speaker mentioned the concept of design fixation, which is when one gets caught thinking of an object only in terms of its conventional purpose. I jot this down here just because it's a useful term that I'm not sure I would have thought of, had I been looking for it.

I met Derek Hansen, who is teaching cybersecurity and using games, so I just sent him some info about Social Startup Game, which Kaleb Stumbaugh and I created as a research and design project a few years ago. I had to look up where we presented our findings besides the S2ERC Showcase, and it turns out that was Meaningful Play 2016. Unfortunately, when you look for the proceedings of this conference, they are nowhere to be found. I have emailed the conference organizers a few times over the past two years to ask about it, in part because I am so happy with the evaluation that Kaleb and I conducted. Still, there is no paper in the proceedings. A copy of the paper is available on the project site, however, so interested readers can check that out. (I forgot it was there, so in fact I just rebuilt the paper from LaTeX sources to send to Hansen. Oh well.)

A speaker strongly recommended Kurt Vonnegut's Player Piano, which I am sure I've never read. Maybe that would be a good piece of fiction to offset all the non-fiction building up in my reading list.

A keynote was given by Katherine Isbister, whose work on wearable technology was not previously known to me. I am sending her name to some friends studying HCI.

A speaker brought up the idea that karaoke-style games can help people learn to sing on pitch. It made me wonder if there are good ones that my kids can use. Some of them clearly love to sing, but I find it hard to teach them to hear where the notes go.

Sissy's Magical Ponycorn Adventure. Gunman Taco Truck. Ian Schreiber has been kind enough to talk to me quite a bit about "fam-jams", or family game-jams, and these two games are great inspirations. I think I already wrote about how my oldest son made two complete games at the last Global Game Jam, whereas mine barely worked. It inspires me to find opportunities to do some more creative collaboration with my kids. Chief among his tips is that when working with your children, you should do what they say—not necessarily what they want, but what they say. Watch this blog for details.

I overheard a friend mention a book called something like "Just Keep Writing," and he said that although the book was about creative writing, it could just as easily be applied to games. I am not sure what the specific book was, but I'm sure he's right. It brings up a thought that came up all during the conference for me: I should make more.

In her keynote address Saturday morning, Diana Hughes from Age of Learning brought up mastery learning. This is the idea that students may not move forward in the curriculum until they have shown that they have mastered antecedent work. It struck me that this is essentially a tech tree or a skill tree in games: you cannot build the next thing until you build its predecessors.

I attended a workshop about The Agile Teacher. It is a game created to help teachers—particularly new teachers—to explore active learning techniques. The game presented each group with a context, ours being a mathematics seminar with 10 or more students. Each group presented their findings to the rest of the room. By design, groups were supposed to have a context in which no one at the table was an expert. The designers explained that they had seen cases where the one person at the table who is from that area would tend to dominate the conversation. That makes sense. However, I also noticed (and shared with the designers) a fascinating phenomenon: without pedagogical content knowledge, every group designed activities that represented stereotypical understandings of the domain. The groups that had drawn Math as their domain reverted to a high-school understanding of math as essentially computation. The groups that drew Science worked on techniques to help students memorize taxonomies. Computation and taxonomies are both parts of math and science, but they are ancillary. It's not exactly a flaw with their game, since it was doing what it was designed to do (namely, foster conversation), but it was an interesting phenomenon nonetheless. It made me think about how I have seen people at my institution pigeonhole other faculty because they think they understand the others' domain. Maybe it's an instance of good old, "You're a Computer Science Professor? Can you help me with my printer?"

The final session I attended was another workshop, this one given by the aforementioned Andrew Peterson about his game, enRolled. The game is designed for new college students, to get them thinking about the impacts of their decisions as students. His motivation was excellent: as an adjunct, he was handed a course where he was supposed to lecture new students about how to succeed, and he realized that lecturing about these topics was of limited value. Peterson then worked with his students to design the first incarnation of a game to convey similar ideas. The difference was that students would engage in meaningful and authentic discussion around how to codify things like drinking and studying as game mechanisms, whereas they were hesitant to do so in a dry lecture. Very clever. He said a few times, "The game sucks," pointing to its dependence on random events and lack of balance. However, he was also clear that it does what it was designed to do: foster important conversations. This dovetails nicely with my previous notes, reflections, and conversations around what design really is. In this case, Peterson has a playful conversation generator that happens to be a game, and so its fitness function is not about balance but about authenticity of emergent conversation. The game can be purchased one-off from The Game Crafter, but he also mentioned that he hopes to run a Kickstarter to print many copies at once and therefore reduce the price.

There was one particular story that Andrew shared with me that I want to capture here, so that I can re-tell it to my game design students. I hope that he doesn't mind my doing so. When he was working with his students on enRolled, the task was to determine the relative negative points of drinking and drugs. Andrew started with the idea that drugs would be worth -10 and alcohol, -2. His students disagreed, however, justifying their position by the prevalence of alcohol abuse. Very few of them knew someone who dropped out of school because of drugs; they all knew several who had dropped out due to alcohol. What a great example of how game design gave rise to new insights and conversations!

That's the end of my notes. I've shared here almost everything within my pocket notebook, just skipping some of my notes about questions to ask presenters that turned out to be not worth sharing here. Other interesting things happened during the conference as well, but my goal here was not to write a conference report but only to assemble the thoughts that I wrote in my pocket notebook. (I have kept some variation of these notebooks in my pocket ever since reading Pragmatic Thinking and Learning, by the way.) I am tired out from the conference and feeling a little apprehension about the coming week, knowing that I now have to catch up on a backlog of tasks. If you have made it this far through my notes, know that I'm happy to discuss any of these ideas with you in the comments or future communications. Many thanks to the organizers of the conference for such an inspiring event, and thanks to those attendees who shared their knowledge, wisdom, stories, and advice.

Saturday, October 6, 2018

Custom validation and scoring of the Creative Achievement Questionnaire in Qualtrics

The Creative Achievement Questionnaire (CAQ) is an instrument developed by Shelley Carson to measure a person's creative achievement. I came across it years ago when doing some pilot studies on the impact of creative achievement on learning Computer Science. In fact, for years I have been pointing nascent scholars to the paper by Carson et al. as an exemplar of careful attention to reliability and validity in social science.

A colleague and I thought it would be interesting to revisit some research questions around creativity and computer science education. I'll leave a discussion of the full study design for another time. For today, I want to focus on how I was able to adapt the CAQ for delivery in Qualtrics. Last time I used the CAQ, we deployed it on paper and scored by hand, which was fine for a small number of participants; now, we want to scale it upward. Additionally, we want each participant to know their CAQ score at the end. It took several hours of working with Qualtrics to figure out a way to do this.

The CAQ has several sections that look like this:
A. Visual Arts (painting, sculpture)
_0. I have no training or recognized talent in this area.
_1. I have taken lessons in this area.
_2. People have commented on my talent in this area.
_3. I have won a prize or prizes at a juried art show.
_4. I have had a showing of my work in a gallery.
_5. I have sold a piece of my work.
_6. My work has been critiqued in local publications.
*_7. My work has been critiqued in national publications.
The idea is that you mark the numbers that are applicable to you. Scoring a section in the simple case is easy: simply sum up the numbers of the items. For example, if you have taken painting lessons (1) and people have commented on your painting talent (2) then your score for this section is 3. If that's all there were to it, it would be simple to use Qualtrics' embedded scoring system, which allows for a numeric value to go with each option. The trick, however, is that asterisk on the last line: that means that you have to mark the space with the number of times this has happened and then multiply that by seven. So, if you have had your work critiqued in national publications twice, that contributes 14 to your CAQ score.
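In Javascript terms, the scoring rule for one section amounts to something like this little sketch. It is independent of Qualtrics entirely; the function and its arguments are just for illustration:

 // Sum the marked item numbers, except that the starred item (7) is
 // multiplied by the number of times it has happened.
 function scoreSection(markedItems, starredCount) {
   var score = 0;
   for (var i = 0; i < markedItems.length; i++) {
     score += (markedItems[i] === 7) ? 7 * starredCount : markedItems[i];
   }
   return score;
 }
 // scoreSection([1, 2], 0) === 3; scoreSection([7], 2) === 14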

I assembled a survey that looks like this:
The editor made it easy to add a text field to a checkbox, but what I needed to be able to do was figure out what number was in the text box and which numeric entry it is in the list of options, and then store that product—along with the sum of the other values—in an embedded variable.

I found the documentation for the Javascript API to be a bit opaque, in part because it assumes you already know their internal vocabulary for surveys. Matt Bloomfield's Medium article helped me to make some more sense out of how custom scripting in Qualtrics works. Using Chrome's developer tools, I was able to poke around in the preview view of the survey and find that the text field above has HTML like this:

<input aria-describedby="QR~QID7~8~VALIDATION" class="TextEntryBox InputText QR-QID7-8-TEXT QWatchTimer" data-runtime-textvalue="runtime.Choices.8.Text" id="QR~QID7~8~TEXT" name="QR~QID7~8~TEXT" style="width: 150px;" title="My work has been critiqued in national publications this many times:" type="text" value="" />

That id looks interesting, doesn't it? Its parent looks like this:

<label for="QR~QID7~8" id="QID7-8-label" class="MultipleAnswer" data-runtime-class-q-checked="runtime.Choices.8.Selected"><span>My work has been critiqued in national publications this many times:</span></label>

It appears that the text field shares an id prefix with its parent, and that I can identify a text field by looking for the suffix "~TEXT". Additional experimentation confirmed that the number just before that is the sequence number of the option within the question: this particular item is the eighth option, since CAQ items are numbered in true computer science fashion as 0–7.

After much experimentation, I was able to come up with one reusable Javascript function to score each section of the CAQ. It makes use of the fact that jQuery is built in but unconventionally invoked through a variable named, appropriately, jQuery. This allows me to search for an id by suffix, which I did not previously know was possible. This function was added to the "header" section of Qualtrics Look and Feel for the survey to ensure that it was included on each page.

 function scoreCaqSection(section, context) {
   var selections = context.getSelectedChoices();
   var questionId = context.getQuestionInfo().QuestionID;
   var score = 0;
   for (var i = 0; i < selections.length; i++) {
     // Choice N corresponds to CAQ item N-1, since the items are
     // numbered 0-7 but Qualtrics numbers the choices 1-8
     var itemValue = parseInt(selections[i], 10) - 1;
     // Check if there is a text entry associated with this choice
     var selector = '[id$="' + questionId + '~' + selections[i] + '~TEXT"]';
     var textField = jQuery(selector);
     if (textField.val()) {
       // Starred item: multiply its value by the count the respondent entered
       score += itemValue * parseInt(textField.val(), 10);
     } else {
       score += itemValue;
     }
   }
   Qualtrics.SurveyEngine.setEmbeddedData('CAQ.' + section, score);
   console.log('Score for ' + section + ': ' + score);
 }

The script ends by setting an embedded data variable, so by the end of the 10 parts of the CAQ, there will be embedded variables called CAQ.A, CAQ.B, etc., that hold the CAQ component scores. Each individual portion of the CAQ can therefore include a simple custom script like this:

 Qualtrics.SurveyEngine.addOnPageSubmit(function(type) {  
      scoreCaqSection('A', this);  
 });  

The call indicates which section of the CAQ is being scored, which allows for proper storage of the partial score, and passes along the question object itself. The final page of the survey can then use this awkward embedded expression computation to get the final score:

$e{e://Field/CAQ.A + e://Field/CAQ.B + e://Field/CAQ.C + e://Field/CAQ.D + e://Field/CAQ.E + e://Field/CAQ.F + e://Field/CAQ.G + e://Field/CAQ.H + e://Field/CAQ.I + e://Field/CAQ.J}

Yes, I did work for a little while on automating that expression to contain a loop instead of hardcoded values, but I realized I was spending an order of magnitude more time on that than it would take to just type them all in.

The final piece of the puzzle was to ensure that what the users enter into the text fields is actually a number. Qualtrics provides custom validation options, but I was surprised that it does not have a built-in option to check that a field is a number. I needed to combine this with the rule that a number needs to be in the field only if the option is checked. Qualtrics will ensure that if you type in a box, the option is checked, but not the other way around. The custom validation for part A therefore looks like this:

This is checking that either the option is checked and there is a valid number in the field, or the option is not selected. Most of the cases are fairly simple, but the portion with multiple starred items requires more individual steps. We currently have a GA stress-testing our Qualtrics implementation to ensure that the validation and computation are correct, and this frees us up to work some more on the IRB protocol.
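Expressed as code, the condition for each text-entry item amounts to the sketch below. This is just an illustration: the real validation is assembled from Qualtrics' condition drop-downs rather than written as a script, and the digits-only regular expression is my own assumption about how to check for a number.

 // Valid when the option is unchecked, or when it is checked and the
 // respondent entered a whole number (digits only)
 var valid = !optionChecked || /^[0-9]+$/.test(textField.val());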

This was my first experience with Qualtrics, since I usually go for the much simpler Google Forms when I need to collect some data. Qualtrics clearly has some more advanced features, and the custom scripting option is powerful, although it was slow going for me to make sense out of it. There's relatively little chatter in the forums I could find about adding custom Javascript, and the perfunctory nature of the API documentation makes me think that this is not a feature frequently used by Qualtrics' customers. Since it took me several hours of work to get this far, I wanted to take a few minutes this morning to write up what I have and what I learned. Perhaps then, next time future-me needs to remember how to do this, I can simply end up back here and figure out what I used to know. It wouldn't be the first time my blog has helped me with that.

Friday, October 5, 2018

Specifications grading and wiggle room

I have been using specifications grading in my game programming class. I just finished evaluating my class' third mini-project, and now I am encountering some interesting problems. I want to share them here, in part to be able to review my notes later, and in part to see if my readers have any thoughts or advice.

First, a very brief background. Specifications Grading is the name of a technique popularized in the eponymous book by Linda Nilson. I actually used this approach several years ago in CS222 without giving it a clever name, but I abandoned it when a student told me that he gave up on the course material because he knew he could not get to the level he wanted. That was just one case, though, and as my plans for Game Programming make clear, I decided to give it another shot. You can find all the specifications for the four Game Programming Mini-Projects on the projects page of the course site.

In the first two rounds of mini-projects, I ran into some subtle problems with the specifications. One specification requires that students follow our established naming conventions. The problem arises: what do you do if a student misnames one asset? Is one violation enough to say the whole specification was failed? Clearly not, but how many then? I am thinking about recasting such elements into something like "No more than two violations of the style guide." The problem then is, of course, that I have to actually look for them and keep track, doing the kind of bookkeeping that good specifications should make unnecessary.

A similar problem came up in the project report requirements, where I require that students describe the third-party assets they used and the licenses under which they are using them. Turns out, my students either were unbelievably lazy about this or really didn't understand the requirement. After the first Mini-Project, I encouraged them to take this more seriously; after the second Mini-Project, I realized I needed to intervene. I gave a class presentation about IP and licensing, including specific examples of violations from student work. I thought they should have learned this before a junior-level elective in college, but maybe they didn't; however, even in the third Mini-Project, I had students doing this wrong. I made the project report a low-grade specification: a student needs to have a well-formatted project report in order to get a C, but "well-formatted" includes all this licensing info. According to the specification, students who do not track the licenses properly should get a D. Is that right? Maybe, maybe not. I am thinking of separating the specifications for the project report to make this more generous to the students who really haven't yet internalized concepts of intellectual property.

Another area where students are having trouble is in working with Perforce. I sympathize: it took me some time to make sense out of it, and I had the benefit of having worked with many kinds of version control systems. It doesn't help that Perforce's nomenclature of "depot" and "workspace" is idiosyncratic. Having a working version of the project in our Perforce depot is a D-level requirement: fail that, and you fail the project. However, many students demoed games in class that were clearly not what they submitted. I am torn on this one: it's a clear, explicit D-level criterion that "Project is correctly configured on the depot so that a new client provides a runnable game." Students are turning in project reports with that box checked, but I doubt they have actually verified that this is the case. Indeed, one student even submitted a project report with that box checked and emailed me to say that he had trouble with the depot before submission. Hence, while I am sympathetic to the frustrations of learning new version control systems, I have very little tolerance for carelessness and none for deception.

If you look over the specifications, you will see that they are given simply for levels D, C, B, and A. In the report, students are supposed to tell me what grade they earned. My intention behind this was twofold: first, it would make them double-check the specifications and reflect on what they have done, and second, it would make my grading easier. I am surprised by how many students, in their reports, will make a claim like "I earned a B+" or "I earned at least a C because I worked hard on this." Neither of these is in line with the specifications system at all. It's really not clear to me where they get these ideas: whether they are reading into the rules something that's not there or, possibly more likely, not reading the rules at all.

My friend and colleague David Largent has been using Nilson-style Specifications Grading for several semesters, and so I look forward to picking his brain about some of these issues. He deploys a system called "Oops Bits" wherein students can get another chance if they misunderstand or misrepresent a specification, but I don't know much more about the system than that. I am thinking of sending out an email to my class to give them some kind of timeboxed period in which they can deploy an "Oops Bit", e.g. if they realized that the game they demoed was not properly submitted to the depot. The obvious negative here is that then there's no lesson really learned: I have to grade their work again, when they already claimed in their reports to have met that criterion.

As an aside, I require that the project reports themselves be written in either HTML or Markdown. I am surprised how few students are fluent with one or the other of these. Like understanding intellectual property, it seems to have become a major unexpected learning objective of the class that students understand plain text formats. I provided the students with a Markdown report template for Mini-Project 1, and yet many of them manage to create non-standard or nonsensical reports. I wonder if I can easily modify the Javascript that creates the specifications articulation on the course Web site to automatically generate downloadable Markdown templates for each separate project, which would potentially reduce students' copy-paste hassle and, ideally, provide a more convenient way for students to fill in the blanks and meet the report specs correctly.
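To make that idea concrete, here is a hypothetical sketch of what such a generator could look like. The shape of the specs object is my own invention (the actual course site surely stores its data differently), but it shows how little code the idea requires:

 // Hypothetical sketch: emit a Markdown report template from the same kind
 // of specification data the course site renders. The structure of `specs`
 // is an assumption, not the site's actual format.
 function toMarkdownTemplate(specs) {
   var lines = ['# ' + specs.project + ' Report', ''];
   Object.keys(specs.grades).forEach(function (grade) {
     lines.push('## ' + grade + '-level specifications', '');
     specs.grades[grade].forEach(function (spec) {
       lines.push('- [ ] ' + spec); // students check off what they claim to have met
     });
     lines.push('');
   });
   return lines.join('\n');
 }

 var template = toMarkdownTemplate({
   project: 'Mini-Project 1',
   grades: { D: ['Project is correctly configured on the depot.'] }
 });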

Tuesday, September 18, 2018

What if Mario could choose his Princess?

I had my game design students read Keith Burgun's "What Makes a Game?" essay. I think it's an interesting perspective for helping students think about the roles of decisions, solutions, and ambiguity in their game designs. I used it myself a few years ago in a retrospective on my own game design projects. As I have done in the past, students had to bring with them examples from each of Burgun's four levels: Interactive System, Puzzle, Contest, and Game. I pointed out to my students, as I will also do for those of you unfamiliar with Burgun's taxonomy, that the names of these levels are essentially arbitrary: he's not claiming that only things at this level in his taxonomy count as "games", but rather that the things at this level in his taxonomy are what he references as "games." This is reminiscent of the classic McDermott article, "Artificial Intelligence meets Natural Stupidity," which points out that just because you make a Lisp function called Understand doesn't mean that you've made a program that understands anything.

A student presented 2048 as an example of a Puzzle, Super Mario Bros as a Contest, and Hearthstone as a Game. This was enough to spur serious conversation when I asked if the rest of the class agreed or disagreed. Students provided reasonable justifications for their claims. Once the conversation settled down, I clarified the taxonomy based on having read several other essays by Burgun as well as his two books, Game Design Theory and Clockwork Game Design. In one of those (I honestly cannot remember which), he uses Super Mario Bros as an example of a Puzzle because there is a series of inputs that will lead you to the "correct" solution. I pointed out that, in Burgun's lens, the choices you make in a level are not meaningful, not in the same way as the moves are in a game of Hearthstone. We also discussed how you could turn a Puzzle like Super Mario Bros into a Contest by, for example, trying to beat your past high score, or by playing it in a tournament, but that now you're essentially making a new Contest where Super Mario Bros is one of the elements.

This got the students thinking back to their understanding of the reading, and I asked them to look at their peers' work, which was posted to the classroom wall, to see if they saw anything in particular that they thought strongly exemplified—or completely missed the boat about—Burgun's taxonomy. One jumped out to me: a student had identified Fallout 4 as their example of a Game. I asked if, after the previous discussion, they agreed with this assessment. A student responded that, indeed, because the game has multiple endings and your choices meaningfully determine which ending you get, it was therefore a Game. This got me thinking about the Super Mario Bros example, so I took the devil's advocate position and asked, "What if, at the end of Super Mario Bros, you got to choose whether you got a blonde princess or a brunette princess? Would that now make it a Game instead of a Puzzle?"

We had actually run a minute over time, and so I left them with the challenge of considering where Fallout 4 fits into the taxonomy. As we packed up, one of my students told me that he was pretty sure it was "just" an Interactive System, and not a Puzzle, Contest, or Game. I encouraged him to write up his thoughts on the discussion board or share them in our next meeting.

I wanted to capture this little piece of my teaching experience in part because I like the idea of adding "narrative choice" to Super Mario Bros. I think we can all look at that and say it's not really an interesting decision, but it's harder to distinguish the systemic differences between choosing your princess and any of the binary-ethical-choice BioWare games. Isn't Mario choosing his princess effectively the same as Commander Shepard seducing a selected crew member? What if it didn't matter what you did the whole game, and you just picked whatever ending you wanted? Then, as I was writing this, I realized that I was describing the conclusion to Deus Ex.

Friday, September 14, 2018

The Paper Metaphor and the Brainwashing of Writers

I had a parenthetical phrase in my previous post that was about as long as the paragraph that contained it, so I decided to extract and reform it into its own post. It's something that's been on my mind the last few days in two of my classes, specifically game design and human-computer interaction. I'm using Google Docs in these classes, as I have done for years in many classes. Google Docs has some excellent affordances for learning, perhaps the most obvious being that student teams can collaboratively write in a convenient way. To me, however, this feature is secondary to the ability to highlight and comment on specific parts of a document and then to transform that comment into a conversation in the margins. If I see something interesting or confounding or insightful in a student submission, I can highlight it and leave a comment. The comment might be a question designed to make a point, an honest question of my own curiosity, a reference to relevant work or other student work, or really anything else I can express in text. Whoever wrote that section of the document gets a notification, and anyone who reads the document can join in this comment thread.

The problem is that my students are not responding to the comment threads. In fact, I believe that this entire semester so far, no student has responded to or even resolved my comments in any of their submissions. I paused to wonder why this was the case, and I came up with two answers. The first is the simple pedagogic answer that I had not incentivized them to do so. Students, like many of us, are busy—some are even busy with their studies. If there is no incentive to respond to comments, then why bother?

When I teach CS222 (Advanced Programming), I often use a resubmission policy through which students can rework old assignments, learn from their mistakes, and resubmit for course credit. If a student resubmits something and they have not responded to my comments, they should expect me to simply kick it back to them. I believe that this policy is generally good for students, since they have a real incentive to learn from their mistakes, although it also has the negative consequence that some students submit substandard work knowing they can resubmit it later. That aside, the resubmission policy requires a lot of effort on my behalf: not only do I have to grade a submission more than once, I also have to try to understand and comment on the differences between the original and the revised submission. The burden on my time is one of the reasons I am not using a resubmission policy this semester.

I think there's something going on here besides just the incentive structure, however. Conventional educational practice involves students "turning in" work to the teacher, who then evaluates it, assigning a grade and giving some feedback. A student gets the paper back, looks over the comments, and then discards it. Online writing environments like Google Docs draw upon a conceptual model of writing on paper, in part because the legacy of text editors is often tied to the concept of printing onto paper. It's worth noting, however, that "plain text" editors make no such pretense. While it's possible to know what "page" you're on when programming in your favorite programming environment, nobody does it. Concepts like "page" are purely metaphorical in a digital writing environment: there is not a "page" at all, not unless the work is printed onto said page. Because rich text editors draw upon the conceptual model of paper, however, students get drawn into the same one: whether a student gives me a URL or a printed sheet, the culturally expected behavior is the same. This phenomenon reared its head earlier this semester when I noticed how many students were putting hard page breaks into the Google Docs documents that are intended to collect all their work. This makes it tedious for me to scroll through their work, because to me, the conceptual model is "chronological log of work," but to them, the model appears to be "series of pages of work."

What I intend, when I leave comments in a student's document, is the conceptual model of conversation, not of submitting paper to a teacher. Imagine sitting with a student who makes a claim such as, "I don't think Don Norman gives enough credit to the role of amateurs in his discussion of the future of design," and you say to them, "Why is that?" and then they simply walk away. Cultural constraints around conversations tell us that this is not only strange but rude. However, I have left, oh, let's say thirty questions in students' submissions already this semester, and they have all essentially gotten up and walked away. I think it's a mismatch of mental models: I think we're having a conversation, while they think we're completing a transaction.

Unfortunately, it's not clear to me how to encourage students to use the feedback-as-conversation mental model without adding incentives such as resubmission. That means that once again, it breaks away from being an exchange of ideas and into a transaction around points. There's always the opportunity to turn responding to comments into an achievement in a course that uses achievements, although I'm not using many of them this semester as I experiment with specifications grading instead.

My observations raise the question, "What would a digital writing environment look like that fosters the conversation mental model rather than the transaction one?" I hate to beat a dead horse, but I think this is exactly what Google Wave was getting at, and perhaps is one of the reasons why it didn't catch on. It was solving a problem that people didn't know they had, because they were locked into a different conceptual model of how writing and conversations manifest. For programmers, the answer is clear: digital writing looks like GitHub, which integrates writing and conversation along with version control and issue tracking. If GitHub supported commenting on text without needing a pull request, then perhaps using it and Markdown would be a viable alternative to Google Docs for the kind of learning environment I want to foster.

Wednesday, September 12, 2018

Fairy Trails as a Lens: A Tale of Classroom Surprise

Regular readers will remember that last Spring, my immersive learning team—Guy Falls Down Studio—collaborated with Minnetrista to release Fairy Trails. Fairy Trails was a notable project for its unconventional design. It is a geolocative game based on Minnetrista's campus, what I have sometimes called "hyperlocal" because you can only play it at these specific places in Muncie, Indiana. It is driven by an Android or iOS app, but the gameplay happens in the physical world outside the app. This is done in part by designing the app for facilitators, those who bring others to Minnetrista and are focused on these others' enjoyment. This design decision was made in consultation with Minnetrista, who use a local modification of Falk and Dierking's taxonomy of museum visitors to describe who comes to their site.

This semester, I am teaching my Serious Game Design colloquium through the Ball State Honors College. I posted about the course design a few months ago. I am continuing my partnership with Minnetrista, and so one of the major outcomes of the colloquium should be that each student produces an original game design based on our partner's themes. I have also peppered references to Minnetrista through the exercises in the first half of the semester. This semester's students had to play Fairy Trails in the first week of classes. A later assignment involved writing a critical analysis of a game they had played, and I was surprised how many chose to write about Fairy Trails. In almost all of these essays, the students made claims about (1) what, in their mind, the game was supposed to teach, and (2) about how it failed to do so. When I pushed back on these claims in my comments on Google Docs, none responded.

The assignment due yesterday involved the students' reading a chapter and a short summary about taxonomies of players and fun, and then proposing new fairy encounters for Fairy Trails, drawing explicitly upon the taxonomies in the reading. Here's the surprise hinted at in the title of the post: some of their designs were really good. Let me share with you a few of the more memorable ones:

  • Several designs involved the herb garden, in which the players have to try different herbs and then are encouraged to gather a few for home cooking.
  • At the wishing well, the players each make a wish. The fairy asks them to categorize each wish as love, fame, or fortune; they are then rewarded with an excerpt from a classic fairy tale based on the same theme.
  • A fairy wants to get from the Oakhurst mansion to the E.B. Ball Center, but she has to stay in the shade. The players have to take paths through the Oakhurst Gardens rather than taking the direct route along the road.
  • A few students involved the nature area in reflective exercises, including one that asked players to find particular sites or species on the trails.
  • A fairy explains that the Ball family had an enormous collection of fairy tales because of Elizabeth Ball's love for them, and then asks each player to share their favorite book.
  • A fairy encourages players to engage in a game of hide-and-seek in the Oakhurst garden.
  • A color fairy in the backyard garden invites the players to find and share colorful discoveries with their friends.
To me, the most fascinating part of this list is how different it is from anything we discussed in the production of Fairy Trails and how different these are from the kinds of prototypes built by last Spring's class. The crucial difference between last year and this year is, of course, the creation of Fairy Trails itself. I suspect that having this game available changes the lens through which students can consider the creative challenge of incorporating Minnetrista's themes into a game.

After their presentations, I asked the students to reflect on how these ideas came to them. In particular, I wondered if these were ideas they had from before playing Fairy Trails, immediately after playing it, or in response to the aforementioned readings that they had successfully incorporated into their presentations. Many of them responded that they felt the original Fairy Trails fell flat for them: they had assumed before playing it, based in part on my explanation of the course, that it should have more explicitly informative and educational material about Minnetrista. Those who have played the game know that it does not: it contains three fairy adventures that are designed to be fun for groups to play, especially family groups, and it is not at all didactic. Many of this semester's fairy designs then were informed by this idea, such as including reference to the Balls' fairy tale collection and the fact that you can sample and collect herbs from the garden. The student who designed the scenario to roam the nature area wanted something "less childish" than the current game. The hide-and-seek designer noted that Fairy Trails does nothing competitive, and so he was inspired to create something that would appeal to player types not currently served by the app; he used the readings to inspire him along an angle that he wanted to include. Only one student said that the readings directly influenced her design: she had noticed earlier in the semester that no existing fairy used the herb garden, but had not dwelt on this. When she read about "sensation" as a kind of fun, it brought this to mind and she realized that she could use taste in her fairy encounter.

We discussed briefly the fact that several people had chosen the herb garden (and one, the community garden) as locations, while the Fairy Trails production team never considered these. I wondered at this for a few moments until I remembered that we designed the game in the winter! Although last Spring's studio team could see where the herb garden and community garden would be, we could not actually see them functioning. I think this gave that team a blind spot, while this group, who visited in the peak of herb and vegetable season, was able to see the gardens in action and take them as inspiration.

Several of these fairy encounters are very exciting to me, but I don't currently have a team who can put them into the existing version of Fairy Trails. It's still a toss-up as to whether we will expand on the existing app in the Spring or whether we will pursue a different direction. In any case, I wanted to capture some of these experiences here on the blog. Even if we do not come back to them as inspirational fairy encounters for Fairy Trails 2, I think it's quite interesting how having a working version of the game changed students' ability to conceive of interactive, Minnetrista-themed activities.

Wednesday, September 5, 2018

Configuring Perforce Helix for simplified Game Programming project grading

My game programming students had their first mini-project due yesterday, and I'm happy with the results. The "Angry Whatevers" project is designed to get students familiar with UE4 and Blueprint by making a simple fire-a-projectile game. It's essentially the same introductory project I gave in Fall 2017, when I first switched the course to UE4. However, this year's projects were on average much more interesting and higher quality. I attribute this to two factors: first, I used a specifications grading approach, which made it very clear to students what they had to do to achieve different grades; second, I am more seasoned with UE4 and was able to articulate, teach, and clarify more easily than last year.

Last year, we used GitHub for version control, but this caused several problems for us, the most pressing of which was that a distributed model doesn't work well for the non-mergeable binary assets that constitute the bulk of a Blueprint-based UE4 project. In the Spring, I learned and deployed Perforce Helix with my studio course, and I decided to go ahead and use that for this semester's game programming course as well. It was a little choppy to get started, as I had become rusty with some of the core configuration. My first pass involved making one depot per team, but my conceptual model of workspace mapping was wrong.

When I spent several hours trying to rebuild my understanding of Perforce Helix and how to integrate it with the course, I realized that I could get away with one structured depot. This depot was called CS315Depot, and I wrote up instructions and recorded a private YouTube video explaining how to set up the mapping. One of the crucial steps that I had forgotten was not to use the graphical mapping tool in P4V. Instead, we used the text-based mapping specification, so that if I were working on Project 1 ("P1"), I would make a workspace called PaulGestwicki_Laptop_P1 and use my UE4 project as the project root. Then, I use a mapping to the depot like this:

//CS315Depot/P1/PaulGestwicki/... //PaulGestwicki_Laptop_P1/...

What this does is map all the files in my local workspace to the folder P1/PaulGestwicki on the depot. The real magic, then, is that when I want to grade the first project, I can make myself a new workspace, something like PaulGestwicki_Laptop_GradingP1 and map it to the whole P1 directory:

//CS315Depot/P1/... //PaulGestwicki_Laptop_GradingP1/...

In one action, I can grab all the student projects into my workspace and then batch-grade them, without having to make N trips to the depot or create N different workspaces. Also, I could easily set up one group (CS315) for all the students and give that group write access to the shared depot and read access to my depot of demos. Note that this does mean that students can check out each others' projects, which I am happy to support; it also means that they can accidentally destroy each others' work, but that's why we have version control in the first place.
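For reference, the corresponding lines of the protections table would look something like the following. This is a sketch from memory, and "Demos" is a stand-in for whatever the demo depot is actually named:

write group CS315 * //CS315Depot/...
read group CS315 * //Demos/...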

One of the common errors I saw with several students was that they forgot the last slash in the mapping, which ended up with them dumping their Content, Config, and .uproject files into the root of the P1 directory, prepending their usernames to each file. Some students recognized the problem, and I was able to just obliterate their old files so they could try again. Because it was so easy to recognize the error by glancing at the depot, I was also able to email students and point out where they had unwittingly made a configuration error.
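In other words, the broken view was missing the slash before the trailing dots, something like this:

//CS315Depot/P1/PaulGestwicki... //PaulGestwicki_Laptop_P1/...

With that view, a workspace file such as Content/Foo.uasset lands in the depot as P1/PaulGestwickiContent/Foo.uasset, which is exactly the username-prefixed mess described above.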

Perhaps this idea will be useful to you, dear reader. In any case, I hope it will be useful to future-me in case I forget how to do such a thing in the future! I offer my public gratitude to Perforce for their academic licensing that allows my students to use their software, and of course, gratitude to Epic for the very generous licensing of Unreal Engine.

UPDATE: When I went to adjust some grades the other morning, half the projects were gone. I looked at the changelist history, and I found one that looked suspicious. It touched every file in the depot, which makes me think someone had a mapping wrong. What was really strange to me was how this affected the tools themselves: when I viewed the project in p4admin, I could see all the files I expected; when I viewed it in p4v, half the student folders were gone and could not be mapped. I am still quite perplexed about what could cause this. However, the good news is that after I backed out that changelist, everything ended up back where it was supposed to be. Version control for the win!