Wednesday, December 11, 2013

A semester in five burndown charts

It's the end of the semester here, and I am looking forward to writing a bit about the semester once it's behind us. In the meantime, here's a visual vignette about my Game Programming course. We had a great team of students working on original video games with a community partner. The semester was divided into five Sprints, following the principles of Scrum. We used burndown charts to track progress throughout each Sprint, plotting actual hours remaining (green) against steady progress (blue).
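
For readers who haven't used burndown charts before: the blue "steady progress" line is nothing fancier than a linear interpolation from the Sprint's total estimated hours down to zero over the days of the Sprint. Here is a minimal sketch of that computation; the class and parameter names are mine, purely for illustration.

public class Burndown {

  /**
   * Ideal hours remaining after a given day, assuming steady progress:
   * a straight line from totalEstimatedHours down to zero over sprintDays.
   */
  public static double steadyProgress(double totalEstimatedHours, int sprintDays, int day) {
    return totalEstimatedHours * (sprintDays - day) / sprintDays;
  }

  public static void main(String[] args) {
    // Example: a 100-hour Sprint over ten working days.
    for (int day = 0; day <= 10; day++) {
      System.out.printf("Day %d: %.1f hours remaining%n", day, steadyProgress(100, 10, day));
    }
  }
}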

I find that these five images tell an interesting story of a team's progress. The students had never worked together before, and most of them had never been part of an agile team. They were learning not just technical skills but also professional and social ones. The charts illustrate this learning during the semester, as the students improve in their abilities to estimate, to plan, to deliver technical results, and to respond to change.






That bump in Sprint 2 is interesting. The team recognized that many problems in the first sprint came from a combination of procrastination and poor estimation. They started in on their tasks right away in Sprint 2, and when things didn't go as well as hoped, they blew up the estimated hours remaining on those tasks. It turns out that the tasks weren't as bad as they feared, and as they worked through their problems, the estimates dropped back down to more accurate levels.

The erratic path in Sprint 5 illustrates that the team underestimated the effort required to polish the games for public release. Interestingly, the burndown chart was still useful even though the estimates were quite wrong. The team could look at the chart, which said they were dozens of hours ahead of steady progress, while knowing that there was still significant work to be done. This discordance forced them to re-evaluate their tasks and estimates, to reconsider the Sprint's plan, and eventually, to successfully deliver on every user story.

Tuesday, October 8, 2013

A Concept Dependency Graph Analysis of Dominion

Just five years ago, Rio Grande Games published Donald Vaccarino's deck-building card game, Dominion, and it took the designer game world by storm. In short order, it seemed that every publisher was riffing on this new mechanic, and now, deck-building is an established game mechanism in any designer's toolbox. I include Dominion on my list of recommended games in my game design course because of its popularity and its influence in game design: this game provides the metric by which other deck-building games are judged.


Three recent situations inspired me to look at Dominion more critically—particularly, how one learns, teaches, and conceptualizes the game.

First, one of my game design students wrote a critical analysis of Dominion, and a significant portion of her analysis was based on the opinion that the rulebook was unclear. Specifically, she and her opponent were unable to determine when they were to discard their cards and draw new ones. I was surprised at this, in part because I had a student last year who upheld the same rulebook as a pillar of clarity. It may be worth noting that last year's student self-identifies as a "gamer," whereas this year's student had never heard of Dominion prior to taking my course. In any case, this was not the result I expected.

Second, I was recently with a group of friends who were teaching Dominion to a new player. They began by explaining actions and the chaining of actions and the strategic importance of this tactic, all before they had even taken any cards out of the box. This struck me as a particularly awkward way to introduce the game, since concepts such as "you're building your own deck" had not yet been discussed, and it clearly was not resonating with the new player. In fact, his questions revealed that despite several minutes of accurate rule explanation, he had not built a coherent mental model of the game. This suggests to me that it wasn't a problem of accuracy, but of scaffolding—that is, giving the right information at the right time for the learner to advance.

Third, I was retelling the stories above to my lovely wife, and I explained that I thought it was important to focus on core gameplay when teaching a game. I justified that you cannot make sense out of peripheral game design elements without understanding the core, and therefore you should teach games from the core outward. She pointed out to me that sometimes she gets confused when I explain games this way because I don't give enough "big picture." Touché.

This got me thinking about the relationship between core gameplay and learning, and more specifically, how the dependencies among game concepts might reveal effective means for teaching the game. While many designers talk about "core gameplay," there is not consensus on where one draws the line around what is "core" and what is not. I decided to try mapping out the concepts required to play a game of Dominion as well as the dependencies among them, and this is what I came up with:

Concept dependency graph for Dominion.
This is a concept dependency graph, where the nodes represent gameplay concepts and the edges represent dependencies. That is, given an edge A → B, one needs to understand concept B in order to understand concept A. I will refer to terminal nodes with no dependencies as atomic concepts. I am only considering the original Dominion game—no expansions—and only those kingdom cards recommended for new players. I also omitted concepts that were not relevant to gameplay, such as the use of randomizers or the determination of starting player.
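
To make the terminology concrete, such a graph can be represented as a simple adjacency map, and the atomic concepts fall out as the nodes with no outgoing dependency edges. A small sketch follows; the concept names are abbreviated and illustrative, not the full set from the graph.

import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ConceptDependencyGraph {
  public static void main(String[] args) {
    // Each concept maps to the set of concepts it depends upon.
    Map<String, Set<String>> dependsOn = new HashMap<String, Set<String>>();
    dependsOn.put("Draw a hand", new HashSet<String>(
        Arrays.asList("Each player has their own cards")));
    dependsOn.put("Buy a card", new HashSet<String>(
        Arrays.asList("There are stacks of cards in the center of the table")));
    dependsOn.put("Each player has their own cards", Collections.<String>emptySet());
    dependsOn.put("There are stacks of cards in the center of the table",
        Collections.<String>emptySet());

    // Atomic concepts are the terminal nodes: those with no dependencies of their own.
    for (Map.Entry<String, Set<String>> entry : dependsOn.entrySet()) {
      if (entry.getValue().isEmpty()) {
        System.out.println("Atomic concept: " + entry.getKey());
      }
    }
  }
}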

The graph was drawn using dot, part of the graphviz project. Dot generates layered graphs (that is, each node is in a discrete layer) and follows automatic graph drawing heuristics such as minimizing crossings, minimizing edge length, and maintaining aspect ratio. A quick look at the graph shows a potential "core" of the game emerging from the atomic concept "Each player has their own cards."

Concept dependency graph, detail of potential "core"
If someone had never played a deck-building game before, I think this sets up a compelling description: it's a game in which you have your own deck of cards from which you draw a hand, then you play actions and buy new cards from stacks on the table, and you cycle your discard pile back into your deck when it is exhausted. There's no mention here of what those actions actually are, or the "cleanup" phase, or even the victory condition, but I think it's fair to argue that these aren't part of the core gameplay. Keep in mind that "core gameplay" is not rigorously defined; one could just as well argue that the victory condition should be in the core, and be justified in doing so. That is, I'm not saying that this is right, but that it satisfices.

The concept dependency graph has three atomic concepts: each player has their own cards; there are stacks of cards in the center of the table; and there is a trash pile. Of these, the last is almost certainly not part of the core gameplay: it's a design element that permits a small number of kingdom cards to permanently remove cards from a game. Like the numerous Dominion expansions, it is a design element that permits the introduction of more action variations. It is interesting that the computer-optimized drawing of this graph put the trash pile atomic concept on the periphery of the drawing, whereas the fact that there are stacks of cards ends up "wrapped" within other dependencies.

Choosing only the ten basic kingdom cards allowed me to ignore Curse cards in my concept dependency graph. However, Moat is included in this set, and it required adding several concepts and dependencies. Without Moat, it doesn't matter that Militia is an Attack card, and so that concept could be elided. Moat is a Reaction, and so that requires an explanation. Moat is also the only card in the set that has two different actions on it, from which the player must choose one; in all other cases, the player does everything on the card when it is played. This provides some justification for not including Moat as a beginner card, despite its gameplay utility in foiling Militia. It would be an interesting study to compare beginners' ability to learn the game with and without Moat to see if there was any significant difference.

It is noteworthy that the "ABC" concepts—that your turn comprises Action, Buy, and Cleanup—are relatively deep in the graph. You cannot really make sense of ABC unless you know what Action, Buy, and Cleanup mean. However, my experience shows that it still provides a memorable peg on which to hang your hat when teaching the game. That is, I propose that the mnemonic is there because these are not atomic concepts. Similarly, the atomic concepts manifest in directly observable configurations: without knowing anything about how the game is played, one can see that players have their own cards (based on players' handling them and placing them in easy reach), you can see stacks on the table, and there is a single "Trash" card among these stacks.

I knew I wanted a hierarchical drawing of the graph because I was looking explicitly for layers of concepts. I was curious, however, as to whether a force-directed drawing would yield any further insight. The result of running the graph through neato is given below. There's no new insight to be had from this visualization, at least not as relates to finding a potential core gameplay.

Force-directed drawing of the concept dependency graph
The concept dependency graph provided an interesting tool for analyzing Dominion, particularly with respect to the search for core gameplay. I am curious as to whether this tool can be used on other existing games to similarly fruitful ends or whether this is a coincidence. What I have shared here is the third major version of my concept dependency graph for Dominion: the previous versions did not induce any visual "core", but they also contained what I considered to be mistakes in the concept and dependency articulation.

This project has made me wonder whether other graph formalisms might reveal new insights into game design. A propositional network, for example, permits separate reasoning about cards and the having of cards. I did a bit of work with SNePS in graduate school, and it might be a useful tool to deploy here. I know that Machinations presents a formal language for representing game design elements, but I know relatively little about it; it may provide some insight into the learnability of games and core gameplay as well. I am curious to hear from anyone who has tried either tool toward these ends.

If you find any other uses for this formalism in studying game design and learning, or if you know of any related work in the literature, please leave a note in the comments. Thanks for reading!

Tuesday, October 1, 2013

Two-Week Project Showcase

The focus of my sophomore-level Advanced Programming course (CS222) is a large project at the end of the semester. In the past, this has been a six-week project, delivered in two three-week iterations. It is a project of the students' own invention: they have to pitch a project to satisfy a set of constraints I provide. This semester, I decided to expand the project to nine weeks with three iterations. This goes hand-in-glove with the revised, achievement-oriented assessment model that I am using this semester.

Most students have never worked in a programming team prior to CS222, much less defined their own projects. To warm them up, I give a two-week programming project before starting the big project. The two-week project is done in pairs, and I provide the requirements analysis as user stories. This provides a model for them to follow when they pitch their own projects using user stories.

This semester, I gave a new two-week project that was inspired by my 2013 Global Game Jam project. The students were challenged to write an application that takes a word from the user and then hits Wikipedia's API to determine the last modified date of the corresponding page. The curriculum provides no prior exposure to Web programming or networking, and so I provided a very short sample program that demonstrated how to open a URL in Java and read the stream. This project touches on some very important ideas, including Web services and structured data.
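
For the curious, the starter program amounted to little more than the following sketch. The URL and query parameters shown here are illustrative of a MediaWiki timestamp query, not necessarily the exact ones I handed out.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class WikipediaLastModified {
  public static void main(String[] args) throws Exception {
    // Illustrative MediaWiki query: the latest revision timestamp for the "soup" page.
    URL url = new URL("http://en.wikipedia.org/w/api.php"
        + "?action=query&prop=revisions&rvprop=timestamp&format=xml&titles=soup");
    BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
    StringBuilder response = new StringBuilder();
    String line;
    while ((line = in.readLine()) != null) {
      response.append(line);
    }
    in.close();
    System.out.println(response);
  }
}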

In the past, I have evaluated the two-week project in a rather conventional way: I provide a grading rubric at the start of the project, then I check out students' projects from the department's Mercurial server and give each pair a grade. I wanted to do it differently this semester, in part because of the achievement-oriented assessment. The two-week project provides a vehicle for students to earn achievements and write reflections: I'm not evaluating the project itself but rather how students use it to explore course concepts as articulated through the achievements and essential questions.

I decided to devote a class day at the end of the two-week project to a showcase. Each pair had to have a machine running a demo alongside a summary poster. We rearranged the classroom, moving all the tables to the perimeter and clustering the extras in the center. In order to foster peer interaction, I printed some sheets whereby students could vote on which team had the best UI, the best poster design, the cleanest code, and the most robust input handling.

The students enjoyed this format. There was an energy in the room as the students explored each other's solutions, talking about implementation strategies and UI decisions. A few students had trouble getting their projects working at all, and I heard one student say how disappointed he was, because it left his team unable to fully participate in the activity. This represents positive peer pressure and project orientation, which can be starkly contrasted against instructor pressure and grading orientation.

I had recommended two strategies in class: using Joda Time to handle time formatting and using Java's DOM parser to deal with Wikipedia's output. I was surprised to see that almost every team used Joda Time (and used it to earn the Third Party Librarian achievement) but only one team used DOM. Every other team read the output stream as a single string and then searched it using Java's String class. This provided an excellent opportunity to teach about input validation. My sample Wikipedia URL queried the "soup" page for its last modified time, and the result looks like this:

<?xml version="1.0"?>
<api>
  <query-continue>
    <revisions rvcontinue="574699285" />
  </query-continue>
  <warnings>
    <revisions xml:space="preserve">Action 'rollback' is not allowed for the current user</revisions>
  </warnings>
  <query>
    <normalized>
      <n from="soup" to="Soup" />
    </normalized>
    <pages>
      <page pageid="19651298" ns="0" title="Soup">
    <revisions>
      <rev timestamp="2013-09-27T05:20:10Z" />
    </revisions>
      </page>
    </pages>
  </query>
</api>

Keep in mind that this is coming in as one continuous stream without linebreaks. Aside from the one group that did an appropriate and simple DOM lookup, students used String#indexOf(String) to search for "timestamp=", and then did manual string parsing using that reference point. This approach works for most cases, but it opens the application up to an attack that I'll explain after the sketch below, giving the reflective reader a moment to consider it.
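
To make that concrete, the string-scanning approach looked roughly like the following sketch, with names of my own choosing rather than any particular team's code.

public class TimestampScanner {
  /**
   * A sketch of the scanning approach: find "timestamp=" in the raw response and slice
   * out the quoted value that follows. It silently assumes that the first occurrence of
   * "timestamp=" belongs to the <rev> element.
   */
  public static String extractTimestamp(String xml) {
    int index = xml.indexOf("timestamp=");
    int start = xml.indexOf('"', index) + 1; // opening quote of the attribute value
    int end = xml.indexOf('"', start);       // closing quote
    return xml.substring(start, end);
  }
}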

If you ask the application for the last modified info of the Wikipedia page "timestamp=", you get a phenomenon similar to an SQL injection attack: the indexOf operation picks up an unintended location, and the manual string manipulations fail. I had seen this when meeting with a pair the previous week who were working on their Advice Seeker achievement. They had thought their solution to be rock solid, and they were appropriately excited when I showed them how to crash it. They became my covert hitmen during the showcase, crashing solution after solution by finding holes in string parsing logic. So, while few students took the opportunity to learn XML parsing in the two-week project, maybe they learned something even better: the embarrassment of doing it the lazy way and seeing your system go down in public!
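
By contrast, here is a minimal sketch of the DOM-based lookup, which navigates the document structure instead of scanning for substrings and so does not care where the attacker's text lands in the stream. It assumes the response has already been read into a string, as in the earlier sketch.

import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class TimestampDomLookup {
  /** Returns the timestamp attribute of the first <rev> element, or null if there is none. */
  public static String extractTimestamp(String xml) throws Exception {
    Document doc = DocumentBuilderFactory.newInstance()
        .newDocumentBuilder()
        .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
    NodeList revisions = doc.getElementsByTagName("rev");
    if (revisions.getLength() == 0) {
      return null; // e.g., the requested page does not exist
    }
    return ((Element) revisions.item(0)).getAttribute("timestamp");
  }
}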

When I explained to the students what we were doing at the showcase, I had expected teams to show up at their stations when I came by. However, it seems they were so excited to see each other's work that they didn't think about this. Since not every team had a person at their station when I came around, I was not able to give my expert evaluation to each group, nor was I able to model for the students how to give critical feedback. On the other hand, the students got a lot of peer feedback and I was able to meld into the group, becoming just one of many people interested in seeing demonstrations and code. I am not yet sure if I would do this part differently next time or not.

One aspect that is still unclear to me is the extent to which students were working for intrinsic motivation versus extrinsic reward. I was approached after class by a student from one of the teams whose solution was not working. During the hour, they had talked to other students and realized what they did wrong, which is an activity I certainly want to foster. The student asked, clearly in a state of agitation, if his group could fix their application even though it was due to be completed the previous night. I confirmed that this would be fine, and the student went away expressing joy and thanksgiving. I suspect his perspective was that he had just been given an opportunity to save his grade. What I really gave him was an opportunity to make his project work and feel good about getting it done, even if a bit late. I don't think he realized that there's no entry in the course grading policy for the two-week project, that the whole thing was just a fun context for us all to play and learn together. I hope that when he figures this out, he sees this as a reflective learning opportunity and not simply smoke and mirrors.

In conclusion, I am very happy with the showcase format. It was definitely worth using a class meeting for this event. I think this two-week project was particularly well-suited to the showcase format since it's fairly small, permits multiple solutions, helps students build better understandings of the modern computing ecosystem, and can have interesting failure states. Perhaps next time around I need to add an achievement related to XML parsing, since the Third Party Librarian achievement seemed to promote students' use of Joda Time quite well.

(I have some nice pictures of the showcase that I took while standing on top of the teaching station, but I feel like I can't post them here without my students' permission. Sorry.)

Thursday, September 5, 2013

Steering a student toward better reflections

We're in the third week of the semester already, and I have been happy with the changes I made over the Summer. I get the sense that my students like the achievement-oriented grading so far, or at least, they have not staged a rebellion. Many of the students found it difficult to understand the relationship between achievements and reflections. A few students didn't understand this two-step process at all—that you do an achievement first, then write a reflection. However, most of the problems came from difficulty understanding my reflection requirements, as stated in the course description:
A good reflection should address the following:
  • Describe how your artifact and experience characterize one or more of the essential questions of the course.
  • Describe the implications of this characterization on your own personal practice or, when relevant, your team's collective practice.
  • Identify and address potential criticisms of this characterization.
When I talk about these requirements informally, I describe them as: characterize an EQ; implications to practice; and critique your characterization. I did not give the students an example to start the semester since past experience shows that, given an example, all solutions will look like that example. However, looking back on it, I think a simple example would have helped them to see that the reflection is not just the achievement restated, but a shift in focus and discourse.

To this end, I share the following story about a student who came to office hours for help with his achievements. I have the student's permission to share this story, though we'll keep him nameless. He was frustrated that he had not received any points for his reflections, and looking over his submissions, he had simply submitted his achievement artifacts again in the reflections slot on Blackboard. We started by looking at the first achievement he had earned—Studious—which requires reading Bill Rapaport's "How to Study" Guide and then creating a study plan.

First, I pointed out that the reflection has to happen after earning the achievement, and that the first step was to describe how earning the achievement helped him to characterize one of the essential questions of the course. I asked the student to talk through his experience of earning the achievement. He told me that he had been assigned to read Rapaport's guide last year: his instructor had worked with me before and enjoyed Rapaport's guide so much that he assigned it in a class that is a prerequisite to mine! The student took an informal posture and tone, and told me that since he had read it before, he didn't expect to get anything new out of it, but since it was required, he re-read it anyway. As he expected, he didn't learn anything new. At this point, he corrected his posture and his voice turned "cold academic" as he started in with, "Well, according to the author, time management is an important skill for students, and being a student is like being a professional, and so he would say that we should be practicing time management too..."

I cut him off and pointed out how he was telling me how Rapaport might characterize an essential question, not how he would. I encouraged him to take a big step back and think—that is, reflect—on what he had extemporaneously told me about doing the reading even though he had done it a few months before, how he entered it with some negative expectations, and how these expectations were met. So, I asked, how does that experience help you characterize an essential question?

He paused and thought, looking over the list of essential questions. "Well," he tentatively offered, "I guess it means that a professional will sometimes try things he thinks will help, but they won't."

"Yes," I encouraged him. "So, what does that mean for you? That is, what are the implications to practice."

He quickly returned, "If I'm trying something and it doesn't work, I shouldn't get frustrated."

Aha! I congratulated him on the clear articulation of a discovered truth, grounded in his experience with the achievement. We reviewed what we had so far, and I pointed out that he could express it in two to four concise sentences; these reflections were not to be judged on length but on clarity of insight. We moved to the third step, which I have seen to be the most difficult for the students. How might an intelligent layperson critique your characterization? In this case, how might someone argue against this characterization—that sometimes professionals do what they think is best, and it doesn't help—and how would you address such criticism?

This was harder for him to address than the last point. He suggested that, since he's not a professional, he could be wrong. I pointed out that this was not a very interesting observation, and he could use that as a criticism any time he was addressing the essential question of professionalism. He seemed to have trouble pulling something together, so I improvised. One might say that the professional does learn something from this experience: he learns from the failure and tries something new. And if that's the best criticism we have—that what looks like failure is in fact progress—then we are happy to accept it and move on.

The student thought about that, then he asked, "But couldn't we use that as a criticism every time, too?" I laughed and told him that I would probably notice if he did, but he should feel free to use it for the first one anyway.

This was a positive meeting all around. It was good for me to sit one-on-one with a student who simply didn't understand my instructions: talking with a student about his struggles with metacognition grants more perspective than typing responses mechanically into Blackboard. He clearly benefited in the short term because he built a mental framework for approaching reflections, and I hope that the benefits extend beyond the course: the whole point of these reflective activities is to help students become better learners.

Thursday, July 25, 2013

Revising Courses, Part III: Advanced Programming

Inserting reflective essays into my game programming course was straightforward, and the augmented achievement-based grading system of my game design course was an incremental improvement. Now, I will share the story of the biggest change in my Summer of Course Revision: a complete renovation of my Advanced Programming course.

CS222: Advanced Programming has been around for about four years now, and I have taught it more often than any other. Several posts on my blog reflect on this course, including this one from the first semester's experience and this one reflecting on this past academic year. For the past several offerings, I have used the same fundamental course structure: studio-based learning, using daily or weekly writing assignments for the first several weeks, gradually turning toward a focus on a pair-programmed two-week project and then a small-team six-week project. I have been basically happy with the structure, but reflecting on my experiences, I identified the following pain points:
  • The course was sequenced along a particular path of learning that did not resonate with all the students.
  • There was not enough mastery learning: students would often do poorly on an assignment and clearly not revisit it, since weeks later, they still didn't understand the concept. This caused particular pain when these assignments covered technology that was core to my pedagogy, such as distributed version control.
  • It was easy to slip into a mode where I would talk for the whole class meeting. The studio orientation—through which students show artifacts that represent their learning, and these artifacts are subject to formal and informal peer and expert critique—was not guaranteed in my course structure: it relied on daily ingenuity rather than codified form. (The time and space constraints imposed by the university negatively affect studio orientation as well, but these are beyond my control.)
  • It was not clear that all the team members were engaged in reflective practice when working on their projects. That is, I felt that there were people falling through the cracks, not learning adequately from the team projects, sometimes due to lack of reflection and sometimes due to lack of practice.
  • Students enter the course with high variance in knowledge, skill, experience, and motivation.
In addressing these pain points, I wanted to make sure I kept all the good parts of the course. Students work in teams on projects of their own design, using industrial-strength tools and techniques. Teams have to incorporate some kind of management techniques and give multiple milestone presentations, both of which remind students about the "soft skills" of Computer Science. There is a good overall flavor of agile development, pragmatism, object orientation with patterns, and the importance of reflective practice and lifetime learning.

If you've read my last two posts, you probably see where this is going! I decided to replace the daily and weekly assignments with a system of course achievements (a.k.a. badges), and students will write reflections that relate the achievements to essential questions of the course. The complete course description is available online, or you can choose to view just the achievements.

I have developed the following set of essential questions:
  • What does it mean to be a "professional" in software development?
  • How do you know when a feature is done?
  • How do small teams of developers coordinate activity?
  • How does a Computer Science professional use ubiquitous and chaotic information to be a lifetime learner?
As in my game design course revision, students' grades will be based on number of achievements earned and written reflections on those experiences. There are also four "meta-achievements" that can be earned by completing sets of regular achievements. These are leveled achievements and reflect four different potential paths of mastery: a Clean Coder has documented evidence of applying ideas from each of the first twelve chapters of the book; a White-Collar has demonstrated savvy at project management and presentation; an Engineer is moving toward an understanding of software architecture and patterns; and a User-Centered Designer has designed, evaluated, and iterated on user-facing systems. The introduction of these meta-achievements was partially inspired by an alumnus who, when I told him about this redesign, said that most courses were like a Call of Duty game, but that this was more like Deus Ex, and that it would benefit from having more quests.

I want to help students succeed in this kind of learning environment, and so the first month is designed to help them understand what it means for them to have more ownership over course activity. In the first week, I will provide a review of important CS1 and CS2 topics, focusing on the mechanics of object-oriented programming in Java. Knowing that student experience and comfort with this material is highly variable, I can use this week to focus on how to navigate the course structure. In fact, I will strongly recommend that their first achievement is Studious, which requires them to read William Rapaport's excellent How to Study guide and write a study plan for the semester.

We will still use Clean Code as a shared focus, but I have changed how I am scaffolding students' experience in integrating its expert tips. Previously, I had identified specific tips or chapters for students to read and then asked them to apply these tips to their projects—past or current. There was little evidence that this "stuck" with the students, as later coursework would violate the concepts supposedly learned. This was, in part, due to lack of mastery learning, where students would accept a low grade and move on without having learned the material. Furthermore, because we paced Clean Code readings and assignments through the semester, I would frequently encounter examples in class and in critiques that afforded application of a tip that the students had not yet studied. In the revised course, we will read the book relatively quickly, but then keep returning to it in informal expert critiques, adopting an iterative approach that is more apropos to how one learns and remembers these tips: a bit at a time, and a bit more next time.

In my game design course, students are required to present to the class to earn their achievements, but the enrollment in that course is half that of this one. I toyed with the idea of having my CS222 students post their work on the walls, a portion of the class each day, but I feared that this would have too many negative consequences. Instead, I am requiring students to post their artifacts to a wiki, and my intention is to review the wiki changes between class meetings to identify notable entries. This way, I can bring up the wiki on the projector and model an appropriate critical process. I can also insert mini-lectures as necessary to clarify misunderstandings. I am eager (or perhaps anxious) to see how the density of misunderstandings in this redesigned course, which encourages mastery learning through reflection, compares to that of my current status quo, which encourages correctness up front. We'll be using BlackBoard's wiki only because it's easy for all the students to find; it was frustrating to me during my experimentation that it has no wikitext editor. If the wiki turns out not to work for us, we can always move to an in-person poster-style presentation.

Since I will be doing more just-in-time teaching—reacting to students' insights and confusions—I have decided to expand the conventional six-week, two-milestone project to three milestones over about eight weeks. The students like this part of class the most, and I like the idea of adding another milestone, since this gives them another opportunity to learn from mistakes in tools, design, organizational structures, and presentation. For both this large project and the small project, I will provide common requirements that everyone must follow, such as using distributed version control. Previously, these things were simply worth some points on the project; however, if I want to focus on assessing their reflections while also requiring some shared technological experience, it's fairly easily done with a hard-and-fast requirement. With multiple milestones, there will be an opportunity to ensure each team is following the requirements. For example, if I see a team not using DVCS, then I can remind them.

I am eager to see how students take these changes. I expect to be able to blog about some of the day-to-day experiences, and I anticipate writing some academic papers on aggregate student performance in all these modified classes.

Here are the critical links again:


Wednesday, July 24, 2013

Revising Courses, Part II: Game Design

Following up on my previous post about revising my game programming course, today's post is about a revision to my game design course. This course is an honors colloquium, a special topics course only open to honors students and with an enrollment cap of fifteen. Every honors student has to take six credit-hours worth of colloquia, and the topics depend on who is available to teach them in any given semester.

This colloquium is part of a two-semester immersive learning project undertaken in collaboration with the Indianapolis Children's Museum, and it is being funded internally through the Provost's office. Teaching in the Honors College is not a normal part of my load, and a major portion of the grant is "assigned time," allowing me to teach this colloquium. I mention this because the behind-the-scenes machinations are likely opaque to those outside academia: if I didn't have the grant, I would be teaching a Computer Science course instead of the colloquium. I am grateful to the Provost and his committee for approving my proposal, which allows me to teach this course that aligns so well with my research interests.

I taught a similar colloquium last Fall, and it had the same fundamental objectives: engage students in the academic study of games and learning, and, in collaboration with a community partner, have them produce prototypes of original games. There were two significant differences: I was team-teaching with my colleague Ronald Morris, and the community partner was the Indiana State Museum. It was my first attempt at achievement-based (i.e. badge-based) grading, and I wrote a lengthy reflection about the experience. For the redesign, I decided to focus on a few specific pain points from last time:
  1. The students put off achievements until the end of the semester.
  2. Some students made zero or nominal changes between game design iterations.
  3. Not all the students were engaged in reflective practice: they were not adequately learning from the process.
  4. In-class prototype evaluation time was rarely meaningfully used due to the points already mentioned.
I have met a few new colleagues through the conference circuit in the last year, and I am grateful for their willingness to share tips and tricks. In particular, the following changes reflect some specific ideas I have picked up from Scott Nicholson at Syracuse University and Lucas Blair at Little Bird Games.

Perhaps the most important revision to the course is the introduction of reflective essays. As in my game programming course, I was inspired by the participatory assessment model to grade reflections rather than artifacts. The students will present their weekly artifacts to the class, where artifacts might include summaries of essays and articles, posters, one-page designs, or prototypes. These artifacts will be subject to peer and expert formative evaluation, following the studio "crit" model. However, it will be students' reflections on these artifacts that are actually graded. As in my game programming course, I have decided to frame these reflections around essential questions and grade them based on (a) how they characterize an essential question, (b) the implications to practice, and (c) potential criticisms of the characterization. The research I have read predicts that this combination of achievement criteria and reflections should encourage students to produce high-quality artifacts without sacrificing intrinsic motivation.

Last time, the students had to choose a topic and iterate on it fairly early. The students with the best designs at the end of the semester were, for the most part, those who had to throw away major elements of their design or change themes entirely. This leads toward the desire to do more rapid iteration on ideas, not just prototypes; however, I still want each student to create a significant prototype by the end of the semester. To address this, I have divided the semester into three parts. In the first part, we will survey major themes of the course, such as games, fun, learning, museums, and children. The second part of the semester will be rapid creation of design sketches based on specific themes at the Children's Museum, about one sketch each week. The students will then choose one of these to prototype for their end-of-semester deliverable. I hope that this approach improves all the students' prototypes: even though they have less time to work on their prototypes, that time will be more focused and based on having had more reflective practice earlier.

Going along with the three-part division of the semester, I have organized the achievements into groups, some of which are tied to one of these parts. There is also an "unrestricted" category that can be earned at any time. I have introduced a throttle of one achievement submission per week plus one revision per two weeks. The students' grade is tied to the number of achievements they earn, and so this should help the students pace themselves while keeping me from having to evaluate an inordinate number of submissions at the end of the semester.

Following the most popular design principle for assessing learning with digital badges, I have introduced a "leveled" system of achievements. Certain achievements have gold stars attached to them, designating them as requiring special effort. These stars are tied in with number of achievements and reflection points in order to determine a student's grade. Note that two out of the three of these are directly in the student's control, and the one that isn't—reflection points—permits revision. Hence, students can essentially pick their grade based on their level of legitimate participation in the class. 

The full list of achievements is available online, and you can view it on its own page or embedded into the course description. Each badge is defined by a name, a blurb, criteria, and an image. The blurb and image are new this year, and I think they represent a major improvement. I used OpenBadges.me to design the badge images, making significant use of icons from The Noun Project. In case you're curious or want to sketch up your own, the border is the Ball State red taken from our logo (#ed1a4d) and the starred achievements use light yellow background (#ffff66).

Note that the core class activities in the second and third parts of the class are associated with achievements that lead up to stars. For example, a student who shows design revisions every week for the five weeks of prototyping will earn two gold stars, which are half of those required to earn an A. Indeed, I intend for the standard path through the course to consist of some student-directed inquiry in the first part, then two stars from one-page designs, then two stars from prototyping—and I will make this clear to the students at our first meeting! However, the system gives the students agency to choose a different path: if someone wants to focus on games criticism, or reading and reporting on game design texts, these are still rewarded and earn course credit.

One other change this year is based directly on student feedback from Fall. Last time, I had an achievement that was earned by playing several games that exhibited specific mechanics that I had identified. My intention was that I could guide students to experience genres, mechanics, and themes that I found interesting, but it ended up making the achievement hard to earn and delayed rewards for legitimate course activity. Note that there were no formal "leveled" achievements as there are this year with starred achievements, so this one achievement took much more time for the same reward as any other. This year, I have given the students much more freedom to choose the games they will study and critique. They can choose analog, digital, or hybrid games, including sports and gambling games. I still provide some scaffolding through the games I chose to put on course reserves, but students who want to go in a different direction are free to do so.

The one weak spot in the course design, as of this writing, is the identification of essential questions. I have come up with two so far:
  • What is the relationship between games, fun, and learning?
  • How do you design an educational game for children?
I thought about introducing a third about design generally, such as "What is design?", but it seems that this is embedded into the second question. I worry that adding such an EQ would diminish the impact of the design-related one I already have. Finally, I will point out that the first essential question has explicitly guided my work over the last several years, and it was the explicit topic of my seminar at the Virginia Ball Center for Creative Inquiry: it is such a big question that others pale in comparison.

After having spent about three weeks this Summer revising my Fall courses, I find myself looking forward to the start of the semester. It is good to have the time to rest, reflect, and revise. Of course, all this work has been without compensation, but, as the Spirit of Christmas once said, the rewards of virtue are infinitely more attractive.

Monday, July 15, 2013

Revising Courses, Part I: Game Programming

I spent the lion's share of the last two weeks revising my three courses for the Fall semester. They are the same courses as last time, although some of the themes have changed. After a trepidatious beginning, I am now quite pleased with the results. In today's post, I will describe the revision to my game programming course, an elective for Computer Science undergraduate and graduate students. The actual change to the course may appear small, but it represents a significant amount of research and learning on my part.

I have been structuring this course as a project-intensive, team-oriented experience. For example, last Fall the students implemented The Underground Railroad in the Ohio River Valley. I have also used this course to experiment with various methods of grading. I wanted the grading to be as authentic to the work as possible: students are evaluated on their participation and commitment only, not on quizzes or exams. For example, instead of midterm exams, I held formal face-to-face evaluations with each team member, modeled after industrial practice.

These methods work well, but reflecting on these experiences, I identified two potential problems. First, these methods fail in the case that a student refuses to participate or keep commitments: in particular, these methods produce little that could be considered evidence in the case of an appeal. Realistically, sometimes I get a bad apple, and so I want a grading system that allows me to give the grade I feel is earned. Note that while I admit to having given grades that are higher than I thought were earned, the assessment failure may be twofold: beyond the inflated grade, some students may require more concrete evidence of their own progress in order to improve or maintain performance, especially if they lack intrinsic motivation.

The other potential problem stems from my wanting the students to engage in reflective practice, not just authentic practice. I wonder if some of my high-achieving team members have gotten through these production-oriented courses without having deeply considered what they learned. My model for encouraging reflective practice is based on industrial practice—agile retrospectives in particular—and is documented in my 2013 SIGCSE paper. This model, called periodic retrospective assessment, requires a team to reflect on its successes and failures intermittently during the semester, and at the end of the semester, to reflect on what it has learned. This sociocultural approach to assessment is appealing, and again, it seems to work in many cases, although it affords scant individual feedback.

While at this summer's GLS conference, I attended a talk about game-inspired assessment techniques given by Daniel Hickey. His model is called participatory assessment, and a particular aspect of it—which you can read about on his blog—is that it encourages evaluating reflections rather than artifacts. During his talk, he made a bold claim that resonated with me: writing is the 21st century skill. After having worked with Brian McNely for the last few years, I have come to understand “writing” in a more deep and nuanced way. (See, for example, our SIGDOC paper that takes an activity theoretic approach to understanding the writing practices involved in an agile game development team.)

Putting these pieces together, I decided to keep the fundamental structure of my Fall game programming course: students will work in one or more teams, organized around principles of agile software development, to create original games in multiple iterations. We will continue to use periodic retrospective assessment in order to improve our team practice and consider what we learned as a community. Now, I have also added individual writing assignments, to be completed at the end of each iteration. I want these reflections to be guided toward fruitful ends, and so I have brought in another pedagogic element that has intrigued me for the last several months: essential questions.

I first encountered essential questions (EQs) on Grant Wiggins' blog, and I blogged about this experience in the context of my advanced programming course. The primary purpose of EQs is to frame a learning experience. EQs have no trite or simple answers, and they are not learning outcomes, but they inform the identification and assessment of learning outcomes. With a bit of crowdsourcing, I came up with the following essential questions for my game programming course:

  • How does the nature of game design impact the practices of game programming?
  • How does game software manage assets and resources effectively?
  • How do you coordinate interpersonal and intrapersonal activity within a game development team?

In reading about participatory assessment and the badges-for-learning movement, I came across Karen Jeffrey's HASTAC blog post. What she called “Really Big Ideas” seem isomorphic to EQs, and so I adapted her ideas in defining a rubric for evaluating reflections. I will be looking for reflections that provide the following:

  • A characterization, with supporting evidence, of one or more essential questions of the course.
  • The consequences of this characterization on individual and/or collective practice.
  • The potential critiques of this characterization.

Deciding how to guide student attention is one of the most challenging parts of course design, and I recognize that by introducing these essays, I am reducing the number of hours I can expect students to spend on the development tasks at hand. However, these essays will afford conversation and intervention regarding reflective practice. They respect the authenticity of student work since, if done right, they should yield increased understanding and productivity from the students. This reasoning is similar to that given by proponents of team retrospective meetings as part of an agile practice: by reflecting on what we are doing, we can learn how to do it better. I have been encouraging my students to write reflectively, especially since starting my own blog; these reflective essays codify the practice and reward student participation.

The official course description for Fall's game programming course can be found at http://www.cs.bsu.edu/homepages/pvg/courses/cs315Fa13. I am happy to receive feedback on the course design, particularly the articulation of the essential questions, since they will be central to the students' learning experience.

Next time, I will write about the redesign of my advanced programming and game design courses, both of which involve turning to badges to incentivize and reward student activity.

Thursday, June 27, 2013

Helpful Java Libraries: Notes from my IndyJUG presentation

Introduction

Yesterday, I gave a presentation at the meeting of the Indianapolis Java Users' Group. The presentation, "Serious Game Development in Java," gave some background about how I transitioned from game hobbyist to serious game development researcher, and I talked about my two successful Java-based serious games: Morgan's Raid and Equations Squared.

A portion of the attendees at the IndyJUG meeting
Usually when I talk about these projects, I am describing the immersive learning environment, my experience working with multidisciplinary undergraduate teams, or design and evaluation processes. Given that this crowd was primarily professional Java programmers, I decided to take a different tack, and I talked about the software architecture and what I learned by building these games. In particular, I talked about some of the libraries we incorporated into the game. Several attendees asked me if I could share more information about the libraries and why I chose them, so I decided to write this follow-up.

Morgan's Raid

Morgan's Raid was written using Slick2D, which had been my go-to library for Java game development. However, the original developer and maintainer, Kevin Glass, has moved on from the project, and it seems to be struggling now. Kevin did a good job keeping the libraries in sync with the native libraries of lwjgl, and with him off the project, I would not recommend starting new projects in Slick2D. Kevin has moved on to libGDX, which looks like an interesting project, although I prefer PlayN, as described in the next section.

Here is a brief description of the other libraries we used in Morgan's Raid. For brevity, I'm not including their transitive dependencies, but note that managing transitive dependencies on this project is precisely why I was blown away by Maven, as described in the next section.
  • Apache Commons CLI: robust handling of command-line arguments, which we used to easily modify runtime behavior, e.g. fullscreen vs. windowed mode
  • Apache Commons Configuration: robust handling of application configuration, such as animation speed, default volume, etc.
  • EasyMock: mocking library for test-driven development
  • Guava: broad and robust library that simplifies Java development, making particular use of the Lists, Maps, and Objects classes.
  • Joda-Time: all the time calculations are handled with this fantastic library, and if you're doing any time-based logic, you should be using it too (see the brief sketch after this list).
  • SnakeYAML: parser for YAML, which we used to describe the cinematic scenes in the game
  • CruiseControl: though not a library, we used this for continuous integration
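
As a small illustration of the Joda-Time recommendation above, here is the flavor of date arithmetic and formatting that the library makes painless; the specific dates and pattern are illustrative only, not the game's actual data.

import org.joda.time.LocalDate;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;

public class RaidCalendarSketch {
  public static void main(String[] args) {
    // Advance an in-game date one day at a time and format it for display.
    LocalDate date = new LocalDate(1863, 7, 8);
    DateTimeFormatter formatter = DateTimeFormat.forPattern("MMMM d, yyyy");
    for (int day = 0; day < 3; day++) {
      System.out.println(formatter.print(date.plusDays(day)));
    }
  }
}
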
I have several posts on this blog about the design and development of Morgan's Raid. If you're new, here are some of the most descriptive:

Equations Squared

When I began work on this project, I knew I wanted to develop an HTML5+Javascript solution with the least possible amount of pain. I evaluated several possibilities and settled on PlayN. This amazing library allows cross-compilation of the same codebase to desktop Java, HTML5+Javascript (via GWT), Android, and iOS (and Flash, kind of, but its support has not been great).

PlayN relies upon Maven to manage dependencies and project configuration. It took me some time to make sense out of how it was working, but now it's hard to imagine going back to manual dependency management. In Morgan's Raid, for example, when I wanted to add a new library, I had to manually download the binaries, and all the binaries of its transitive dependencies, put them into my project's lib folder, configure the build path, configure native libraries if necessary, and then hope that all the different libraries would work together. To upgrade any library to a new release, which happened to some of our core libraries during development, I had to repeat this process by hand. By contrast, to add, say, Mockito to Equations Squared, I just added this to my pom file:

<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-all</artifactId>
  <version>1.9.0</version>
  <scope>test</scope>
</dependency>

That last bit, the scope, is really fascinating: it says that the project should use Mockito only when running unit tests. Scopes aren't needed for most of the libraries I use, but this example shows how simple a process it is to specify them. Also, need to update to a new release? Just update that version tag and Maven takes care of the rest.

Speaking of Mockito, it has become my favorite mock object library for Java. The API design is elegant and allows for readable code with minimal boilerplate. Here's a sample unit test—the same one I showed in my IndyJUG presentation—that demonstrates Mockito. By way of explanation, this code builds a token list containing "-" and "1", parses it into the expression "-1", then creates a visitor object and hands it to the expression. The expected behavior is that the visitor visits two nodes in the parse tree: the unary negation and the value 1. Note that mock and verify are static calls to the Mockito library.

@Test
public void testVisitorHitsNegationAndInteger() {
  tokens = ImmutableList.of(
    SymbolToken.MINUS, 
    IntegerToken.create(1));
  Expression e = parseTokens();
  Expression.Visitor visitor = mock(Expression.Visitor.class);
  e.accept(visitor);
  verify(visitor).visit(UnaryOperation.NEGATE);
  verify(visitor).visit(1);
}

Equations Squared uses the pythagoras, React, and TriplePlay libraries from ThreeRings. Pythagoras is a collection of geometry utilities that is well described on its Web site. React brings functional reactive idioms and the slots/signals idiom to Java, and that merits a small example. My GameView class exposes a signal with a method like this:

public SignalView onGameOver() {...}

Any agent in the system that needs to know when the game end condition is met can connect a slot that is notified when the signal is emitted, for example:

game.onGameOver().connect(new UnitSlot() {
  @Override
  public void onEmit() {
    displayListOfBadgesAndDemerits();
  }
});

Signals can have type parameters as well, though here I am using the simplest form. React also provides convenient value objects that emit signals when values change:

Value<Integer> v = Value.create(0);
...
v.connect(new ValueView.Listener<Integer>() {
    public void onChange(Integer newValue, Integer oldValue) {
        // Handle change here.
    }
});

As you can see, what I'm doing is using React to provide convenient, quick, efficient reification of the observer design pattern. No extra boilerplate required here, no fat interfaces and adapter classes: just hook up slots and signals and get going.

Where PlayN provides a low-level API for game development, TriplePlay provides many of the niceties one needs to get games up and running, such as handling screen transitions, layer animations, and UI widgets. One of the reasons I love TriplePlay (and PlayN) is that the designer takes care to support fluent programming idioms. This is a direction I have been taking much of my own development as well. Consider this example from Equations Squared that handles popup notifications:

// Slide the popup up from (325, 320) to (325, 300), easing out...
tweenTranslation(popup.layer())
    .from(325, 320)
    .to(325, 300)
    .in(0.9f)
    .easeOut()
    .then()
    // ...then fade it to fully transparent...
    .tweenAlpha(popup.layer())
    .to(0)
    .in(0.3f)
    .easeOut()
    .then()
    // ...and finally remove the layer.
    .action(deleteLayerAction);

The code reads exactly as one would explain the animation sequence. When source code is as short and expressive as it needs to be to convey an idea, that's a good program. Working with some of the fluent styling API in TriplePlay takes a little getting used to, especially if one comes from a push-button background as in Swing. However, I really felt like my own ability to express myself fluently in Java increased after learning to use TriplePlay.

For more on Equations Squared, check out The Story of Equations Squared.

Acknowledgements

I want to thank Michael Dowden for inviting me to present, and the whole IndyJUG community for their warm welcome. The event was graciously hosted by E-gineering, and I could tell from their amazing facilities that this is a company that takes its work seriously and respects its employees. Check out this centrally-located kitchen!
The E-gineering Kitchen
I also want to recognize the Apache Foundation, Google, and ThreeRings for their support of open source software development, as well as Kevin Glass's significant contributions to Slick2D and Michael Bayne's contributions to the whole PlayN ecosystem.

Wednesday, June 26, 2013

Unboxing Incan Gold (Udc 4284)

I attended a gaming event at the local library a few weeks ago and was introduced to Incan Gold. My eldest son and I really enjoyed it, and I wondered if it would be accessible to my three-year-old. Remembering that my university library had a copy, I looked it up in the online card catalog. The entry describes it as "Educational resources non-print. Ask at desk." I wrote down the code on the page, "UDC 4284", and asked for it at the desk.


They must store non-print educational resources in boxes like this. Seems like a good idea to use uniform boxes with identifying codes, even though the game itself is in a box. I'm a collector of board games, and I like my boxes to be in relatively good condition too. I opened up the box...


Sure enough, Incan Gold. The labels on the box are probably helpful in case the game box gets separated from its containing box. Putting a university sticker on any library resource is probably a good idea, too. I took the game box out of the larger container and popped it open.



There's the rulebook. Little sticker on there indicates that it's from educational resources with code Udc 4284. I suppose that might get separated from the rest of the materials, and putting the code on there makes sure it ends up in the right place. I already know how to play, though, so let's move that rulebook out of the way...


Hey, take a look at the tent cards... and the temple cards! Looks like they wrote the Udc 4284 code on these too. Here's a close-up of the tent:


Is it only written on the top card of each stack? Riffling through the cards reveals that this is not the case: the code is written on each and every card.

These are the Player cards. The code is written on the back of them. On the backs? That's marking the cards! I suppose it would be hard to write a code on the dark-colored front, but still, isn't it bad form to write on the backs of cards?


Same thing with all the Quest cards: code written on the backs, fronts pristine.


Fortunately, it's all one handwriting and the same marker! I would call these visually indistinguishable, for casual use anyway.


Ironically, the Temple cards have the codes on the front. If you're familiar with the game, this is the only kind of card where it wouldn't matter if it were on the front or the back.

Whew, can you believe someone went through and wrote "EdRes Udc 4284" on each and every card of the game? Sounds tedious. Well, let's move on and play the game. Better get out the gems.


Wow, even their own zipper baggie has "UDC 4284" written on it.

Wait, what's that on the gems... could it be?!


Every single gem has "4284" written on it! Black marker on the green and yellow gems, silver marker on the black ones.

Let's hope they never have to change their indexing scheme and relabel all the components!

Aside: My three-year-old loves it and has just enough patience for a complete game. His strategy is questionable, but because it's a game that balances risk and reward, there have been times when he is safely in his tent with one or two gems while the rest of us return to base camp with nothing! The game allows for some good mathematical exercises for my eldest son as well, including mental addition, change-making, and division with remainders.

Saturday, June 15, 2013

GLS 2013 Conference Report, Day 3

[Day 1][Day 2]

Constance Steinkuehler gave a fantastic keynote presentation this morning, reflecting primarily on her year at the Office of Science and Technology Policy in Washington, DC. She focused her comments on four domains: federal agencies, philanthropic agencies, the games industry, and academic consortia. The talk was recorded for webcast, although the link doesn't work as of this writing.

One of her major points was that we need more businesses coming out of serious games, particularly to deal with the pernicious problems of maintenance, marketing, and community. This echoes what Schoettler said Wednesday and Flanagan described doing on Thursday: seeing commercialism as a means to an end. I find this compelling as I consider my own corpus of work and current interests. However, I also fear the negotiations with university IP people. I chatted with a colleague who works in a start-up, and he described how university lawyers killed a private-public partnership over a $15,000 grant. That's peanuts for any serious project, and from his story, they wasted much more than the overhead cost of the grant in their conversations around it. The school he was working with was larger—and more successful with external funding—than my own. However, Steinkuehler also made a strong argument during Q&A that to get anything done requires people and resources, but that people are the hardest part: if you want to see something done, you have to be the one to invest the effort to make it happen. By that token, I suppose I should bite the bullet and not shy away from the more ambitious projects. I think she used the term "Own that ulcer for two years," which got a laugh, but I feel I need to seriously consider the value of my own time and happiness as well, and that of my family.

She made a strong call as well for the GLS community to be playing, critiquing, and spreading the word about each other's games. This is a nice idea, but I wonder if the community keeps its kid gloves on a bit too much. The last two games I've played in this vein were overproduced garbage. That is, the gameplay did not match the learning objectives at all, and the narrative could very convincingly be argued to teach the opposite of their goals.

Steinkuehler described her frustration in trying to collaborate with AAA publishers on learning games, coming to the conclusion that they just couldn't justify the lesser revenue to their shareholders, and that the megapublishers are necessarily risk averse. By contrast, she made the valuable and justifiable claim that we—the community that designs and develops learning games—are indie developers, and we need to embrace that. In particular, she encouraged everyone to go to GDC and walk through the Indie Arcade. Easy sell for me: that trip's going into my next grant proposal.

The four parting points of the keynote were: collaborate; we need more businesses; stop using the "chocolate-covered broccoli" metaphor (because of what one has to believe in order for this metaphor to be meaningful); and that happiness is a global priority (and so not to back away from "fun").

After the keynote, I ended up in a session that was a bit outside my interest area, but I was curious about what happened in it. The most entertaining talk was one by Edd Schneider and Tony Betrus, who reported on a series of experiments on task completion in GTA. The part I found most interesting was how they took a series of easy, medium, and hard tasks and compared the results of having the tasks ordered by a teacher (easy, then medium, then hard) versus allowing the players to choose which ones to do in what order. They found that the teacher's ordering resulted in players completing more tasks overall, but the players' own choice resulted in them completing more of the hard tasks. Schneider and Betrus argued that this was a win for both cognitivists and constructivists, since each approach produced better results by its own measure, while acknowledging that constructivists probably care less about learners completing small, irrelevant tasks.

After this, I wandered over to the Education Arcade, my first trip there of the conference. I walked through to get a feel for what was there, and I overheard a great question from Roger Travis, who was giving feedback to a designer (I think): "What's the verb of the learning objective?" This is the kind of question I've heard posed regarding course syllabi particularly when I was serving on the College Curriculum Committee and as departmental Undergraduate Program Coordinator. I think it's a useful technique for sharpening the discussion of learning outcomes. Yet, I had never thought to apply it to serious games, which surprised me. I had just presented yesterday on the learning objectives for Morgan's Raid, as articulated by the original design class, and they were pretty raw; they wouldn't pass muster on the College Curriculum Committee, that's for sure. I'll need to keep this in mind when I work with this coming Fall's game design course to see if we can make some hay out of this.

Walking through the Education Arcade, I saw a poster about Monsterismus, a game to teach programming. Well, that's up my alley. Turns out it's by Matthew Berland, who was a discussant in several sessions I attended the previous two days, and whose comments at the time made him seem like a kindred spirit. Berland, Don Davis, and I, along with a chap whose name I did not jot down, ended up talking at length about games to teach programming, and I used the opportunity to bounce some of my latest thoughts off of them, including breaking the C-style programming mold (such as by using stream-based or prototype-based languages) and incorporating programming as player action in board games. I tried writing a lengthy blog post on these ideas at the end of last semester, but I had a very hard time inventing a vocabulary with which I was happy; fortunately, these guys seemed to grok where I was coming from, and I look forward to continued conversations with them as well as seeing where their work takes them. (Incidentally, Berland and Davis are also involved in IPRO, a game for iOS that is played by programming soccer robots, for those of you in the Mac space. I guess you can find it on your AppStore, or something.)

After lunch was a fireside chat with Jim Gee, but it was a bit anticlimactic since he mostly talked about The Anti-Education Era, which I just read. He is clearly something of a superhero to many in this community, and he had some good stories. Gee pointed out that schooling should not be geared toward jobs since 3/5 of the jobs are in the service sector and require only rudimentary training; instead, education has to be about something bigger than merely certification for employment. In a similar vein, he mentioned that education is a complex system, and that it is known that controlled studies cannot work in complex systems because you cannot fix the input variables. Yet, policymakers don't seem to understand this. I would have had some questions for Gee, in particular drawing on my criticism of aspects of the book, but they were only accepting questions via Twitter, which is not a thing I do. I don't mind not having asked my question, but it was strange to me that they limited questions to this one technology.

Several awards were scheduled to be given next, and I was contemplating skipping this part. I tend not to like award ceremonies, but GLS did it right. They brought up a slide with a bunch of names on it and asked all those people to come to the front. Then, they ran through all the categories, handed out the awards, cheered, and were done in five minutes. Attention everybody: if you need to give awards at a conference, do it this way.

In summary, I found this to be an excellent conference. The mix of attendees was refreshing, including professionals (among them non-profits such as museums), academics, and teachers. I was able to pick up a lot of new references to read, which is always good. I definitely hope to return in the near future, and given that it's within driving distance, perhaps I can convince some of my students to come along as well.

It was good to see some friends at this conference too, people I have met at other conferences and online. When I started working in serious games about three years ago, I felt very disconnected from any community of practice. Now, I feel like there are people whom I can contact for help and inspiration, and indeed, several of the people I've met through these conferences have been generous in sharing their stories and advice.

For those of you who read this far, I will share my collection of Conference Tips from my trip to GLS. Note that most, but not all, of these are based on mistakes committed (often unwittingly) by graduate students in attendance.
  • If you come to the microphone to give any kind of formal talk, and you have not been introduced, introduce yourself.
  • Don't say you're interested in making a game to "target women." They're half the population. That's not a target.
  • If your volunteers have shirts, don't put "Have questions? I can help!" on the back of the T-shirt. This means the person who can help is already walking away from you. Put this on the front.
  • Never put light text on dark slides: it fails with a modicum of ambient light. There's certainly no excuse for this if you're from the host institution and know the rooms that will be used.
  • Don't put things on your slides that people cannot read, and then tell them it's not important to be able to read it. Design your slides differently.
  • When giving a presentation, don't use a friend as a remote control. Get an actual remote control, or manipulate the slides yourself. It's less distracting than your trying to tell the other person when to move forward or backward.
  • Don't use 3D bar charts for any purpose. They're just bad visualizations. Exceptions can be made for irony.
  • Don't use pie charts either, 2D or 3D. They're ineffective and misleading. Exceptions can be made for pie charts describing their own similarity to Pac-Man.
  • Be aware of the British and American hand signs that involve sticking fingers up at people, and to be safe, don't use any of them. That is, if you need to count, do it with your palm to the audience.
Ice cream on the terrace? Yes, please.
See you next time, Madison.