Thursday, July 25, 2013

Revising Courses, Part III: Advanced Programming

Inserting reflective essays into my game programming course was straightforward, and the augmented achievement-based grading system of my game design course was an incremental improvement. Now, I will share the story of the biggest renovation in my Summer of Course Revision: a complete overhaul of my Advanced Programming course.

CS222: Advanced Programming has been around for about four years now, and I have taught it more often than any other. Several posts on my blog reflect on this course, including this one from the first semester's experience and this one reflecting on this past academic year. For the past several offerings, I have used the same fundamental course structure: studio-based learning, using daily or weekly writing assignments for the first several weeks, gradually turning toward a focus on a pair-programmed two-week project and then a small-team six-week project. I have been basically happy with the structure, but reflecting on my experiences, I identified the following pain points:
  • The course was sequenced along a particular path of learning that did not resonate with all the students.
  • There was not enough mastery learning: students would often do poorly on an assignment and clearly not revisit it, since weeks later, they still didn't understand the concept. This caused particular pain when these assignments covered technology that was core to my pedagogy, such as distributed version control.
  • It was easy to slip into a mode where I would talk for the whole class meeting. The studio orientation—through which students show artifacts that represent their learning, and these artifacts are subject to formal and informal peer and expert critique—was not guaranteed in my course structure: it relied on daily ingenuity rather than codified form. (The time and space constraints imposed by the university negatively affect studio orientation as well, but these are beyond my control.)
  • It was not clear that all the team members were engaged in reflective practice when working on their projects. That is, I felt that there were people falling through the cracks, not learning adequately from the team projects, sometimes due to lack of reflection and sometimes due to lack of practice.
  • Students enter the course with high variance in knowledge, skill, experience, and motivation.
In addressing these pain points, I wanted to make sure I kept all the good parts of the course. Students work in teams on projects of their own design, using industrial-strength tools and techniques. Teams have to incorporate some kind of management techniques and give multiple milestone presentations, both of which remind students about the "soft skills" of Computer Science. There is a good overall flavor of agile development, pragmatism, object orientation with patterns, and the importance of reflective practice and lifetime learning.

If you've read my last two posts, you probably see where this is going! I decided to replace the daily and weekly assignments with a system of course achievements (a.k.a. badges), and students will write reflections that relate the achievements to essential questions of the course. The complete course description is available online, or you can choose to view just the achievements.

I have developed the following set of essential questions:
  • What does it mean to be a "professional" in software development?
  • How do you know when a feature is done?
  • How do small teams of developers coordinate activity?
  • How does a Computer Science professional use ubiquitous and chaotic information to be a lifetime learner?
As in my game design course revision, students' grades will be based on the number of achievements earned and written reflections on those experiences. There are also four "meta-achievements" that can be earned by completing sets of regular achievements. These are leveled achievements and reflect four different potential paths of mastery: a Clean Coder has documented evidence of applying ideas from each of the first twelve chapters of the book; a White-Collar has demonstrated savvy at project management and presentation; an Engineer is moving toward an understanding of software architecture and patterns; and a User-Centered Designer has designed, evaluated, and iterated on user-facing systems. The introduction of these meta-achievements was partially inspired by an alumnus who, when I told him about this redesign, said that most courses were like a Call of Duty game, but that this was more like Deus Ex, and that it would benefit from having more quests.

I want to help students succeed in this kind of learning environment, and so the first month is designed to help them understand what it means for them to have more ownership over course activity. In the first week, I will provide a review of important CS1 and CS2 topics, focusing on the mechanics of object-oriented programming in Java. Knowing that student experience and comfort with this material is highly variable, I can use this week to focus on how to navigate the course structure. In fact, I will strongly recommend that their first achievement be Studious, which requires them to read William Rapaport's excellent How to Study guide and write a study plan for the semester.
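To give a flavor of that first week, here is a small sketch of the kind of object-oriented mechanics in scope for a CS1/CS2 review: interfaces, implementation, overriding, and polymorphic dispatch. The Shape/Circle/Square names are my own illustrative example, not taken from the actual course materials.

```java
// Interfaces define a contract; classes provide implementations.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;

    Circle(double radius) {
        this.radius = radius;
    }

    @Override
    public double area() {
        return Math.PI * radius * radius;
    }
}

class Square implements Shape {
    private final double side;

    Square(double side) {
        this.side = side;
    }

    @Override
    public double area() {
        return side * side;
    }
}

public class Review {
    // Polymorphism: total() works for any mix of Shape implementations,
    // dispatching to the right area() at runtime.
    static double total(Shape... shapes) {
        double sum = 0;
        for (Shape s : shapes) {
            sum += s.area();
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(total(new Circle(1), new Square(2)));
    }
}
```

Even students who have seen all of this before can use the review week to practice presenting and critiquing such artifacts, which is the real point of the first month.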

We will still use Clean Code as a shared focus, but I have changed how I am scaffolding students' experience in integrating its expert tips. Previously, I had identified specific tips or chapters for students to read and then asked them to apply these tips to their projects—past or current. There was little evidence that this "stuck" with the students, as later coursework would violate the concepts supposedly learned. This was, in part, due to lack of mastery learning, where students would accept a low grade and move on without having learned the material. Furthermore, because we paced Clean Code readings and assignments through the semester, I would frequently encounter examples in class and in critiques that afforded application of a tip that the students had not yet studied. In the revised course, we will read the book relatively quickly, but then keep returning to it in informal expert critiques, adopting an iterative approach that is better suited to how one learns and remembers these tips: a bit at a time, and a bit more next time.

In my game design course, students are required to present to the class to earn their achievements, but the enrollment in that course is half that of this one. I toyed with the idea of having my CS222 students post their work on the walls, a portion of the class each day, but I feared that this would have too many negative consequences. Instead, I am requiring students to post their artifacts to a wiki, and my intention is to review the wiki changes between class meetings to identify notable entries. This way, I can bring up the wiki on the projector and model an appropriate critical process. I can also insert mini-lectures as necessary to clarify misunderstandings. I am eager (or perhaps anxious) to see how the density of misunderstandings in this redesigned course, which encourages mastery learning through reflection, compares to that of my current status quo, which encourages correctness up front. We'll be using BlackBoard's wiki only because it's easy for all the students to find; it was frustrating to me during my experimentation that it has no wikitext editor. If the wiki turns out not to work for us, we can always move to an in-person poster-style presentation.

Since I will be doing more just-in-time teaching—reacting to students' insights and confusions—I have decided to expand the conventional six-week, two-milestone project to three milestones over about eight weeks. The students like this part of class the most, and I like the idea of adding another milestone, since this gives them another opportunity to learn from mistakes in tools, design, organizational structures, and presentation. For both this large project and the small project, I will provide common requirements that everyone must follow, such as using distributed version control. Previously, these things were simply worth some points on the project; however, if I want to focus on assessing their reflections while also requiring some shared technological experience, this is fairly easy to do with a hard-and-fast requirement. With multiple milestones, there will be an opportunity to ensure each team is following the requirements. For example, if I see a team not using DVCS, then I can remind them.

I am eager to see how students take these changes. I expect to be able to blog about some of the day-to-day experiences, and I anticipate writing some academic papers on aggregate student performance in all these modified classes.

Wednesday, July 24, 2013

Revising Courses, Part II: Game Design

Following up on my previous post about revising my game programming course, today's post is about a revision to my game design course. This course is an honors colloquium, a special topics course only open to honors students and with an enrollment cap of fifteen. Every honors student has to take six credit-hours worth of colloquia, and the topics depend on who is available to teach them in any given semester.

This colloquium is part of a two-semester immersive learning project undertaken in collaboration with the Indianapolis Children's Museum, and it is being funded by internal funds through the Provost's office. Teaching in the Honors College is not a normal part of my load, and a major portion of the grant is "assigned time," allowing me to teach this colloquium. I mention this because the behind-the-scenes machinations are likely opaque to those outside academia: if I didn't have the grant, I would be teaching a Computer Science course instead of the colloquium. I am grateful to the Provost and his committee for approving my proposal, which allows me to teach this course that aligns so well with my research interests.

I taught a similar colloquium last Fall, and it had the same fundamental objectives: engage students in the academic study of games and learning, and, in collaboration with a community partner, have them produce prototypes of original games. There were two significant differences: I was team-teaching with my colleague Ronald Morris, and the community partner was the Indiana State Museum. It was my first attempt at achievement-based (i.e. badge-based) grading, and I wrote a lengthy reflection about the experience. For the redesign, I decided to focus on a few specific pain points from last time:
  1. The students put off achievements until the end of the semester.
  2. Some students made zero or nominal changes between game design iterations.
  3. Not all the students were engaged in reflective practice: they were not adequately learning from the process.
  4. In-class prototype evaluation time was rarely meaningfully used due to the points already mentioned.
I have met a few new colleagues through the conference circuit in the last year, and I am grateful for their willingness to share tips and tricks. In particular, the following changes reflect some specific ideas I have picked up from Scott Nicholson at Syracuse University and Lucas Blair at Little Bird Games.

Perhaps the most important revision to the course is the introduction of reflective essays. As in my game programming course, I was inspired by the participatory assessment model to grade reflections rather than artifacts. The students will present their weekly artifacts to the class, where artifacts might include summaries of essays and articles, posters, one-page designs, or prototypes. These artifacts will be subject to peer and expert formative evaluation, following the studio "crit" model. However, it will be students' reflections on these artifacts that are actually graded. As in my game programming course, I have decided to frame these reflections around essential questions and grade them based on (a) how they characterize an essential question, (b) the implications to practice, and (c) potential criticisms of the characterization. The research I have read predicts that this combination of achievement criteria and reflections should encourage students to produce high-quality artifacts without sacrificing intrinsic motivation.

Last time, the students had to choose a topic and iterate on it fairly early. The students with the best designs at the end of the semester were, for the most part, those who had to throw away major elements of their design or change themes entirely. This leads toward the desire to do more rapid iteration on ideas, not just prototypes; however, I still want each student to create a significant prototype by the end of the semester. To address this, I have divided the semester into three parts. In the first part, we will survey major themes of the course, such as games, fun, learning, museums, and children. The second part of the semester will be rapid creation of design sketches based on specific themes at the Children's Museum, about one sketch each week. The students will then choose one of these to prototype for their end-of-semester deliverable. I hope that this approach improves all the students' prototypes: even though they have less time to work on their prototypes, that time will be more focused and based on having had more reflective practice earlier.

Going along with the three-part division of the semester, I have organized the achievements into groups, some of which are tied to one of these parts. There is also an "unrestricted" category that can be earned at any time. I have introduced a throttle of one achievement submission per week plus one revision per two weeks. The students' grade is tied to the number of achievements they earn, and so this should help the students pace themselves while keeping me from having to evaluate an inordinate number of submissions at the end of the semester.

Following the most popular design principle for assessing learning with digital badges, I have introduced a "leveled" system of achievements. Certain achievements have gold stars attached to them, designating them as requiring special effort. These stars are tied in with the number of achievements and reflection points in order to determine a student's grade. Note that two of these three factors are directly in the student's control, and the one that isn't—reflection points—permits revision. Hence, students can essentially pick their grade based on their level of legitimate participation in the class.

The full list of achievements is available online, and you can view it on its own page or embedded into the course description. Each badge is defined by a name, a blurb, criteria, and an image. The blurb and image are new this year, and I think they represent a major improvement. I used to design the badge images, making significant use of icons from The Noun Project. In case you're curious or want to sketch up your own, the border is the Ball State red taken from our logo (#ed1a4d) and the starred achievements use a light yellow background (#ffff66).
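For anyone who does want to sketch their own, here is a minimal badge-template generator using the two colors from the post: Ball State red (#ed1a4d) for the border and light yellow (#ffff66) for starred achievements. The SVG layout itself is my own guess at a plausible badge shape, not the actual design.

```java
// Emits a simple square SVG badge: red border, yellow fill if starred.
// The dimensions and layout are illustrative assumptions.
public class BadgeSketch {
    static String svg(String name, boolean starred) {
        String fill = starred ? "#ffff66" : "#ffffff";
        return "<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"120\" height=\"120\">"
                + "<rect width=\"120\" height=\"120\" fill=\"" + fill + "\""
                + " stroke=\"#ed1a4d\" stroke-width=\"6\"/>"
                + "<text x=\"60\" y=\"65\" text-anchor=\"middle\">" + name + "</text>"
                + "</svg>";
    }

    public static void main(String[] args) {
        System.out.println(svg("Studious", true));
    }
}
```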

Note that the core class activities in the second and third parts of the class are associated with achievements that lead up to stars. For example, a student who shows design revisions every week for the five weeks of prototyping will earn two gold stars, which are half of those required to earn an A. Indeed, I intend for the standard path through the course to consist of some student-directed inquiry in the first part, then two stars from one-page designs, then two stars from prototyping—and I will make this clear to the students at our first meeting! However, the system gives the students agency to choose a different path: if someone wants to focus on games criticism, or reading and reporting on game design texts, these are still rewarded and earn course credit.

One other change this year is based directly on student feedback from Fall. Last time, I had an achievement that was earned by playing several games that exhibited specific mechanics that I had identified. My intention was that I could guide students to experience genres, mechanics, and themes that I found interesting, but it ended up making the achievement hard to earn and delayed rewards for legitimate course activity. Note that there were no formal "leveled" achievements as I have this year with starred achievements, so this one achievement took much more time for the same reward as the others. This year, I have given the students much more freedom to choose the games they will study and critique. They can choose analog, digital, or hybrid games, including sports and gambling games. I still provide some scaffolding through the games I chose to put on course reserves, but students who want to go in a different direction are free to do so.

The one weak spot in the course design, as of this writing, is the identification of essential questions. I have come up with two so far:
  • What is the relationship between games, fun, and learning?
  • How do you design an educational game for children?
I thought about introducing a third about design generally, such as "What is design?", but it seems that this is embedded into the second question. I worry that adding such an EQ would diminish the impact of the design-related one I already have. Finally, I will point out that the first essential question has explicitly guided my work over the last several years, and it was the explicit topic of my seminar at the Virginia Ball Center for Creative Inquiry: it is such a big question that others pale in comparison.

After having spent about three weeks this Summer revising my Fall courses, I find myself looking forward to the start of the semester. It is good to have the time to rest, reflect, and revise. Of course, all this work has been without compensation, but, as the Spirit of Christmas once said, the rewards of virtue are infinitely more attractive.

Monday, July 15, 2013

Revising Courses, Part I: Game Programming

I spent the lion's share of the last two weeks revising my three courses for the Fall semester. They are the same courses as last time, although some of the themes have changed. After a trepidatious beginning, I am now quite pleased with the results. In today's post, I will describe the revision to my game programming course, an elective for Computer Science undergraduate and graduate students. The actual change to the course may appear small, but it represents a significant amount of research and learning on my part.

I have been structuring this course as a project-intensive, team-oriented experience. For example, last Fall the students implemented The Underground Railroad in the Ohio River Valley. I have also used this course to experiment with various methods of grading. I wanted the grading to be as authentic to the work as possible: students are evaluated on their participation and commitment only, not on quizzes or exams. For example, instead of midterm exams, I held formal face-to-face evaluations with each team member, modeled after industrial practice.

These methods work well, but reflecting on these experiences, I identified two potential problems. First, these methods fail in the case that a student refuses to participate or keep commitments: in particular, they produce little that could be considered evidence in the case of an appeal. Realistically, sometimes I get a bad apple, and so I want a grading system that allows me to give the grade I feel is earned. Note that while I admit to having given grades that were higher than I thought were earned, the assessment failure may be twofold: some students may also require more concrete evidence of their own progress in order to improve or maintain performance, especially if they lack intrinsic motivation.

The other potential problem stems from my wanting the students to engage in reflective practice, not just authentic practice. I wonder if some of my high-achieving team members have gotten through these production-oriented courses without having deeply considered what they learned. My model for encouraging reflective practice is based on industrial practice—agile retrospectives in particular—and is documented in my 2013 SIGCSE paper. This model, called periodic retrospective assessment, requires a team to reflect on its successes and failures intermittently during the semester, and at the end of the semester, to reflect on what it has learned. This sociocultural approach to assessment is appealing, and again, it seems to work in many cases, although it affords scant individual feedback.

While at this summer's GLS conference, I attended a talk about game-inspired assessment techniques given by Daniel Hickey. His model is called participatory assessment, and a particular aspect of it—which you can read about on his blog—is that it encourages evaluating reflections rather than artifacts. During his talk, he made a bold claim that resonated with me: writing is the 21st century skill. After having worked with Brian McNely for the last few years, I have come to understand “writing” in a more deep and nuanced way. (See, for example, our SIGDOC paper that takes an activity theoretic approach to understanding the writing practices involved in an agile game development team.)

Putting these pieces together, I decided to keep the fundamental structure of my Fall game programming course: students will work in one or more teams, organized around principles of agile software development, to create original games in multiple iterations. We will continue to use periodic retrospective assessment in order to improve our team practice and consider what we learned as a community. Now, I have also added individual writing assignments, to be completed at the end of each iteration. I want these reflections to be guided toward fruitful ends, and so I have brought in another pedagogic element that has intrigued me for the last several months: essential questions.

I first encountered essential questions (EQs) on Grant Wiggins' blog, and I blogged about this experience in the context of my advanced programming course. The primary purpose of EQs is to frame a learning experience. EQs have no trite or simple answers, and they are not learning outcomes, but they inform the identification and assessment of learning outcomes. With a bit of crowdsourcing, I came up with the following essential questions for my game programming course:

  • How does the nature of game design impact the practices of game programming?
  • How does game software manage assets and resources effectively?
  • How do you coordinate interpersonal and intrapersonal activity within a game development team?

In reading about participatory assessment and the badges-for-learning movement, I came across Karen Jeffrey's HASTAC blog post. What she called “Really Big Ideas” seem isomorphic to EQs, and so I adapted her ideas in defining a rubric for evaluating reflections. I will be looking for reflections that provide the following:

  • A characterization, with supporting evidence, of one or more essential questions of the course.
  • The consequences of this characterization on individual and/or collective practice.
  • The potential critiques of this characterization.

Deciding how to guide student attention is one of the most challenging parts of course design, and I recognize that by introducing these essays, I am reducing the number of hours I can expect students to spend on the development tasks at hand. However, these essays will afford conversation and intervention regarding reflective practice. They respect the authenticity of student work since, if done right, they should yield increased understanding and productivity from the students. This reasoning is similar to that given by proponents of team retrospective meetings as part of an agile practice: by reflecting on what we are doing, we can learn how to do it better. I have been encouraging my students to write reflectively, especially since starting my own blog; these reflective essays codify the practice and reward student participation.

The official course description for Fall's game programming course can be found online. I am happy to receive feedback on the course design, particularly the articulation of the essential questions, since they will be central to the students' learning experience.

Next time, I will write about the redesign of my advanced programming and game design courses, both of which involve turning to badges to incentivize and reward student activity.