At 16:47, he presents the five-point rubric used to measure the quality of the game:
- Clarity: Do the players know what to do?
- Innovation: Does the game offer new gameplay to stimulate interest?
- Immersion: Is the "story" compelling (implied in setting, art, and music)?
- Flow: Does the player feel constantly productive?
- Fiero: Are there multiple big victory moments for players?
In my game design class, I grade only on process, not on the quality of the product. I cannot pinpoint any problems this has caused, since those who follow bad process tend to have bad products. Those who follow good process tend to have... well, something playable but amateurish, which is exactly what one would expect. The question for me is whether a rubric like his would give the students something concrete to reflect on. I have the sense that, if I didn't remind my students to try to connect their games to the theories studied in class, they would not do it. Indeed, conversation during our workshops practically never brings up any theoretical issue from the first half of the semester. A rubric, perhaps in combination with checklist-based grading, might help them see the connections. I would not adopt this rubric wholesale, since it doesn't quite capture my philosophy of teaching game design, but it is a good prompt for reflection.
At 17:56, he talks about accountability. This is something that has made me uncomfortable in my classes for the last two years or so, and something I feel is ripe for reconsideration. The first part of Wiser's model is weekly playtesting framed by the aforementioned rubric. In my classes, I like to lean on Jessica Hammer's EOTA (Experience, Observations, Theories, Advice) framework, which I wrote about back in 2018. I have asked students to follow that framework, but I've never given direct rubrics for critique; maybe it's worth trying that sometime.
The next two parts of Wiser's accountability model are posting weekly task divisions and weekly personal progress reports. Reflecting on the accountability mess in my game programming course, I want to bring in something like this. Unfortunately, given where we are in the semester and the upcoming iteration deadlines, I fear I have missed the opportunity to let something like this drive team activity. I think I will incorporate something like it in the last two or three weeks, but it will look to the students like what it is: a late-semester experimental patch.
In any case, the particular components of the personal progress report are interesting. It consists of one paragraph with screenshots (ah, I already love the multimodality!). Each student addresses what they agreed to take on that week, what they completed (with no penalty if it differs from what they took on), who helped them, whom they helped, and links to tutorials they used. That is a really tight model that covers a lot of the bases my current approaches miss. Really, I like everything about this and think I will pull it into one or more classes soon.
I think it's worth mentioning here that, even with my relatively small class sizes, I have too many students to respond meaningfully to all of their weekly task divisions and personal progress reports. I am not sure an undergraduate grading assistant would have the wisdom to read and respond to these well, and I also don't think I can add several hours of careful reading each week. I would need to look for grading heuristics to keep my own commitments manageable.
The last piece of Wiser's accountability model is peer evaluations, and here is where my ears really pricked up. He credits the model to the executive MBA program at Boston University, but I have had no luck digging up original sources on it. Wiser describes it as covering teammates' contributions to productivity and morale, dividing points unevenly among them. He does this three times during the team project, but only the last one counts toward the final grade, and that at about 20%. Unfortunately, I don't have any guidance beyond what Wiser shares, so I don't know how many points are available for distribution, how "productivity" and "morale" are framed for the purpose of evaluation, nor how these translate into grades. If I tried this approach this semester, I would have to treat it as an educated guess. If anyone has more information or advice here, please let me know.
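To make my educated guess concrete, here is a minimal sketch of how such a point-splitting evaluation might be computed. Everything specific in it is my assumption, not Wiser's: the fixed pool of 100 points per evaluator, the exclusion of self-ratings, and averaging the shares each teammate receives.

```python
# A minimal sketch of a point-splitting peer evaluation, under my own
# assumptions: each evaluator distributes a fixed pool of 100 points
# among teammates (excluding themselves), and a teammate's result is
# the mean share received across all evaluators.

POOL = 100  # assumed; Wiser does not say how many points are available

def shares_received(allocations: dict[str, dict[str, int]]) -> dict[str, float]:
    """allocations[evaluator][teammate] = points given to that teammate."""
    received: dict[str, list[int]] = {}
    for evaluator, split in allocations.items():
        assert sum(split.values()) == POOL, f"{evaluator} must allocate exactly {POOL} points"
        for teammate, points in split.items():
            received.setdefault(teammate, []).append(points)
    # Average the shares each teammate received from their evaluators.
    return {name: sum(pts) / len(pts) for name, pts in received.items()}

# Example: a three-person team splitting "productivity" points.
productivity = {
    "Ana": {"Ben": 60, "Cam": 40},
    "Ben": {"Ana": 55, "Cam": 45},
    "Cam": {"Ana": 50, "Ben": 50},
}
print(shares_received(productivity))  # {'Ben': 55.0, 'Cam': 42.5, 'Ana': 52.5}
```

The forced, fixed pool is the interesting design property here: an evaluator cannot give everyone top marks, so uneven contribution has to show up somewhere in the numbers.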
This self- and peer-evaluation model is significantly different from the model I've been using for around a decade, in which students evaluate themselves and each other on a five-item rubric, and students' individual grades are capped by the sum of the medians on the evaluations. A weakness of my model is that students are not limited in how they distribute points, and so I know that students sometimes give each other high marks just because they don't want to hurt each other's grades. I had a wonderful conversation once with a student about why he gave evaluations that were clearly dishonest, and his candid answer was, basically, that he believed in mercy more than fairness. The spiritual dimensions of the conversation were fascinating! Yet I am called by my employer to grade on fairness, and so I still think it's a weakness of my system that unproductive students can earn good grades through the mercy of their teammates.
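For comparison, here is a minimal sketch of the median-cap computation in my model. The 0-20 scale per rubric item is an illustrative choice, picked so the summed medians land on a 100-point scale; the essential mechanics are the per-item medians, their sum, and the cap on the individual grade.

```python
from statistics import median

# A minimal sketch of the median-cap model, assuming each of the five
# rubric items is scored 0-20 by every evaluator, so the summed medians
# give a cap on a 100-point scale.

def grade_cap(evaluations: list[list[int]]) -> float:
    """One row per evaluator; each row holds the five rubric-item scores
    that evaluator gave this student."""
    # Take the median of each rubric item across evaluators, then sum.
    return sum(median(item) for item in zip(*evaluations))

def individual_grade(team_grade: float, evaluations: list[list[int]]) -> float:
    # The individual grade is the team grade, capped by the peer evaluations.
    return min(team_grade, grade_cap(evaluations))

# Example: three evaluators rate one student on the five items.
evals = [
    [18, 16, 20, 17, 19],
    [20, 15, 18, 16, 20],
    [17, 17, 19, 18, 18],
]
print(individual_grade(92.0, evals))  # prints 89: the cap (18+16+19+17+19) undercuts 92.0
```

Note how nothing in this computation stops a whole team from rating everyone 20 across the board, which is exactly the mercy-over-fairness loophole described above.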
The game evaluation rubric and the accountability model were the two things that stuck out most in my memory as being immediately applicable to my own teaching. If I am able to cast them into a form I can use this semester, I will be sure to address them in my conventional series of end-of-semester reflections.