As the Fall 2020 semester wraps up and the asynchronous online final exams are trickling in, it's a good time for me to continue my series of end-of-semester reflections. Today's topic is my Introduction to Game Design class, offered as CS439, the design of which I wrote about over the summer.
I finished grading their final projects yesterday, and they were fun to review. I had each student record a short video presentation and post it for the class. These worked well, and I may keep this idea even after the pandemic: it's one thing to give an in-class presentation and quite another to have a video that can be any length and be watched multiple times.
It was interesting to review the final projects in light of the requirements. This year, for the first time, I added a clause prohibiting the creation of content-centric games. Here is how I explained it:
The project may not be content-driven in the sense that the majority of the work involves producing content. Examples of content-driven games include Apples to Apples and Magic: The Gathering; by excluding these, we ensure that you demonstrate an understanding of systemic interactions rather than asset production.
I admit that this is a vague specification. Looking at the students' projects, there are a few fantasy adventures that collapse under the weight of their content. In each of these cases, the sheer quantity of "things" the students had made hindered their iteration, and in one case it prevented them from completing the project adequately. I am unsure how to make the proscription clearer, except perhaps to be more direct about it when reviewing their concepts. Then again, I know I warned some of these students that, if they pursued their declared path, they would get caught up in making cards, items, tiles, and so on, but it is not clear to me that my warning made any impact. I am not fundamentally opposed to such games, but it is clear that these students got caught up in too much unfruitful content production.
The other major structural change to the semester was the inclusion of labor logs. Students were to copy a Google Docs spreadsheet, log their labor in it, and report it via a Canvas quiz each week. The mapping of effort-hours to course credit was clearly established, so I did not monitor the logs; it seemed like a good place for automation. It turns out I should have! While writing this, I went back and audited the quiz responses, which required students to share a link to their copy of the labor log, and discovered that a third of the students kept a design log, not a labor log. In their defense, most wrote down timestamps with the same level of granularity as the spreadsheet.
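The audit itself is the kind of thing that could be at least partly automated. Here is a rough sketch of what that might look like, assuming each shared log were exported as a CSV with "Date" and "Hours" columns; those names are placeholders for illustration, not the actual template's headers.

```python
import csv
from collections import defaultdict
from datetime import date

def weekly_totals(csv_path):
    """Sum logged hours per ISO week from a CSV export of one labor log.

    Assumes hypothetical "Date" (YYYY-MM-DD) and "Hours" columns; a real
    audit would use whatever headers the shared template actually has.
    """
    totals = defaultdict(float)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            week = date.fromisoformat(row["Date"]).isocalendar()[1]
            totals[week] += float(row["Hours"])
    return dict(totals)

# e.g., {46: 9.0, 47: 8.5, 48: 6.0} for three weeks of entries
```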
That finding brings up an interesting question: is the labor log any more or less effective than a design log with timestamps? I suppose students were confused because I had recommended keeping design logs in Dan Cook's style. Of course, that was a recommendation, while the instructions clearly spelled out the requirement of labor logs; I suppose there is no underestimating a student's capacity to do what they want rather than what you tell them. Unfortunately, I do not see a way to crack this nut given the data I have: looking at the composite data and at which students ever recorded less than full credit, I see no clear pattern.
This leads into the final point I want to make about grading students on their hourly commitments: the accuracy of self-reporting and the granularity of the data. I use triage grading throughout, which means that A-level work earns 3 points, C-level work earns 2 points, and D-level work earns 1 point. For the labor logs, then, given the expectation of nine effort-hours per week, I set the A level at eight or more hours per week, C at 5–7, and D at 2–4. Looking at all the self-reported labor grades, and considering only those students who completed the final project, just 11% reported anything besides A-level. There are two potentially overlapping interpretations of these data. The optimistic one is that, 89% of the time, the mechanism of asking students to track and report their hours kept them motivated to slog through and get things done. There is anecdotal evidence to support this: the students' projects were actually quite good, and certainly more polished than in many other years, and I saw students push through to the end of the semester, unlike in CS315 Game Programming, where progress ground to a halt for a week or two. The alternative interpretation is that, because grades were attached to the work, and because of the significant gap between the A and C levels, students dishonestly rounded their efforts up to the nearest A. As with the issue of design logs vs. labor logs, there is no clear way to determine which interpretation is correct.
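For concreteness, here is a minimal sketch of that hours-to-points mapping, with the below-two-hours case filled in as zero points for illustration; that case is not part of the scheme described above.

```python
def triage_points(hours):
    """Map self-reported weekly effort-hours to triage-grade points."""
    if hours >= 8:
        return 3  # A-level: eight or more hours
    if hours >= 5:
        return 2  # C-level: 5-7 hours
    if hours >= 2:
        return 1  # D-level: 2-4 hours
    return 0      # below the D band (assumed; not specified above)

# A semester of weekly self-reports for a hypothetical student
weekly_hours = [9, 8.5, 6, 9, 10, 8, 7.5, 9]
print([triage_points(h) for h in weekly_hours])  # [3, 3, 2, 3, 3, 3, 2, 3]
```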
As usual, students' final projects were graded on process rather than objective quality. I used a different rubric this year, however, which explicitly allocated 10% of the points to blindtesting. As a result, almost every completed project was blindtested, and judging from the students' status reports, this was certainly worth it: everyone who conducted blindtesting credited it with substantial improvements to their design.
One of the most interesting comments about testing in general came from a student who said that, before getting deep into the final project, he thought of playtesting as something like the proofreading one does before turning in a paper. After finishing the project, he realized that it is an integral part of the design process: something you do to improve the design, not just to "check" it.
Those are the major findings from this semester. As I mentioned above, my students are still completing their final exams, and they have until Friday night to do so. If I see anything in the exams that should be mentioned here, I'll post an update. Otherwise, thanks for reading!