Friday, October 5, 2018

Specifications grading and wiggle room

I have been using specifications grading in my game programming class. I just finished evaluating my class' third mini-project, and now I am encountering some interesting problems. I want to share them here, in part to be able to review my notes later, and in part to see if my readers have any thoughts or advice.

First, a very brief background: Specifications Grading is the name of a technique popularized in the eponymous book by Linda Nilson. I actually used this approach several years ago in CS222 without giving it a clever name, but I abandoned it after a student told me that he gave up on the course material because he knew he could not get to the level he wanted. That was just one case, though, and as is clear from my plans for Game Programming, I decided to give it another shot. You can find all the specifications for the four Game Programming Mini-Projects on the projects page of the course site.

In the first two rounds of mini-projects, I ran into some subtle problems with the specifications. One specification requires that students follow our established naming conventions. The problem arises: what should I do if a student misnames one asset? Is one violation enough to say the whole specification was failed? Clearly not, but then how many are? I am thinking about recasting such elements into something like "No more than two violations of the style guide." The problem then, of course, is that I have to actually look for violations and keep track of them, doing the kind of bookkeeping that good specifications should make unnecessary.
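If I do recast the specification that way, I could at least automate the bookkeeping. Here is a minimal sketch of a checker; the naming convention, the prefixes, and the asset directory are hypothetical stand-ins for whatever our style guide actually requires:

```javascript
// countViolations.js -- a sketch of automating the style-guide bookkeeping:
// count how many asset file names violate a naming convention.
// The regex is a hypothetical convention (type prefix + PascalCase),
// not our actual style guide.
const fs = require("fs");
const path = require("path");

const CONVENTION = /^(BP|SM|T|M)_[A-Z][A-Za-z0-9]*$/;

function findViolations(assetDir) {
  return fs
    .readdirSync(assetDir)
    .filter((file) => !CONVENTION.test(path.parse(file).name));
}

// Usage: node countViolations.js path/to/assets
const violations = findViolations(process.argv[2] || ".");
console.log(`${violations.length} violation(s):`, violations);
```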

A similar problem came up in the project report requirements, where I require that students describe the third-party assets they used and the licenses under which they are using them. It turns out that my students were either unbelievably lazy about this or really didn't understand the requirement. After the first Mini-Project, I encouraged them to take this more seriously; after the second Mini-Project, I realized I needed to intervene. I gave a class presentation about IP and licensing, including specific examples of violations from student work. I thought they would have learned this before reaching a junior-level elective in college, but maybe they hadn't; even so, in the third Mini-Project, I still had students doing this wrong. I made the project report a low-grade specification: a student needs a well-formatted project report in order to get a C, and "well-formatted" includes all of this licensing information. According to the specification, then, students who do not track licenses properly should get a D. Is that right? Maybe, maybe not. I am thinking of separating out the project report specifications to be more generous to students who really haven't yet internalized the concepts of intellectual property.
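To be clear about what I am looking for, an entry like the following in the report would satisfy the requirement; the assets, authors, and sources here are invented for illustration:

```markdown
## Third-Party Assets

- **"Forest Ambience"** (audio) by Jane Doe, downloaded from freesound.org.
  Used under the Creative Commons Attribution 4.0 license (CC BY 4.0).
- **Low-Poly Tree Pack** (models) by Example Studios.
  Used under the license distributed with the pack (see its LICENSE.txt).
```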

Another area where students are having trouble is working with Perforce. I sympathize: it took me some time to make sense of it, even with the benefit of having worked with many kinds of version control systems. It doesn't help that Perforce's nomenclature of "depot" and "workspace" is idiosyncratic. Having a working version of the project in our Perforce depot is a D-level requirement: fail that, and you fail the project. However, many students demoed games in class that were clearly not what they submitted. I am torn on this one: there is a clear, explicit D-level criterion that "Project is correctly configured on the depot so that a new client provides a runnable game." Students are turning in project reports with that box checked, but I doubt they have actually verified that this is the case. Indeed, one student even submitted a project report with that box checked and then emailed me to say that he had had trouble with the depot before submission. Hence, while I am sympathetic to the frustrations of learning a new version control system, I have very little tolerance for carelessness and none for deception.
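The frustrating part is that this criterion is straightforward to verify: create a brand-new workspace, sync it, and see whether the game is there and runnable. A script like this sketch could handle the mechanical part; the depot path and the required-files list are placeholders, not our actual project layout:

```javascript
// verifyDepot.js -- a sketch of checking the D-level depot criterion.
// Run from inside a brand-new Perforce workspace. The depot path and
// the required files are placeholders; adjust both for the real project.
const { execSync } = require("child_process");
const fs = require("fs");

const DEPOT_PATH = "//depot/MiniProject3/...";           // hypothetical path
const REQUIRED = ["Game.uproject", "Content", "Config"]; // hypothetical files

// Pull everything down from the depot into the fresh workspace.
execSync(`p4 sync ${DEPOT_PATH}`, { stdio: "inherit" });

// Verify that everything the game needs actually came down.
const missing = REQUIRED.filter((f) => !fs.existsSync(f));
if (missing.length > 0) {
  console.error("Missing after a fresh sync:", missing.join(", "));
  process.exit(1);
}
console.log("Fresh sync looks complete; now open the project and run it.");
```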

If you look over the specifications, you will see that they are given simply for levels D, C, B, and A. In the report, students are supposed to tell me what grade they earned. My intention behind this was twofold: first, it would make them double-check the specifications and reflect on what they had done, and second, it would make my grading easier. I am surprised by how many students, in their reports, make claims like "I earned a B+" or "I earned at least a C because I worked hard on this." Neither of these is in line with the specifications system at all. It's really not clear to me where they get these ideas: whether they are reading into the rules something that isn't there or, perhaps more likely, not reading the rules at all.

My friend and colleague David Largent has been using Nilson-style Specifications Grading for several semesters, so I look forward to picking his brain about some of these issues. He deploys a system called "Oops Bits" wherein students can get another chance if they misunderstand or misrepresent a specification, but I don't know much more about the system than that. I am thinking of sending an email to my class to give them some kind of timeboxed period in which they can deploy an "Oops Bit," e.g. if they realized that the game they demoed was not properly submitted to the depot. The obvious downside is that then there's no real lesson learned: I have to grade their work again, when they already claimed in their reports to have met that criterion.

As an aside, I require that the project reports themselves be written in either HTML or Markdown. I am surprised by how few students are fluent in one or the other of these. Like intellectual property, understanding plain-text formats seems to have become a major unexpected learning objective of the class. I provided the students with a Markdown report template for Mini-Project 1, and yet many of them still manage to create non-standard or nonsensical reports. I wonder if I can easily modify the JavaScript that generates the specifications articulation on the course Web site to also generate downloadable Markdown templates for each project, which would reduce students' copy-paste hassle and, ideally, give them a more convenient way to fill in the blanks and meet the report specs correctly.
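As a rough proof of concept, the generator could walk the same specification data that the page renders and emit a checklist-style template. The shape of the `specs` array below is a guess, not the actual format the course site uses:

```javascript
// specsToTemplate.js -- a sketch of generating a Markdown report template
// from specification data. The structure of `specs` is assumed here, not
// taken from the course site's actual JavaScript.
const specs = [
  { grade: "D", items: ["Project is correctly configured on the depot so that a new client provides a runnable game"] },
  { grade: "C", items: ["Project report is well-formatted, including third-party assets and their licenses"] },
];

function toMarkdownTemplate(projectName, specs) {
  const lines = [`# ${projectName} Report`, ""];
  for (const { grade, items } of specs) {
    lines.push(`## ${grade}-level specifications`, "");
    for (const item of items) {
      lines.push(`- [ ] ${item}`); // a checkbox for the student to tick
    }
    lines.push("");
  }
  lines.push("## Third-Party Assets", "", "<!-- For each asset: name, author, source, and license. -->");
  return lines.join("\n");
}

console.log(toMarkdownTemplate("Mini-Project 4", specs));
```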

2 comments:

  1. I've struggled with some of the same issues. I specify (what seems to me) an easy spec for a low-level grade, and then a student doesn't complete it but completes all the other specs for a significantly higher grade. Specs grading, strictly enforced, says they get the grade below the one they missed. But I agree; that feels a bit too punitive.

    On the other hand, if I leave out a required semicolon in my Java program, the Java compiler will refuse to compile my 10,000 lines of code--even if every other line of code is perfect. Is that fair of the compiler, or should it assume I just forgot, and pretend there was one there? ;-)

    Since I provide each student a limited quantity of Oops Bits, it does allow them to mess up occasionally and resubmit within a limited period of time. Hopefully they learn to be a bit more careful next time.

    Since there is this safety net for them, I don't feel too bad. OK, I do, but I get over it fairly quickly.

    Replies
    1. I took inspiration from your Oops Bits and added a QuickSave system (which, in retrospect, maybe I should have called "SavePoint"; that would have been punnier). It's documented at https://www.cs.bsu.edu/~pvgestwicki/courses/cs315Fa18/projects. The idea is that a student can use their one QuickSave during the semester to fix an oversight regarding a specification and request re-evaluation. I had three students jump on it right away, all very simple cases where they had forgotten to submit files properly to the depot. Again, this is something they absolutely could have verified prior to submission, but I also think that this way they can learn a lesson without getting too bummed about it.
