Monday, August 29, 2011

Peer evaluation rubric

I read through a portion of Computer Science Project Work. I found some of the book difficult to read because it is written for the UK higher educational system, whose structures I do not fully comprehend. It makes matters worse when the authors throw around terms that I can only assume are equivalent to AP, credit-hours, FTEs, and such in the United States. Ignoring the specifics of the implementations, I did find some good tips that I can adapt to my own practice. In fact, one of their key contributions is a studied observation that transfer expects transformation. That is, the transformation that one applies to found scholarship is an indicator of transfer of practice.

Most of my notes from reading the book fit handily into one of my notebooks, but there is one piece I would like to share and archive here: a sample peer- and self-evaluation form from pages 250-251. It is presented as part of bundle 8.6, "Moderation Using Student Input." The form is in two parts: management characteristics and technical contribution. Both use a seven-point Likert-type scale, but while the first part uses distinct semantic labels for each end-point,
the second simply uses "well above average," "average," and "well below average" for all categories.

These are the categories and scale labels for the management skills portion.

Category                 | High end                 | Low end
time management          | highly organized         | unreliable
responsiveness to others | respects views           | domineering
coping with stress       | always calm              | panics easily
cooperation              | always cooperates        | goes own way
self-confidence          | able to take criticism   | can't take criticism
leadership               | takes initiative         | follows others
problem analysis         | incisive                 | woolly
project management       | best practice            | activity lacks coordination
project evaluation       | systematic and objective | casual and subjective

The categories for the technical contribution portion are
  • Task Analysis
  • Conceptual Design
  • VDM
  • Manager's Meetings
  • Team Meetings
  • Low Level Design
  • Coding
  • Testing
  • Documentation
  • Demonstration
Both forms provide room for open-ended comments as well.
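To make sure I understood the shape of the instrument, I sketched it as a small data structure. This is my own representation, not anything from the book: the category names and end-point labels come from the form above, but the structure and variable names are mine.

```python
# My own sketch of the two-part form from pages 250-251 (structure and
# names are mine, not the book's). Part one pairs each management
# category with distinct low-end and high-end semantic labels; part two
# rates each technical category on the same generic scale.

MANAGEMENT = {
    # category: (low-end label, high-end label)
    "time management": ("unreliable", "highly organized"),
    "responsiveness to others": ("domineering", "respects views"),
    "coping with stress": ("panics easily", "always calm"),
    "cooperation": ("goes own way", "always cooperates"),
    "self-confidence": ("can't take criticism", "able to take criticism"),
    "leadership": ("follows others", "takes initiative"),
    "problem analysis": ("woolly", "incisive"),
    "project management": ("activity lacks coordination", "best practice"),
    "project evaluation": ("casual and subjective", "systematic and objective"),
}

TECHNICAL = [
    "Task Analysis", "Conceptual Design", "VDM", "Manager's Meetings",
    "Team Meetings", "Low Level Design", "Coding", "Testing",
    "Documentation", "Demonstration",
]

SCALE = range(1, 8)  # both parts use a seven-point Likert-type scale
```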

VDM? I have no idea. Unfortunately, while the book acknowledges that portions are written by different authors, it does not identify who wrote this part, nor whether it is based on existing scholarship. I would like to see an evaluation of this instrument in particular, since it seems to be the only recommendation in the book that is provided in such specificity, despite the authors' own acknowledgment that scholarship should be transformed to be transferred!

At first glance, I was impressed by the custom labels. I love the use of words like "incisive" and "woolly" to describe how a student interacted with a learning experience, and I think it's good for students to think about the capacity to take criticism as an indicator of self-confidence. As I looked closer, I noticed scales like "leadership," which here is measured in terms of taking initiative versus following. I'm not sure I like that dichotomy, but again, that's where I would like to read a formal evaluation of this instrument in particular. The more I read about using this instrument (which was less than one page), the less I liked it, as the author suggested using a fairly straightforward scoring mechanism: average students' self-assessment with their peer assessments, sprinkle with up to 5% secret sauce, and call your work done. Once again I have to admit my ignorance of the grading environment in which this scheme was designed, but I would be unhappy with this in an A/B/C/D/F scheme. Specifically, where would one cut off, say, "A=Excellent" with such an approach?
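For concreteness, here is how I read that scoring mechanism. The book spends less than a page on it, so this is a minimal sketch under my own assumptions: scores on a 0-100 scale, the self-assessment weighted equally against the mean of the peer assessments, and the "up to 5% secret sauce" taken as an instructor adjustment of at most five points either way. All of the names are mine.

```python
# A minimal sketch (my assumptions, not the book's specification) of the
# suggested scoring mechanism: average the self-assessment with the mean
# of the peer assessments, then allow a small instructor adjustment.

def moderated_score(self_score: float, peer_scores: list[float],
                    instructor_adjustment: float = 0.0) -> float:
    """Combine self- and peer-assessment on a 0-100 scale."""
    if not -5.0 <= instructor_adjustment <= 5.0:
        raise ValueError("instructor adjustment limited to +/- 5 points")
    peer_mean = sum(peer_scores) / len(peer_scores)
    # Equal weight to the self-assessment and the averaged peer view.
    base = (self_score + peer_mean) / 2
    # Clamp so the adjustment cannot push the score off the scale.
    return max(0.0, min(100.0, base + instructor_adjustment))

# A student who rates themselves 80 while peers average 70 lands at 75,
# before any instructor adjustment is applied.
print(moderated_score(80, [70, 75, 65], instructor_adjustment=2.0))  # 77.0
```

Written out this way, my complaint is easier to see: nothing in the mechanism tells you where the A/B boundary should fall on the resulting number.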

In any case, there are some interesting categories here for student self- and peer-assessment, and I can see the value in handing out a form like this at the start of a project to get students thinking about how their contributions may be classified. No one wants to be woolly.
