I spent the lion's share of the last two weeks revising my three courses for the Fall semester. They are the same courses as last time, although some of the themes have changed. After a trepidatious beginning, I am now quite pleased with the results. In today's post, I will describe the revision to my game programming course, an elective for Computer Science undergraduate and graduate students. The actual change to the course may appear small, but it represents a significant amount of research and learning on my part.
I have been structuring this course as a project-intensive, team-oriented experience. For example, last Fall the students implemented The Underground Railroad in the Ohio River Valley. I have also used this course to experiment with various methods of grading. I wanted the grading to be as authentic to the work as possible: students are evaluated on their participation and commitment only, not on quizzes or exams. For example, instead of midterm exams, I held formal face-to-face evaluations with each team member, modeled after industrial practice.
These methods work well, but reflecting on these experiences, I identified two potential problems. First, these methods fail when a student refuses to participate or keep commitments: in particular, they produce little that could serve as evidence in the case of a grade appeal. Realistically, I sometimes get a bad apple, and I want a grading system that lets me give the grade I feel has been earned. While I admit to having given grades higher than I thought were earned, the assessment failure may be twofold: some students may need more concrete evidence of their own progress in order to improve or maintain performance, especially if they lack intrinsic motivation.
The other potential problem stems from my wanting the students to engage in reflective practice, not just authentic practice. I wonder if some of my high-achieving team members have gotten through these production-oriented courses without having deeply considered what they learned. My model for encouraging reflective practice is based on industrial practice—agile retrospectives in particular—and is documented in my 2013 SIGCSE paper. This model, called periodic retrospective assessment, requires a team to reflect on its successes and failures intermittently during the semester, and at the end of the semester, to reflect on what it has learned. This sociocultural approach to assessment is appealing, and again, it seems to work in many cases, although it affords scant individual feedback.
While at this summer's GLS conference, I attended a talk about game-inspired assessment techniques given by Daniel Hickey. His model is called participatory assessment, and a particular aspect of it—which you can read about on his blog—is that it encourages evaluating reflections rather than artifacts. During his talk, he made a bold claim that resonated with me: writing is the 21st-century skill. Having worked with Brian McNely for the last few years, I have come to understand “writing” in a deeper and more nuanced way. (See, for example, our SIGDOC paper, which takes an activity-theoretic approach to understanding the writing practices of an agile game development team.)
Putting these pieces together, I decided to keep the fundamental structure of my Fall game programming course: students will work in one or more teams, organized around principles of agile software development, to create original games in multiple iterations. We will continue to use periodic retrospective assessment in order to improve our team practice and consider what we learned as a community. New this semester, I have added individual writing assignments, to be completed at the end of each iteration. I want these reflections to be guided toward fruitful ends, and so I have brought in another pedagogic element that has intrigued me for the last several months: essential questions.
I first encountered essential questions (EQs) on Grant Wiggins' blog, and I blogged about this experience in the context of my advanced programming course. The primary purpose of EQs is to frame a learning experience. EQs have no trite or simple answers, and they are not learning outcomes, but they inform the identification and assessment of learning outcomes. With a bit of crowdsourcing, I came up with the following essential questions for my game programming course:
- How does the nature of game design impact the practices of game programming?
- How does game software manage assets and resources effectively?
- How do you coordinate interpersonal and intrapersonal activity within a game development team?
In reading about participatory assessment and the badges-for-learning movement, I came across Karen Jeffrey's HASTAC blog post. What she called “Really Big Ideas” seem isomorphic to EQs, and so I adapted her ideas in defining a rubric for evaluating reflections. I will be looking for reflections that provide the following:
- A characterization, with supporting evidence, of one or more essential questions of the course.
- The consequences of this characterization on individual and/or collective practice.
- The potential critiques of this characterization.
Deciding how to guide student attention is one of the most challenging parts of course design, and I recognize that by introducing these essays, I am reducing the number of hours I can expect students to spend on the development tasks at hand. However, these essays will afford conversation and intervention regarding reflective practice. They respect the authenticity of student work since, if done right, they should yield increased understanding and productivity from the students. This reasoning is similar to that given by proponents of team retrospective meetings in agile practice: by reflecting on what we are doing, we can learn how to do it better. I have been encouraging my students to write reflectively, especially since starting my own blog; these reflective essays codify the practice and reward student participation.
The official course description for Fall's game programming course can be found at http://www.cs.bsu.edu/homepages/pvg/courses/cs315Fa13. I am happy to receive feedback on the course design, particularly the articulation of the essential questions, since they will be central to the students' learning experience.
Next time, I will write about the redesign of my advanced programming and game design courses, both of which involve turning to badges to incentivize and reward student activity.