At the beginning of the project, the students were told that they would have to display competency in the following concepts and technologies through their six-week projects:
- Requirements analysis
- Unit testing
- Acceptance testing
- UML: sequence diagrams, class diagrams
- Design patterns
- Architecture: model-view separation with MVC or Holub-style model-view-presenter
- Logging
- OOP: encapsulation, polymorphism
- Iterative and incremental development
- Distributed version control with Mercurial
- User-centered design
- Programming conventions
- Application of expert tips (e.g. Effective Java, Pragmatic Programmer)
- Estimation of time to complete tasks, including feedback for improvement
- Effective team communication
- Technical presentation
- Reflective practice
Several of these had been discussed in the preceding eight weeks of the semester, but many had not.
Each team had to create a "learning objectives document" and share it with me via Google Docs. The document was required to express how the students intended to show competency in each of these areas. They were instructed that this would be a living document: throughout the semester, the teams should come back to the document and consider how they would demonstrate competency.
For the first three-week iteration, the teams rarely updated their documents. I was using in-class meetings to introduce topics from the list that the students had not yet seen, scaffolding these with some light readings while the students focused on delivering their first milestone. Immediately after the milestone deliverable, the students were assigned to update their learning objectives document, ensuring that every item had a clear and explicit plan for how the team intended to demonstrate competence.
As I looked over their submissions, I was at first disappointed that there was so little substance. For example, under the topic of "effective team communication," most of the teams wrote something like, "we get together at regular times to work as a team." In my comments on their work, I suggested that they had missed the target: working together is a requirement, but learning to work together effectively is a learning objective. To show the latter requires evidence.
I believe it was that particular case—though certainly not in isolation—that made me realize a connection between what I was asking the students to do and what I myself have to do in terms of assessment. I will give some background in order to explain the case.
It was about three years ago, while serving as chair of my department's Promotion and Tenure Committee and therefore also representing the department to the College of Sciences and Humanities Promotion and Tenure Committee, that I read Scholarship Reconsidered and Scholarship Assessed. I loved these books, finding in them clear articulations of my own intuitions. I have used the framework from Scholarship Assessed in much of my work with undergraduates, following the principle that if the point of higher education is to be exposed to scholars and scholarship, then having students become scholars themselves is the best way for them to understand it. These books also left me wary of the devious connotations of "research," as the word is used throughout the institution of higher education.
Since last year, I have been departmental assessment coordinator. My university, like most others, is attempting a cultural shift towards more regular, formal, and fruitful assessment. In fact, the Future of Education Task Force suggested that we become obsessed with assessment and continuous self-improvement. This is a good thing, and I wholeheartedly agree with the mission. The downside is that I have had to attend umpteen workshops and meetings that re-introduce the fundamentals of assessment, presented to skeptical or antagonistic faculty (who, I can only presume, have good intentions but see no incentive to change decades of practice). I have also served on the College of Sciences and Humanities Curriculum Committee, which means I have read dozens of assessment plans for new and revised courses. Suffice it to say that I've been bumping into assessment pretty much everywhere I go.
Now, when I look at the task I gave my students, it was clearly an assessment task; yet, because it came in the guise of teaching rather than service or research, I had not consciously connected it to my past experience. Looking at how the students tried to approach the task, I see the same kind of confusion that is common among faculty and staff who have not adopted a mindset of incremental and iterative development, a perspective that views problem-solving as a form of model building that requires positive feedback loops.
This insight made it easier for me to evaluate their work as that of novice assessors. I do not think that I adequately addressed how assessment works, that is, how initiating and fostering a feedback loop of self-improvement will pay off in pragmatic terms. The students' learning objectives documents did not adequately provide evidence, but without seeing the bigger picture, they had no reason to believe that evidence was worth gathering. This might be best exemplified in a student's response to a note I left in the document asking for more evidence: the student modified the corresponding part of the document and then asked, "Does this satisfy your requirements?" My Socratic response was, "Does it help you demonstrate whether you have met the learning objective?"
While there have been some stumbling blocks in this approach, I am happy with its overall shape. Part of my current research is an investigation into the role of writing in STEM classrooms, and this CS222 course happens to be part of that experiment. I will be teaching the class again in the Fall, and I am now considering how such a learning objectives document might be introduced earlier in the semester. This should help students get the idea that they are responsible for their own learning, putting them on the path of reflective practice as early as possible.