"Conventional wisdom holds that different people learn in different ways. Something is missing from that idea, however, so we offer a corollary: Different people, when presented with exactly the same information in exactly the same way, will learn different things. Most models of education and learning have almost no tolerance for this kind of thing. As a result, teaching tends to focus on eliminating the source of the problem: the student's imagination."
Douglas Thomas and John Seely Brown, A New Culture of Learning, 2011, p79.
Wednesday, April 20, 2011
Education and Industry
Developing software is the process of learning how to develop that piece of software. As in any craft, there are similarities among projects, yet each one is different and each one enriches the creator.
If learning is an explicit goal of software development as well as the means to its end, then what is the difference between industry and education?
The best hypothesis I have is that there's a value-oriented ordering of goals. In business, shipping a product takes precedence over improving the team; in the academy, improving the individual takes precedence over completing a project.
As we transition higher education toward a more enlightened, learning-centric model, I think that it will become increasingly important for us to address these issues—especially considering decreasing public support and the administrative desire for additional revenue streams. Are my students more or less than my apprentices?
No answers, just more questions.
Thursday, April 14, 2011
Teaching Assessment Culture to Undergraduates
My CS222 students are in the middle of their six-week projects, having just finished the first of two required iterations. There are three teams, with 4–5 members per team. Last time I taught the course (which was also the first time the course was taught, by me or anyone else), I continued to give rigorous assignments throughout the six-week project. This time, I decided to try turning the accountability structure on its head and making the students think about how they would achieve the learning objectives of the project. This inversion of control was designed to foster reflective practice and metacognition.
At the beginning of the project, the students were told that they would have to display competency in the following concepts and technologies through their six-week projects:
- Requirements analysis
- Unit testing
- Acceptance testing
- UML: sequence diagrams, class diagrams
- Design patterns
- Architecture: model-view separation with MVC or Holub-style model-view-presenter (see the sketch after this list)
- Logging
- OOP: encapsulation, polymorphism
- Iterative and incremental development
- Distributed version control with Mercurial
- User-centered design
- Programming conventions
- Application of expert tips (e.g. Effective Java, Pragmatic Programmer)
- Estimation of time to complete tasks, including feedback for improvement
- Effective team communication
- Technical presentation
- Reflective practice
Several of these had been discussed in the preceding eight weeks of the semester, but many had not.
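To make the architecture item above a little more concrete, here is a minimal sketch of the kind of model-view separation I have in mind, in the Holub spirit of keeping the model's data private and letting views come and go. The names (TimeSheet, TimeSheetRenderer) are invented for illustration and are not taken from any team's project.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Invented example: a model that never exposes its data through getters.
// Views implement a small interface, so the model has no compile-time
// dependency on Swing, the console, or anything else presentation-specific.
public class TimeSheet {

    /** The abstraction through which the model describes itself. */
    public interface TimeSheetRenderer {
        void renderEntry(String task, int minutes);
    }

    private final Map<String, Integer> minutesByTask = new LinkedHashMap<>();

    public void logWork(String task, int minutes) {
        minutesByTask.merge(task, minutes, Integer::sum);
    }

    /** The model drives the rendering; no getter leaks the map itself. */
    public void renderOn(TimeSheetRenderer renderer) {
        minutesByTask.forEach(renderer::renderEntry);
    }

    public static void main(String[] args) {
        TimeSheet sheet = new TimeSheet();
        sheet.logWork("Write unit tests", 45);
        sheet.logWork("Refactor the model", 30);
        // A throwaway console view; a Swing panel could implement the same interface.
        sheet.renderOn((task, minutes) ->
                System.out.println(task + ": " + minutes + " min"));
    }
}
```

The point is not the particular names but that replacing the console view with a GUI requires no change to TimeSheet, which is the kind of separation I want the teams to be able to demonstrate.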
Each team had to create a "learning objectives document" and share it with me via Google Docs. The document was required to express how the students intended to show competency in each of these areas. They were instructed that this would be a living document: throughout the semester, the teams should come back to the document and consider how they would demonstrate competency.
For the first three-week iteration, the teams rarely updated the documents. I was using in-class meetings to introduce topics from the list that the students had not yet seen, scaffolding these with some light readings while the students focused on getting their first milestone delivered. Immediately following the milestone deliverable, the students were given the assignment to update their learning objectives document, ensuring that every item had a clear and explicit plan for how the team intended to demonstrate competence.
As I looked over their submissions, at first I was disappointed that there was so little of substance. For example, under the topic of "effective team communication," most of the teams said something like, "we get together at regular times to work as a team." In my comments on their work, I suggested that they had missed the target: working together is a requirement, but learning to work together effectively is a learning objective. To show the latter requires evidence.
I believe it was that particular case—though certainly not in isolation—that made me realize a connection between what I was asking the students to do and what I myself have to do in terms of assessment. I will give some background in order to explain the case.
It was about three years ago, while serving as chair of my department's Promotion and Tenure Committee and therefore also representing the department to the College of Sciences and Humanities Promotion and Tenure Committee, that I read Scholarship Reconsidered and Scholarship Assessed. I loved these books, finding in them clear articulations of my intuition. I have used the framework from Scholarship Assessed in much of my work with undergraduates, following the principle that if the point of higher education is to be exposed to scholars and scholarship, then having students become scholars themselves is the best way for them to understand it. These books also soured me on the devious connotations of "research" as the word is used throughout the institution of higher education.
Since last year, I have been departmental assessment coordinator. My university, like most others, is attempting a cultural shift towards more regular, formal, and fruitful assessment. In fact, the Future of Education Task Force suggested that we become obsessed with assessment and continuous self-improvement. This is a good thing, and I wholeheartedly agree with the mission. The downside is that I have had to attend umpteen workshops and meetings that re-introduce the fundamentals of assessment, presented to dubious or antagonistic faculty (who, I can only presume, have good intentions but see no incentive to change decades of practice). I have also served on the College of Sciences and Humanities Curriculum Committee, which means I have read tens of assessment plans for new and revised courses. Suffice it to say that I've been bumping into assessment pretty much everywhere I go.
Now, when I look at the task I gave my students, it was clearly an assessment task—yet, because it came in the guise of teaching rather than service or research, I had not consciously connected it to my past experience. Looking at how the students tried to approach the task, I see the same kinds of confusion that are common among faculty and staff who have not adopted a mindset of incremental and iterative development, a perspective that views problem-solving as a form of model building that requires positive feedback loops.
This insight made it easier for me to evaluate their work as novice assessors. I do not think that I adequately addressed how assessment works—that is, how initiating and fostering a feedback loop of self-improvement will pay off in pragmatic terms. The students' learning objective documents did not adequately provide evidence, but without seeing the bigger picture, they had no reason to believe that evidence was worth gathering. This might be best exemplified in a student's response to a note I left in the document asking for more evidence: the student modified the corresponding part of the document and then asked, "Does this satisfy your requirements?" My Socratic response was, "Does it help you demonstrate whether you have met the learning objective?"
While there have been some stumbling blocks in this approach, I am happy with the overall shape of it. Part of my current research is an investigation into the role of writing in STEM classrooms, and this CS222 happens to be part of the experiment. I will be teaching the class again in the Fall, and I am now considering how such a learning objectives document might be introduced earlier in the semester. This should help students get the idea that they are responsible for their learning, putting them on the path of reflective practice as early as possible.
Monday, April 4, 2011
"Object-oriented"
Reviewing papers for an upcoming conference reminded me how difficult it is to communicate when technical terms lose their meaning. "Object-oriented" is frequently thrown around without so much as a wink or a nod—much less a citation to indicate what someone actually means.
In CS222: Advanced Programming, I have my students read a segment from the first chapter of Holub on Patterns: Learning Design Patterns by Looking at Code. This is one of my favorite books on design patterns and on design in general. Holub adopts a pedagogically sound perspective: rather than present the patterns in isolation as in the Gang of Four book (a manner in which they never occur "in the wild"), he demonstrates the patterns as collaborating parts of a system design. Additionally, the first two chapters of the book provide an excellent introduction to the field of patterns and object-oriented design. On pages 13-14, Holub provides a set of heuristics that can be used to determine if a system is, in fact, object-oriented at all. Directly quoting:
- Objects are defined by "contract." They don't violate their contracts.
- All data is private. Period. (This rule applies to all implementation details, not just the data.)
- It must be possible to make any change to the way an object is implemented, no matter how significant that change, by modifying the single class that defines that object.
- "Get" and "set" functions are evil when used blindly (when they're just elaborate ways to make the data public).
Note that he does not say that these are sufficient for good object-oriented design, but rather that a design that does not satisfy all of these is not an object-oriented design. Holub asserts very clearly that this is not a value judgement: OO is not better than procedural. His point is that when he says "object-oriented," he wants to make it very clear what he is talking about. By his definition, almost all ostensibly object-oriented systems that I have seen are actually hybrid designs.
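To make the getter/setter heuristic concrete, here is a small example of my own devising (not one from Holub's book). The blind getter makes the balance public in everything but syntax; the behavioral method keeps the decision, and therefore the representation, inside the object.

```java
import java.math.BigDecimal;

// Hypothetical example of my own; the Account class and its rules are not from Holub.
public class Account {

    private BigDecimal balance = BigDecimal.ZERO;   // all data private, per the heuristics

    public void deposit(BigDecimal amount) {
        balance = balance.add(amount);
    }

    // Hybrid style: a blind getter that simply republishes the private field.
    // Every caller ends up re-implementing the withdrawal rule for itself.
    public BigDecimal getBalance() {
        return balance;
    }

    // Object-oriented style by Holub's test: ask the object to do the work,
    // so the representation of the balance can change without touching callers.
    public boolean tryWithdraw(BigDecimal amount) {
        if (balance.compareTo(amount) < 0) {
            return false;   // insufficient funds; no raw data escapes
        }
        balance = balance.subtract(amount);
        return true;
    }

    public static void main(String[] args) {
        Account account = new Account();
        account.deposit(new BigDecimal("20.00"));
        System.out.println("Withdrew 15? " + account.tryWithdraw(new BigDecimal("15.00")));
        System.out.println("Withdrew 15? " + account.tryWithdraw(new BigDecimal("15.00")));
    }
}
```

Only the second style survives a change such as moving the balance into a remote service without touching callers, which is exactly the "single class" test in the list above.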
I find Holub's zeal appealing because he leaves no room for guesswork. You can agree with him or disagree with him, but there is no ambiguity in his statement, and it is sufficient foundation for him to build the remainder of the book. Let's not forget the lesson of Humpty Dumpty in Through the Looking Glass:
'When I use a word,' Humpty Dumpty said, in rather a scornful tone, 'it means just what I choose it to mean — neither more nor less.'
Saturday, April 2, 2011
A memory for knights
Although this is primarily a place for reflective practice in teaching, sometimes I cannot help but share stories about the intersection of games, learning, and parenthood.
Last night, my son and I went down to The Wizard's Keep, the local game shop, to pick up a special order. We went after dinner as a father-son adventure since he loves to see all the minis and box art. The last time we were there, before Christmas, he was mostly interested in what looked flashy. This time, he was much more interested in what the games were about. For the last few weeks, he has been spending almost all his free time drawing sharks, naval warfare, dinosaurs, knights, and Jedi. Any game featuring such art caught his attention, and so I encouraged him to look for inspiration, to consider drawing some of these scenes later.
When we got home, he produced the drawing below. Note that the parts in blue were pre-existing sketches on this sheet of paper; my son loves to pack as many images as possible into one page.
When I looked at it, I realized he had reconstructed from memory the box art of Runebound: Mists of Zanaga.
We looked at a lot of box art last night, but clearly this one made an impression. I remember pointing out to him that the monster had four arms, which I thought was interesting. In his drawing, he has: a four-armed horned demon-creature; a knight in the foreground with the iconic sword and shield; a hammer-wielding warrior in the clutches of the monster; and a third hero grasping onto the monster's arm.
That he remembered all these details, amid all the other boxes, amazes me. The captured warrior is wielding a hammer. Incredible!