Regular readers may recall that I was given the Spring 2019 HCI course to teach on rather short notice, so I only made a few structural changes between it and the Fall 2018 section. The most relevant to this post are the increased attention to software architecture and the switch to specifications grading. I also gave the teams nominally more time for their final project, but not enough that it was noticeable from my point of view. We retained our collaboration with the David Owsley Museum of Art (DOMA) and the overall theme that student teams would identify and address real problems they face. Yesterday, I shared my sending-forth message to the students, and today, I would like to share my reflection on the semester's experience. Feel free to reference the course page, which provides the policies, procedures, assignments, and assessments for the semester.
I used a similar approach to specifications grading as I did in the Fall 2018 Game Programming course, in which there were discrete criteria for each level of grade. I added a separate category of criteria for the project reports, which were designed to provide the process documentation corresponding to the technical artifacts produced. As before, students had to submit a self-assessment along with their source code and report, the self-assessment consisting of a checklist of criteria that were met. Unlike the game programming class, where there was rarely disagreement between the students and me about whether a criterion was met, there was a lot of friction this semester, especially on the final project's two iterations. As I wrote to several students in my formal feedback, I have serious doubts that many of the teams honestly conducted a self-evaluation at all. Consider, for example, that one of the most frequently missed criteria asked students to explain how their projects manifested particular design principles from Don Norman's The Design of Everyday Things. Teams submitted a list of examples with no explanations of them. It seems to me that if a team had sat together and worked through the checklist, as I expected them to, someone would have said, "Have we explained this?" I don't think they had anything approximating such a discussion: I think they surveyed the checklist, said "Good enough," and marked the box. That is, I think they defaulted to the "hope for points" model rather than the "ensure success through unambiguous choices" model. Of course, the idea of self-assessment is not to save me grading time but to foster reflective practice and remove ambiguity. When students do it honestly, it does save me grading time; even when it is done dishonestly, I suspect that students might still learn something about the importance of self-reflection. I need to think about what I might change in future uses of specifications grading to get around this.
An honest student approached me near the end of the semester, in part to share what he claimed was the voice of many students who were frustrated with the specifications grading system. I explained to him that the goal of the system is to remove ambiguity, both for students and for assessors. I honestly never got a good explanation of what precisely he or other students did not like about it, except that there were standards at all. I think the status quo is that students believe they start a project with full credit, and then I take away points for mistakes. One of the things I like about specifications grading is that it follows my contrary philosophy, which is that students start with nothing and must earn their credit. I think it is this idea, not specifications grading in particular, that students are upset about, because it holds them accountable for demonstrating understanding in order to earn credit. The fact that I get complaints about grading regardless of the scheme I use is probably a testament to students' pushing back against having expectations at all rather than against the particulars of any system. However, foreshadowing some of what is to come below, part of why a subset of students complains is likely that I actually draw upon knowledge they should have from prerequisite courses—knowledge that they may not actually have.
In the first half of the semester, I used a running demo project ("archdemo") to demonstrate some ideas of how to separate the layers of a user-facing software system. In the previous semester, I had done something similar, but using a context separate from our class collaboration with DOMA. Many students that semester did badly with the "warm-up" project, and so in order to help with on-boarding and consistency, archdemo showed a sample use of the DOMA data via the ContentDM database. The resulting application was called "Naïve Search," named thus because it didn't really solve any reasonable search problem: it just showed how to separate the layers of a system. While this worked in the short term, I think it also caused problems, as students perceived more value in the example than it was meant to have. It was never intended as a template, only as an example of very specific course concepts.
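For readers who didn't see archdemo, here is a minimal sketch of the kind of layering it was meant to demonstrate. Every name and record below is a hypothetical stand-in of my own; the real demo pulled DOMA records from ContentDM and presented them through a graphical front end, but the only point here is that the model, persistence, and presentation concerns live in separate pieces that can be swapped independently.

```java
// A minimal, hypothetical sketch of the kind of layer separation archdemo illustrated.
// None of these names come from the actual archdemo code, and the hard-coded records
// stand in for the DOMA data that the real demo fetched from ContentDM.

import java.util.List;
import java.util.Locale;

/** Model layer: a plain value object describing one artwork record. */
record Artwork(String title, String artist) { }

/** Persistence layer: hides where records come from (a database, a web service, a file). */
interface ArtworkRepository {
    List<Artwork> findByTitleContaining(String fragment);
}

/** An in-memory stand-in repository so the sketch runs without any external data source. */
class InMemoryArtworkRepository implements ArtworkRepository {
    private final List<Artwork> records = List.of(
            new Artwork("Study of Clouds", "Example Artist"),
            new Artwork("Garden Scene", "Another Artist"));

    @Override
    public List<Artwork> findByTitleContaining(String fragment) {
        String needle = fragment.toLowerCase(Locale.ROOT);
        return records.stream()
                .filter(a -> a.title().toLowerCase(Locale.ROOT).contains(needle))
                .toList();
    }
}

/** Presentation layer: formats and displays results; it never touches the data source. */
class SearchConsoleView {
    void show(List<Artwork> results) {
        if (results.isEmpty()) {
            System.out.println("No matching artworks.");
        } else {
            results.forEach(a -> System.out.println(a.title() + " by " + a.artist()));
        }
    }
}

/** Wires the layers together; swapping any one layer should not disturb the others. */
public class NaiveSearchSketch {
    public static void main(String[] args) {
        ArtworkRepository repository = new InMemoryArtworkRepository();
        SearchConsoleView view = new SearchConsoleView();
        view.show(repository.findByTitleContaining("clouds"));
    }
}
```

In a sketch like this, a team could replace the in-memory repository with one backed by a real database, or replace the console view with a JavaFX scene built in SceneBuilder, without touching the rest of the code; that separability, not the search feature itself, is what the example was meant to convey.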
One of the changes I made from Fall 2018 was that I required final project teams to use a subset copy of the ContentDM database in their projects. My intention here was that each team would have to demonstrate that they could separate the layers of a user-facing software system, regardless of what creative direction they wanted to take the project. The result, however, was that nearly every project looked a lot like archdemo with an added bell or whistle. Last semester, we had a broad range of concepts on a plethora of platforms; this semester, it was dullsville, as the teams just added some minor idea to archdemo. One team even consistently referred to their solution as "an improvement over Naïve Search," despite my repeatedly telling them in their formal feedback that this was not even close to our goal. I have no doubt that our partners at DOMA were uninspired by this semester's projects, although we have not had our wrap-up meeting yet. I would be remiss not to mention one exception, which was a clever interactive map that tied into the database in interesting ways despite the tight project timeline; those guys really nailed it, so if you're on that team and reading this, kudos to you.
Throughout the semester, we returned to five principles of design brought up in Don Norman's The Design of Everyday Things: affordances, signifiers, mapping, feedback, and conceptual models. Despite this being a theme of the class, there is scant evidence that students understood or applied these principles. Instead, my professor's eye tells me that they designed whatever they wanted and then tried to shoehorn those designs into these principles, justifying their work after the fact. Although they had several assignments and much formative feedback about these principles, students continued to show misunderstandings through the final exam.
What was missing? I believe a big part of it was that they didn't follow my advice. One key example: taking notes. The course plan says explicitly that students should always have their notebooks available for taking notes and jotting questions, and furthermore, that they should not have their distraction machines (laptops and phones) in their way during class discussions. Taking a friend's advice, I even made a first-day quiz in which students had to answer questions about this aspect of the class. Yet, very quickly (and for some, instantaneously), their old habits took over, and I would stand in front of class looking at the backs of open laptops rather than faces. Almost no students took any notes on any of our discussions, and as if to drive the nail into the coffin, some of them only got out their pens when I wrote something on the board. Even if they had a glimmer of understanding about affordances and signifiers during class discussion, there is no way that they held on to it fifteen minutes after class unless they actually expended the effort to do so.
I wrote on Facebook the other day about how I was feeling conflicting emotions about this class. On one hand, I am unsympathetic that they did not learn the material, because they chose not to follow my advice on how to do so. The advice is not complicated: it primarily involves reading and taking notes. On the other hand, I pity the students, because I think a large majority of them—if not all of them—know neither how to read for understanding nor how to take notes while reading or discussing. To me it raises the question, "Where does the buck stop?" If I get undergraduates in my upper-division elective Computer Science courses who lack these skills, is it my responsibility to teach them, or just to assess what is in the master syllabus?
There is a related puzzle, which was foreshadowed in my sending-forth message to the class. An uncomfortably large proportion of the class showed very little proficiency in fundamental programming skills. When I brought this up in honest, private conversation with trusted undergraduates, they showed no surprise: they said that it was fairly easy, and common, for students to "cheat" on assignments. This manifests in two ways: copying the work of a peer and submitting it as their own, or stringing together bits and pieces of code found online. Neither approach forces the learner to confront the useful struggles required to build firm understanding. It reminds me of the advice from Make It Stick that I wrote about last December: in its absence, students really don't know how to learn or even what it means to learn.
Going a little further, I witnessed a curious phenomenon several times during a guest presentation by a CS alumnus and successful professional. The speaker is currently in a position with a lot of creative flexibility, and he has his own team of programmers to implement parts of what he designs. However, many students seemed to misunderstand his story, thinking that this meant one could just "have ideas" and tell others to program them. They missed the part where he worked on rather tedious programming tasks for ten years to prove his capability, vision, and leadership. Instead, they rejoiced, saying things like, "I cannot program, but I want to tell programmers what to do, so now I see that there's a job for me!" This sentiment was shared primarily among CS minors despite their having taken at least three programming courses in the prerequisite chain to this course.
These students don't seem to see a connection between the HCI goals of the class and the fundamental skills of software development. It appeared to me that these students were not being tripped up by the accidental complexity of software development (such as the placement of braces or the quirks of a UI framework) but rather by its essential characteristics, which include precision and sequential reasoning. As a Computer Scientist, I frame HCI as essentially user-centered precision and sequential reasoning. What happened on the students' final project teams seemed to be that those who had programming skill were relegated to persistence- and model-layer data manipulation tasks—tasks required for the program to work at all—while those who could not program worked on the UI. The result was that the UIs were badly conceived and executed, because those working on them could not conceive of the problem as one requiring precision and sequential reasoning. Part of my evidence for this is the difference between teams' paper prototypes and their final products. Every team settled on a paper prototype developed through a user-centered design process, but practically no team's final product looked anything like its prototype. Instead, the products looked like the archdemo sample with a few more widgets added via SceneBuilder. One could say they did what they could rather than what they wanted, or more pointedly, that they decided to fail conservatively rather than succeed differently, which was exactly one of the human failure modes we discussed in class.
A student confided in me that, when he signed up for the course, he expected he would learn how to design a good user interface. By this, he meant that there would be some single thing I could teach him that would suddenly make him good at it—a silver bullet. He pointed out that some students seemed to think that it was all in the tools: if they learned the tools, they would be able to make good UIs. I am grateful that he took the time to share this with me. I asked him whether, after studying this topic for fifteen weeks, he understood why I could not meet his desire, why I could not dump ideas into his head that would suddenly make him good at UI design. He indicated that he did understand, but he also sounded disappointed.
As I wrote about yesterday, I had forgotten about the emotionally powerful reflection session I had with my students at the end of the Fall 2018 HCI course. I didn't schedule one this semester, and so it didn't happen. I think it would have helped all the students to frame their difficulties and challenges within their authentic context: yes, they struggled, but they did so because what we are doing is legitimately hard.
The puzzles I face in considering how to change the course for Fall 2019 are significant. I expect to be able to meet this week with representatives from DOMA to talk about their take on the experience. Our primary contact is their Director of Education, Tania Said, and she has intimated that she would like to see the class work together to produce something with more staying power rather than a series of prototypes. This makes me a bit nervous, given my previous experiences with entire classes taking on a single project, but there may be a possibility to set it up as competing consultancies rather than as one whole-class team. One of the reasons I wrote this up now is that it has helped me serialize and articulate some of my thoughts in preparation for meeting with her. I hope that conversation will help me turn some of these reflections into actionable course plans for Fall. As always, I expect I will be able to share my summer planning activity in a blog post in the coming months. Until then, I think it may be time to spin up my summer project.