Showing posts with label cs345. Show all posts

Thursday, December 27, 2018

Planning CS445 Human-Computer Interaction for Spring 2019

Around the Christmas celebrations, I have spent many hours the past two weeks planning my Human-Computer Interaction course for Spring 2019. I wrote my reflection about the Fall semester back on December 8, and I just put the finishing touches on the Spring section's course plan before lunch today. Now, I would like to write up a few notes about some of the interesting changes I have in place.

First, though, a funny caveat. Since I originally designed the HCI class for CS majors, it has been CS345. Due to administrative busybodies, the course will now be numbered CS445 instead. Which label should I use for my blog posts? I think I'll switch over to "cs445" now, and I'll have to remember to use both codes when I'm searching for my old notes.

Canvas Grading Caveat
Prompted in part by my frustrating experience at the end of the Fall 2018 Game Design class, I have a more explicit statement on the course plan telling students to ignore Canvas' computed grade reports. I would always say this in class, but I did not have it explicitly in the course plan before. Also, I found out that I could mark assignments as not contributing to the final grade, in which case students will be able to see their assignment grade but not a false report of their "current" final grade, so I need to remember to mark all the assignments that way. Also also, please take a moment to consider the epistemological tragedy that is the concept, "current final grade."

Writing
I was surprised in the Fall when one of my more talented HCI teams brought up their project report, highlighted a place where I pointed out grammatical and spelling errors, and asked if they had lost points because of it. There are two things wrong with this question. The first is that it assumes the students had some points to lose in the first place, which is simply not true. I don't take away points from anyone; instead, I award points for demonstrating competence. You cannot take away something that someone doesn't have. The second, more pragmatic problem is that the students also had significant conceptual and descriptive problems in their report, but they seemed more concerned about the spelling and grammatical errors.

Last semester, I included a link to the public domain 1920 version of Strunk's Elements of Style, along with advice to read it. This time, I've made my expectations more explicit. On the evaluation page of the course plan, I have explained briefly the importance of writing and the fact that Elements of Style provides my expected standards. I also explain there that I expect to give feedback on both conceptual problems and spelling or grammar problems, along with a primer about how to interpret that feedback. I thought about making an assignment around Elements of Style, but I decided against it, partially because I did not want to shift my early-semester plans ahead by a day. My professional opinion is that the book should be remedial to anyone who has a high school diploma, but I am also a realist about the variable quality of writing instruction these students may have received.

Software Architecture
It was a little disappointing to see so few teams really engaging with principles of quality software construction last semester. I have written about this before, and the students are aware that the culture of other classes is one that values only software output rather than software quality. I have carved out some design-related topics from the HCI class to make more time to work through examples of refactoring toward better architectures. I'm still working on the exact nature of these assignments, but I have a few notes to draw from. The schedule I have online right now actually goes right up to the point where I want to switch gears from design theory to software architecture practice.

Specifications Grading
After a positive experience in last semester's game programming class, I have converted my HCI class project grading scheme to specifications grading. I laid out my expectations for each level of poor (D), average (C), good (B), and excellent (A) grades. This was an interesting exercise for me, especially around the source code quality issues. Last semester, students could earn credit for following various rules of Clean Code, and a mixed grade simply meant that they got some and not others. Now, I have put all of these rules at the B level, to reflect the fact that "good" software is expected to follow such standards. For the A level, I've included demonstrated mastery of the software architectural issues mentioned above.

I had some fun with the Polymer-powered course site as well. My new custom element for presenting specification grading criteria uses lit-html to concisely express how they are presented. It took a bit for me to wrap my head around lit-html, but I think I have a good sense of it now. The other fun new feature I added was the ability to download the specifications as Markdown. The specifications are internally represented in a Javascript object, and that object is transformed into the Web view. Of course, with this model-view separation, it's reasonable to provide other views as well, such as Markdown. I used this StackOverflow answer to write some functions that convert the JSON object to downloadable Markdown. I hope that this makes it easy for students to write their self-evaluations in Markdown. I did not use checkboxes on the Web view, as I did for game programming last semester, because they don't copy and paste well. I hope that having the Markdown version available removes the need for students to manually copy over each criterion into their self-evaluation.
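For the curious, the model-to-Markdown transformation is simpler than it sounds. Here is a minimal sketch of the idea; the object shape and function names are hypothetical, not the actual course-site code:

```javascript
// Hypothetical shape for a set of specifications grading criteria.
const specs = {
  title: "Final Project Specifications",
  levels: [
    { grade: "B", criteria: ["Follows Clean Code naming rules", "Maintains model-view separation"] },
    { grade: "A", criteria: ["Demonstrates sound architectural refactoring"] },
  ],
};

// Transform the object into a Markdown string: one heading per
// grade level, one list item per criterion.
function specsToMarkdown(spec) {
  const lines = [`# ${spec.title}`, ""];
  for (const level of spec.levels) {
    lines.push(`## ${level.grade}`, "");
    for (const criterion of level.criteria) {
      lines.push(`- ${criterion}`);
    }
    lines.push("");
  }
  return lines.join("\n");
}

// In the browser, trigger a download by pointing a temporary
// anchor element at a Blob URL containing the Markdown.
function downloadMarkdown(spec) {
  const blob = new Blob([specsToMarkdown(spec)], { type: "text/markdown" });
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = "specifications.md";
  link.click();
  URL.revokeObjectURL(link.href);
}
```

With the specifications held in one JavaScript object, the lit-html template and the Markdown exporter are just two views over the same model.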

More Time on the Final Project
In addition to switching the evaluation scheme, I am giving more time to the final project. I decided to keep the short project as a warm-up, since it also provides a safe-fail environment as students pull together ideas such as personas, journey maps, user stories, and software quality into a coherent process. Some of my dates for the final project aren't quite sorted out yet, as I'm debating whether to take a purely agile approach or a milestone-driven one. The advantage of the latter is that I can specify exactly which design artifacts I want to see at each stage, and the level of fidelity I expect. However, I expect that I will follow the more iterative and incremental approach, but I've put off the final decision until the semester gets underway and I can get to know these students a bit.


We are continuing our relationship with the David Owsley Museum of Art, who have been a great partner. I look forward to working with them and seeing what my students can develop during the semester.

Saturday, December 8, 2018

Reflecting with the Fall 2018 Human-Computer Interaction Class

The truth is that I had been a bit down on my HCI class. I set up what I thought would be a wonderfully inspirational cooperation with the David Owsley Museum of Art, and I gave them the challenge of prototyping real solutions to some of DOMA's problems. As with my Spring 2018 HCI class, I wanted these solutions to be firmly grounded both in human-centered research methods and in the general design theories we studied in the first half of the semester by reading Don Norman's The Design of Everyday Things. However, as we moved through the stages of the project, I was not seeing teams doing either of these. In the first iteration reports that they submitted before Thanksgiving, it was obvious that their designs were not really rooted in their research, nor were their designs intentionally applying the principles we discussed. This left a heaviness in my heart, both because I doubted the efficacy of the class and because we were going to be showing our results to DOMA at the end of the semester.

Our primary contact at DOMA was their Director of Education, Tania Said, who has been a gracious and kind partner in this endeavor. Due to the schedule of docent training, we scheduled our final presentations for Tuesday of last week, which was the penultimate day of class. I was impressed by how well my students presented their work and the effort they had put into revisions in the 1.5 weeks since Thanksgiving. It helped that Ms. Said has a grace and wisdom that made her questions uplifting to the students: her questions were fair and honest, yet always supportive and encouraging.

I think we would all call this meeting a success, yet there was one more meeting in the semester—Thursday, December 6, which was also the deadline for their projects. I had originally planned to have them present their final projects to each other, but this clearly would have been redundant after the meeting on Tuesday. I decided that my goal for the meeting would be to try to wrap up some of the loose ends, to share with them some of my joys and frustrations from the semester, and to prime their reflections to prepare for the final exam. To this end, I decided to present them with three questions and an activity: What went well this semester? What did not go well this semester? What still puzzles you? The activity would be to write final exam questions.

We pushed all the tables against the walls so we could sit in something like a circle, doing the best that the room permits. I opened the class with a few remarks about my perspectives, and I got them into groups of three or four students to answer the first question. Here is a transcription of the notes from the board after we openly shared our results, slightly expanded from my blackboard shorthand:

  • Project pride & satisfaction
  • Presentation to Tania Said
  • Getting class feedback
    • Using the Click-Share system from our seats promotes discussion 
  • Second round of paper prototyping helped clarify the difference between sketch and prototype
  • Museum visits
  • Connecting with a real place / real partner
  • Two-week warm-up project provided a trial run before working with the real partner
  • Relationship between design principles and Clean Code
  • Reading Design of Everyday Things
It gave me satisfaction to see them bring up many of the items that I too thought were our greatest successes of the semester. Of course, we didn't vote or rank these, so some may only be important to one or two students, but that's fine for our purposes. One of the biggest surprises came from the comment about the second round of prototyping. This was a particular exercise where everyone dropped the ball, so we agreed to reschedule literally the rest of the semester so that they could do this homework exercise again. At least for the student who spoke up, this gave her the opportunity to really understand it, which is infinitely more valuable than just plowing forward. I was also glad to see students reflect positively on the two-week project warm-up rather than frame it as wasted time and effort.

The next question to cover was, "What did not go well this semester?" I had them get up and move around the room to sit with different people for this conversation—something I had warned them I would do in my introductory remarks. For many of the items, I asked a follow-up question: whether the conversation had included ideas about how to address the things that didn't go well. I used two markers for this: black for the original item, and green for the suggestions. Again, my notes are below, with the green parts offset in square braces.
  • Project reports
    • Balancing the need for functionality against the practical application of design principles
      • [Emphasize making high-fidelity executable paper prototypes]
    • Justifying designs as meeting the principles rather than intentionally applying them
    • Struggling against "just make it work" vs. a good report
      • The former is reinforced by department culture
      • One commented that I am the only faculty who grades on anything besides whether the software "works"
      • [Focus on design (look, feel, operation) over implementation]
      • [More exercises to practice implementation]
    • "Two-class" phenomenon, where first half of the class focuses on DOET and second half on the project, without a clear bridge.
      • [Again, high-fidelity executable paper prototypes might help here]
      • There were more conversations in the first half than the second
      • [Sitting in a circle would have increased conversation quality]
      • [Use an Interactive Learning Space for this class]
  • Test-Driven Development
    • Still confused about types of testing (unit, integration), test-friendly architectures, and how to test first.
  • Time management and problem slicing
    • Other faculty do not emphasize Agile slicing approaches
  • Continuous communication with DOMA
    • We had to work around their schedules, which held us back on our tight timeline
  • The collection database we used was inconvenient
I will quickly add comments about those last two points. Regarding working around their schedules, I explained that this phenomenon was one of the reasons why the granddaddy agile methodology Extreme Programming calls for an on-site representative of the client, so that they are always there to answer questions and work alongside you. Regarding the database we had, I pointed out that our data was a smaller copy of a live system in use by the University Libraries. That is, it showed what a "real" database looks like, rather than a fabricated one for a class example. 

It was delightful for me to hear students sharing so candidly about their struggles during the semester. Again, they identified many of the elements that had been on my mind as well, but it was more powerful to hear it come from them. I was particularly pleased at their comments about how the architecture of the room was an impediment to our activity.

At this point, we only had about twenty minutes left in the meeting. I told them that I was cutting the third question but still wondered if there was anything that still puzzled them. One student posed a really interesting epistemological question: how do we know if Don Norman's rules are the right ones? This got us into a quick conversation about how his rules in DOET are like Robert Martin's rules in Clean Code: they are not universal rules, but they are frameworks that help us consider what the truth might look like. In retrospect, I should have spoken about shuhari here, but I was trying to push ahead and it didn't come to me in time.

I moved on to the last question, again shuffling the groups. Here are their ideas of what might be a good final exam question for the course.
  • What does good design mean to you?
  • What are five of the seven Universal Principles described in DOET?
  • What are the four stages of the Double Diamond?
  • What went well or what did not go well?
  • If you were to take this class again, what would you do differently?
  • Give examples and applications of the seven Universal Principles
  • How would you change the organization of this course?
  • Reflect on something valuable you learned and how you will use it in the future.
  • Reflect on the outcomes of the course.
When the first item was shared, I asked how the student would assess responses. I think they didn't expect this question, but I assured them it was a good question, and that I was trying to understand the nature of it. A few students suggested that it would be enough to see if the hypothetical test-taker could answer it coherently, to show that they had thought about it. 

The second and third questions really surprised me. Those are so unlike the kinds of questions I give. I wish I had noted at the time whether these were students who had ever taken a class with me before, but I didn't. By contrast, that fourth item is exactly the kind of question I like to ask on a final exam, and the student who suggested it pointed to our notes on the other boards and said, basically, "Just ask those. Those are good questions." 

I had been feeling a bit down in the dumps on Thursday morning for a variety of reasons. The way I put it on Facebook was, "I feel emotionally dead inside, and now I have coffee. I can be apathetic FASTER." At the middle of the day, though, my Game Programming students showed their final projects, and most of them did a great job. That lifted my spirits, but after this meaningful, honest conversation with my HCI students, I felt almost giddy. Of course, maybe it was sleep deprivation, but I'll give them the benefit of the doubt. Maybe I let myself grumble too much about those students earlier in the semester when I was feeling uncertain. They were really a good group, maybe an uncommon mix, but they were paying attention and they were learning.

My original plan was that I would work with DOMA to choose a project they really wanted to see polished up, and then use that in my premiere graduate-level course on Software Engineering. As I recently reported, however, those plans have gone up in smoke. Instead, I will be teaching another section of Human-Computer Interaction. This gives me a chance right away to incorporate some changes, although it has to be done in the all-too-short winter break. But first, I better actually write my final exams. Thanks for reading!

Friday, July 27, 2018

Summer Course Revisions 2018: Human-Computer Interaction (CS345/545)

Wrapping up my series (1, 2) of summer course revisions is this one: the revision to CS345/545, Human-Computer Interaction. Regular readers may recall that I wrote a public reflection of the Spring semester's offering early in the summer. In a strange twist of scheduling, the course is being offered again in the Fall, which means I get to make revisions and apply them right away. (Even stranger, another faculty member offered a summer section that made enrollment, but that's not directly relevant to my own work.) I made some of these revisions alongside the changes I made to my other courses, and then I picked the course back up yesterday morning for the finishing touches. 

I tried to keep what was good about Spring's section while remedying some of the items that I found problematic or confusing. The overall structure of the course is the same as before: we will spend a few weeks on background readings and regular exercises, focusing our attention on Don Norman's The Design of Everyday Things. We will be moving from three meetings per week to two, so I adjusted the readings and assignments accordingly. It's a rather aggressive reading schedule from the get-go, which should help the students recognize they need to allocate adequate time. I added a few readings and videos, including Steve Krug's usability testing video from Rocket Surgery Made Easy, a recent Designer vs Developer episode from Google that gives a good overview of design principles (though I take serious issue with the implicit epistemology of that series' title), Jakob Nielsen's "Why You Only Need to Test with 5 Users," and Andy Rutledge's series on Gestalt principles of perception.

I have kept the grading scheme essentially as it was, although I added a new required assignment for graduate students. This will help distinguish their work during the project section of the course more than before. Grad students will have to choose one out of four options, each having a different theme: scholarly research, software architecture, design principles, and design methods. As before, they will also have an additional set of assignments early in the semester, which give them a crash course in some of the ideas that the undergraduates would have seen in their prerequisite, CS222. Our graduate program has no equivalent course that can be used as a prerequisite for grad students, so these extra assignments help to catch them up to where the undergrads are.

Last time, I used triage grading as the theme of the short project: students had to talk to others who were exposed to triage grading and design something that would help them with the transition. This backfired when many of my students did not take the time to learn triage grading for themselves, and so they were unable to create interventions of any merit. One group even insisted that the best solution was to change triage grading, essentially replacing it with conventional grading; despite repeatedly discussing this with them, they didn't ever seem to understand that this was not within their jurisdiction. Several times during the final project, I referred back to their failures on the triage grading project; some students seemed to be able to take this as a learning experience, but others didn't seem to show any recognition of what went wrong, judging from what they ended up with.  Suffice it to say, I am going with a different theme this time: BSU freshmen who do not know local jargon and landmarks. Freshmen are plentiful for use as research subjects, and my students are not themselves freshmen.

About two weeks ago, I set up a partnership with the David Owsley Museum of Art (DOMA), which is really a treasure of Ball State University's campus. My students will go on a tour of the museum early in the semester, and then, after we go through some of the background material and the short project, we will talk to them again about their mission and goals. The students then will have creative freedom to create an original, prototypical software system that explores these themes. They were very happy to have my students accessing their digitized data as well, although it is managed through the university's Digital Media Repository.

Here is where things started to break down. I assumed that my students and I could just get read-only access to the database. They shied away from that and asked if we could work with a dump of the database. I've been going back and forth on emails for the past two weeks now trying to sort this out. To be clear, everyone is very supportive of the idea, but nothing seems to be happening. At this point, I'm awaiting a dump of a database to see what I can do with it, but I don't have an ETA. I've been working pretty consistently all summer, and I'm heading into a family vacation and "taking some time off" mode, so if I don't get my hands on that data soon, I won't have time over the summer to sort it out. The good news is that I found a good back-up plan. Searching the Web for art museums with public APIs, I came across the Digital Public Library of America. They have a nice, open API that seems to aggregate many other data sources. If we cannot get access to DOMA's data, I know we can use DPLA's, at least for the sake of our prototype. If we go this route and a student makes something that really excites DOMA, we can look at stitching it back into their data.
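As a feel for what working with DPLA's data would look like, here is a minimal sketch of querying their search API. The endpoint is DPLA's documented v2 items API, but the response fields I reference (`docs`, `sourceResource.title`) and the placeholder API key are assumptions you would want to verify against their documentation before relying on them:

```javascript
// DPLA's v2 search endpoint; an API key is requested from DPLA.
const DPLA_ENDPOINT = "https://api.dp.la/v2/items";

// Build a search URL with properly encoded query parameters.
function buildDplaQuery(searchTerm, apiKey, pageSize = 10) {
  const params = new URLSearchParams({
    q: searchTerm,
    page_size: String(pageSize),
    api_key: apiKey,
  });
  return `${DPLA_ENDPOINT}?${params}`;
}

// Fetch matching items and return their titles. Assumes the
// documented response shape: a top-level "docs" array whose
// entries carry metadata under "sourceResource".
async function searchDpla(searchTerm, apiKey) {
  const response = await fetch(buildDplaQuery(searchTerm, apiKey));
  const data = await response.json();
  return data.docs.map((doc) => doc.sourceResource.title);
}
```

For a class prototype, a thin wrapper like this keeps the data source behind one function, so swapping DPLA out for DOMA's own data later would touch only this layer.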

I've copied over the new decorum section from my other two course revisions into this new one. As I wrote about earlier, this section is designed specifically to address some of the frustrations from Spring's HCI course. You can be sure I'll let you know how it all turns out by the time the Fall semester is over, if not before.

That pretty much wraps up my summer course revisions, modulo some potential extra work specifying another mini-project in game programming or tinkering with data sources for HCI. It's been a productive summer so far. I spent a week writing a manuscript for Meaningful Play, which I just found out was accepted, so I look forward to returning to Lansing in October. It needs just a little bit of revision which I will do tomorrow morning or next week. I also am one click away from submitting a grant proposal for a new educational game collaboration. Of course, I started the summer by sinking a lot of time into Collaboration Station and Fairy Trails. I've triaged the former into a holding pattern: although the anonymous authentication feature I needed was finally added to the third-party library we used for multiplayer, the changes to that subsystem require more effort than I am willing to donate. I'd rather spend that time on some new prototype ideas, playing around with Unreal Engine 4.20. I think the next project on the docket, though, will be building some custom cars for Gaslands: that will make it really be a summer vacation.

Thanks for reading!

Thursday, May 10, 2018

A Reflection on Spring 2018 Human-Computer Interaction (CS345/545)

I started a narrative approach to a CS345/545 (Human-Computer Interaction) reflection yesterday, and it came out really negative. It was honest, but too negative—that's no way to be. I'm going to try again today, but with a different format, and see if I can make it both shorter and more constructive. Let's pull a trick from Sprint Retrospectives and start with...

What Went Well

Controlling scope. There's a lot that could be covered in an intro HCI class, and the conventional textbook approach sacrifices depth for breadth. Put another way, it sacrifices understanding for recognition. I wanted my course to center on a few fundamental principles, and ours ended up being Don Norman's Seven Principles of Design (from Design of Everyday Things: Revised and Expanded) and the Double Diamond design model. We also reviewed the importance of model-view separation and layered software architectures, although this was not really in any more detail than I would cover in CS222. I had hoped to have more time to talk about software architectural issues, but seeing the students struggle with the other topics, I pulled back on this.

Focus on principles. Similar to the point above, I had to remind myself several times during the course that it is not really about how to design a user interface, but about the principles of human-computer interaction. There's a difference here, I believe: we could spend a lot of time on issues like font choice, spacing, the use of tools to aid in design. We didn't, though, which meant we could talk at a higher level of abstraction and not get lost in pixels and palettes.

Allow for failure the first time. The students completed a small project before Spring Break, a project that was essentially a small version of what we would do after break. It made them put the design principles into play within the double diamond context. They almost all did badly, from an objective point of view, but this was a success from a pedagogic point of view. This showed them that it's a different thing entirely to claim understanding vs. to apply knowledge in context.

Socratic Day. There was one day where I was feeling quite frustrated about my students' inability to show empathy for each other and for me, and so I ran about twenty minutes of class via the Socratic method, starting with the question, "What do you think I see?" We touched on a lot of interesting ground here. Interestingly, they did not really come up with the answer I had in mind, which was "The backs of laptops and the tops of heads." I don't think I've ever gone Full Socratic (tm) on my students before, but it's something I need to keep in mind, especially if I am feeling upset or disoriented.

The "A" Group. Despite my frustrations with the course, there was one team of guys who attended practically every meeting (most of them completing all assignments satisfactorily), paid attention and asked questions during class, followed instructions and applied what they read during class activities, and produced a good and thoughtful final project. None of them had significantly different prior experience from the rest of the class, and not all of them earned stellar grades in the prerequisite courses. This tells me that what I asked the students to do was on target for those students who were on point, if you don't mind mixing metaphors.

Many small assignments. I set up an aggressive schedule of reading and crafted in-class activities to support them. I needed to make sure students were keeping up with the pace, so I set up a series of assignments to be done before each class for the first half of the semester. This worked in terms of keeping people together: I could tell that almost all the participants in class had done the preparatory work. During Spring Break, as I reflected on what I had seen in the first half of the semester, I carried this model over to the second half as well: when there was a day that I needed students to have something particular prepared, I set up an assignment for it. The assignments were graded rather generously by an undergraduate grader, but that generosity was fine since the assignments were more about keeping up than mastery.

I learned. I think I mentioned in my course planning post that I was wonderfully surprised by the revisions in the new edition of DOET. One piece in particular that stood out, as someone interested in methodologies, was the double diamond model. I've never deployed that myself, so I figured I would use the semester to try to understand it. I gave a wrap-up presentation in the final week of classes where I explained my understanding and frustration with the model, putting it in contrast with Scrum and my spin on George Kembel's design thinking framework. I actually started planning out a blog post called, "The Double Diamond is Malarky," and in doing reading and preparation for that post, I came across a different visualization than UK Design Council's—this one from ThoughtWorks.

All the pieces fit together for me now: using this model, the iterative and incremental software development approach sits within the second diamond entirely. At first, I rejected this, since my predilection is to consider each iteration anew, with the possibility of pivoting on the problem completely. Then I realized, however, that this is exactly how I have been running my immersive learning projects! I use one semester with the Honors College to figure out what problem we can actually solve, and then that input is given to the Agile, cyclic development model of the Spring Studio course. Hooray for reflective writing!

What can be improved

One of the biggest surprises of the semester is that I was assigned to teach CS345/545 again in the Fall instead of a section of CS222. This means I have the opportunity to improve the course right away, while the ideas are fresh in my head—an opportunity for which I am grateful. Expect a "Summer Planning" post in the next few months as I sort things out. In the meantime, here are some things I can improve for next time.

Stow laptops or GTFO. That is, put your laptops away or get thine fanny out. Those blasted distraction machines are ruining our students. Attendance is not required for my class, and that's a fact I reminded them of many times. People clearly are engaged in something else and thinking that if they sit in class they will magically collect knowledge. It's ridiculous, it's infantile, and I'm done with it. The lingering question is whether I want to incentivize the use of paper notes. For example, I could offer a grade or something like an achievement for using paper notes, or I could ask them to keep a design log with their notebooks. I need to think about the logistics of this still, but one thing's for sure: the laptops are going away.

A quick related thought: I had one guy this semester who, when I asked him and his chatty colleagues to close their laptops and join the group, did not, and instead sat in the back smugly with his laptop clearly open. The question, then, is: what should I do in such a case? I don't want to play power games; that's just more juvenile nonsense that doesn't belong in the classroom. I am thinking of making a policy that I will simply leave if the policies aren't followed, which then makes it a matter of social pressure. I'm not sure how that will play out, but I feel like I need a plan so that I react appropriately.

Iterate on the final project. Now that I have a better understanding of the Double Diamond, I want to bring that out in class by having students complete short technical iterations within the context of the bigger design project. This will give them a valuable opportunity to assemble and test an artifact and get feedback about it, from both me and potential end-users. It seems simple enough, but getting this to fit into the calendar may be tricky. It's possible that a small project may not be necessary if instead we allow iteration within a bigger project.

External partnerships. Many years ago when I taught HCI with a focus on mobile app development, one of the best parts of the class was setting students up with external consultants. These were not clients but rather alumni, friends, and generous strangers who agreed to give students feedback on their work. You know how it goes, teachers: you can say the same thing a hundred times, but sometimes students won't hear it until it comes from someone else. This past semester, we were on track to have an interesting community partner for all the projects, but this fell apart in a sea of bureaucracy and red tape. As a result, the student projects were a bit "fakey". This had the immediate result that most (if not all) of the students did not conduct authentic evaluation at the end of the project. Many asked friends and family to evaluate their work, which is absolutely the worst thing to do. Setting up real partnerships would help here, as there would be someone else with skin in the game besides the students—someone with different objectives, not just getting a grade.

More check-ins on the principles. As part of the small and large projects, student teams had to submit project reports, both a draft and a final. The project reports are where students had to explain how their projects manifested Norman's seven principles. What I saw was, by and large, rationalization rather than principle-informed design. That is, students explained decisions they had already made, situating these within the principles, but it's pretty clear to me that they did not consider the principles before or while making the decisions—only after. I designed a final exam question to help students tease these ideas apart, but students who did a poor job in their project reports also misinterpreted the question itself and provided similarly superficial or unjustifiable responses. I should be able to craft additional discussions, assignments, or activities that help students frame their works-in-progress within the principles, which I hope will lead to a better understanding of them.

What still puzzles me

Graders. Since I knew I would have so many assignments, and it was going to be a busy semester, the department hired an undergraduate grader for me. She was a good student with whom I have worked before and whom I trust. However, she could not attend classes, so she had a real outsider's view of what was going on. It remains a blind spot to me whether there were opportunities to give feedback to my students that I missed because she was handling the day-to-day assignments. I asked her to report anomalies to me, which she did for most of the semester and which led to some interventions, but this died off as the semester's pressure built.

Bad UIs and Lack of User-Centeredness. As I mentioned above, we focused on principles, but the fact is that some students developed some truly hideous interface designs. Some of these were bad because of design decisions that the students made, and these carried on into nonsensical UI choices; others were bad because the layout was just silly. A lot of students used JavaFX and SceneBuilder with the mistaken idea that because they have a tool to lay out elements, they must be doing it right—a myopic, developer-centered rather than user-centered perspective. The question for me, then, is whether there is a modicum of UI design knowledge that I can help students acquire that would actually help here. My intuition says "no", that if they don't have taste they cannot develop it in the middle of an already-packed semester. My intuition has been wrong before, though. The bigger question is how to get them to focus consistently and enthusiastically on the users. I am thinking of bringing in something like task modeling from Software for Use, which I had good luck with years ago, although that book is now comically dated, with examples drawn from Windows 95.

Empathy. I wrote earlier about a particular example of how one of my students failed to show empathy, but I think this is a bigger problem—as in, a really, really big problem. If you're a junior or senior in college, and you don't know how to build empathy, what in tarnation has been going on in your core curriculum and pre-college experience? What's the point of studying history, culture, or language if you cannot put yourself into someone else's perspective? If you get to an upper-level elective on HCI, and you do not know how to have empathy for others, is it actually possible to learn it? Is it my responsibility to teach it?

There's a related and troubling problem here related to student disabilities. The university rule is that students with disabilities should be reviewed by the Disability Services office, who will then develop accommodations for the student; faculty are given a form that indicates what accommodations are considered reasonable. In my experience, most of these are "Extra time on tests" or "Can take tests in a distraction-free environment." That's all well and good, but that office doesn't actually provide me the information I need about the disabilities that impact the work that I do. If I had a form that said, "Cannot empathize," well I would know I better not count such an assignment against the student—but as far as I know, that office has never produced such a form. It puts me into a bad situation where I have to guess at student disability, despite my having no training or expertise in this area. Yikes. There's a sense in which the lawyers are on my side: if an autistic student sued the university because they failed my course due to unsatisfactory completion of an empathy-related assignment, the university could say, "They didn't have an accommodation on file for their disability." This doesn't change the fact that such a thing seems impossible to have been filed in the first place. Maybe I'll check in with Disability Services over the Summer to chat about such cases.

Wrapping up

This post went much better than the previous one. It has helped me articulate some of my ideas that can feed forward into my planning process for Fall semester. Thanks for reading! As always, if you have questions or ideas, feel free to share them in the comments section below.

Wednesday, March 28, 2018

Students' preference for discussion over prototyping, despite instruction to the contrary

Monday afternoon was a perplexing one, forcing me to look back at my goals and direction for a variety of reasons. What kicked it off was my one o'clock meeting with my HCI class. After Spring Break, we started in on our final project, and given their surprising reaction to our pre-break mini-project, I decided we would use the final project to gain a better understanding of the double diamond approach rather than try to introduce a different model. Briefly, before break, we did a quick run through the double diamond; in evaluating their results, it was clear that the vast majority of students did not invest the time to understand the context, let alone to identify a real problem. Indeed, what seemed to happen is that they chose to do something they could do rather than trying to solve a real problem. That shook me pretty hard—hard enough that I realized I couldn't let that be their broken understanding of the process.

We spent a week on the discovery phase, and I pushed them out into the field to talk to real human beings. Based on this, they had to make a few empathy maps and personas. They then interpreted these into journey maps, almost all of which were identical—not because of academic dishonesty but because of collective myopia. From this, they identified the problems they would work on, and that brings us to Monday: the first day of the "develop" phase, in which we would review and practice making low-fidelity prototypes.

I started by asking the class to list the tools of low-fidelity prototyping. They quickly came up with paper, markers/pens/crayons, and PowerPoint, along with other lightweight drawing tools not specifically designed for prototyping but certainly amenable to it. Their next answer was people, which surprised me but I think is appropriate. I added whiteboards, and I pointed out that one of the students had previously deployed a system specifically for UI prototyping (uxpressia by name, but that's just one example among many).

I created a second column, which—since I had sort of backed myself into a taxonomic corner—we called the "Anti-Tools" of low-fidelity prototyping. What sorts of things should we avoid? One student quickly mentioned code, which I agreed is exactly right: avoid code until it's the best tool. Another mentioned templates, which at first I didn't understand, but as he explained it, he was really talking about locking yourself into particular approaches too early: a template is a reusable abstraction, but we don't know a priori that the given abstraction is appropriate. They paused here, and it was my chance to introduce two critical ideas; indeed, the primary reason for the exercise was for me to share these two points. I added brainstorming, which required me first to define the term—I am regularly frustrated by how students want to use this as a trendy synonym for "thinking." I briefly explained to them that brainstorming in a group will tend to push early convergence rather than divergence: that the best approach was to just start making. The second one I put up was related: analysis paralysis and discussion. A kissing cousin of brainstorming, this is a trap I have seen practically every team I have mentored fall into, thinking that sitting and talking about a problem will help us solve it. It won't. Primed by these observations, I returned to the positive column and suggested that timeboxing is one of the greatest tools of creative prototyping.

With that, they voted on a 15-minute timebox, I set the timer, and they got to work.

Or something that looked like work to them, anyway.

There was one group whose only discussion was about distributing index cards—I had told them a low-fi prototyping exercise was coming up, and so a few people brought supplies—and then they set to work, cutting, drawing, crumpling. Another group did a brief powwow before going in what appeared to be a similar direction, although that might be up for interpretation. The rest of the class, roughly 70%, engaged in discussion. One group took to the whiteboard to draw something like a flowchart, the rest sat in their clusters and discussed while they drew.

I observed all of this happening, of course, and as the timer kept ticking, I kept thinking, "Any moment now they will break and start actually prototyping, right?" At about ten minutes in, it was clear that this was not going to happen, so I wrote two questions on the front center board: What makes it a prototype? and What makes it a good prototype?

When the timer went off, I invited them to look at these questions. Honestly, I wasn't very hopeful in getting good answers, based on what I saw, but we went to the board anyway. In answer to the first, a student (from that first group) said, "It's testable." That's exactly right: a prototype has to be testable, otherwise it is something else. I pointed out a secondary part of the definition is that the prototype points to a possible future. I couldn't think of the word for this, but a student suggested, "portent." In retrospect, the word is "portentous," but still, that's a 10-cent word. We moved to the second question, and right away a student (from a different group) said, "It gives you good feedback." That's the right idea here too, in my opinion, although I tell you what I told them, that I prefer to frame it as, "It answers a design question."

This is interesting, isn't it? They seem to understand the theory. I asked them, then, how many of them had prototypes from the 15-minute timeboxed exercise? The students in that first group, they all raised their hands, and rightfully so. The only other hand that went up was from one student in a group of three. I asked them to dive into this: they had all worked together, on one artifact, in discussion, during the timebox, and only one of them characterized it as a prototype. I invited them to describe their disagreement, and one of them attempted to justify that what they made was a prototype because it had arrows and indications explaining to someone how it would work. It was clear to me, and I think to the rest of the class, that this was not a prototype at all, but some kind of sketch or schematic.

My next question to them was, "What did you notice that was different about how the groups worked?" They all seemed to recognize that the people who came up with prototypes used their fifteen minutes to create their prototypes, while everyone else engaged in discussion. I explained to them again how this phenomenon was something I had seen many times before, particularly on immersive learning teams, where I advise working on prototypes and instead, students talk to each other—at length, with no real output.

I ended class, then, with a challenge: first, that they actually follow the instructions I give; second, that perhaps they consider breaking their teams in half, with half doing timeboxed prototyping and half focusing on brainstorming and discussion, and compare the results at the end. Honestly, I would rather they do the first, but I'd be satisfied if they did the second.

As this post was bouncing around my head this morning, I realized that I have seen a similar phenomenon before, regularly, in my teaching. It is the phenomenon of CS222, where I tell the students that they need to start with CRC cards, then make a list of tasks, then use test-driven development to approach those tasks. From many years of teaching the course, my best estimate of what actually happens is that teams get together, talk a bit, and then start programming. This comes, in part, from student essays admitting to it. This inevitably leads to failure of the two-week project or the first iteration, and I provide vociferous feedback about what is wrong and how to improve. The "how to improve" is, essentially, to follow the steps. Yet, students don't. Even by the third iteration, I regularly have 20-40% of teams who are still not following the steps that I have laid out and that they know they will be evaluated on. Whether or not they ever read the requirements is moot here, since they all look at the feedback I provided and claim to be seeking to improve; yet, if we judge intention by results, the real intention seems to be to maintain the status quo rather than learn something new.

I am left with a burning question about how to push my role as a mentor. Should I be interrupting their 15-minute timebox to point out what I see, in the hopes of pushing them more quickly in the right direction, or do I need to let them make this little mistake so that they can learn from it? I am afraid of them treating me like an exam proctor: if only the proctor weren't here, we could just collude on the exam and get out of here. I have been thinking about how this applies in CS222 as well, and I have been thinking about having more formal check-in points; for example, if teams had to turn in their CRC cards two days into an iteration, it would show them that I mean business. However, it would also mean that they would do it because I was collecting it and not because they should in order to learn what is being discussed.

My next meeting with the HCI class is coming right up. I think I may begin class by asking them to share the processes they used to construct their prototypes, and perhaps I will push them a bit further into a root cause analysis, to consider why they didn't follow the instructions they were given.

Wednesday, February 14, 2018

Empathy, Listening, and Hearing

I have more stories I want to share here than I have made time to share them, but here's one that my idle mind keeps returning to. I want to capture it here to make sure I don't forget it.

We are reading Design of Everyday Things in my HCI class (CS345/545), and several meetings ago, the students read Norman's presentation of the UK Design Council's double diamond model for design. After some thinking and Googling, I developed a series of in-class exercises to help students understand the model. Particularly helpful was Heffernan's overview of activities associated with various stages of design. I decided that an exercise on empathy mapping might be just the right way to start.

Of course, to build empathy, you need a focus and a context. I decided to use, as a running example, students' experience dealing with the triage grading system that I use. This is a brilliant grading system that I learned from William Rapaport at University at Buffalo when I worked as a TA with him. It is coherent, philosophically sound, and unfamiliar to almost everybody. I get the occasional question about it and the occasional student whinging in course evaluations, but by and large, student experience with it is unrecorded: it happens in the shadows or in passing conversations. It's also something that all of my current HCI students are experiencing, and practically all of them have either experienced it before in one of my other classes or know someone who has.

I introduced the context in class and asked students to work in small groups to develop a map, writing them on the classroom whiteboards. We used the conventional map labels as shown in Heffernan's overview: what do students think & feel, hear, see, and say & do in their experience with triage grading.

It's helpful to have just a passing understanding of triage grading before we move on. Whereas conventional grading is based on percentage correct, triage grading is based on discretely measured quanta. For example, if you assume that 90% is an A, then as a test designer, you would design the exercise so that A-level work is attained by completing 90% of the prompts correctly. Notice that this is the tail wagging the dog: the fact that you are using conventional grading determines your test structure. With triage grading, any given item is scored out of three points: essentially correct (3 points), essentially incorrect (1 point), or somewhere in between (2 points). The letter grade is determined by weighted linear interpolation across scores, assuming that "A" means correct, "D" means incorrect, and "C" means middling.
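That interpolation can be sketched in a few lines of Python. Note that the grade-point anchor values here (A = 4.0, C = 2.0, D = 1.0) are my own choice for illustration; the scheme itself only fixes the letter-grade correspondence.

```python
def triage_grade(item_scores):
    """Map triage item scores (3 = essentially correct, 2 = in between,
    1 = essentially incorrect) to a grade-point value by linear
    interpolation between anchors: 3 -> A, 2 -> C, 1 -> D.
    The GPA anchor numbers are illustrative assumptions."""
    anchors = {1: 1.0, 2: 2.0, 3: 4.0}  # D, C, A
    mean = sum(item_scores) / len(item_scores)
    lower = min(int(mean), 2)        # which segment: D-C or C-A
    fraction = mean - lower          # position within that segment
    return anchors[lower] + fraction * (anchors[lower + 1] - anchors[lower])
```

On this sketch, a student who earns 1/3 on every item lands at 1.0, a D: a low passing grade, not the 33% failure that a percentage reading suggests.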

Almost every group wrote down that they see percentages when they first encounter triage grading. That is, they see "1/3 points" as 33%, which they interpret as "Low Failing Grade" even though in triage grading it is a low passing grade. (You might consider 33% in triage grading as having a qualitative interpretation like 65% in conventional grading, right on the border of poor and failing.) There was broad consensus about this in the class.

I pointed out that the maps appeared to come from initial experiences with triage grading, but one of my bright students—who has taken my classes before—noted that his was more of a mid-semester view. He had recorded in his map that he became able to see the feedback as qualitative, as coarse-grained values that drove him to change his behaviors. I do not remember exactly the words he wrote on his empathy map, and indeed I didn't understand what he meant by the words he chose, but our conversation came back to the concept "seeing quanta" rather than "seeing percentages."

That was pretty interesting in itself, but here's where it kicks up a notch. A student in the back chimed in, saying essentially, "But it's still a percent." The first student acknowledged that mathematically it was, but that's not what he saw, and the one in the back insisted more strongly, essentially, "But it's some number of points out of a total, and so it's a percent, and so you're still seeing it as a percent."

Wow! What a teachable moment for empathy! I pointed out, treading carefully, that this was an example of the second student showing no empathy for the first student. The second student saw the world in his way, insisted that it was the right way, and that everyone else must also see it that way. I introduced the idea that whether or not there is an objective reality, perception drives a person's lived reality, and perception is subjective. Two people can look at the same thing and "see" two completely different things. The expression on the second student's face told me that he understood what I was saying, but he was still working on the implication; of course, maybe I observed this wrong.

We are continuing to work with this example, and I am finding it a rich context for discussion. I hope to share a few more stories here on the blog, but for now, it's time to head off to class. Thanks for reading!

Tuesday, January 9, 2018

Why students want to learn HCI

On the first day of my HCI course yesterday, I decided to pull out a simple exercise that I learned from my friend Joyce Huff, a faculty member in the English department. Around a year ago, she told me about how, on the first day of class, she likes to ask her literature students what they hope to learn in the course, and how this helps her engage with them in a conversation about the course topics and goals. I did something similar, asking my students what they hoped to learn by studying Human-Computer Interaction.

I joked at the opening that I didn't want to hear, "I am in the course only because it fits my schedule," and I suggested that this would probably be a good reason to take a different course. The first student was a friendly chap who had taken my Game Programming course last semester, and he introduced himself saying that he was only there because it fit his schedule. I assume he was being honest with me, and I responded with something like, "Well, even if that's true, it's not really what I want to hear." Again, I was speaking half in jest, but in retrospect, maybe I shouldn't have; it might have actually been good to let students acknowledge that they had no goals besides three credits of elective. Whether it was a missed opportunity or an instance of forcing reflection, I suppose I cannot now know.

Many students mentioned that they want to learn to make GUIs and to make them well. A few admitted that their own UI design skills were not very good and that they hoped the course would improve these. Only one student that I can recall expressed explicit interest in programming GUIs; the others whose expected outcomes went in this direction were talking more about design. No one mentioned evaluation explicitly, although it seems any undergraduate who went through CS222 should know that some kind of acceptance testing would be part of this process. It may simply not have been worth mentioning to them, or they have not considered usability evaluation as something that can be extracted and studied on its own.

A few students talked about wanting to know more about design philosophy, and I suspect these students might be the happiest with my course plans. One described his goal as being to approach the "bridge" between the technical artifact and human psychology. He was wary of using that metaphor, but I encouraged him. I did ask a clarification about whether he meant a bridge as might be found in software architecture, but he confirmed that he was talking about the divide between the technical and the human.

The only time a specific technology came up was when a student mentioned interest in PLCs. One other student explicitly said he was interested in HCI design beyond simply screen-based interactions. I was a little surprised that this only came up once, given the popularity of VR and AR, but I was glad to hear it come up once.

A few students said that they were inspired by experiences with bad user interfaces, and that they wanted to be able to critique more effectively, justify their critiques, and create something better. Games came up once, and so did medical technology, as particular domains of interest.

Two students mentioned their capstone projects specifically. All of our undergraduates have to complete a two-semester capstone sequence, which means that these students are taking the HCI elective while completing their capstone projects. That may prove to be challenging for them, as teams are perennially cramming for these projects at the end of Spring. At least they see the opportunity to apply knowledge between classes; I will have to remind them to manage their time wisely.

One student said that she simply likes design, which I think is a great honest answer. If we had the benefit of time, I could have asked her how she defines "design" and used that as a springboard into a broader conversation, but this was really an end-of-class discussion.

That's everything I can pull out of my notebook and my memory. I thought it might be interesting to share here, and for me to have for reference later in the semester. I already wish the class were 75 instead of 50 minutes, since the short time really cramps our conversations, but I'm hoping to hit stride soon enough.

Monday, January 8, 2018

Winter Course Planning: Preparing to teach CS345/545 after an eight-year hiatus

Good morning, blog. This is the first day of the new semester here, and there's one more story I want to share before the semester gets rolling.

Last semester, after teaching assignments were made and approved and after students registered for their Spring courses, one of my colleagues realized he had a critical conflict that left him unable to teach one of his courses: CS345/545, our cross-listed course on Human-Computer Interaction. I designed this course around 2008 as a substantial revision of a course on GUI Programming, that course being a holdover from when graphical and event-driven programming were still pretty novel. However, the last time I was able to teach the course was Spring 2010, when I was also experimenting with managing a course entirely in public using Google Sites. I have become more involved in game programming, immersive learning projects, and managing the CS222 course since then, and this has left me without spare time to teach the HCI elective.

There have been a few significant changes since 2010. Last time I taught the course, I was just finishing my work on App Inventor for Android, now MIT App Inventor, and I had about 16 Android G1 phones. I themed the HCI course around mobile development with Android, and I lent the phones out to my students. I was basically the king of the faculty: touchscreen smartphones and mobile development were brand new ideas, and I authorized the students to take a deep dive into this new area. One group even got their app onto what is now called the Google Play Store.

Here's another interesting change since 2010. I had an undergraduate teaching assistant that semester—a graduating senior named Austin Toombs. Austin had done several research and creative projects with me as a student. He went off to get his Ph.D. in HCI and is now a professor at Purdue Polytechnic. When I agreed to swap courses with my colleague, he was the first person I contacted to ask about what he thought were some of the best resources and ideas to share with my students in the course. I'm proud to have had some role in the development of this young scholar, and it's great to be able to reach out to him for help as well!

CS345/545 continued to be taught during these past eight years, but always by faculty who had no real interest or expertise in the area. This always struck me as tragic, since I believe this is one of our most important courses. I got involved in Computer Science because of the intersection of technology and people, approaching this idea through games, visualization, and education. Educating students to understand this intersection strikes me as more crucial to the modern computing environment than any particular piece of technology, but I suppose we all have our biases.

The departmental syllabus for CS345/545 includes both technical and human-centered learning objectives. I have decided to focus primarily on the latter, in part because our CS222 course provides a good foundation for the former. Last time I taught HCI, we didn't have a prerequisite course that introduced concepts like the Single Responsibility Principle or layered software architectures; now that we do, I can draw upon what students learned before to talk about a few GUI-specific concepts. Honestly, I haven't planned complete details that far out yet, but I am thinking of discussing concepts like data binding and MVP vs. MVC. Unfortunately, my department's graduate curriculum committee seems to have no real understanding of the role CS222 has for undergraduates and does not have any real equivalent prerequisite for the grad students: whereas undergraduates need to have CS222 to take the HCI elective, grad students only need two semesters of programming and an algorithms course. There's a sense in which we are setting them up for failure, since they are roughly 33% less prepared than the undergraduates; I suppose we can just hope that a few years of life experience is enough to make up for it. Perhaps I'll try yet again to suggest prerequisite changes to them, but that rarely seems to move forward.

I decided to start the course by reading the revised and expanded edition of Don Norman's classic The Design of Everyday Things. I first read this book when I was prepping a section of CS345/545 years ago, and although it was influential, the examples were fifteen years old at the time. This 2013 revision is amazing: he basically rewrote the book with the same core ideas, but with updated examples and newer research and practical issues. The students will be reading this book together during the first several weeks of class, and I have set up a series of assignments and in-class exercises to get them thinking about design writ large as well as design of computing systems. This series of readings and assignments can be found on the course description that I have been working on.

Once the change in my teaching assignment was official, I reached out to my friend and colleague Ronald Morris, Professor of History, to see what kinds of interesting projects he had going on in the Spring that perhaps could dovetail into a CS345/545 project. As I expected, he's involved in a veritable buffet of projects. One of these has him mentoring a team of students who are captioning historic photos for Indiana State Forests. We're in the process of determining whether my students could use these data to create original interactive timeline systems to help users understand the chronological—and perhaps the geographical—relationships among the photographs. This project jumped out to me since it seemed like something that risk-averse students could approach in a rather conventional way, while creative or ambitious teams could take it in novel directions. I haven't mentioned this in the course description, but I did hammer together an outline of how I expect any such project would be graded: as with my game design course, I would be looking more at process than product, and particularly, research-informed justifications of design processes and artifacts.

I am glad to be working with this course again after such a long hiatus. It also gives me a break from teaching CS222, my first such break since my Spring 2012 fellowship at the Virginia Ball Center for Creative Inquiry. Another positive outcome of this 11th-hour change in teaching responsibility is that another tenure-track faculty member will be teaching CS222, and perhaps this will help more of the department to understand this slightly peculiar course whose requirement is not often capitalized upon in other courses. CS345/545 and my immersive learning game production studio course will be my only two courses as I work with a small student team to wrap up the enhancements to Collaboration Station, and so I'm looking forward to a challenging and rewarding semester.

Thanks for reading! If you have ideas for this semester's HCI class or memories from taking the class in the past, please feel free to share.

Thursday, November 4, 2010

Safe Fail

Regular readers will remember that in Spring 2010, I taught a Human-Computer Interaction class (CS345/545) in which students worked in teams to develop Android applications. The inspirational goal, stated at the start of the semester and mentioned many times during it, was to release novel applications on the Android Market. Out of seven teams, none actually released their applications on the Market during the semester, and in fact, many did not exhibit core functionality by the end of the 15 weeks of development. One team did add some spit-and-polish over the Summer and has since published theirs: it's called Elemental: Periodic Table, and it has half a star more than my own Connect the Dots. (I think there's a good reason for that, and it has to do with expectations management, but I'll write about it another day.)

There is a significant contrast between CS345/545 and the current CS315 course (game programming, a.k.a. 3:15 Studio). This has been kicking around the back of my head for a while, but I have not done any rigorous reflection on it. However, this is all just set-up for the story, so let me carry on.

I was in the lab the other day with some of 3:15 Studio, and things were going well. I turned to one of my students who was also in CS345/545, and I mentioned that I wished that the Android development had been as productive. This particular fellow is a good student, but it's fair to say that his project tanked last Spring. His response to me was not one of regret or criticism, but rather, he observed that it was better to fail at a big team-based semester project last Spring than to fail at the Morgan's Raid project or his senior capstone.

What phenomenal wisdom! In my own idealistic and individual analysis, I had been thinking about the failure as being a problem that needed to be solved. This student recognized that the learning still happened. Specifically, it was learning from failure, which might be the only kind of learning that matters.

Wednesday, July 28, 2010

Android class evals

I just read through the student evaluations from Spring's CS345/545 HCI class, a topic of many blog posts in Spring. They are quite positive, with the vast majority claiming to have enjoyed and learned significantly from a team-oriented, "real-world" project experience.

The request that shows up most often is for more explicit milestones during the semester. I had mostly left milestones up to the individual teams since each project had different characteristics, but in retrospect, I could have offered more assistance here. Perhaps next time I will use timeboxing techniques, forcing the students to complete vertical slices and deliver working code on a regular basis; without this, several groups meandered through feature development.

A few students requested that more time be invested at the beginning of the semester in explicitly teaching Android development techniques. That is a reasonable request, although I would like to be able to sit with these students to discuss what they really want. For instance, one student requested more examples, but there are myriad examples online and packaged with the Android SDK. Another student complained about the online resources feeling "sort of scattered". These students clearly missed the point that such is the nature of modern software development: examples and resources are scattered throughout the Web and the library, and gaining experience in dealing with this was an explicit goal of the course. In the students' defense, I did not directly assess their capacity and learning with respect to navigating this information. Still, I think that this shows that students often come away from coursework with the wrong impression: that there is a Single Truth, that the professor has it, and that every problem has a solution in the textbook somewhere.

Most of the departmental evaluation form deals with 5-point Likert scales, though under "Overall rating of the instructor", I received my first 10 out of 5. The student actually wrote in 6, 7, 8, 9, and 10 so that he or she could circle the ten. It made me grin, though I still recorded it as a five in my database.

I will share with you, loyal reader, the best of the bunch:

What aspects of the course did you especially like?
The freeness

How can the content and the instruction in this course be improved?
More formal structure.

Such are the paradoxes of higher education!