Tuesday, December 23, 2014

CS222 Fall 2014: What we learned, and where we're going

The final exam for my CS222 class once again featured the construction of a list of what the students learned (or more properly, what they think they learned). They compiled a list of 111 items, and each student was asked to pick their top seven. Once all the votes were tallied, there was the most consensus around these four items:

Item                                   Votes
Test-Driven Development                16
JUnit testing                          12
Make realistic goals                   12
You can learn a lot from failures      10

Transcribing the list is like a semester-in-review. There are some very powerful ideas that the students brought up, many of which only had one or two votes. Examples include "Nobody owns code in a group project," "Dirty code is easy to write and hard to maintain," "Be wary of second-order ignorance," and "Donuts solve most transgressions." I could tell stories about each one of these—and I know that if I don't write down the stories, the details will be lost to the sands of time. However, I also know that this is going to be a long post, so I will have to leave out some of these details. It's worth noting that some of the items in the list also embody lingering confusion. I just write down what the students say during this exercise, only asking for clarifications. Still, my teacher-sense was tingling when students offered items like "Instantiating variables," which shows a misunderstanding of terminology, semantics, or both.
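
To make that last point concrete, here is a minimal Java sketch of the distinction, written for this post rather than taken from the exam: a class is instantiated to create an object, while a variable is declared, initialized, or assigned.

    public class TerminologyExample {
        public static void main(String[] args) {
            // "new StringBuilder(...)" instantiates the StringBuilder class, creating an object.
            StringBuilder greeting = new StringBuilder("hello");
            // The variable "count" is declared and initialized here; nothing is instantiated.
            int count = greeting.length();
            System.out.println(count);
        }
    }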

The students were asked to write about what experiences led them to learn one of these items. I believe everybody chose to write about learning from failure, which turned out to be a good theme for the semester, as I discuss more below. One of the students, in his closing essay, pointed out that "make realistic goals" and "learn from failure" are the kind of thing you'd expect to hear from a weekend leadership workshop at a hotel, which I thought was an astute observation and brought a smile to my face. He himself acknowledged that these items are easy to say and hard to follow, and I found it uplifting to read about how so many students had transformative experiences around personal and team failures during the semester. My hope is that they integrate these lessons into better practices as they move forward in college and then in their lives as alumni.

Taking a step back from the end of the semester, let me set the context for this particular class. About two years ago, the Foundations Curriculum Committee approved a change to our introductory Computer Science class. We decided to adopt the combination of Media Computing, Pair Programming, and Peer Instruction that proved so successful at San Diego. The CS222 course that I have been working on for a few years comes two semesters after this course, and so this Fall I had my first cohort of students from this change. The change of context went along with a change of language, so whereas previously my students had two introductory semesters in Java, this group had a semester of Python and a semester of Java.

I was a bit surprised that these students did not appear to be any better or any worse prepared for CS222 with respect to maturity of programming. As always, they could generally hit the keys until programs kind of worked. Some still struggled with basic control structures, there was almost uniform misuse of technical terminology, and practically nobody showed an understanding of either object-oriented programming or its syntactic elements. I suppose this means that our first round of data points to the change being a success, since I did not notice any real change in preparation, yet more students are sticking with the major. (Then again, it could just be the economy.)

I think I had slightly overestimated their understanding of Java in the early part of the semester. My intention was to give some warm-up exercises, but I think these were neither formal enough nor scaffolded enough. There were several days where I posed a challenge and said, "Try this for next time!" but—with shades of when I tried making the final project not worth course credit—because it wasn't collected and graded, I do not think many people really tried. For the Spring, I have integrated three formal assignments, one per week before the two-week project. Because these assignments are intended to prepare students for the coming weeks of activity, I have decided to adopt a mastery learning approach here: students have to do the assignments until they are correct. (As I write this, I realize that there may be a hole here: right now, I have it set up so that the assignments must be correct to get higher than a 'D' in the course, but this means students might put them off. I think I will have to revise that before Spring, to actually stop them from moving on in the course somehow until the assignments are done, or put a strict time limit on how long they have to submit a reasonable revision.)

The two-week project in the Fall was a good experience, although I think the students didn't realize it until a few weeks afterward. I gave them a good technical challenge, involving processing data from NPR's RSS feeds, and Clean Code with TDD was required. Of course, many students did not start the project when they should have, but more importantly, I don't think that anybody actually followed the requirements. Several students had projects that ostensibly worked, and they were proud of these, but they were horribly written. I was honest in the grading, which I think many of the students had never experienced before either. Many of them panicked, but then I pointed out to them that the project had no contribution to their final grade at all. This allowed us to talk honestly about the difference between requirement and suggestion, and it forced them to rethink their own processes. In their end-of-semester essays, many students came back to this as one of the most important experiences of the course—a real eye-opener. I am fairly certain this contributed to "learn from failure" being one of the top items of consensus in the final exam.
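
For anyone curious what "Clean Code with TDD" means in practice, here is a minimal sketch of the test-first style the project asked for, using JUnit 4; the HeadlineExtractor class and its extractTitle method are hypothetical stand-ins of my own, not the actual assignment code.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class HeadlineExtractorTest {

        // Hypothetical parser: given one RSS <item> fragment, return the text of its <title>.
        static class HeadlineExtractor {
            String extractTitle(String itemXml) {
                int start = itemXml.indexOf("<title>") + "<title>".length();
                int end = itemXml.indexOf("</title>");
                return itemXml.substring(start, end);
            }
        }

        @Test
        public void extractsTitleFromSingleItem() {
            // In TDD, a test like this is written first, fails, and then drives the implementation.
            HeadlineExtractor extractor = new HeadlineExtractor();
            String item = "<item><title>Local News Story</title></item>";
            assertEquals("Local News Story", extractor.extractTitle(item));
        }
    }

The point is not this particular parser, of course, but the rhythm of writing a failing test, making it pass, and then cleaning up the code.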

I realized too late in the semester that I had a problem with the achievement-based grading system. I had designed several interesting quests, which a student had to complete in order to unlock A-level grades. One of them, "Clean Coder," was designed to be the easiest one: it required the completion of several achievements related to Clean Code, which was required for the project anyway. However, it looked like it was harder, because it had more steps than the others. The other quests listed fewer steps because I knew that students would be doing the Clean Code activities as well. Sadly, the students didn't think this through, and I did not convey it clearly enough, with the result that nobody pursued the Clean Code achievements. Not coincidentally, many teams struggled with fundamental Clean Code ideas in their projects.

I also encountered something new this semester, which could actually be novel although I suspect it was previously under my radar. As in previous semesters, I allowed collaborative achievement submissions, since much of the work is intended to be done by the team collectively. However, it came to my attention that a few teams were assigning an "achievements person" who became responsible for doing the achievement work while the rest did other things. This is quite a reasonable division of labor, but it's not at all what I intended.

Because of the quest difficulties and the unintended division of labor, I made several changes to the achievement-based assessment system for the Spring. All achievement claims will now be made by individuals, which helps ensure that each student is earning their keep. I also reduced the number required to unlock different grade levels. However, the total workload should be about the same, as I am bringing in end-of-iteration reflection essays. Several semesters ago, I required both achievements and reflection essays, but I found this to be too much work. I find myself wanting students to tie their experiences more to our essential questions, and so I'm pulling the end-of-iteration essay idea from my studio courses into CS222. I think it should be a good fit here, although I am dreading the return of the inevitable "This feels like a writing course!" evaluations from students who don't recognize the epistemic value of writing.

I have also completely removed the quest construct. Many of the items that were quests are now simply achievements. The quests are fun and provide a nice narrative on top of the achievements ("I am becoming an engineer!" "I care about user-centered design!"), but they also bound the students too early to a specific path: a team that decided on one quest could not feasibly change paths to take another, which is unfortunate since the whole thing was really designed to be inspirational, not constricting. In the Spring, then, there will be no barrier between B-level and A-level grades aside from the number of achievements. Realistically, it's the project grade that most influences final grades anyway.

Before winter break is over, I plan to make a checklist for the final project iterations. I don't know if students actually will read it or use it, but maybe it will help those teams who are working diligently but suffering from second-order ignorance. Common items that teams forget before the submission include removing compiler warnings, tagging the repository, and checking out their project onto a clean machine to ensure that all of their dependencies are properly configured or documented.

I incorporated self- and peer-evaluations into each iteration in the Fall, and these provided a throttle on how many points an individual could earn from a collective team project. The design, obviously, is to catch freeloaders. I used a similar system in my game programming course, where I also asked students to reflect on it explicitly, and that group was pretty evenly split between those who liked it, those who didn't, and those who were neutral. Although I did not ask students to reflect on these evaluations explicitly in CS222, I was surprised that they just didn't come up in most students' writings, and where they did, the comments were very positive. On some teams, I think the evaluations even helped students to air some difficulties early and get over them. I still need to find a better way to help students recognize that, given a rubric, all I want is an honest response. I did hear students talking about giving each other "good" evaluations, and I tried to convince them that it wasn't about "good" and "bad", but about honest feedback. Perhaps it's an intractable problem because these evaluations contribute to a grade, and so inevitably students pragmatically see "good" and "bad" as having priority over team cohesion.

The course evaluations for Fall were nice to read, as the students praised the areas of the course that I think went well, and they provided constructive criticism in places where things were rocky. At least two students described the achievement-based system as feeling like it was "still in beta," but both were quite forgiving of this as well. I think drawing on video game metaphors here helped me, as students recognize that "beta" incorporates their feedback for the betterment of future players. Despite generally high praise for my course, I cannot get out of my head the fact that more than one student wrote something like, "Don't believe the people who say you are a monster." Two used the word "monster" specifically. These were all couched in grateful and kind ways, from students encouraging me to continue to be honest in my critical feedback. I suppose I must have a mixed reputation. I know I shouldn't let it get to me, but I cannot help but wish to be a fly on the wall.

This was a really fun group of students, and I think we did a lot of good work together. I hope they carry these big ideas of software development and craftsmanship with them as they move on. In my last presentation to them, I reminded them that I spent the last fifteen weeks holding them accountable to high professional standards, and that in the coming semesters, they will be responsible for holding themselves and each other accountable. I hope they do.

Friday, December 19, 2014

Painting The aMAZEing Labyrinth

I try to play a lot of board games with my boys, although their ages often keep us from playing the more complex games in my collection. One of our games that sees the most play is The aMAZEing Labyrinth, a classic and relatively simple tile-shifting game.

This one was recommended to me by a student when my oldest son was quite young. This alumnus had fond memories of playing the game when he was a kid, and he still played it on occasion with his parents. It's easy to teach, although the spatial reasoning can be tricky even for adult players, so kids often need a bit of help. Each player normally has a deck of six goal cards, and you seek one at a time. With very young players, we let them see all six at once and try to figure out which one is closest. This means they almost always win, but this does not bother me as long as we are consistent with the rules. Letting each player have the same number of turns increases the fairness as well as the pressure for adults to make clever moves.
The game comes with four wizard characters:
Given how I've enjoyed painting miniatures for about the last year, and that this game sees much more table time than almost anything else, I decided this would make a fun painting project. As I've written about before, I have been experimenting with different primers, and lately—inspired by The Painting Clinic—I have been seeing what I can do with black primer and layering. I decided that these bold-colored, cartoony miniatures would be a good case study.
Without further ado, here are the final results, front and back.
The sculpts are mostly robe, and there was not a lot of detail there to bring out. On the blue and green models, I played with using short strokes and dots to imply a cloth-like texture, and I think it's effective in comparison to the smooth shading of yellow and red. I picked up some new inks specifically for glazing these figures, and I think it helped bring together the colors on blue, yellow, and red. This was done with a simple ink wash, just thinned with a little water. I also did some blacklining with black or dark brown ink, which was mixed with Future polish to help it get into the cracks. In the picture, you can see some examples of this under the belt of the yellow and red figures and separating the robe and cloak on the blue and green figures.
I decided to do some object-source lighting (OSL) with the blue wizard's staff and the green wizard's orb. This was done with very thin layers of paint, occasionally using my glazing medium to give the ultra-thin mixture a bit more body. Blue's was fairly straightforward, the only real trick being working around the conical shape and the brim of the hat. I'm still not so sure about green's. Originally, I had the light going down to her elbows, but once that was done, it wasn't clear there was a direct line from the orb to there, so I re-painted the sleeves. Before OSL, her face was highlighted as if the light came from above, and so once I put on pale green from underneath, it made her look weird... but then again, that's what happens when you put a colored light source under your face. There's a reason we put flashlights under our chins when telling ghost stories: our brains are hardwired to recognize faces using lighting from above, and going the other way is just strange. In green's case, it's not a very bright light from below, just enough to give her a green pallor, making her look a bit unhealthy.
I'll point out one other bit that I'm rather proud of: the left sleeve of the blue wizard. The model actually has no detail there at all, but I think that my use of color gives the illusion that the sleeve is open. In particular, putting a little highlight at the bottom of the faux cuff makes it look like light is hitting the inside of the sleeve.
I think the overall effect of the figures is nice, and I'm happy with the layering and detail. However, once put on the board, they do look a little drab.
Note that the lighting here is suboptimal: it is centered above the board, so we're on the shadow side of yellow and red. Still, you can tell that they're a bit darker than the tiles. I think this again speaks to my unintentional avoidance of using white as a highlight color. If I were to do it again or consider touching them up, I would definitely try brightening the highlights to see if this gives the whole figure a brighter look. Regardless, I think they look good on the board, but they don't match the tone of the tiles as well as I would have liked.
The next figure I'm painting is Argalad from Middle Earth Quest, who is a mostly-green elf. He's also primed in black and being painted in layers, but I'm trying to be more intentional about bringing up the highlights. I know I read somewhere that one of the challenges of black primer is that you get extremely high contrast as long as you can still see any of the primer, and I think that's part of what's happening here. Argalad's blonde hair was one of the last features I painted, and since his hair was jet black until then, everything else looked weirdly bright. Once I painted the hair, it came together. I wasn't planning to post a picture, but since I'm talking about it, I'll take a shot now and put it here. It's nearly complete, but still a work-in-progress: I need to do the sword, touch up some shadows and maybe some highlights, and finish up the base.
Thanks for reading!

Friday, December 12, 2014

Digging up my first CS education research work

Once again, it is Computer Science Education Week. In previous years, I feel like my inbox and news feed were inundated with reminders and requests to take action. My participation has taken the form of blog posts the last few years. This year, no one requested it—and maybe they didn't last year either, since it seems I made no CSEdWeek post in 2013—but it turns out that I have a good and relevant story to tell anyway.

My first semester as an Assistant Professor was Fall 2005 at Ball State University. I had come to Ball State on the strength of my doctoral work in interactive program visualization, which was a line of inquiry I intended to continue pursuing. One of the reasons I had become interested in this work was that it had implications for Computer Science education, helping people to understand the semantics of program execution; however, my own doctoral work was more theoretical and engineering-oriented, not an assessment of its application.

In that first semester, I was assigned to teach CS120, the department's introduction to Computer Science, which at the time was taught in C++. The textbook contained classic and uninteresting problems, and as I had done in graduate school, I spent some time crafting my own programming projects. The students' final project was to make a text-based adventure game set in Mounds State Park. This was inspired by John Estell's Nifty Assignment. I remember the project having been a great success in terms of students' engagement and learning outcomes.

I was reminded of this assignment yesterday, when I was gathering scrap paper to bring to my CS222 final exam. My exam followed a similar format to what I described two years ago, although I have stopped asking for mind maps. My scrap paper supply has been getting smaller, and so I grabbed the whole stack to bring downstairs. At the bottom of the stack were about thirty of these forms:

It's a survey I designed in Fall 2005 to gather some feedback from students about the final project. This represents the first effort I can remember of trying to conduct actual Computer Science Education research: to have real data and coherent theories about the relationship between what I was doing and what the students were learning. Looking at the form now, having been involved in Computer Science Education research for almost ten years, I was a bit surprised to see that I think it's a decent instrument. At the time, I knew nothing of qualitative research methods or even quantitative research methods, and I had no formal understanding of constructivism or constructionism. Not too bad for a novice.

Today, in 2014, I feel much more comfortable with Computer Science Education Research as part of my identity as a scholar. I have written several articles in this area, and while my more technical work still has more citations, the metrics suggest that my work has made a real difference in the community—a small difference, but a difference nonetheless. This gives me great satisfaction, to know that I am helping students here in Muncie, and that through a network of like-minded scholars, there can be a ripple effect to other places and institutions, similar to how John Estell's Nifty Assignment prompted me to make an interesting new assignment for my students.

It was fun to post this same picture on Facebook and hear back from students I had that semester. It was a memorable assignment for them as well!