(Here's a link to day 1.)
I went to the speaker's breakfast this morning and was able to talk at length with Stephen Edwards of Virginia Tech about their use of test-driven development in CS1. Part of his discussion that resonated with me was that students with prior programming experience tend to be the most vocal opponents of TDD, often willing to defend their implementations, sans tests, as correct. It is noteworthy that Virginia Tech uses Web-CAT for automatic grading of programming assignments, and that this grading includes unit tests and test coverage. I experience this phenomenon in CS222, where students push back on TDD, assuming instead that the ad hoc methods that got them through CS1 and CS2 are somehow good and universally correct—and no one pushes harder than the ones who had prior experience and easily aced those prerequisites. I notice, however, that their scale is a weakness: at Virginia Tech, they cannot automatically detect whether tests were written first, but in CS222, with manual inspection, I have found that I can. Proper TDD leaves traces, and while I'm not sure I can build a taxonomy of them at this time, I do get a definite sense when reading implementations as to whether TDD was followed. In particular, because "Refactor" is a critical part of "Red-Green-Refactor," code that was tested after it was written tends to have a much weaker semantic connection between test and production code, and the tests themselves tend to be poorly constructed. When someone has written just enough code to make a test pass, it's generally obvious. There's probably an empirical research project in here somewhere involving machine learning and automated grading.
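To illustrate the kind of trace I mean, here is a minimal sketch of a single red-green cycle in JUnit. The Fraction example is hypothetical, my own invention for illustration, not anything from Edwards' materials:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Red: the test is written first; it does not even compile, let alone pass,
// until a Fraction class exists.
public class FractionTest {
    @Test
    public void addingTwoHalvesGivesOne() {
        Fraction sum = new Fraction(1, 2).plus(new Fraction(1, 2));
        assertEquals(new Fraction(1, 1), sum);
    }
}

// Green: just enough production code to make the test pass.
class Fraction {
    private final int numerator, denominator;

    Fraction(int numerator, int denominator) {
        this.numerator = numerator;
        this.denominator = denominator;
    }

    Fraction plus(Fraction other) {
        // Deliberately minimal: no reduction to lowest terms, because no
        // failing test has demanded it yet.
        return new Fraction(
                numerator * other.denominator + other.numerator * denominator,
                denominator * other.denominator);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Fraction)) return false;
        Fraction f = (Fraction) o;
        return numerator * f.denominator == f.numerator * denominator;
    }

    @Override
    public int hashCode() {
        return 0; // a placeholder until a test forces something better
    }
}
```

Test-first code tends to read like this: the test names a behavior, and the production code contains nothing the tests did not demand.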
This morning's plenary talk by Michael Kölling was excellent. In part, I felt vindicated by his embracing his status as a tinkerer, not a researcher. This was important to me, since there's a sense in which we were intellectual competitors in the early 2000s, not that he would have known it. I developed a visual operational semantics for Java as part of my doctoral work, and I had several pages in my dissertation comparing my notation to his for Blue and BlueJ—of course, with the claim that mine was a superior notation. However, my contour-based approach never had enough traction to make a tool like BlueJ, let alone a textbook, workshops, staff, federal funding, etc. Still, it was good to hear him happily say that he just made something up and it caught on, which I think situates my work well in a historical context as being more driven by issues of soundness and completeness—programming language issues, not pedagogic ones.
Kölling made a fascinating analogy to Haeckel's Recapitulation Theory, summarized as "ontogeny recapitulates phylogeny." Of course, that's meaningless to mere mortals, but he explained it as the idea that human embryos grow through the stages of their evolutionary history, which is why they look sequentially like fish, frog, or chicken embryos. The theory has been disproven, but he described how well-meaning professors treat Computer Science learning—and programming in particular—as if Recapitulation Theory were true: I learned programming by learning pointer arithmetic, so you must have to learn programming through pointer arithmetic. His bigger point, then, was that in the layers of abstraction in computing, we all have to draw a line somewhere and treat the stuff underneath as a black box. This resonated with me, and I appreciated his drawing upon falsified science for the analogy.
My presentation was the first in my session, and I feel it went well. I was able to embellish the presentation with some personal stories that don't show up in the paper, but work really well to "sell" the ideas in the research. I hope that some people are inspired to take a closer look at the paper.
The other two presenters in my session had interesting work, and it was fun to talk with them a little about it. Henry Etlinger drew explicitly upon Contributing Student Pedagogy (as defined by Hamer et al.), which had slipped off my radar but which is probably a useful frame for some of my research. Etlinger also described how he organized students into "coalitions": not quite teams, not quite groups. When asked to clarify, he described his taxonomy: to him, a team has a shared mission and a leader, while a group is just an unorganized collection. He used the term "coalition" to convey to students that they had to work together and negotiate with each other, but that they didn't necessarily have a holistic team identity. I thought this choice of words and its explanation were interesting, though I'm not sure what I'll do with it.
I was able to meet up with Bonnie MacKellar at lunchtime to talk about my CS222 course and compare it to her course, which is designed to meet similar constraints. This inspires me to write something a bit more academic about the design, execution, and evaluation of CS222; watch for this in a future CCSC or SIGCSE submission.
I went to the Rediscovering Passion, Beauty, Joy, & Awe panel after lunch, not realizing it was one in a series. It seems it started with Grady Booch's SIGCSE keynote several years ago, which I still remember as one of the best keynotes I have seen. This time around, the theme was innovative and engaging approaches to introductory computer science. The presentations were good, although I somewhat regret my question to Mark Guzdial about paradigms for media computing. I am very interested in how learners are affected by procedural vs. object-oriented introductions to programming, although my one experiment in this area yielded nothing publishable; unfortunately, I think he mistook my question about paradigm for an attempt to start a language war. Media Computation is available in Java and Python, and what I really wanted to know was whether the procedural implementation (Python) offered different learning outcomes from the OO implementation (Java). From his response, I think the answer is no, but I feel bad that I didn't articulate the question clearly; maybe I should have just emailed it to him.
Regardless, I think it is noteworthy that this panel was packed, with hundreds of people in the room, while meanwhile, four paper sessions combined probably drew fewer attendees, or close to it. I think this is symptomatic of what I wrote about yesterday. I would rather have the best and the brightest attending paper sessions and asking hard questions of the presenters. In this way, they can model the perceptions and insights that allowed them to rise to the top of the field. Also, if they get sick of sitting through low-quality research, they have the clout to change the direction of the conference.
After the panel, I spent some time in the exhibit hall, talking to some friends and a few vendors. I spent quite a bit of time at the Broadening Participation Alliance, trying to figure out which of the partnering organizations might be interested in a pet idea I've been kicking around. One of the gentlemen I met was Manuel Pérez-Quiñones, who is involved with the Coalition to Diversify Computing. Also, it turns out he is a Ball State alumnus! We're going to try to get him to come give a talk sometime next year.
I attended two more paper presentations after this, and the best story here is that the last presenters were hoping to run slides from their iPad. Even with the proper dongle, the projector would not recognize the device. They tried pulling up a backup presentation from Google Drive, but it was in Keynote, and the presentation laptop was a Windows 7 PC. They were a bit flustered, understandably, and proceeded to give their presentation without any visual aids. It was excellent. I shook their hands afterwards and encouraged them to be confident and always deliver that talk without slides. After seeing bullet points for the last two days, it was refreshing to have a pair of presenters just talk to the audience.
It turns out that their presentation drew a little bit upon Vygotsky's Zone of Proximal Development. The reference in their paper title was my main reason for attending the session, but I was a little disappointed: I had hoped for a more formal theoretical framing of the work in terms of internalization of the instructors' guidance, but from the discussion, it seemed that they simply used ZPD to rationalize scaffolding. I have nothing against good scaffolding, but as I've been reading more about activity theory, I find myself wanting to tie this line of work to CS education in a more formal way.
I returned to my hotel room and brought up my slides for tomorrow's talk, expecting to just do a little mental rehearsal. I was dismayed to see that I had never actually finished them. Whoops. I think I did a good first pass, and then devoted all my attention to the first talk—the one I gave this morning. Revising these will keep me off the street, at least. I visited a few friends who were setting up a workshop for MIT App Inventor, got some seafood chowder and a beer, and now I'm back in the hotel room. Next up: slide revision.
I'm not convinced the falseness of Haeckel's theory is enough to keep me from teaching pointers. To do that, we'd really need a theory of learning that told us where the line can be drawn between the important and the unimportant foundational elements.
A major difference between computing and Haeckel's view of development is that our primitive forms stick around. You cannot escape the hardware. Programming languages retain a notion of identity that is keyed on address; many areas of technology must address performance concerns, which usually involve low-level memory management; debuggers toss around addresses flagrantly; and C and C++ are still in popular use outside of academic settings.
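To make that identity point concrete, here is a small Java sketch of my own: even in a language that hides pointers, object identity remains distinct from value equality, and it shows through the standard library.

```java
// Even Java, which hides pointers, keys object identity on the underlying
// reference: == compares references (conceptually, addresses), while
// equals() compares contents.
public class IdentityDemo {
    public static void main(String[] args) {
        String a = new String("pointer");
        String b = new String("pointer");

        System.out.println(a == b);      // false: two distinct objects
        System.out.println(a.equals(b)); // true: identical character data

        // identityHashCode reflects the object itself, not its contents,
        // so these two values will almost certainly differ.
        System.out.println(System.identityHashCode(a));
        System.out.println(System.identityHashCode(b));
    }
}
```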
Joel Spolsky puts forth the hypothesis that abstractions save us time working, sure, but not necessarily time learning -- because all abstractions leak:
http://www.joelonsoftware.com/articles/LeakyAbstractions.html
I think his main point still stands, though: a teacher will always draw a line, beyond which is considered a black box, and so this should be done intentionally. It sounds like you have made peace with drawing the line somewhere below explicit pointers.
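As a concrete instance of the leak (my example, not Spolsky's): Java's List interface abstracts away the storage representation, but the cost of positional access gives it away.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// The List abstraction leaks: get(i) looks identical at the call site, but it
// is O(1) on an ArrayList and O(n) on a LinkedList, making this loop linear
// for one implementation and quadratic for the other.
public class LeakDemo {
    static long sum(List<Integer> list) {
        long total = 0;
        for (int i = 0; i < list.size(); i++) {
            total += list.get(i); // cost depends on the hidden representation
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> array = new ArrayList<>();
        List<Integer> linked = new LinkedList<>();
        for (int i = 0; i < 30_000; i++) {
            array.add(i);
            linked.add(i);
        }

        long start = System.nanoTime();
        sum(array);
        System.out.println("ArrayList:  " + (System.nanoTime() - start) / 1_000_000 + " ms");

        start = System.nanoTime();
        sum(linked);
        System.out.println("LinkedList: " + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}
```

A student who has only ever seen the interface has no way to predict the difference; sooner or later, the layer underneath demands to be learned.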
Personally, I like to think about essential vs. accidental aspects of computing. Loops and variables are essential to imperative programming, for example, but pointers are not; there are plenty of fine programming environments that do not have them. In assembly programming or computer engineering, pointers become essential.
I think it's important to remember that Kölling was talking explicitly about introductory CS. I don't think anyone would say that pointers should be abolished from a curriculum just because you don't have to know them to get a good job, since, as you say, they reveal that layers of abstraction exist.
Your point about introductory computing is well taken. Someday I'm going to write a book called Object-Oriented Programming: An Everything-First Approach. It'll only have one chapter, covering exceptions, loops, conditionals, test-driven development, and, well, everything.
The essential vs. accidental distinction is an interesting dichotomy. I'm not sure I could label everything correctly. For example, I teach a week of assembly in our programming languages course -- not because I learned it in school; I didn't. Rather, I feel that despite its being very low-level and reflective of the underlying hardware, it contains a number of concepts that strike me as essential. The notion of stack memory, for example, is an excellent design choice even for a paper computer: a stack manages the interaction of code in a safe and straightforward way. The caching of values (in registers or otherwise) seems like it might be accidental, but fundamental physical limits make caching an essential practice, one that we humans do in other areas of life all the time. Also, assigning types to our data is not essential; assembly proves that. However, using types is a darn good idea, yet I wouldn't call it accidental or essential.
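To show why the stack strikes me as essential, here is a toy Java sketch of my own that makes the activation records explicit; the last-in, first-out discipline is exactly what lets nested calls and returns interact safely:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A toy model of stack memory: each call pushes a simulated activation record
// and each return pops it, so nested calls unwind in reverse order of entry.
public class CallStackDemo {
    private static final Deque<String> stack = new ArrayDeque<>();

    static int factorial(int n) {
        stack.push("factorial(" + n + ")");  // the "call" pushes a frame
        System.out.println("enter " + stack.peek() + " at depth " + stack.size());
        int result = (n <= 1) ? 1 : n * factorial(n - 1);
        System.out.println("exit  " + stack.pop());  // the "return" pops it
        return result;
    }

    public static void main(String[] args) {
        System.out.println("factorial(4) = " + factorial(4));
    }
}
```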
I suppose I should just make sure I'm not teaching more accidental things than essential ones.