Recent events have given me the opportunity to spend some time with a colleague at the university from another department. He told me how, many years ago, he met a developer who told him about Linux, and it was the first time he realized that Computer Science involved philosophy. Really, how else would someone from outside the discipline know?
The discussions involved making a distinction between two ways of looking at Computer Science. I identified myself as being in the "human-centric" camp, and a respected colleague identified himself as being in a "math-centric" camp. I believe that the distinction I made was between human-centric and technical- or formalist-centric. Interestingly, when he later recounted the distinction, he described me as "anti-math" and himself as "math". That, in itself, demonstrates an interesting sort of Manichean reasoning in his philosophy: either you're for math or you're against it. I responded that I was not anti-math but human-centric, and that one can do math in a human-centric way. This was a sort of distraction from the main purpose of our meeting, and predictably, the conversation did not go anywhere except to clearly establish that there are different ways of looking at the discipline.
As I have thought more about this, I think there's a clearer way to make a distinction between these philosophies. One of them sees mathematical formalisms as ends in themselves: the math is beautiful, and you should study it and admire its beauty. Another perspective is to look at them as a means to an end: the math is a tool that allows us to do things we could not do without it. These have very different implications for curriculum design, which in large part is the most important thing that computer science departments have to do collectively.
It is important to note that most of the formalists I know will also claim that the mathematical formalisms empower you: they are not just beautiful, but also useful. Indeed, the inciting incident of the conversation was a claim that studying algorithms helps you write more efficient, more effective, and more correct programs. This claim is ubiquitous in discussions of academic computer science, and yet I have never seen any evidence for it. Indeed, the educational theory that I know suggests that it is not true. Transfer is hard and is never free, so to suggest that studying one thing makes you magically better at a different thing is specious at best. Studying algorithms makes you better at algorithms, but if your goal is to get someone to be an efficient, effective, and correct programmer, you're much better off teaching them Test-Driven Development, Pair Programming, intellectual humility, and a love of lifelong learning. When I took this issue to Facebook, one of my alumni put it best: the most likely case is that there is a correlation between being a good programmer and being good at algorithms, because both are difficult and can serve as a measure of intelligence and conscientiousness. That is, correlation is not causation.
This is crucial when deciding, for example, where to place discrete mathematics, algorithms, and computational theory in a curriculum. I know many math-trained computer science faculty who claim that you cannot understand data structures unless you study discrete math first, but that claim is easily refuted by counterexample: we have plenty of students who drop and retake discrete math and yet have no problems with CS2. To steelman the opposition for a moment, it could be that the problem is that our courses do not draw enough upon their prerequisites, and indeed, I think that's a good point. We—collectively, as a discipline—have not distinguished clearly between what exactly we want students to know coming out of a three- or four-credit-hour discrete math course and how exactly it manifests in the follow-up courses. We are locked into a "course" model of higher education, where ideas come in neat, delineated packages, taught by people who focus on those areas or, in some cases, focus on no areas. A lot of faculty get uncomfortable at the very mention of shaking this up.
A few years ago, when our Foundations Curriculum Committee tried to look into the question of what elements of discrete math were needed for courses that listed it as a prerequisite, the answers we got were not actionable at all, being either "They need all of it" or "They don't need any of it." Indeed, I have led that committee for something like twelve years, and we have constantly struggled with how to improve the discrete math and algorithms courses, in large part because the people who teach those courses see nothing wrong with them.
The old joke goes like this: ask five computer scientists to explain computer science and you get seven different answers. I would like to know if other academic departments have this kind of culture war. Are mathematics departments hobbled by differences about whether math is a means or an end? How about physics? Biology? What about non-science fields such as Architecture, which I think some of us see as closer kin than mathematics?
I make the case when I teach CS222 that Computer Science is applied philosophy. While anybody can use computers, we are in the amazing position to be able to imagine worlds and then make them real. This is what drew me to the field. To be clear, I loved my Computational Theory course in grad school, but at the same time, I'm a melancholic introvert who loves ideas. I also loved reading The Design of Everyday Things, and I think it helped me much more in my day-to-day scholarly work than Gödel, Escher, Bach.
I think it would be interesting to try to measure the distribution of different philosophies among Computer Science faculty. It would be a challenge to develop a reliable and valid instrument for measuring such a thing. I know people at big schools and at small schools, and I'm not sure that the distribution of philosophies would be that different. I suspect you would find a greater concentration of human-centric computing at schools with a longer tradition of HCI research, and at places like Georgia Tech, where the media computing approach was born. That is, I suspect a human-centric approach to computing can be cultivated.
Naturally, we need productive disagreements, but I think my field is hamstrung by a preponderance of activity on the formalist side. I have been on many search committees now, so I have some sense of what freshly-minted PhDs look like—or at least the ones who apply here. Most tend to have gentle words about student-centered teaching, but it is less common to find those whose scholarship is integrated. Of course, I don't think my work was integrated with my scholarly self either when I was fresh out of graduate school. But, if we continue to reward isolation and scientism, we will miss out on the opportunity to help scholars develop a real philosophy. I consider myself fortunate to have ended up, whether by chance or by grace, at a place that valued my interests in exploring a Boyer-inspired view of scholarship. Indeed, sometimes I wonder where I would be today if I had not taken the time to read Boyer and Glassick around 2007-2008.
How does this relate to Linux? Understanding the Free Software movement requires an integration of technical and non-technical aspects of computer science, in alignment with an understanding of economics and ethics. There's a lot of room under this umbrella, and it's not just applied math.