One of my students sent me an email last week with three thoughtful questions in it. Following the principle of no-wasted-writing, I decided to share my answers here on my blog. The questions are in bold, and I did not edit them for posting here. I'm going to encourage the student to consider his own answers before reading mine, and you, dear reader, may try the same.
What really defines if a student successfully completes the undergraduate CS program at Ball State? I’m not talking about the program and university degree requirements, but rather if they got everything they are meant to out of the CS program. Is there a “tipping point”?
That's a great question, and there are a couple of different ways to answer it. I will start with the structural approach that is conventional in industrialized higher education.
For most of its history, the department operated with implicit objectives, but almost ten years ago, we formally agreed to a set of five. These were adapted from ABET's accreditation criteria, and they define what a student should be able to do after graduation. They are:
- Mathematical foundations, algorithmic principles and computing theory: Given a software task, the student will be able to choose an appropriate algorithm and the related data structures to efficiently implement that task. The student will be able to do an asymptotic analysis of the chosen algorithm to justify its time and space complexity.
- Design and development skills: Given an appropriate set of software requirements, the student will be able to formulate a specification from which to design, implement and evaluate a computer-based system, process, component or program that satisfies the requirements. At a minimum, this must include the ability to design, implement and evaluate a software system using a high-level programming language, following the conventions for that language, and using good practices in software design to satisfy the given requirements.
- Ethics, professional conduct and lifelong learning: The student will behave consistently with the ACM Code of Ethics and Professional Conduct. This includes developing the capacity for lifelong learning in order to maintain professional competence.
- Communication skills: The student will be able to communicate effectively through oral presentations and written documentation, including well-organized presentation materials that explain software concepts to non-technically oriented clients, clearly written software specifications and requirements documents, design documents, and progress reports.
- Teamwork skills: In order to contribute effectively and meaningfully to any team development effort, each team member must be able to think independently and arrive at creative and correct solutions to problems. This includes the ability to find and evaluate existing solutions to similar problems and adapt them to solve new problems. Each team member must be able to communicate ideas effectively to other team members and cooperate with them in integrating the solutions of subproblems to accomplish the team's common goal.
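To make objective #1 concrete, here is a small, hypothetical Python sketch of the kind of reasoning we expect of a graduate: two implementations of the same task in which the choice of data structure determines the asymptotic cost. The task and function names are my own invention for illustration, not part of any course material.

```python
# Illustration of objective #1: choosing an appropriate data structure
# and justifying the resulting time complexity.

def has_duplicates_list(items):
    """O(n^2): each membership test scans the 'seen' list."""
    seen = []
    for item in items:
        if item in seen:   # O(n) linear scan per element
            return True
        seen.append(item)
    return False

def has_duplicates_set(items):
    """O(n) average case: each membership test is an O(1) hash lookup."""
    seen = set()
    for item in items:
        if item in seen:   # O(1) average-case lookup
            return True
        seen.add(item)
    return False
```

Both functions compute the same answer; the second is the defensible choice for large inputs, and a graduate should be able to articulate why in asymptotic terms.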
In theory, then, a graduate should have all of these qualities, and our departmental assessment should tell us to what extent we are meeting our goals. We have done a fair job over the past several years of conducting mid-major assessment: after CS222 and CS224 (Advanced Programming and Algorithm Analysis), we use a formalized assessment process to determine whether students appear to be making progress, particularly toward departmental objectives #1 and #2. For example, we found no measurable difference in CS222 performance before and after we changed the intro course from a conventional CS1 approach in Java to a media-centric approach in Python, and we were able to introduce some critical concepts, such as good naming and functional decomposition, earlier in the curriculum.
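To make "good naming and functional decomposition" concrete, here is a minimal, hypothetical Python sketch in the spirit of what CS222 emphasizes: a small task split into well-named functions, each doing one thing. The names and the data format are invented for illustration.

```python
# Functional decomposition with intention-revealing names:
# each function has one job, and its name says what that job is.

def parse_score(record):
    """Extract the numeric score from a 'name,score' record."""
    _, score_text = record.split(",")
    return float(score_text)

def average_score(records):
    """Compute the mean score across a list of 'name,score' records."""
    scores = [parse_score(record) for record in records]
    return sum(scores) / len(scores)
```

A student who has internalized the idea can read `average_score(["ada,90", "alan,80"])` and predict its behavior without reading the bodies, which is the point of the practice.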
The upper division courses and capstone assessments have, unfortunately, been inadequate for any practical purpose. Put another way, the department agreed upon these objectives as guideposts, but we have not done due diligence to see if we are meeting them—much less making modifications to ease us toward better performance. I think a CS student in any program can look at this list and think of how conventional higher education privileges some aspects over others. Our own two-semester capstone course should be a proving ground for practically all of these, but we lack meaningful assessment data to draw defensible conclusions. Note that your particular question about a "tipping point" should be measurable if we had a clean mapping of these objectives to particular courses and curricular activities, but right now the best we have is anecdotes.
There's another, more holistic way to look at your question, and it tends to be the approach I favor despite its being arguably less measurable. Students who have taken my CS222 class should be familiar with the Dreyfus Model of Skill Acquisition. It is a descriptive model for any set of skills, where aptitude is measured in the categories of Novice, Advanced Beginner, Competent, Proficient, and Expert. Anyone can become a Novice with very little effort, and rising to Advanced Beginner generally takes some education and practice. However, a defining characteristic of Advanced Beginners is second-order ignorance: they don't know what they don't know. This aligns with the psychological phenomenon of the Dunning-Kruger effect, in which Advanced Beginners misclassify themselves as Experts. Overcoming second-order ignorance allows one to truly become Competent, which includes a measure of intellectual humility—a recognition of how much more there is to learn. That, then, is my shorter answer for what it means to earn a bachelor's degree in Computer Science: students should have climbed the Peak of Mount Stupid and come back down to recognize the need for lifelong learning.
Incidentally, I learned about the Dreyfus Model by reading Andy Hunt's Pragmatic Thinking and Learning. He and I had an email exchange about it, and we agreed that, broadly speaking, we should expect a bachelor's degree to produce Competent skills, a master's degree to produce Proficient skills, and a doctoral degree to produce Expert skills. In practice, I don't think that higher education has assured that this is the case, but I think it's a worthy consideration if nothing else.
What makes an internship worthwhile?
This is another great question. Let the record show that I have never had the responsibility of deciding whether or not a student internship is worthy of course credit, so I am speaking from a position of educated opinion and not (as above) as an academic trying to do a competent job.
The conventional structure of higher education is one of analysis: the breaking down of complex wholes into manageable parts. You can see this from course and textbook organization all the way up to administrative organization. It is a convenient model for many reasons, although it falls into the fallacy that my analysis should work for you. All analyses are really design projects that have to be evaluated for fitness. Turns out, grouping academics by similarity of department works well for peer evaluation in conventional situations. It doesn't always work though, such as with interdisciplinary problems like the game design and development work that I do: some of that work looks like Computer Science, some looks like humanities, some looks like communications, education, art, etc. Who should evaluate whether or not I am fulfilling my scholarly role in such cases? The administrative structure imposed by the analysis becomes dysfunctional.
Let me bring this back to internships. Higher education structures are pretty well ossified. There are a few calls to enhance multidisciplinary education, such as our own campus' laudable immersive learning projects, but at the end of the day, it's the administrative structure that determines your budget, and it's the budget that allows you to operate. So, a good internship is one that exposes a student to a different analysis—a different way of breaking down a complex world into workable parts. In a way, it's like the idea of having two majors or picking up a minor that is different from your major: learn to see things in a different way.
There's a secondary goal of an internship, which is to engage in legitimate peripheral participation in a field where you would like to work. There can be enormous advantages to this, but I think a lot of them deal with the property I mentioned above: seeing people doing their practice teaches you about the culture of that practice. Students, by and large, are watching faculty do our practice, but our practice is really strange. There are relatively few people who do the work of higher education—especially in higher education. [rimshot]
Anyway, that's what entered my head when I read your question. Take it for what it's worth. If I had the chair's job of evaluating whether or not an internship was worth major credit, I would probably have to deal with things like authentic mentorship and learning objectives, but I think I can get away with my more epistemological stance for now.
What are your thoughts on AWS (Amazon Web Services). It seems like a lot of companies are transitioning to AWS, but do you think this is what we are transiting to as an industry or just the latest bandwagon a lot of people are jumping on?
This is not in my field of technical expertise, though I've read some and talked to alumni and friends doing this kind of work. It seems to me that there are great gains to be had by adopting microservice architectures, from a maintenance point of view in particular. Distributed processing can lead to significant performance gains. Also, there's a very high initial cost of setting up a proprietary, globally-distributed network of reliable servers. If someone else (Amazon, in this case) has already done that work for you, then it seems reasonable to rely on their expertise here. So, it appears to me to be a natural transition rather than simply a bandwagon. The people I know who are doing it are being very careful, from a business and technology point of view. Of course, there are implications for security and accountability as well.
As for me and my projects, we choose Firebase.
That's the end of my responses. I hope that sharing them here was of interest to my readers. Feel free to leave a comment to share your thoughts, as always.