Wednesday, December 20, 2023

Reflecting on CS222 Advanced Programming, Fall 2023 Edition

As you may have noticed, I tried something a little different this year and extracted topical reflections into their own posts [1,2] rather than embedding them in a lengthier reflection about the class. Aside from those concerns already expressed, CS222 went quite smoothly this semester. I had a small section, and I feel like I had a good rapport with the students.

I had most recently been teaching the course on an MWF schedule, but this semester it was back to my preferred Tuesday-Thursday schedule. This meant more time in each session to dive into a topic, but it also meant I touched on fewer topics. This is a worthwhile exchange, but I didn't get to all the extras I like to cover during the semester. For example, we didn't get a chance to explore state management in Flutter as much as I would have liked. That particular context is where we get into the Observer design pattern, which this batch of students will not know by name. I also did not get a chance to talk at all about software licensing and intellectual property aside from a quick, hand-waving statement that the students own the rights to their projects.

I also added a fourth week to the pre-project portion of the class, cutting a week off of the final project to compensate. This gave more time in the early part of the semester, where students tend to struggle with the basics. Shortening the iteration lengths for the final project did have the anticipated positive effect that students worked more consistently. That is, reducing the time between deliverables gave students fewer opportunities to procrastinate.

The most surprising finding this semester was that the first Clean Code assignment was too easy. I've been giving this assignment for years: read the first chapter of Clean Code and write a paragraph reflecting on which definition of "clean code" most resonates with you. It is intended as a warm-up exercise to get students used to the unconventional method of documenting and submitting work for the course. One of my students pointed out that it gave him a false sense of what to expect from assignments, all of which take orders of magnitude more effort than this first one. I am thinking of simply dropping the assignment in favor of more meaningful ones.

I teach CS222 almost every semester, but I have a break next semester while I work on a funded research project. It will be good to have a little break from it, and I imagine I will be back on rotation some time next academic year. We also had a new faculty member teach the course this Fall, but I haven't made the opportunity to talk to him about the experience yet. I will do that in Spring.

Tuesday, December 19, 2023

On the ethics of obscurity

Years ago, I experimented with what is now called "specifications grading" in CS222. I set up a table that explained to a student how their performance in each category would affect their final grade. These are not weighted averages of columns but declarations of minima. For example, getting an A may require earning at least a B on all assignments, an A on the final project, and a passing grade on the exam. This gave the students a clarity that was lacking when using more traditional weighted averages. While publishing weighted-average formulae for students technically makes it possible for them to compute their grades for themselves, in practice, I have rarely, if ever, found a student willing to do that level of work. Hence, weighted averages, even public ones, leave grades obscure to the students, whereas specification tables make grades obvious.

What my experiment found was that specifications grading made students work less than weighted averages did. The simple reason for this is that if a student sees that their work in one category has capped their final grade, they have no material or extrinsic (that is, grade-related) reason to work in other columns. Using the example above, if a student earns a C on an assignment and can no longer earn an A in the class, they see that they may as well just get a B on the final project, too, since an A would not affect their final grade.

This semester in CS222, I decided to try specifications-based final grades again. It probably does not surprise you, dear reader, that I got the same result: students lost motivation to do their best on the final project because of their poor performance on another part of the class. It's worse than that, though: the final project is completed by teams, and some team members were striving for and could still earn top marks while other team members had this door closed to them. That's a bad situation, and I am grateful for the students who candidly shared the frustration this caused them.

The fact is that students can and do get themselves into this situation with weighted averages as well. A student's performance in individual assignments may have doomed them to a low grade in the class despite their performance on the final project, for example. However, as I already pointed out, this is obscured to them because of their unwillingness to do the math. What this means—and I have seen it countless times—is that students will continue to work on what they have in front of them in futile hope that it will earn them a better grade in the course.

And that's a good thing.

The student's ends may be unattainable, but the means will still produce learning. That is, the student will be engaged in the actual important part of the class. 

Good teaching is about encouraging students to learn. That is why one might have readings, assignments, projects, quizzes, and community partners: these things help engage students in processes that result in learning. It is a poor teacher whose goal is to help students get grades rather than to help them learn. Indeed, every teacher who has endeavored to understand the science of learning at all knows that extrinsic rewards destroy intrinsic motivation. 

What are the ethical considerations of choosing between a clear grading policy that yields less student learning and an obscure one that yields more? It seems to me that if learning is the goal, then there is no choice here at all. How far can one take this—how much of grading can we remove without damaging the necessary feedback loops? This is the essential question pursued by the ungrading movement, which I need to explore more thoroughly. 

I also wonder, why exactly haven't we professors banded together and refused to participate in grading systems that destroy intrinsic motivation? 

The Conflict of Individual Mastery Learning and Team Projects

I am uncertain how to optimally balance the desires for mastery learning and teamwork. This is not a new problem, but a conversation with a friend this week helped me articulate the particular pressures. I hope that summarizing the problem here will both give me practice explaining the tensions and foster conversation toward solutions.

Mastery Learning has students do work until it is done correctly. In theory, this is one of the simplest and best ways to ensure that students gain the benefit we intend from assigned work. I have used a form of mastery learning for years in CS222. Students can resubmit assignments all the way through the end of the semester in order to show that they have learned from them. From very early in my experiments with mastery learning, I imposed a throttle on how many resubmissions students can use per week. This serves two purposes: it prevents students from dumping piles of work on me all at once, and because it reduces the number of evaluations I have to do at any time, it minimizes the time it takes me to give feedback to the students. I have a hard time imagining how advocates of "pure mastery learning," with any number of resubmissions allowed at any time, manage this.

I am an advocate of teamwork in undergraduate education with an important caveat that teams should only be given work that requires a team to accomplish. That is, if the work can be done by one person, it will be done by one person. There is a challenge here, where teams may not recognize how much work is actually required. That is why learning how to set scope and communicate is an implicit learning objective in almost every course I teach. I expect there to be some struggles here, the kind necessary for learning.

Mastery learning and teamwork come into conflict in my classes. Ideally, students should learn the skills and dispositions required before joining a team and working on a project together. The best way to ensure that all students learn that material is mastery learning, but with mastery learning, I cannot know when students will have completed all the work. One option is to impose an additional deadline before the project, but this seems counter to mastery learning: what would a student who has not mastered the requisite material do after that deadline? Another option is to gate the final project behind completion of the requisite material, but then teams would form in a staggered way. It is hard enough to get teams to form that can schedule sufficient out-of-class meetings to succeed, and adding any more impediments to this seems troublesome at best. An alternative way around this is to convert all the courses into studio courses, where there is no excuse for not being able to schedule time because it's built into the schedule. That comes with a significant cost to me: as much as I love studio classes, one has to recognize that they take twice as much of my time without a commensurate release of responsibilities elsewhere.

I am not sure whether traditional assignments or portfolios make a big difference in terms of this conundrum. For example, I could have students get into team projects right away and, by the end of the semester, turn in a portfolio that demonstrates their having individually met the learning objectives. This runs into the same problem as traditional resubmission of assignments: students could put off working on the portfolios until long after that knowledge would have been helpful on the team projects.

The options I have sorted out seem to be the following.

  • Allow students to join team projects before they have shown mastery of the requisite material, continuing to allow resubmission of individual work while projects are ongoing.
  • Allow mastery learning resubmissions up until team projects start, or some other deadline before the end of the semester. A student's grade on those individual elements would be fixed at this point similarly to how they are at the end of a semester.
  • Gate team projects behind completion of mastery learning exercises: students can only join a team once they have shown that they have individually learned what they need to know. 
  • Separate the courses entirely: require a particular grade in a course that is all about prerequisite knowledge in order to get into a course where teams apply and build on that knowledge together.
Now that I write out that last one, it makes me realize I don't have a good answer as to why the deadline for mastery learning assignments is the end of the semester. That seems to be the tail wagging the dog. Isn't the end of the semester also an entirely arbitrary deadline? Perhaps that gives some credence to the second bullet above that I had not given it before.

I fear I'm looking for a silver bullet. In the absence of such a thing, I am curious how other faculty balance these issues.

Wednesday, December 13, 2023

Reflecting on CS315 Game Programming, Fall 2023 Edition

Game Programming is such a fun course to teach. In the best of cases, I get to bring interested students along with me on a journey through the fundamentals of the craft. This semester, I had several such students who clearly understood what we were doing, why we did it, and that it was interesting. These students were great, and I hope I take nothing from them by focusing on areas of the course that disappointed me. The challenge is always to keep what works and change what doesn't.

Checklists

The first topic to explore is checklist-based grading. Game Programming is the course where I pioneered this approach, and I have generally been keen on it. However, I have never had so many cases of students failing to follow protocol as I have this semester. No matter how many times I reminded, remonstrated, encouraged, or commanded, a significant number of students checked items that were incomplete. 

I wish I had more data to distinguish between cases where students thought they had completed items that they hadn't and cases where students were either lazy or trying to get away with something. The idea of Save Points, where students get an opportunity to resubmit work and address one of the missed criteria, is for the former, not the latter. Responsible students do not struggle with this: they make mistakes and fix them. The mystery comes from irresponsible students.

My lovely wife made a suggestion to me just the other day based on her observation that there is no penalty to the students for checking all the boxes and forcing me to deal with it. There's a solution, then: impose a penalty on checklist errors. I wonder if that one simple fix will help students see that I am serious about checklists being their responsibility, that they should not check an item unless they are sure it is satisfied. The impact of honest mistakes could be softened by allowing a limited number of Save Points to be spent on such errors.

I also wonder about the mode of checklists. I wrote before about the complications of having the checklists in version control within the project (although I cannot find it right now to link to it). I suspect part of the problem that irresponsible students run into is that they download the checklist and start marking things, and possibly the project changes afterward. For multi-iteration projects, I am sure that students just keep the checklists in place and do not re-evaluate them. I could circumvent these problems by requiring that checklists be completed on paper. That is, I could make it easy for students to download printable checklists, and then have them turn these in at the deadline. It would be extra paperwork to manage, but it would be worth it if it helps them understand the gravitas of checking a box.

Scoping and Slicing

I feel like I had more students struggle with how to slice game programming problems this year than in the past. It's a hard problem! I do have them submit project plans to me, which I comment on, but it's not clear to me that they are acting on my feedback. That is, I may point out that their plans for a user story are too big, for example, but this doesn't seem to stop them. As above, it's hard to distinguish between whether they don't know how or aren't taking the feedback. 

It is not clear to me what I could do to improve this aside from giving them more practice. At some point, it may simply be a combination of scale, difficulty, and lack of experience. That is, with this many students doing something that is legitimately hard, without knowing that it's hard, I am left in a difficult position. Put another way, one way out of the problem is to declare it a learning objective instead. This is potentially useful since this course, which used to be terminal, now leads into our preproduction class in the Game Design & Development concentration.

However, there may also be other cognitive tools I can teach the students to help them understand the difference between a half-baked cake and a half tray of cupcakes. Too few students come out of CS222 with an internalized understanding of the agile approach despite the use of incremental and iterative development there.

One way to help them with this would be to spend more time together. That is, we could take a page out of the College of Fine Arts playbook and make this a studio class, with double contact hours. I've been teaching studio courses like this in Computer Science for many years. I don't think my colleagues are aware of it, which only matters because a studio course means actually spending twice the contact hours with a class. At the risk of stating the obvious, it's twice as much time. If we made Game Programming a studio course, I would have twice as much time to work alongside students and help them out. The cost is that I would essentially give up half a day a week to the endeavor. I still struggle to see exactly where the optimum is on the cost-benefit curve here.

Engagement

I'll close with an odd little story. The penultimate meeting of the class was my opportunity to introduce some topics of artificial intelligence to the class. Specifically, I talked about behavior trees, which is a topic for which I have had a group of students doing research with me this semester. This presentation had to start with some lecture about the material. Once I got the basics out of the way, I gave the students a challenge to try drawing a behavior tree for Tag. I told them to get into groups of three or four to see if they could sort out how to combine selectors, sequences, and actions to write a simple in-engine simulation. 
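For readers who were not in the room, one possible shape of an answer, sketched here in Dart with made-up Selector, Sequence, and Action classes rather than any particular engine's API, looks something like this:

    // A toy behavior tree: each node reports success (true) or failure (false).
    // These classes are illustrative stand-ins, not any engine's API.
    abstract class Node {
      bool tick();
    }

    // Runs children in order; fails as soon as one child fails.
    class Sequence implements Node {
      final List<Node> children;
      Sequence(this.children);
      @override
      bool tick() => children.every((child) => child.tick());
    }

    // Tries children in order; succeeds as soon as one child succeeds.
    class Selector implements Node {
      final List<Node> children;
      Selector(this.children);
      @override
      bool tick() => children.any((child) => child.tick());
    }

    // A leaf that wraps an arbitrary piece of behavior.
    class Action implements Node {
      final bool Function() behavior;
      Action(this.behavior);
      @override
      bool tick() => behavior();
    }

    void main() {
      var isIt = true; // flip this to exercise the other branch

      final tag = Selector([
        // If this agent is "it": find someone, chase them, try to tag them.
        Sequence([
          Action(() => isIt),
          Action(() { print('chase nearest player'); return true; }),
          Action(() { print('attempt tag'); return true; }),
        ]),
        // Otherwise: keep away from whoever is "it".
        Action(() { print('flee from "it"'); return true; }),
      ]);

      tag.tick(); // one tick of the tree, e.g., once per frame
    }

The structure is the point, not the syntax: the selector encodes the top-level choice between being "it" and not, and the sequence encodes the chain of steps that only makes sense once that first check succeeds.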

About half of the students clumped up into two groups and began approaching the problem, either on paper or at a whiteboard. The other half? Well, one of them looked like he tried to get the guy sitting next to him to talk, but he was rebuffed. The rest of them just sat there, faces into their laptops, exactly as they had been a moment before. This isn't the kind of course where I can just say "Laptops down" since we were working together in a demo repository, but clearly, there was a mismatch here of expectations. 

I was faced with a dilemma: should I call out the people who were disengaged or should I ignore them? I decided to simply ignore them. The day before the final presentations, I just cut off half of the room and stopped talking to them while the other half of the room and I explored the nifty space of behavior trees. I'm not exactly happy with this decision, but I think it was the right one. It may also be worth mentioning that a third of the class was not there, so really there were three categories, not two. It will surprise no one to hear that there is a near-perfect correlation between which category a student fell into and their measurable effort and results.

The problem is that this was just a clear symptom of the disengagement that had happened well before. Part of the issue is the room in which we had the class. It is a great space for studio work and a terrible place for interactive workshops. My usual room makes it easy for me to walk around and see what people are doing; that is practically or literally impossible in the room I was in. I leave this note here in part so that I can make sure next year that I am back in the teaching lab. I think if I had a clearer pattern of walking through and seeing what's on students' screens, I would not have had a whole side of the class physically and mentally far from me.

Friday, December 8, 2023

Nine-year-old notes about The Burning Wheel

Opening up Facebook this morning, I found a "Facebook Memory" from nine years ago in which I shared my reaction to having read The Burning Wheel. I'm glad I read it, but the odds that I would ever find it again on Facebook are pretty low, so I am copying the post here.

I finished reading The Burning Wheel, and it is a very interesting system. I hope to play a game of it someday, but even without playing, it was enlightening to read about this cult-favorite system. Thinking back on it, I think the most important piece is one of the most mundane elements, but something that can be brought into many interactive storytelling scenarios. When a player wants to act, he must identify his intent first, and only afterward identify his task. The intent is the goal, the intended outcome, the reason for doing anything, and it ties into the system's articulation of Beliefs. The task, then, is what the player actually does, and it is the task that is tested with dice and rules.

I can imagine how this would help a table of friends understand who the other characters are. In many settings, I have seen (or engaged in) players describing only what their characters are doing, but not why. The result is that other players have to guess at motives, infer what characters are about. This is the nature of social reality outside of the game, and it takes time to get to know people and their motives and beliefs. By making these explicit in the game, we model the idea that characters can get to know each other, and that we players are separate from them. That is, the players can get on with telling a good story because they understand more than just what is in their own character's head.

This works into another Burning Wheel rule, which is that the outcome of any die roll is articulated prior to the roll's being made. Hindsight is golden, but this strikes me as an essential rule for handling characters' social skills. Just as "hit the target with my sword" is the (potentially unstated) effect of succeeding at an attack roll, there should be a similar established context before a social roll. For example, "If you succeed, you convince the Duke to lend you his magic sword." Getting that explicit means that appropriate situation modifiers can be established. This, too, ties into BW's "Let it ride" rule, which says that once the dice have been cast, there's no do-overs in the same situation, whether it's social or combat. If you didn't convince the Duke, you didn't convince him—tell an interesting story about it and move on.

I have not played this RPG in the intervening nine years. 

Thursday, December 7, 2023

What we learned in CS222, Fall 2023 edition

Once again, I gave my CS222 students the challenge to make a list of everything they learned this semester. This semester's group came up with 155 items, and then each individual voted on their top seven—the seven that they thought were the most important. Our top items this semester are as follows:

  • Clean Code (8 votes)
  • GitHub (7 votes)
  • TDD (5 votes)
  • Time Management (4 votes)
  • OOP (4 votes)
If we had kept stepping down the line, the next cluster was at three votes, and only two items had that many: Software Craftsmanship and Resubmissions. This is interesting to me, since I don't remember "resubmissions" being such a focus on this list in past semesters. However, a lot of students did a lot of resubmissions this semester. I don't fully understand why they would put this on the list and vote for it, but I am hopeful that it is because they have recognized the utility in thinking about learning this way: it's not that you either learn something or you don't and then move on, but that if it's worth learning, it will probably take some time to get it right.

This really was a fun group of students in CS222 this semester. I feel like we had a good rapport, with many of the students catching on to the method behind the class. I think this manifests well in one of the items on the list today: a student said that they learned how to follow detailed system configuration instructions early in the semester, when configuring their Flutter development environment. I usually try not to interject while students are brainstorming their list of things they learned, but I did take the opportunity here to point out that this was on purpose, that part of the course design was to help them understand how to do this so that they would be better at it next time.

I expect to write a more thorough retrospective on the semester next week, as I usually do. In the meantime, here is the full list that the students came up with, sorted by votes.

Clean Code (8)
GitHub (7)
TDD (5)
Time Management (4)
OOP (4)
Software Craftsmanship (3)
Resubmissions (3)
SRP (2)
Acceptance Testing (2)
Naming conventions (2)
Debugging (2)
Good names (2)
Shu ha ri (2)
Unit tests (2)
Incremental Development (2)
Teamwork (2)
Flutter (1)
DRY (1)
Tokens (1)
Beck-Style Task List (1)
SMART (1)
Abstraction (1)
Encapsulation (1)
Research (1)
APIs (1)
Getters and Setters are Evil (1)
Comments are evil (1)
Code should explain itself (1)
Android Studio (1)
Exploratory coding (1)
Experimental code (1)
Conditions of Satisfaction (1)
Ctrl-Q (1)
Keyboard shortcuts (1)
Refactoring (1)
Efficient programming (1)
Gamedev jobs/internships (1)
Internships (1)
Overcoming procrastination (1)
Commit messages
Red-Green-Refactor
UI
Mob Programming
User Experience
CRC Cards
FIRST
User Stories
Law of Demeter
Aliasing
MVC
Listeners
Stepdown Rule
Branching
Feature branching
Defensive Programming
Game Jams
Code Coverage
Resume
Psycho/sociopaths on teams
Dart conventions
LinkedIn
Communication
English
Scheduling
Priority lists
JSON decoding
JSON
API Keys
Configuration files
Secure key management
Serialization
Authorization
Directory management
Semantic Versioning
Alan Kay
RCM
Holub
Kent Beck
Dijkstra
The Null Guy
Widgets
Null Safety
Semantics
Bureaucracy
Tim Cain videos
Coding ethics
Intellectual property
Triangulation
Testing
Good use of types
Enumerated types
Dart
Error handling
Separation of concerns
Sorting
Monads/Arity
Kanban boards
Code review
Network clients
Integration tests
Learning tests
Being programmed
Data structures
Obvious implementation
Fake it
Avoid "cute" names
Dependencies
pubspec.yaml
Imports
Cornell Notes
README
Beams' commit format
Mike Cohn
Using the terminal
"Problems" tab
Shift-F6
Alt-Enter
Mind maps
Flutter Cookbook
Syntax
Exceptions
Format on save
Android SDK
Following configuration instructions
Macbooks are evil
Windows Developer Mode
Visual Studio
Stack Overflow
git usernames
git commands
pull requests
Google Drive
Google Docs
Achievements
PowerPoint
Preparation
Presentation
Alternative grading systems
Reflection essays
Leap years
Foo/foobar
FizzBuzz
GradeTool
Printing in production code is bad
Cannot build across async gaps
Demonstrating understanding in assignments
Patience
Trust the process
Wikipedia
Soup
Self responsibility
Career is not a zero-sum game (win-win situations)
EMRF
Arxiv
Developer jobs

Wednesday, November 29, 2023

NoGaDeMon

It's almost the end of November 2023, and sadly, I don't have a National Game Design Month project to share this year. It was a good run, but I needed to take this November to focus on other things. It's been a busy semester, but a good one. I helped run the Symposium on Games this month, and I'm the chair of the search committee that is looking to hire a new colleague to help with our games curriculum. I am also currently working on a grant from the Indiana Space Grant Consortium to make a game that helps teach middle school youth about paths to college. We demonstrated our work at the Symposium and received positive feedback, and now the team is making plans for the next steps. In fact, I will have a course release next semester to focus on this project, but since I didn't have one this semester, it was like teaching an extra class to try to keep the project running. I also have a group study of game AI that has been interesting, and those students will be presenting their work in next week's department colloquium. 

So, to make a long story short, no NaGaDeMon this year, but plenty of game design (and game design adjacent) work happening. I am currently in the process of setting up Ball State to host Global Game Jam again in January, and that's the next major jam event on my radar.

Friday, September 29, 2023

Automatically publishing Godot 4.1 projects to GitHub.io

Some time ago, I wrote a blog post and made a video about how to automate the publishing of web-based Godot games on GitHub.io. It worked fine in 3.x, but with the release of 4.x, things got complicated because of SharedArrayBuffer. I have come across some tips online that claim to solve the problem, but I was unable to get any of them working entirely as advertised. After some effort last night, I've been able to pull all the pieces together into a GitHub workflow configuration.
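A simplified sketch of the workflow follows. Treat the action versions, the godot-ci image tag, the branch name, and the coi-serviceworker URL as assumptions to verify against current releases; the paths match the requirements listed below.

    # Sketch of a web-export-and-deploy workflow for a Godot 4.1 project.
    # Versions, branch names, and URLs are illustrative; verify against current releases.
    name: Export and deploy web build

    on:
      push:
        branches: [main]

    # The deployment step needs permission to push to the gh-pages branch.
    permissions:
      contents: write

    jobs:
      export-web:
        runs-on: ubuntu-latest
        container:
          image: barichello/godot-ci:4.1.1
        steps:
          - uses: actions/checkout@v4

          - name: Export the web build
            run: |
              mkdir -p build/web
              cd project
              godot --headless --export-release "Web" ../build/web/index.html

          - name: Add coi-serviceworker for the SharedArrayBuffer secure context
            run: |
              wget -O build/web/coi-serviceworker.js https://raw.githubusercontent.com/gzuidhof/coi-serviceworker/master/coi-serviceworker.js
              sed -i 's|<head>|<head><script src="coi-serviceworker.js"></script>|' build/web/index.html

          - name: Deploy to GitHub Pages
            uses: peaceiris/actions-gh-pages@v3
            with:
              github_token: ${{ secrets.GITHUB_TOKEN }}
              publish_dir: ./build/web

The wget/sed step drops gzuidhof's coi-serviceworker.js next to the exported index.html and injects its script tag, which is what provides the cross-origin-isolated context that SharedArrayBuffer needs.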

Briefly, this makes use of barichello's godot-ci Docker container to build the web export for the game, then it brings in gzuidhof's service worker to provide the secure context necessary for SharedArrayBuffers. Deployment to GitHub pages is conveniently managed via peaceiris' actions-gh-pages.

Here is what you need to make this work:

  • The script assumes you have a "project" folder at the root of your repository, and that your Godot project and its corresponding project.godot file are there.
  • You need to have configured HTML5 export and called it "Web," which is the default. Set the export to go to the path "build/web/index.html" off of the repository root. That is, within the export settings, the export path will be "../build/web/index.html".
  • Make sure your export configuration (export_presets.cfg) is in the repository. That is the default for Godot 4, but you'll find some sample .gitignore configurations that want to hide that file.
  • Your repository must be public.
  • You have to enable GitHub Pages in your repository. In the corresponding section of the GitHub repository settings, make sure the source for the pages is set to "Deploy from a branch" and that the branch you select is "gh-pages."
I was blindsided by the HTML5 export complications when trying to build a web version of my last Ludum Dare game. I'm hopeful that this new configuration will make my deployment easier for Ludum Dare 54, which starts tonight.

UPDATE: Although I did not complete an LD54 project, I did get far enough to set up GitHub automation. I ran into a problem Saturday that I did not see on Thursday, which was the need for the build system to be able to write to the repository. It comes up in peaceiris' gh-pages documentation, and the fix is a one-liner: the addition of a permissions statement. I have updated the gist above to include this.

Monday, September 25, 2023

Teaching TDD... but what just happened?

In my CS222 class, I spend a lot of time talking about, demonstrating, and giving feedback on Test-Driven Development (TDD). Indeed, from the very first day of class, we're doing TDD. For several years, I've opened the course by talking about confidence and how we can take small steps to maintain it. Last week was the fifth week of the semester, and my students had just started the "two-week project." That project integrates the concepts from the first four weeks of the semester into a single deliverable that of course includes TDD.

On Thursday, I was showing them how we can write a custom parser for data pulled from Wikipedia. I started by writing a test asserting that, given my sample data, my as-yet-unwritten parser could extract the piece that I wanted. Then, we wrote a clearly naive implementation of the parser that just returned that value as a literal. The literal has no place in the specification, and so our refactoring moved us to a correct solution. I made a big deal about how we had just completed a red-green-refactor sequence, summarizing each step to them as I have just done for you above. I especially emphasized that at the time of writing the test, we didn't care at all how we would make it work.
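In code, the first two steps of that sequence look something like the following sketch; the class name, method, and sample string here are stand-ins rather than the exact ones from class:

    import 'package:test/test.dart';

    // Red: the test says what we want from the not-yet-written parser.
    void main() {
      test('extracts the birth year from sample Wikipedia text', () {
        const sample = 'Ada Lovelace (born 1815) was an English mathematician.';
        expect(WikipediaParser().extractBirthYear(sample), equals('1815'));
      });
    }

    // Green: a deliberately naive implementation that just returns the literal.
    // The hard-coded value has no justification in the specification, which is
    // exactly what pushes the refactoring step toward a real parsing solution.
    class WikipediaParser {
      String extractBirthYear(String text) => '1815';
    }

The refactoring step then replaces the literal with real string processing, with the test keeping us honest along the way.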

A student pushed back a little on the solution, asking why we didn't just go back and change the test case itself to something else. I responded that this would not be right, because before we wrote any production code, we first had agreed that this case was correct—and it was. I reminded them again about red-green-refactor. At this, a different student raised his hand to comment. These are not his exact words, but his observation was essentially this: "Are you saying, then, that at the red step, we write a test that tells us what we need next, but we don't worry about how it will work; then we make it work however we can; and then we clean it up to make it work right?"

I paused for a moment and then agreed, telling him that I really couldn't have said it better myself, thinking to myself as I said this that I had, indeed, said basically that same thing, many times in the past five weeks. At this, several heads in the class nodded sagely, making expressions as if they had gained some new insight into the process.

On one hand, I am happy that they seem to understand it now better than they did before, that this student's summarizing of the process seemed to resonate. Yet I am plagued by the question, What just happened? What was actually different about this explanation from the ones that came before?

I will see the students again tomorrow, and I'm thinking about just asking them. I don't want to make anyone feel bad about not understanding it before, but I do want to know if they can identify what tipped them toward understanding it now.

Wednesday, September 20, 2023

The one question to ask playtesters (according to Mike Selinker)

I listened to some of Justin Gary's interview with Matt Fantastic on the "Think Like a Game Designer" podcast on my walk home from work today. The two were talking about how to interpret playtesting results, and Mr. Fantastic shared some advice he got from Mike Selinker. According to Fantastic, Selinker advises asking playtesters just one question, "What did you do?" 

This question completely avoids problems of the designer asking leading questions. Instead, the designer gets to hear what the player experienced in their journey through the game. It's a lot simpler than some of the other formats I have used and encouraged my students to use, and it's certainly simpler than the formal playtesting process that my students are pulling from Lemarchand. It would be fascinating to run parallel tests with these different formats and see if there's something objective that can be learned from this, but who has time for yet another research project?

Monday, September 18, 2023

Brief summary of Zinsser's "On Writing Well"

I recently read through most of the 4th edition of William Zinsser's On Writing Well, published in 1990. I pushed it to the top of my reading list after reading Filip Hráček's article, "The engineering principles behind GIANT ROBOT GAME," which references Paul Boyd's "The Cargo Cult of Good Code," which lists Zinsser's book as his second favorite book on software design. I enjoyed roughly the first half of Zinsser's book, worked through another third or so, and then figured I had gotten what I could from it. I agree with some of the criticism of his book that he uses a lot of words to tell you that you should not use so many words.

Near the end of my reading, though, he had a helpful recap of the fundamentals that he professes. These are, briefly:

  • Clarity
  • Simplicity
  • Economy
  • Humanity
  • Active Verbs
  • Avoid windy concept nouns
The last one merits explanation since "concept noun" is not a term one comes across regularly. An example from his book of a violation of the principle is the sentence, "The common reaction is incredulous laughter," which has no people in it. Compare that to, "Most people just laugh with disbelief," which is something that the reader can visualize. When I think about concept nouns, though, I tend to remember the wrong thing: Zinsser's caution against "creeping nounism" as one finds in the gem, "Communication facilitation skills development intervention."

Taken as a set, Zinsser's summarized points provide solid advice, though I am not sure it stands up against George Orwell's rules.

While I enjoyed most of the book, one part that irked me was how journalistically he treated the wide tent of "nonfiction writing." He argues for paragraphs to be constructed of three, two, or even one sentence. This strikes me as counter to the epistemic value of writing as embodied in the essay—the attempt to understand. That is the space where I do the most serious writing, and it is the space I want my undergraduates to inhabit. For a book-length treatment of writing advice, then, I think I'll stick with Strunk.

Monday, September 11, 2023

Notes on Boredom

Last Friday, I was able to attend a talk by Kevin Hood Gary about his book, Why Boredom Matters. The talk was sponsored by the Alcuin Study Center in Muncie. My wife was interested but unable to attend the presentation, so I took some notes to share with her. A colleague also told me they were unable to attend, so I decided I'd turn my notes into this post, which I can then share with anyone who is interested. Keep in mind that these are extrapolations of my notes from the presentation and should not be taken as a summary of the book. He acknowledged that the talk was designed for a general audience while the book goes into more technical detail. I'm sure any inaccuracies or misrepresentations in this post are my own.

The thesis of the book—as explained by Dr. Hood—is that we avoid boredom and thereby miss out on leisure. There are different ways to react to boredom, but they fall on a spectrum from avoidance to resignation. We can avoid boredom through amusements. I appreciate that, from early in the presentation, Dr. Hood clearly and explicitly distinguished between amusement and leisure, the former being a distraction and the latter being life-giving. This is a classical distinction that is clear to me, although from the discussion, I think that he has sometimes received pushback on these terms.

After acknowledging that the smart phone is a boredom-avoidance device, Dr. Hood mentioned that he offers extra credit to his students if they put their phones in a box during class meetings. He said they happily take him up on this. I am not a fan of "extra credit" since it leads to grade inflation, and I wonder what differences would occur in making this simply for credit instead of extra credit.

Dr. Hood referenced Wilson's shock test experiment. I think I had heard of this before, but I did not have it ready in memory. It's an amazing study from about ten years ago in which people were put in an empty room to be alone with their thoughts... except they could also opt to give themselves a mild electrical shock. Interested readers should learn about the actual experiment, and although I hesitate to spoil some results here, I will, in the name of having these data later for my own purposes. The study showed that 25% of women and 67% of men would rather give themselves painful shocks than be alone with their thoughts. I echo the response of one of the audience members who wondered how this broke down by age. Dr. Hood drew a comparison between this study and Pascal's quotation, that all human evil comes from the same cause: our inability to sit still in a room.

He also brought up a nice quotation from T. S. Eliot that I don't remember hearing before. It comes from "Four Quartets" and states that we are "distracted from distraction by distraction." An unsurprisingly poetic interpretation of the theme from Mr. Eliot.

Dr. Hood briefly mentioned how Kierkegaard described human agency as being balanced between the despair of possibility and the despair of necessity. The former describes, for example, infatuation with celebrity, as well as the hesitation one experiences at trying something at which one might fail. During the Q&A, Dr. Hood confirmed that my understanding of this end of the spectrum was correct when I likened it to meeting incoming students who want to become game developers but who then do not do any of the actual hard work required to succeed in the field: imagining oneself as an ideal is easy, but reifying that ideal takes real effort.

Rather than eliminating boredom with amusements, we should instead pursue focal practices. These draw upon the classical notions of leisure, and these were connected to historical understanding of scholé. He illustrated the concept by referencing Groundhog Day, which everyone in the audience had seen. (Both he and I were surprised at this, given the range of ages present.) In the movie, Bill Murray's character eventually replaces his amusements and his despair with focal practices such as good conversations, walking, making music, and reading a good book. Dr. Hood talked a little about these as being done for their own sake and being driven by intrinsic motivation. 

Crucial to understanding focal practices is that they always involve a moral threshold. That is, focal practices are preferable to amusements, but they involve making a conscious decision. His example was that he could go home after work and watch sports highlights or he could go for a walk with his wife. The latter is clearly the more life-giving of the two, but it requires a decision to be made. This and several other parts of the presentation got me thinking about the virtues, both as habits of being and as choosing a mean between extremes.

He contrasted focal practices in an interesting way against videogames, describing how some will claim that videogames are their leisure. He explained that he himself had played a lot of videogames and that he knows that they work by producing a steady stream of delicious, delicious dopamine. He was a little reductive here but not inaccurate. There are hooks here for a scholar to explore the amusement vs. leisure elements of play. (Coincidentally, I was recently in conversation with a colleague about how it's almost certainly better to make videogames as a leisure activity rather than as a job. A "regular" job in software development will pay the bills more reliably.)

It may be worth noting that throughout the presentation, Dr. Hood never insulted amusements and acknowledged that they play a role in a healthy life. It is the absence of leisure that is the serious detriment.

Toward the end of the presentation, he offered some suggestions on what we can do about boredom. The first of these is a "boredom audit," in which one tracks how much time is spent avoiding boredom. This raises awareness. I think that sounds like a great challenge and perhaps even an achievement for my CS222 class. The second of his suggestions was to identify two focal practices that you enjoy and one you would like to enjoy. Again, this seems to me to be about foregrounding thoughts that one might be avoiding. The third was to pursue friendships of excellence, drawing on Aristotle's understanding of the different kinds of friendship. Here, he pulled in one of my favorite quotations from G. K. Chesterton, that "if a thing is worth doing, it is worth doing badly." (The line has been used and misused, but The Society of G. K. Chesterton has a helpful article explaining its context and interpretation.)

I enjoyed the presentation, and I look forward to reading the book. The Tower of Unread Books has spawned two or three additional piles throughout my home and office, so it may be a while before I get to it.