Thursday, December 12, 2024

Reflecting on CS315, Fall 2024 Edition

As described in my course revision post in June, the overall structure of CS315 Game Programming was unchanged from previous semesters: half the semester was spent on weekly projects designed to build skills and confidence, and half the semester was spent on larger projects. 

The most significant change was in how those weekly assignments were evaluated. For the past several years, I have used checklist-based evaluation, but I was hoping to find a fix for the problem of students doing the checklists wrong, which takes something simple and turns it into more work for me than a point-based rubric would be. Unfortunately, the strategy I used did not make things any simpler. Instead of checklists, I gave students a list of the criteria that needed to be met in order for the work to be satisfactory. Their work was then assessed as Satisfactory, Needs Minor Revision (fix within 48 hours), or New Attempt Required. New attempts could be made at the rate of one per week, as I've done for years in most of my non-studio courses. I ran into a bit of the same problem I wrote about yesterday, where Canvas' "Complete/Incomplete" assessment combined with no-credit assignments leads to a bad user experience, but that was not among the dominant frustrations. Those frustrations were two: students not submitting satisfactory work, and students not submitting work.

The first of those is the more disconcerting. As with checklist-based grading, I gave the students the precise criteria on which a submission would be graded. All they had to do was meet those criteria, and most of them did. Sometimes it took minor revisions or a new attempt or two, but these were no big deal: handling and correcting misconceptions is exactly what the system is supposed to do. The real problem came from students who submitted work that was wrong multiple times after I had told them what was wrong. In a strict reading of the evaluation scheme, this means the work was still simply unsatisfactory, whereas in other schemes (including checklist-based ones) they might have gotten a D or C for it. I am still torn on this issue: was the system unfair to students of lower ability, or was it the only fair thing to do for them? Put another way, is it better to give a student a C when they still have serious misunderstandings, or is it better to tell them clearly that they should not advance until they understand the material? I don't interpret any of the criteria I gave as strictly "A"-level; that is, meeting them did not require excellence. What it required was rigor.

The other problem, of students not resubmitting work that needed to be resubmitted, seems unrelated to the evaluation scheme chosen. From conversations with professors across campus and at other institutions, this seems to be part of a generational wave of challenges. I have a few hypotheses about root causes, but the point of this blog post is not to opine on that topic.

Some of my early-semester assignments take the form of multi-week projects. For example, one set of assignments involves creating an Angry Birds clone. It is submitted as a series of three assignments of increasing complexity, scaffolded so that someone who has never made a game before can follow along. A student in the class this semester fell behind and then asked whether he could just submit the final iteration of that three-week project, as long as it showed mastery of each week's content. I ended up declining the request. One of my reasons is that the assignments double as a sort of participation credit. It makes me wonder, though, whether it's worth separating these things. For example, something I've done in other courses in the past is make the final iteration's grade supersede earlier ones if it is higher.

This was the first time a colleague offered another section of CS315 in the same semester. Seeing his students' games, combined with some recent conversations in the game production studio, made me realize that I should probably emphasize the build process more in my section. Rather than simply having students run their games in the editor, I should ensure that they know how to create an executable or a web build. It's an important skill that's easy to miss, and there's a lot to be learned by seeing the differences between running in the editor and outside of it.

Now that we've grown the number of games-related faculty in my department, there's a chance I may not teach game programming again until 2026. I expect I will come back to these notes around that time. The biggest pedagogic design question I will need to consider is whether to return to checklist-based grading (with its concomitant frustrations) or move to something else, like a simple point distribution. 

Wednesday, December 11, 2024

Reflecting on CS222, Fall 2024 Edition

I had a little break from teaching CS222 last semester as I wrapped up work on STEM Career Paths. I have not blogged much about that project, but you can read all about it in my 2024 Meaningful Play paper, which I understand will be published soon. In any case, here I want to capture a few of the highlights and setbacks from the Fall 2024 class, and I promise, I'm trying not to rant about Canvas more than I have to.

Regular readers may recall that I tried a different evaluation scheme this semester, which I wrote about back in July. In September, I wrote a detailed post about some of my initial frustrations with the system as well as a shorter one about how I felt my attention being pecked away. I don't want to bury the lede, so I'll just mention here that to compute final grades, I went back to my 2022 approach, the tried and true, the elegant and clean system that I learned from Bill Rapaport at UB: triage grading. Between my failed experiment this semester and the similarly failed EMRF experiment from last year or so, I feel like I'm looking for a silver bullet that doesn't exist. It reinforces to me, yet again, that I should really be running some kind of workshops for local people here to learn about what makes triage grading superior.

I still want to track some of the specific problems of the semester, though, so that readers (including my future self) won't walk into them. First, I tried to set up a simple labeling system in Canvas so that I could mark work as satisfactory, needing minor revision, or needing a new attempt. I made no headway here, in part because of Canvas' intolerable insistence that courses are made up of points. I talked about his approach with a respected colleague who is willing to toil over Canvas more than I am, and he mentioned that he encodes this information into orders of magnitude: something like 10 points for satisfactory, 1 point for minor revisions, and 0.1 points for new attempt required. Summed together, these give students a weird combination of numeric and symbolic feedback. He acknowledged that it wasn't perfect.
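
For concreteness, here is a minimal sketch of how that digit-per-category encoding could be unpacked, assuming his 10/1/0.1 weights. The names are my own, and the decoding is only unambiguous while each count stays below ten, which is part of why it isn't perfect:

    def decode_marks(total: float) -> dict[str, int]:
        """Unpack a summed Canvas score into per-category counts, assuming
        10 points per satisfactory mark, 1 per minor revision, and 0.1 per
        new attempt required."""
        tenths = round(total * 10)  # work in integer tenths to dodge float error
        return {
            "satisfactory": tenths // 100,
            "minor_revision": (tenths % 100) // 10,
            "new_attempt_required": tenths % 10,
        }

    # Example: 3 satisfactory + 1 minor revision + 2 new attempts -> 31.2 points
    assert decode_marks(31.2) == {
        "satisfactory": 3,
        "minor_revision": 1,
        "new_attempt_required": 2,
    }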

What I tried to do instead was to use Canvas' built-in support for grading as "complete/incomplete." Because that was all I cared about, I set the assignments to be worth zero points. When I used SpeedGrader, sure enough, the work was labeled properly. It wasn't until midsemester, when I downloaded all the grades as a spreadsheet, that I saw the export gave me only the zero points. That is, whether the work was complete or incomplete was stripped from the exported data set. There wasn't so much data that I couldn't eyeball it to give students midsemester grades, which was facilitated by my recent transition to giving only A, C, or D midsemester grades (which are epistemologically vacuous anyway).
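
In retrospect, the statuses were still recoverable even though the spreadsheet dropped them. Here is a sketch of a workaround, assuming the standard Canvas REST submissions endpoint; the host, token, and IDs are placeholders, and real code would also need to follow Canvas' Link-header pagination:

    import requests

    BASE = "https://canvas.example.edu/api/v1"  # placeholder Canvas host
    TOKEN = "..."                               # placeholder personal access token

    def fetch_statuses(course_id: int, assignment_id: int) -> dict[int, str]:
        """Return {user_id: grade} for one assignment. For complete/incomplete
        assignments, 'grade' holds the symbolic mark that the CSV export drops."""
        url = f"{BASE}/courses/{course_id}/assignments/{assignment_id}/submissions"
        response = requests.get(
            url,
            headers={"Authorization": f"Bearer {TOKEN}"},
            params={"per_page": 100},
        )
        response.raise_for_status()
        return {s["user_id"]: s["grade"] for s in response.json()}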

It wasn't until weeks later that it dawned on me that my students almost certainly had the same problem: Canvas was showing them zeroes instead of statuses. Of course, all my policies for the course were laid out in the course plan, and I do not have any qualms about considering those to be the responsibility of my students. However, when the university's mandated "learning management system" actively disrupts their ability to think about the course, it becomes more of a shared responsibility. About two weeks ago, I went in and re-graded all of the work to use triage grading instead, which allowed me to distinguish not only between complete and incomplete, but also between things that were submitted-but-incorrect and things that were not even attempted.
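
Part of what makes the regrade work is that a numeric triage encoding survives the spreadsheet export, unlike zero-point complete/incomplete marks. A minimal sketch of the idea, using the three categories described above; the 2/1/0 point values are placeholders of my own choosing, not a claim about Rapaport's scheme:

    from enum import Enum

    class Triage(Enum):
        """Three piles: satisfactory, submitted-but-incorrect, not attempted.
        Distinct numeric values survive Canvas' CSV export."""
        SATISFACTORY = 2
        SUBMITTED_BUT_INCORRECT = 1
        NOT_ATTEMPTED = 0

    def label(exported_score: int) -> Triage:
        """Recover the symbolic category from an exported numeric score."""
        return Triage(exported_score)

    assert label(1) is Triage.SUBMITTED_BUT_INCORRECT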

One positive change that I made this semester was counting achievements as regular assignments. This made processing them simpler for me, and I suspect it made thinking about them easier for the students too. While they have a different shape than the other assignments, they are "assigned" in the sense that I expect people to do them to demonstrate knowledge. I also set specific deadlines for them, spaced out through the semester. This reduced stress for the students by providing clear guidelines, since they could still miss one and resubmit it later under the usual one-resubmission-per-week policy. It also helped me communicate that the intention behind the achievements is to give students a little side quest during the project-oriented portion of the course.

I had a really fun group of students this semester, as I mentioned in yesterday's post. There were still some mysteries around participation, though. I had several students withdraw a few weeks into the semester without ever having talked to me. It is not clear to me whether they decided the course was not for them or whether they were simply scared. By contrast, I know I had at least one student who was likewise scared early on, but who stuck with it and ended up learning a lot. It is not clear to me if there is more I can do to help timid students lean toward that mindset. Also, despite excellent in-meeting participation, I had many students who just didn't do a lot of the assigned work. I have some glimmers of insight here, but it still puzzles me: how many times do I need to say, "Remember to resubmit incomplete work?"

I hope that some of the simplifications I have made to the course will help streamline students' imagination about it, but more than that, I am thinking about the role of the creative imagination. I am sure that a lot of students come into this required sophomore-level class without a good sense of what it means to study, to work, or to learn. My friends in the Biology department recently converted their required senior-level professionalism course, in which students do things like write resumes, into a sophomore-level course. I wonder if we can do something similar to help the many students we have who are not well formed.

Tuesday, December 10, 2024

What we learned in CS222, Fall 2024 edition

My students are currently typing away, writing their responses to the final exam questions for CS222. As per tradition, the first step was to set a 20-minute timer and ask them to list off anything they learned this semester that was related to the course. This was an enthusiastic group with hardly a quiet moment: they listed 130 items in 20 minutes. I gave them each six votes, and these were the top six:

  • TDD (9 votes)
  • SRP (8 votes)
  • Code cleanliness (6 votes)
  • DRY (6 votes)
  • Git (6 votes)
  • GitHub (6 votes)

Here are all the items they listed, together with the number of votes each earned, if any. There are some items here that point to interesting stories of personal growth. It was really a fun group of students to work with, even though several of them exhibited some behaviors I still cannot quite explain, such as a failure to take advantage of assignment resubmission opportunities.

  • Flutter (1)
  • Code cleanliness (6)
  • TDD (9)
  • A new sense of pain
  • How to set up Flutter (1)
  • DRY (6)
  • SRP (8)
  • Mob programming (2)
  • Pair programming (1)
  • Git (6)
  • Version control (2)
  • Future builder
  • Setting up your environment
  • Asynchronous programming (1)
  • UI design (3)
  • GitHub (6)
  • Code review (1)
  • Defensive programming
  • Working with APIs (1)
  • Model-View Layers (2)
  • Teamwork (4)
  • Better testing (1)
  • What "testing" is (2)
  • Explaining code with code instead of with comments (1)
  • Understandable and readable code
  • Agile development (1)
  • Naming conventions
  • Functional vs Nonfunctional Requirements
  • User stories (2)
  • Paper prototyping
  • CRC Cards
  • User acceptance testing
  • Programming paradigms
  • How to write a post-mortem
  • Resume writing
  • Knowing when something is done (3)
  • Debugger (1)
  • Time management (3)
  • Using breakpoints
  • Test coverage (1)
  • Modularization
  • Distribution of work (1)
  • Communication skills (1)
  • Discord
  • Dart
  • commits on git
  • pull using git
  • Flutter doctor
  • pub get
  • Configuring the dart SDK
  • Rolling back commits
  • Checking out commits
  • Going to office hours early
  • Commit conventions
  • CLI tools
  • Don't use strings for everything
  • Structuring essays
  • Enumerated types
  • Sealed classes
  • Better note-taking
  • Humans are creatures of habit
  • Parse JSON data
  • JSON
  • Refactoring (5)
  • How often Wikipedia pages change
  • Data tables
  • OOP (2)
  • URL vs URI
  • One wrong letter can lead to the program not working
  • How data are handled in memory
  • FIXME comments (1)
  • Widgets
  • State management
  • Encapsulation (1)
  • Abstraction (2)
  • Presenting projects
  • Coming up with project ideas
  • Reflection (2)
  • pubspec management
  • .env files
  • Hiding files from GitHub
  • Serializing JSON
  • Personal strengths & weaknesses
  • Falling behind sucks
  • Software craftsmanship
  • Work fewer jobs
  • Finding internships
  • Remember to email about accommodations
  • Accepting criticism on resubmissions (1)
  • Procedural programming
  • You don't have to take three finals on one day
  • Painting miniatures
  • GitHub has a comic book
  • Being flexible
  • Dead code
  • Holding each other to standards
  • Bad and good comments
  • Aliasing
  • Reading a textbook thoroughly
  • Rereading
  • No nested loops (no multiple levels of abstraction)
  • Using classes is not the same as OOP (1)
  • SMART
  • A bit about the Gestwicki family
  • Places to eat in NY
  • Getting ink to the front of an Expo marker
  • How to clean a whiteboard properly
  • New York Politics
  • Data structures vs DTOs vs Objects (1)
  • Conditions of satisfaction
  • Setting up ShowAlertDialog
  • Handling network errors
  • Handling exceptions
  • Build context warnings
  • CORS errors
  • Semantic versioning
  • Dealing with Flutter error reporting
  • Test isolation (1)
  • Don't make multiple network calls when testing
  • Improving test speed
  • Always run all the tests
  • You can test a UI
  • Writing 'expect' statements
  • Running tests on commit
  • Autoformatting in Android Studio
  • Testing in clean environments
  • Creating dart files
  • Hard vs soft warnings
  • Functioning on 0-3 hours of sleep
  • Configuring git committer names

Top Five Videogames of 2024

Over on the Indiana Gamedevs Discord, one of the organizers encouraged members to share their Top 5 (or Top 10) games of 2024. I am fascinated by the fact that most of the other developers' top games are things I have never heard of. A friend pointed out that games were becoming like music, where each person has an individual taste that might be completely unknown to someone else. Trampoline Tales put their favorites on their blog, and I figured I'd go ahead and do the same.

It may be obvious, but these are video games. I don't pay much attention to how many or what kind of video games I play during the year except occasionally to wince at the hours spent on a particularly catchy title. For tabletop games, I log my plays on Board Game Geek and RPG Geek, which makes it easy to collect the data I need to write my annual retrospective. For this reflection on video games, I was pleased to see that Steam makes it easy to see which games I played by month over the past year. GOG's website and my Epic account show games in order of activity. All these data sets are somewhat polluted by a combination of judging for the IGF and acquiring (but not playing) freebies from Prime or Epic.

I ended up with seven games that were contenders for my favorite five of the year, but the ones I've chosen to list below really stood out from the others. These were not the only games I played, and in fact, they were not even the games I played most. There are some games I played this year that I found deeply disappointing, but I will probably keep those as internalized design lessons rather than writing a separate post about them.

Here are the five I listed for my fellow Indiana gamedevs, along with links and a very short blurb about them. 

  1. Dave the Diver
    I didn't know much about this game except that it was popular. I found the whole experience to be delightful.
  2. Tactical Breach Wizards
    Turn-based strategy, defenestration, and magic. One of the characters had an ability that I still think about, something I've never seen in a game before that is beautiful, elegant, thematic, and hilarious.
  3. SKALD: Against the Black Priory
    This is a wonderful homage to classic CRPG gameplay with just enough modern twists to feel fresh.
  4. Balatro
    This is a great example of a simple idea taken to a logical and beautiful end.
  5. SteamWorld Heist II
    A sequel to one of the most interesting takes on the turn-based tactics genre, combining a 2D camera and platform elements with robots and firearms. Fun battles and rewarding power escalation.