Thursday, December 12, 2024

Reflecting on CS315, Fall 2024 Edition

As described in my course revision post in June, the overall structure of CS315 Game Programming was unchanged from previous semesters: half the semester was spent on weekly projects designed to build skills and confidence, and half the semester was spent on larger projects. 

The most significant change was in how those weekly assignments were evaluated. For the past several years, I have used checklist-based evaluation, but I was hoping to find a fix for the problem of students doing the checklists wrong, which takes something simple and makes it into more work for me than a point-based rubric would be. Unfortunately, the strategy I used did not make things any simpler. Instead of checklists, I gave students a list of the criteria that needed to be met in order for work to be satisfactory. Their work was then assessed as Satisfactory, Needs Minor Revision (fix within 48 hours), or New Attempt Required. New attempts could be made at the rate of one per week, as I've done for years in most of my non-studio courses. I ran into a bit of the same problem I wrote about yesterday, where Canvas' "Complete/Incomplete" assessment combined with no-credit assignments leads to a bad user experience, but that was not among the dominant frustrations. Those frustrations were two: students not submitting satisfactory work, and students not submitting work at all.

The first of those is the more disconcerting. As with checklist-based grading, I gave the students the precise criteria on which a submission would be graded. All they had to do was meet those criteria, and most of them did. Sometimes it took minor revisions or a new attempt or two, but these were no big deal: handling and correcting misconceptions is exactly what the system is supposed to do. The real problem came from students who submitted work that was wrong multiple times after I had told them what was wrong. In a strict reading of the evaluation scheme, this means the work was still simply unsatisfactory, whereas in other schemes (including checklist-based grading) they might have gotten a D or C for it. I am still torn on this issue: was the system unfair to students of lower ability, or was it the only fair thing to do for them? Put another way, is it better to give a student a C when they still have serious misunderstandings, or is it better to tell them clearly that they should not advance until they understand? I don't interpret any of the criteria I gave as strictly "A"-level. That is, it did not require excellence to meet those criteria. What it required was rigor.

The other problem, of students not resubmitting work that needed to be resubmitted, seems unrelated to the evaluation scheme chosen. Speaking with professors across campus and institutions, this seems to be part of a generational wave of challenges. I have a few hypotheses about root causes, but the point of this blog post is not to opine on that topic.

Some of my early-semester assignments take the form of multi-week projects. For example, one set of assignments involves creating an Angry Birds clone. It is submitted as a series of three assignments of increasing complexity, scaffolded so that someone who has never made a game before can follow along. I had a student in the class this semester who fell behind, and he wondered if he could just submit the final iteration of that three-week project as long as it showed mastery of each week's content. I ended up declining the request. One of my reasons is that the assignments double as a sort of participation credit. It makes me wonder, though, whether it's worth separating these things. For example, something I've done in other courses in the past is make the final iteration's grade supersede earlier ones if it is higher.

This was the first semester that a colleague offered a different section of CS315 during the same semester. Looking at his students' games, as well as some recent conversations in the game production studio, made me realize that I should probably emphasize the build process more in my section. Rather than simply running their games in the editor, I should ensure that they know how to create an executable or a web build. It's an important skill that's easy to miss, and there's a lot to be learned by seeing the differences between running in the editor and outside of it.

Now that we've grown the number of games-related faculty in my department, there's a chance I may not teach game programming again until 2026. I expect I will come back to these notes around that time. The biggest pedagogic design question I will need to consider is whether to return to checklist-based grading (with its concomitant frustrations) or move to something else, like a simple point distribution. 

Wednesday, December 11, 2024

Reflecting on CS222, Fall 2024 Edition

I had a little break from teaching CS222 last semester as I wrapped up work on STEM Career Paths. I have not blogged much about that project, but you can read all about it in my 2024 Meaningful Play paper, which I understand will be published soon. In any case, here I want to capture a few of the highlights and setbacks from the Fall 2024 class, and I promise, I'm trying not to rant about Canvas more than I have to.

Regular readers may recall that I tried a different evaluation scheme this semester, which I wrote about back in July. In September, I wrote a detailed post about some of my initial frustrations with the system as well as a shorter one about how I felt my attention being pecked away. I don't want to bury the lede, so I'll just mention here that to compute final grades, I went back to my 2022 approach, the tried and true, the elegant and clean system that I learned from Bill Rapaport at UB: triage grading. Between my failed experiment this semester and the similarly failed EMRF experiment from last year or so, I feel like I'm looking for a silver bullet that doesn't exist. It reinforces to me, yet again, that I should really be running some kind of workshops for local people here to learn about what makes triage grading superior.

I still want to track some of the specific problems of the semester, though, so that readers (including my future self) won't walk into them. First, I tried to set up a simple labeling system in Canvas so that I could mark work as being satisfactory, needing a minor revision, or needing a new attempt. I made no headway here, in part because of Canvas' intolerable insistence that courses are made up of points. I talked with a respected colleague, who is willing to toil over Canvas more than I am, about his approach: he encodes this information into orders of magnitude, something like 10 points for satisfactory, 1 point for needing minor revisions, and 0.1 points for a new attempt required. Combined, these give students a weird mix of numeric and symbolic feedback. He acknowledged that it isn't perfect.
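For what it's worth, his encoding can be unpacked mechanically. Here is a minimal sketch in Python of how a single Canvas total would decode into the three marks; the function and category names are my own invention, and the scheme breaks down if a student ever accrues ten or more of any one mark.

```python
# Sketch of the colleague's order-of-magnitude encoding (my reconstruction):
# each Satisfactory is worth 10 points, each Needs Minor Revision 1 point,
# and each New Attempt Required 0.1 points. All names here are invented.
def decode_total(total):
    tenths = round(total * 10)  # work in tenths to dodge floating-point drift
    return {
        "satisfactory": tenths // 100,
        "minor_revision": (tenths % 100) // 10,
        "new_attempt": tenths % 10,
    }

# A total of 32.1 decodes to three satisfactory submissions,
# two needing minor revision, and one needing a new attempt.
print(decode_total(32.1))
```

The obvious fragility, that ten marks in one category carry into the next order of magnitude, is presumably part of why he called it imperfect.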

What I tried to do instead was to use Canvas' built-in support for grading as "complete/incomplete." Because that was all I cared about, I set the assignments to be worth zero points. When I used SpeedGrader, sure enough, the work was labeled properly. It wasn't until midsemester that I downloaded all the grades as a spreadsheet and saw that it only gave me the zero points. That is, whether the work was complete or incomplete was stripped from the exported data set. There wasn't so much data that I couldn't eyeball it to give students midsemester grades, which was facilitated by my recent transition to only giving A, C, or D midsemester grades (which are epistemologically vacuous anyway). 

It wasn't until weeks later that it dawned on me that my students almost certainly had the same problem: Canvas was showing them zeroes instead of statuses. Of course, all my policies for the course were laid out in the course plan, and I do not have any qualms about considering those to be the responsibility of my students. However, when the university's mandated "learning management system" actively disrupts their ability to think about the course, it becomes more of a shared responsibility. About two weeks ago, I went in and re-graded all of the work to use triage grading instead, which allowed me to distinguish not only between complete and incomplete, but also between things that were submitted-but-incorrect and things that were not even attempted.

One positive change that I made this semester was counting achievements as regular assignments. This made processing them simpler for me, and I suspect it made thinking about them easier for the students too. While they have a different shape than the other assignments, they are "assigned" in the sense that I expect people to do them to demonstrate knowledge. I also set specific deadlines for them, spaced out through the semester. This reduced stress for the students by providing clear guidelines, since they could still miss a deadline and resubmit later under the usual one-resubmission-per-week policy. It also helped me communicate that the intention behind the achievements is to give students a little side quest during the project-oriented portion of the course.

I had a really fun group of students this semester, as I mentioned in yesterday's post. There were still some mysteries around participation, though. I had several students withdraw a few weeks into the semester without ever having talked to me. It is not clear to me if they decided the course was not for them or if they were simply scared. By contrast, I know I had at least one student who was likewise scared early on, but who stuck with it, and ended up learning a lot. It is not clear to me if there is more I can do to help the timid students lean toward that mindset. Also, despite excellent in-meeting participation, I had many students who just didn't do a lot of the assigned work. I have some glimmers of insight here, but it still puzzles me: how many times do I need to say, "Remember to resubmit incomplete work?" I hope that some of the simplifications I have made to the course will help streamline students' imagination about it, but more than that, I am thinking about the role of the creative imagination. I am sure that a lot of students come into this required sophomore-level class without a good sense of what it means to study, to work, or to learn. My friends in the Biology department recently took their required senior-level professionalism course, in which students do things like make resumes, and made it a sophomore-level course. I wonder if we can do something similar to help the many students we have who are not well formed.

Tuesday, December 10, 2024

What we learned in CS222, Fall 2024 edition

My students are currently typing away, writing their responses to the final exam questions for CS222. As per tradition, the first step was to set a 20-minute timer and ask them to list off anything they learned this semester that was related to the course. This was an enthusiastic group with hardly a quiet moment: they listed 130 items in 20 minutes. I gave them each six votes, and these were the top six:

  • TDD (9 votes)
  • SRP (8 votes)
  • Code cleanliness (6 votes)
  • DRY (6 votes)
  • Git (6 votes)
  • GitHub (6 votes)
Here are all the items they listed, together with the number of votes each earned, if any. There are some items here that point to interesting stories of personal growth. It was really a fun group of students to work with, even though several of them exhibited some behaviors I still cannot quite explain, such as a failure to take advantage of assignment resubmission opportunities.
  • Flutter (1)
  • Code cleanliness (6)
  • TDD (9)
  • A new sense of pain
  • How to set up Flutter (1)
  • DRY (6)
  • SRP (8)
  • Mob programming (2)
  • Pair programming (1)
  • Git (6)
  • Version control (2)
  • Future builder
  • Setting up your environment
  • Asynchronous programming (1)
  • UI design (3)
  • GitHub (6)
  • Code review (1)
  • Defensive programming
  • Working with APIs (1)
  • Model-View Layers (2)
  • Teamwork (4)
  • Better testing (1)
  • What "testing" is (2)
  • Explaining code with code instead of with comments (1)
  • Understandable and readable code
  • Agile development (1)
  • Naming conventions
  • Functional vs Nonfunctional Requirements
  • User stories (2)
  • Paper prototyping
  • CRC Cards
  • User acceptance testing
  • Programming paradigms
  • How to write a post-mortem
  • Resume writing
  • Knowing when something is done (3)
  • Debugger (1)
  • Time management (3)
  • Using breakpoints
  • Test coverage (1)
  • Modularization
  • Distribution of work (1)
  • Communication skills (1)
  • Discord
  • Dart
  • commits on git
  • pull using git
  • Flutter doctor
  • pub get
  • Configuring the dart SDK
  • Rolling back commits
  • Checking out commits
  • Going to office hours early
  • Commit conventions
  • CLI tools
  • Don't use strings for everything
  • Structuring essays
  • Enumerated types
  • Sealed classes
  • Better note-taking
  • Humans are creatures of habit
  • Parse JSON data
  • JSON
  • Refactoring (5)
  • How often wikipedia pages change
  • Data tables
  • OOP (2)
  • URL vs URI
  • One wrong letter can lead to the program not working
  • How data are handled in memory
  • FIXME comments (1)
  • Widgets
  • State management
  • Encapsulation (1)
  • Abstraction (2)
  • Presenting projects
  • Coming up with project ideas
  • Reflection (2)
  • pubspec management
  • .env files
  • Hiding files from GitHub
  • Serializing JSON
  • Personal strengths & weaknesses
  • Falling behind sucks
  • Software craftsmanship
  • Work fewer jobs
  • Finding internships
  • Remember to email about accommodations
  • Accepting criticism on resubmissions (1)
  • Procedural programming
  • You don't have to take three finals on one day
  • Painting miniatures
  • GitHub has a comic book
  • Being flexible
  • Dead code
  • Holding each other to standards
  • Bad and good comments
  • Aliasing
  • Reading a textbook thoroughly
  • Rereading
  • No nested loops (no multiple levels of abstraction)
  • Using classes is not the same as OOP (1)
  • SMART
  • A bit about the Gestwicki family
  • Places to eat in NY
  • Getting ink to the front of an Expo marker
  • How to clean a whiteboard properly
  • New York Politics
  • Data structures vs DTOs vs Objects (1)
  • Conditions of satisfaction
  • Setting up ShowAlertDialog
  • Handling network errors
  • Handling exceptions
  • Build context warnings
  • CORS errors
  • Semantic versioning
  • Dealing with Flutter error reporting
  • Test isolation (1)
  • Don't make multiple network calls when testing
  • Improving test speed
  • Always run all the tests
  • You can test a UI
  • Writing 'expect' statements
  • Running tests on commit
  • Autoformatting in Android Studio
  • Testing in clean environments
  • Creating dart files
  • Hard vs soft warnings
  • Functioning on 0-3 hours of sleep
  • Configuring git committer names

Top Five Videogames of 2024

Over on the Indiana Gamedevs Discord, one of the organizers encouraged members to share their Top 5 (or Top 10) games of 2024. I am fascinated by the fact that most of the other developers' top games are things I have never heard of. A friend pointed out that games were becoming like music, where each person has an individual taste that might be completely unknown to someone else. Trampoline Tales put their favorites on their blog, and I figured I'd go ahead and do the same.

It may be obvious, but these are video games. I don't pay much attention to how many or what kind of video games I play during the year except occasionally to wince at the hours spent on a particularly catchy title. For tabletop games, I log my plays on Board Game Geek and RPG Geek, which makes it easy to collect the data I need to write my annual retrospective. For this reflection on video games, I was pleased to see that Steam makes it easy to see which games I played by month over the past year. GOG's website and my Epic account show games in order of activity. All these data sets are somewhat polluted by a combination of judging for the IGF and acquiring (but not playing) freebies from Prime or Epic.

I ended up with seven games that were contenders for my favorite five of the year, but the ones I've chosen to list below really stood out from the others. These were not the only games I played, and in fact, they were not even the games I played most. There are some games I played this year that I found deeply disappointing, but I will probably keep those as internalized design lessons rather than writing a separate post about them.

Here are the five I listed for my fellow Indiana gamedevs, along with links and a very short blurb about them. 

  1. Dave the Diver
    I didn't know much about this game except that it was popular. I found the whole experience to be delightful.
  2. Tactical Breach Wizards
    Turn-based strategy, defenestration, and magic. One of the characters had an ability that I still think about, something I've never seen in a game before that is beautiful, elegant, thematic, and hilarious.
  3. SKALD: Against the Black Priory
    This is a wonderful homage to classic CRPG gameplay with just enough modern twists to feel fresh.
  4. Balatro
    This is a great example of a simple idea taken to a logical and beautiful end.
  5. SteamWorld Heist II
    A sequel to one of the most interesting takes on the turn-based tactics genre, combining a 2D camera and platform elements with robots and firearms. Fun battles and rewarding power escalation.

Tuesday, November 26, 2024

Bloom's Taxonomy, Teaching, and LLMs

Recent discussions of LLMs in the classroom have me reflecting on Bloom's Taxonomy of the Cognitive Domain. Here's a nice visual summary of its revised version.

Bloom's Taxonomy of the Cognitive Domain
(By Tidema - Own work, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=152872571)

Bloom's Taxonomy, as it is called, is a standard reference model among teachers. The idea behind it is that a learner starts from the bottom and works their way upward. As far as I know, it has not been empirically validated: it's more of a thought piece than science. This is reflected in the many, many variations I've seen in the poster sessions of games conferences, where some young scholar proposes a play-based inversion that moves some piece into a different position on the trajectory. All that is to say, take it with a grain of salt. The fact remains that this model has had arguably outsized influence on the teaching profession. (Incidentally, I prefer the SOLO taxonomy.)

There's been a constant refrain the past few decades among a significant number of educators and pundits that technology has made the remember stage obsolete. Why memorize this table of values when I can look them up? Why remember how this word is spelled? Spellcheck will fix it for me. My skepticism of this claim has only increased as I have worked with more and more students who use digital technology as a crutch rather than as a precision instrument.

LLM-generated code comes up in almost every conversation I have among teachers and practitioners in software development. There are ongoing studies into the short- and long-term implications of using these tools. My observations are more anecdotal, but it's no exaggeration to say that every professional developer and almost every educator has landed in the same place: LLMs can generate useful code, but knowing what to do with it requires prior knowledge. That is, the errors within the LLM-generated code are often subtle and require knowledge of both software engineering and the problem domain. 

From the perspective of Bloom's taxonomy, a developer with a code-generating LLM is evaluating its output. They come to their evaluation by building upon the richness of cognitive domain skills that undergird it. At the very fundamental level, they bring to bear a vast amount of facts about the praxis of software development that they have remembered and understood.

If Bloom is right, then among the worst things we could do in software development education is throw students at LLMs before they have the capacity for viable evaluation. Indeed, before LLMs, the discussion around the water cooler was often about how to stop students from just searching Stack Overflow for answers and submitting those. Before Stack Overflow, it was that students were searching the web for definitions rather than remembering them. My hypothesis for learning software development then is something like this:

  • Google search eliminates the affordance for learning to remember.
  • Stack Overflow eliminates the affordance for learning to understand.
  • LLMs eliminate the affordance for learning to apply.
This hypothesis frames the quip that I share when an interlocutor discovers that I am a professor and, inevitably, asks what I think about students using ChatGPT. My answer is that I'm considering banning spellcheck.

Monday, November 25, 2024

Walking away from a November game project: A reflection on NaGaDeMon 2024, Dart, Flutter, and Bloc

I would hate to make this a tradition, but it seems that I once again entered NaGaDeMon. National Game Design Month (NaGaDeMon) is November, and for several years, I created interesting little projects during the month. Last year, I was not able to pull a project together, and I'm afraid that's the case this year as well. However, I was able to learn a bit through the attempt, so I want to capture some of it here before it slips away.

Before November, I had been tinkering with an intersection of ideas related to posts in the last few months: interactive narrative games like my The Endless Storm of Dagger Mountain, which drew from the Powered by the Apocalypse tabletop RPG space, built around some concepts from Blades in the Dark and Scum & Villainy. I figured that, for November, I would try building a very small slice of the idea. For various reasons, I also wanted to try building and releasing a game using Dart and Flutter. I dug in and started making reasonable progress for a side project.

A few days into November, John Harper released Deep Cuts, a campaign and rules expansion for Blades in the Dark. I bought a copy and was quite surprised by the rules changes. I had expected little tweaks and balancing maneuvers, but Deep Cuts actually provides a complete overhaul of Blades' most fundamental action resolution system. This was too cool not to play with, so I reworked my planned NaGaDeMon project, essentially starting from scratch to support some of the Deep Cuts ideas.

By last week, I had gotten a very small version of the game working, letting the player experience a single, badly written game scene. The user interface was just awful, so bringing the game together would have required adding a ton of content as well as a complete player experience design and implementation. Both would be tedious efforts, especially the latter, since I am not very fast with Flutter UI development. Part of the inspiration for choosing Flutter was to gain more practice with engaging UIs.

About two weeks ago, the work of one of my committees exploded into taking most of my unassigned work hours, and this was not altogether unexpected. We also just got the good news that we will be hosting family for several days around Thanksgiving. This will be wonderful, although it also means these won't be hobby-project days. The result is that I've decided to put this project to rest. I did learn quite a bit going this far into the project, and that is the topic for the remainder of this post.

First of all, the obvious lesson is that if I wanted to really focus on learning to make a top-notch interactive Flutter UI, I should have chosen something with zero other design risks. I knew that the best I could do in one month was to make something just functional, yet I am not sure I was honest with myself about how ugly that would likely end up. Maybe I will find a game jam that will let me get a better handle on combining turn-based game timing with implicit animations.

Prior to November, I had been tinkering with some of these design inspirations in Godot Engine, which is of course the engine I used to build The Endless Storm of Dagger Mountain. I was using a rather conventional mutable-state object-oriented architecture. I found myself frequently frustrated by the lack of good refactoring tools for GDScript. This is a significant hindrance to evolving an appropriate design. This is part of what made me switch over to Dart, which is a joy to work with in part because of the excellent tooling support from Android Studio. 

A few summers ago, I spent a great deal of time studying Filip Hracek's egamebook repository. Nothing shippable came out of my efforts—I don't think I ever even blogged about it—but I did learn a lot. I was struck by how Hracek separated the layers of his architecture, and it was the first time I spent a lot of time in a game that used immutable data models. At the time, I had looked into the Bloc architecture and struggled to make sense out of it.

Approaching this November's project, I decided to dig deeper into Bloc. I spent a lot of time with the official tutorials, puzzling over the seemingly simple architecture diagram in its documentation.
The simple tutorials are simple, which is convenient, but the more robust ones separate the "data" component into a data provider and repository. It seemed clear that the game state could be conceived of as data, but I struggled to conceptualize where the game rules should live. The game rules can be considered part of the domain model, and as such, should be separated from the bloc. This would mean that a response from the domain model may be the modified game state, which then is echoed back through the bloc to the UI with a bloc state change. However, it's also reasonable to conceive of the game state itself as the data layer and the "business logic" as being the transformations of that state. Indeed, this seems to be the difference between the simple and more complex tutorials: the simple ones deal with simple in-memory state, and the more complex ones draw data from different sources and transform them in the data layer. 

Of course, there is no silver bullet. Given the tight time constraints on the project, I simply considered the immutable game state to be my data layer, and I put the game logic in a bloc. I also simply passed the game state along to the UI, but in a more robust solution, I would have had clearer separation between layers. Including a dependency between the UI and the data layers was a matter of expedience and the intentional incurring of technical debt.
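To make the shape of that decision concrete, here is a minimal sketch of the approach, written in Python rather than Dart and with invented names: the frozen game state is the data layer, and the bloc-style handler maps an event and the current state to a brand-new state, never mutating the old one.

```python
from dataclasses import dataclass, replace

# Invented, minimal stand-ins for my actual game types.
@dataclass(frozen=True)  # frozen makes the state immutable
class GameState:
    stress: int
    phase: str  # e.g. "choosing_action", "resolving"

def handle_event(state, event):
    """Bloc-style transformation: map (state, event) to a new state."""
    if event == "push_yourself":
        # Trade stress for dice, as in Blades in the Dark.
        return replace(state, stress=state.stress + 2)
    if event == "roll":
        return replace(state, phase="resolving")
    return state  # unknown events leave the state untouched

s0 = GameState(stress=0, phase="choosing_action")
s1 = handle_event(s0, "push_yourself")
print(s0.stress, s1.stress)  # the original state is unchanged
```

In the actual project, the UI subscribed to the stream of these states; the expedient shortcut was letting the UI read the game state directly rather than through a presentation-layer translation.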

My first pass at the implementation had me writing my game states and bloc states by hand. The Equatable package meant that I didn't have to fret over writing the boilerplate that's necessary for state comparisons, and it was easy to integrate into Android Studio using Felix Angelov's Bloc plugin. When scouring the Web for help with Bloc, one quickly comes across discussions of Freezed, a library that is also integrated into Angelov's plugin. I had tinkered with Freezed in my egamebook-inspired explorations, but I have not shipped anything that uses it. Having built up my understanding of Bloc using Equatable, Freezed was an obvious next step. Next time, I would jump right into using it for cases like this.
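The boilerplate in question is value equality: a bloc compares each new state against the previous one to decide whether listeners need to be notified, so two states with the same fields must compare equal. As a rough illustration in Python rather than Dart, a frozen dataclass generates the same kind of field-based equality that Equatable derives from its props list:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # generates __eq__ and __hash__ from the fields,
class RollState:         # much as Equatable derives them from its props
    dice: int
    position: str

a = RollState(dice=2, position="risky")
b = RollState(dice=2, position="risky")
print(a == b)  # True: equal by value, so a bloc would skip re-emitting
print(a is b)  # False: they are still distinct objects
```

Without that generated equality, each hand-written state class needs its own comparison code, which is exactly the tedium Equatable (and, further along, Freezed) removes.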

Writing a functional Flutter user interface was straightforward using BlocBuilder. I found this to be a convenient way to conceptualize the game, especially since it had very clear states. For example, in my original explorations (before Deep Cuts), I had the player choosing an action from a list, then customizing the action with various options from Blades in the Dark, such as pushing yourself to trade stress for dice. After rolling the dice, the player is in a different state of the game, in which they are responding to the result, such as by resisting its consequences. This was elegant to express in the code, and I am confident that with enough effort, I could make a compelling user experience out of it. By contrast, Dagger Mountain used an architecture inspired by MVP, one that depended too heavily on the undocumented, unenforceable behavior of coroutines. Both of these are "only jam projects," but they are helping me conceive of how I would approach something more significant in this problem domain. The aforementioned coroutines were my solution to synchronizing the model and view states (for example, finishing an animation before continuing to the next step of the narrative); I'm fairly certain I understand how I can do that with bloc's events and states, but since the November project will remain unfinished, there is risk.
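As an illustration of why those clear phases mapped so cleanly onto BlocBuilder, here is the essential shape in Python rather than Dart, with invented screen names standing in for my Flutter widgets: each distinct game phase selects one screen, and nothing else about the UI needs to know how the phase came about.

```python
from enum import Enum, auto

# Invented phases, mirroring the pre-Deep Cuts flow described above.
class Phase(Enum):
    CHOOSING_ACTION = auto()  # pick an action from the list
    CUSTOMIZING = auto()      # e.g. push yourself to trade stress for dice
    RESOLVING = auto()        # respond to the roll, resist consequences

# Stand-in for BlocBuilder's state-to-widget mapping.
def build_screen(phase):
    screens = {
        Phase.CHOOSING_ACTION: "ActionListScreen",
        Phase.CUSTOMIZING: "ActionOptionsScreen",
        Phase.RESOLVING: "ConsequencesScreen",
    }
    return screens[phase]

print(build_screen(Phase.CUSTOMIZING))
```

The coroutine-based approach in Dagger Mountain entangled this mapping with timing; driving it purely from emitted states is what I expect would make the synchronization problem tractable.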

All this exploratory coding meant that I did not follow a test-driven process. I ended up not getting into the testing libraries specifically for bloc. It's possible that this would have helped me better to conceptualize the business logic versus the domain layer, but that remains future work. 

There are still a lot of questions about the game design itself. Indeed, this entire exploration is inspired by design questions around the adaptation of Blades in the Dark tabletop gameplay into a digital experience. Citizen Sleeper is the only project I know of that has worked in this space, and it's a fantastic interpretation. I only became aware of Citizen Sleeper after I started doodling my own ideas, and it's interesting to see where they converge and where they diverge. I hope to dive back into this design space later, but for now, my attention must go toward wrapping up this semester, planning for next semester, and enjoying the upcoming Thanksgiving break.

Wednesday, November 13, 2024

What people believe you need to do to be an independent game developer

Aspiring game developers are starving for advice. I recently attended a meetup of game developers where an individual gave a formal presentation about how to become an indie. The presentation was thoughtfully crafted and well delivered, and it was entirely structured around imperatives—the things that you, the audience member, need to do if you want to be a successful independent game developer. The audience ate it up and asked for more. They were looking for the golden key that would unlock paradise.

There are two problems here, one overt and one subtle. The overt one is that there is no golden key. There is no set of practices that, if followed, will yield success. I imagine most of the audience knew this and were sifting for gold flakes. However, it was also clearly a mixed crowd, some weathered from years of experience and some fresh-faced hopefuls. I hope the latter were not misled.

The subtler problem was made manifest during the question-and-answer period, when it became clear that the speaker was not actually a successful indie game developer at all. Their singular title had been in development for three years and had just entered beta. They had no actual experience from which to determine whether the advice was reasonable. The speaker seemed to wholeheartedly believe the advice they were giving despite not being in a position to draw conclusions about its efficacy.

Once I saw the thrust of the presentation, I started taking notes about the kinds of advice the speaker was sharing. 
  •  Document everything, and specifically create:
    • Story and themes document
    • Art and design document
    • MDA document
  • Have a strong creative vision
  • Be a role model for the work environment you want
  • Consider these pro tips for hiring staff:
    • Use a report card to score your candidates
    • Look for ways to get to know what it would be like to work with them
    • Try collaborating with them as part of the interview
    • Always have a back-up candidate, not a top candidate but someone you know you could work with
    • Being their best friend does not mean you should work with them
  • Thank people for their contributions and efforts
  • Use custom tools to help you work better
    • Use the Asset Store in Unity
    • Use tools to help you test
    • Automate as much as you can to save you time
    • Learn to prompt so you can use generative AI
      • It allows an artist to be a developer by removing coding barriers
      • LLMs can replace tedious use of YouTube, Google, Reddit, etc.
  • When pitching to publishers, have two versions of your slide deck:
    • pitch slides: the version you send
    • pitch presentation: the version you present
  • Take budgeting seriously
    • Budget for specific deadlines
    • Don't spend your own money if you can get money from someone else (e.g. publisher)
    • Get a job so that you can support yourself until you can get funding from someone else for the game project
      • Quoting one of their professors: "To make money, you need to spend money, and to spend money, you need money."
  • Don't get distracted by others (e.g. on social media)
These aren't the things you need to do to be an indie game developer. These are the things that an audience believed you need to do to be an indie game developer or the things that someone with a modicum of experience thought would be worth telling indie hopefuls. It seems to me that this is the advice you would get if you spent an afternoon collecting advice by searching the Internet. It's helpful for me to have a list of what people are likely to believe from consuming popular advice. Sometimes advice is popular because it is accurate; sometimes people tell you to make your game state global.

Three other things jumped out at me about the presentation. First was the unspoken assumption that one would be using Unity. There was no indication from the speaker that this was even a choice, and none of the questions reflected on it. Second, the speaker acknowledged the importance of automation and automated testing, which was great to see. Third, no one pushed back regarding the use of Copilot or other LLMs to help with coding, whereas I suspect there would have been a riot had the speaker suggested using the same tech to generate artwork. There's a study in there.