Tuesday, July 23, 2024

Mulberry Mead

It's time again for What is Paul Drinking? Today's notes are from my latest batch of mulberry mead, which I mentioned in my notes about making lattes. I think this is my second batch, with my first having been made in 2022. I found a bottle of the 2022 batch in my cabinet, and if I've made other batches, they are lost to time.

I put about a quart of mulberries into a saucepan with enough water to cover them. I cooked them a while to soften them, then gently muddled the berries, turning the water a beautiful purple. What I should have done (and what my wife recommended I do) is get as much juice out of the berries as possible and then just use that in primary. Instead, I had the idea that I wanted the whole berries in the fermentation. I put all the solids into a mesh bag and dropped it into my usual mix of three pounds of honey and D47 yeast.

Unfortunately, the bubbling action of the fermentation lifted the bag right up and out of the water. In retrospect, that is predictable. I would have needed to use some weights to keep the bag submerged. However, I only wanted to keep the fruit in the fermenter a few days, and if I submerged it, it would have to stay until racking. In short, I had made a problem for myself.

Next time, just smash the juice out of the berries and use that.

In any case, the result is a lovely color. It has a subtle berry flavor, which I understand to come from the fact that the fruit was added in primary. It's quite different from when I infuse a mead with fruit, which picks up the fruit flavor more intensely. It also came out quite dry. Sometimes that is what I want, and sometimes I add just a splash of simple syrup to the glass before drinking. That's much simpler than formal backsweetening, and there's no danger of exploding bottles from restarted fermentation.



Friday, July 12, 2024

Summer course revisions 2024: CS222 Advanced Programming

I made a few significant structural changes to CS222 for the Fall semester. The course plan has just been uploaded, so feel free to read it for the implementation details. The motivation for all the changes was the same: reduce friction. The course has always had a lot of stuff going on in it, and students seem less able to manage this than they could in the past. For example, it used to be that I could explain triage grading such that most of the students understood it, but as students become more brainwashed into the LMS way of running a class, they become less able to conceive of alternatives.

I decided to use the same grading scheme in this class as I am trying in Game Programming. Each assignment will be graded on the scale Successful, Minor Revisions Needed, New Attempt Required, or Incomplete, following Bowman's case study. The EMRF approach that I tried last year did not work, and I am hopeful that this alternative alternative will patch some of the leaks. I considered breaking down the CS222 assignments into individual goals, as Bowman does in his math courses and as I have done in Game Programming, but I found it to be unnecessarily complicated to do so. Instead, I have taken each day's work and consolidated it into a single assignment with multiple graded parts. I hope that this, too, simplifies the students' experiences.

I am still using achievements, but I have changed how they are assigned and assessed. For many years, I have had an open submission policy, where students can complete achievements at any time, and their final grade is based on the quantity and quality submitted. This gave students one more thing to manage, and it was something that could not easily be represented in Canvas. My wishing that students didn't delegate or subjugate their planning to Canvas won't change the fact that they do. Hence, I'm just asking students to do three achievements during the semester. It will be like choosing an assignment from a menu. Since they are otherwise a normal kind of assignment, I don't need special policies for resubmission, either. Maintaining this parallel structure between achievements and assignments also made me remove the star system evaluations. Previously, students could claim one star through self-evaluation, two through peer evaluation, and three through expert evaluation. I love the idea of having students review each other's work in this way, but since I don't have this kind of peer evaluation on other assignments, I have removed it here in the name of streamlining.

From the beginnings of CS222, I have used Google Docs to manage submissions so that students can see and comment on each other's work. I used to spend time in class doing more peer review in this way, but this got cut as new "content" was added to the course. Google Docs stayed as a convenient way for me to see student work and especially for students to do the peer reviews required for achievements. Taking those away means there's no good reason to make students go through the process of submitting through Google Docs. As students' general computing literacy has declined, I have had more and more trouble with students understanding how to use Google Docs and the browser according to the instructions. Now there's nothing besides tradition to keep it, so out goes Google Docs.

I still want to keep my course plans online and publicly available rather than having them stashed away on Canvas. However, my old approach to managing the course site as an SPA made it impossible for me to link directly to specific parts of a document. Somewhere between .htaccess configurations and shadow DOM, I simply could not make it work. This was especially frustrating since this is so simple in vanilla HTML: just link to a named anchor. With the change in how I am assigning and evaluating work, I decided it was time to make this work. I have spent about two work days fighting with web development and finally ended up with the solution you can find on the course plan. I have kept lit-html and Web Components because of the powerful automation tools they provide: I can define the data of an achievement, for example, and use JavaScript and HTML templates to generate the code that displays it. I have stopped using the open-wc generators and npm. I looked into trying to use the open-wc generator and rollup without the SPA configuration, but it turns out that the instructions for doing this are not up to date: they produce a conflicting dependency error. Hence, I just went with a simple deployment solution that copies my source files and a minified JS lit-html library to the web server. Even though I already wrote about my frustrations with maintaining my Game Programming site, and how they led me to migrate the site to GitHub, I am thinking about revisiting that decision based on the work I've done to get the CS222 page working properly.
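For the curious, the general idea of generating display markup from achievement data can be sketched with plain JavaScript template literals. The real site uses lit-html templates, and the achievement fields, ids, and page name below are made up for illustration:

```javascript
// Hypothetical achievement data; the real course site defines its own fields.
const achievements = [
  { id: "clean-code", title: "Clean Code", description: "Refactor a past project." },
  { id: "pair-programming", title: "Pair Programming", description: "Complete a pairing session." },
];

// Generate one section per achievement. Each section gets an id so that a
// plain named-anchor link works without any SPA routing.
function renderAchievement(a) {
  return `<section id="${a.id}">
  <h3>${a.title}</h3>
  <p>${a.description}</p>
</section>`;
}

const html = achievements.map(renderAchievement).join("\n");
```

With markup like this in ordinary served HTML, a link such as `plan.html#clean-code` (a hypothetical URL) scrolls straight to the achievement, which is exactly the vanilla-HTML behavior that the shadow DOM made so hard to reproduce.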

Friday, July 5, 2024

O teach me

I recently read Romeo & Juliet for the first time since the early 1990s. I was struck by this particular line by Romeo in Act 1, Scene 1:

    O teach me how I should forget to think

At the time, he is smitten by unrequited love, yearning for a woman who he has convinced himself will bring him complete joy. It inspired me to make this.

UPDATE: A friend's commentary on this idea was too brilliant not to pursue, so here are a few more for the album. Maybe I will make up some T-shirts next semester.



I wonder if one even needs Benny on there? It would probably work just as well without. Here's a set you can use for your own satirical ends.





Wednesday, July 3, 2024

Thoughts on Koenitz's "Understanding Interactive Digital Narrative"

Hartmut Koenitz's latest book, Understanding Interactive Digital Narrative, quickly establishes itself as being postmodern and political. I am grateful for his overt framing, although significant portions of the book contradict the tenets of moral relativism and subjective truth, as I will discuss below. Whether populism truly is a "cancer" that only leads to violence and trouble does not come up again in the text beyond the introduction.

A primary contribution of the book is Koenitz's System-Process-Product (SPP) model for analyzing interactive digital narrative (IDN). SPP recognizes that the system is created by the developers, that it is reified through a process of user interaction, and that this results in a product, which can be the discourse about the experience or a recording thereof. SPP includes a "triple hermeneutic" that users bring to the experience: the interpretation of possibilities for interaction, the interpretation of the instantiated narrative, and reflection on prior traversals, drawing on memory of them. The explanation of SPP draws explicitly on familiar concepts from object-oriented software development, describing how IDN systems are instantiated through interaction as being like how objects are instantiated from classes. I remain surprised that this model would be considered revolutionary since it is exactly how any reasonable game developer would think of their work: design ideas are captured in code and assets; the player interacts with the dynamic system; and as a result of that experience, players can talk about it or share their playthrough.

Koenitz brings up the cautionary tale of Microsoft's Tay, the chatbot that was taken down after only 16 hours due to its absorbing and then repeating racist content. In a book that is otherwise about the boundless potential of IDN, the author here exhorts the reader that there must be protections in place to prevent IDN from exhibiting such behavior. This reveals a significant gap in his analytical model. SPP has no affordance to talk about morals and ethics outside of a participant's or scholar's subjective interpretations. The analytical framework in the text lacks the epistemological power to claim that any player activity is ethical or unethical. The claim that some interactions are universally unethical reveals that the author is using a different interpretive lens than the one he describes.

I appreciate his lengthy treatment of the narratology vs. ludology wars and its numerous references. I transitioned into games scholarship when this conversation was cooling. Koenitz claims that the ludologists' primary mistake was narrative fundamentalism: because they believed in only one kind of narrative, they misunderstood the narratologists, who had special knowledge of the avant-garde and the multiplicity of narrative forms. I am not conversant enough in this literature to support or critique his arguments, but the unyielding insistence that the opposition has no merit leaves me wanting to hear a bit more from the other side.

The book includes a discussion of the interpretation of Bandersnatch. He explains how interactors have created different mappings of this IDN's formal structure based on their experience, pointing out that none of them are "the structure of Bandersnatch" but rather are each "an interpretation of the structure of Bandersnatch." He also claims, however, that "unless the original design documentation is released, we cannot be sure, and therefore different interpretations of the underlying structure exist." 

Two things struck me about this claim. The first is that he presumes the existence of an authoritative and correct "original design documentation." This seems like the same fallacy that desires design bibles in games and BDUF in software development. My experience is that there may be some original design documentation but that the design-as-such is only definitively manifested in the system. Anything that specifies the possible player experiences at the fidelity matching actual player experience is homologous to the system itself. (Incidentally, one of the reasons most of my projects are released as open source is to allow the curious to study the actual system and not just interpretations of it.)

Second, there is a contradiction inherent in asserting both that Bandersnatch has a structure and that all interpretations of it are valid. He states that the differences in mappings "do not mean that any of these interpretations are wrong in absolute terms, but rather that we need to be aware of their epistemological status as post-factum interpretations." How can the interpretations not be wrong and yet the difference in interpretations be contingent upon the original design documentation not being released? That is, there is an implicit acknowledgement that there is an absolute and authoritative structure, and that these interpretations are approximations of it, such that if one had the former, some of the latter could be shown to be wrong. It is possible that a commitment to postmodernism requires one to admit the viability of demonstrably-wrong structural interpretations, but if that's the argument here, it's awfully subtle. If there is a difference between someone's structural analysis of Bandersnatch and its actual structure, then that difference can be demonstrated with formal analysis or automated tests. I would call that interpretation of the structure "wrong." Aside from this engineering-design perspective, one can see the problem from the lens of constructivism: the interpretation yields a non-viable mental model because it makes incorrect observations about the world. It reminds me of Koster's point about Monopoly: you can use house rules all you want, but you have to acknowledge that you're playing a different game with the same pieces. If someone's model of Bandersnatch leads to contradictions against the actual thing, then it's either wrong or it's a model of something that doesn't exist.

Koenitz uses a cooking analogy to distinguish between the prescription of a recipe (a specification) and the description of food (a product of the experience). It's a reasonable metaphor, but here is where he also writes, "Far too long have we tried to learn how to cook from descriptions of finished meals." There is no referent for "we," but I don't consider myself included. When I decided I wanted to get better at making games, I didn't turn to descriptions of games: I turned to the writings of people who talk about why and how to make games. Indeed, it's hard to imagine how one could expect to get better at any art form by only looking at descriptions of experiences of that art form. Who is "we" then? He must mean "my community of IDN designers."

The penultimate section of the book provides advice on how to design IDNs. It is what any seasoned designer would expect, and it repeats what has been documented in countless books on video game design: specify goals, create prototypes of increasing fidelity, produce the software, and test. This is the conventional production process that has been talked about in games since at least Cerny's Method talk in 2002 and well before that in user-centered design. The approach is so well established that it allows scholars like me to interrogate it to determine where agile software development methods can improve it.

Reading Understanding Interactive Digital Narrative helped me understand both IDN and the community of IDN scholars. I learned some new ideas from it that will certainly be helpful in my thinking and writing, including narrative fundamentalism, narrative ambivalence, and the cognitive turn in narratology. I applaud Koenitz for his insistence that precise definitions of words like "story" and "narrative" are necessary and that lazy or colloquial use holds back progress. Indeed, I respect that he doesn't insist that people adopt his definitions, only that one define one's terms in order to ensure that they are understood. I believe the SPP model will provide a useful starting point for learners who wish to analyze IDNs, including games, especially those learners who don't have a background in systems design. However, for my students who want to get better at writing for games, I will continue to recommend Bateman's collection.

Tuesday, July 2, 2024

Representing character damage through loss of skills and equipment

I was surprised to come across two recent tabletop RPGs that both eschew "hit points" as a means of representing character damage: Lester Burton's Grok?! and Runehammer's Crown and Skull. Neither cites the other nor any common inspiration, which makes me think that there's an interesting games history project hiding in here: is there a common ancestor or is it convergent evolution?

In Grok?!, the player has seven resource slots that can hold items. When a character suffers duress, there are a few possible outcomes. The player may choose to create, remove, or change one of their items, or they may take a condition that uses up a resource slot. These are intended to be temporary, but if a character has no more slots, then the character is incapacitated and the condition instead becomes a permanent trait. A player may also voluntarily add conditions to their character in order to roll additional dice after failing a check. Grok?! is clearly a story-focused game, using an elegant universal resolution system that invites creativity and narration.

Crown and Skull has an intricate point-based character-creation system in which players determine a character's skills and gear. Taking damage involves crossing off skills and gear, which is temporary, and sometimes destroying gear, which is permanent. Damage is classified by whether it targets skills, equipment, or both, and it is further classified by whether it is a random target or whether players choose. Runehammer describes this as an attrition system, and it's easy to see how it invites more interesting narration than "You lose five hit points." Crown and Skull is presented as a game that the players themselves get better at, learning more about it by playing it. Part of the challenge of the game is learning to create and manage a versatile, robust, survivable character.

I have played a lot of CRPGs, but I don't remember ever seeing a system like this—one where damage is exclusively represented by the temporary or permanent loss of gear or skills. Could such a system be adapted into a satisfying video game experience, or are these formal systems too strongly coupled with the improvisational storytelling of tabletop games?

Friday, June 14, 2024

The Endless Storm of Dagger Mountain: A short adventure game that is Powered by the Apocalypse

Introduction and background

Last night, I released a new game into the wild: The Endless Storm of Dagger Mountain. I submitted it to Crossroads Jam 2024, a statewide game jam sponsored by the Indiana Gamedevs community. 

This game scratches a creative itch that I've had for over two years: what happens when you apply Apocalypse World style rules in a digital game? I've had this as a component of a few different design explorations, none of which bore any fruit—sometimes because they weren't fun and sometimes because their scope exploded. In fact, in May, I started work on a project that was growing too large, and it included PbtA elements. By the last week of May, I had put that side project to rest. I decided that I could use Crossroads Jam as an excuse to isolate just this single design idea—digital PbtA—and package it up into a jam-sized game. Readers may be interested in looking at my previous exploration of tabletop PbtA, which took the form of Kapow! The Campy Superhero Role-Playing Game. I also wrote an essay comparing the math of PbtA and d20 systems.

I was a little disappointed that the theme "severe weather" won the polling on the Indy Gamedevs Discord. Since this is the first Crossroads Jam, it seemed like a great opportunity to highlight something positive about the state. You know, like corn. More seriously, there are a lot of great things about Indiana, including globally-recognized events like the Indianapolis 500. And corn. But I digress, and others preferred "severe weather." I had been wanting to explore some pulp fantasy writing a la Robert E. Howard's Conan stories, which I read a few years ago. This presented a good opportunity: a lone, stoic hero, making a long journey up to the top of a mountain where dark magic has brought about the destruction of the innocent.

Game and Narrative Design

Most of the writing is really just a first draft. Despite years of gamedev, I have done barely any game writing. It felt good to get my hands dirty and create enough content to carry the gameplay. I estimate I spent about 15 hours just writing content for a game that takes a few minutes to play. The writing was enjoyable, but especially as I got tired, I couldn't shake the feeling that much of the text was low stakes. Each dice check has at least three paths—succeed, succeed at a cost, and fail—and I tried to keep them equally interesting. I do not spend a lot of time with interactive fiction, but I quickly ran right into the same problem that any narrative designer has: with chance or with agency comes the loss of authorial control. There are a few specific scenes in the game that I would like to spend more time on, to make the story more compelling and to evoke Howard more strongly.

I was not able to pull in all of the tabletop inspiration that I originally wanted. In tabletop games, I love the idea of the Countdown Clock or what Runehammer calls Timers. A simple timer that ticks down becomes a source of tension for the player and, in the system, it's another formal element to manipulate. In my first draft of Dagger Mountain, I had a timer that, if it ran out, would cause the game to end before the player could summit the mountain. This worked well as a penalty, especially when asking the player to choose between attribute reduction and advancing the timer. However, the game ended up being too short to make the timer meaningful. In fact, one of the problems that inspired me to think about removing the timer was the trouble of visually representing a "timer" as something with only two or three units. I could not reasonably balance it and give it a significant value: a timer with two clicks feels more like it's depleting some other resource rather than feeling like time.

The other common PbtA element that I didn't add was what Apocalypse World calls "reading a sitch." The idea here is that a player can spend an action trying to understand a situation, where success or partial success determines how many questions they can ask about it. Systematically, this is simple: I could have a preconstructed list of questions and answers, and these could provide lore and setting information. As I got into building the narrative, this felt like it would not have a good return on investment: every other decision produced changes in the world state, such as modifications to attributes, and I didn't want "reading the sitch" to be a wellspring of mechanical benefits. It would be relatively easy to add this into my software since it was in mind from the beginning, but it did not find its way into this project.

Speaking of attributes, I still have something of a pipe dream that one could make an RPG attribute system that is consistent with Thomistic philosophy. Consider that the legacy of Dungeons & Dragons presents a sort of dualism, that the mind and the body are separate. Yet, as confirmed by a recent conversation with a weight-lifting friend of mine, the two must work together: it's not clear that Strength (as physical might) and Wisdom (as willpower) are independent variables, for example. I couldn't find a way to distill a more Aristotelian view of the human person into three or four attributes, but reviewing Apocalypse World's attributes, I was reminded that they describe how one does something rather than what someone is. That's a great hook for future design work. In the meantime, for this project, I made a list of actions that I wanted the player to perform, knowing the genre and setting, and I categorized these into the three attributes that are in Dagger Mountain: bold, determined, and savvy. I admit that there are a few stretches in the game where penalties might be hard to classify in these ways, but I will be curious to hear what players think about them as a trio.

Technical Considerations

Inspired by Knights of San Francisco and the beauty of Dart, I started writing Dagger Mountain in Flutter. It's a beautiful way to write applications, but there's a significant difference between declarative and imperative UI programming, and I find that I stutter a bit when I hop between them. I set up the essential architecture and was enjoying myself until I tried to implement some UI features, specifically the scrolling list of text and buttons along with placeholder animations. There is a lot of typing required, animations were hard to debug, and it wasn't always obvious where my problems came from. I am sure there was a lot to learn from the endeavor, but I was on a deadline and wanted to get things up and running quickly. In transitioning back to the comfort of Godot Engine, I realized something: while Dart's asynchronous programming features are wonderfully expressive, there is a real power in GDScript's simple signal syntax. It is hard to get more terse than that, although it comes at the cost of not having explicit control over things like Futures. I returned to Godot Engine, re-creating everything I had written in Dart in very little time. With the design decisions made, I just had to interpret it in the new environment and type it up.

I spent too much time in Godot Engine adding dynamic font resizing. I knew I wanted the game to run comfortably on a desktop or mobile browser, and giving the player control of font size seemed the best way to do this. It required a lot of shenanigans with theme overrides, and as I added more visual elements such as the visible dice, it got more and more convoluted. Near the end of development, I just gutted this feature from the play experience and put a configuration on the main menu, which allowed me to just fiddle with the values in the main theme rather than deal with distributed theme overrides. What really irked me was when I started working on deployment, realizing that the cleanest solution for the player would be to use the browser's built-in font rendering and resizing... the way that a Flutter app would have done. Sigh.

Here are a few summary observations along these lines. Flutter is great for its static typing, robust asynchronous programming support, autoformatting, built-in browser font resizing support, spread operator, null safety, and most importantly, refactoring support. Godot Engine clearly wins on terseness of signals and tweens and the ability to rapidly build and test scenes independently of each other. To clarify that last point, I regularly decompose my Godot Engine programs into scenes that I can run and configure by themselves, confident then in how they will work when instantiated as part of a larger system. In Flutter, I wish I could easily say, "Spin up one of these widgets by itself and let me tinker with it," but I have not found anything that comes close to Godot Engine's rapid development support this way.

Incidentally, I did briefly consider other options than writing my own engine. I am intrigued particularly by ink, which I have never used. I was hesitant to jump into something with such a different syntax, although I am sure I could learn a lot from it, too. What killed the deal for me though was that it wasn't clear to me that I could easily plug in the PbtA aspects that I wanted. I discovered a Godot Engine integration, so perhaps I will investigate that later this summer. It wasn't until my family was testing the game that one of them mentioned Dialogic, which I haven't used since Godot 3.x. I haven't looked at it to see if it could have been modded for my purpose. However, writing for Dagger Mountain made me appreciate why narrative designers need better tools than just piles of scripts and a notebook sketch.

Dagger Mountain is my first released game that uses an event queue to isolate the game rules from the interface. I have tinkered with this pattern in several abandoned projects. Two summers ago, I spent a lot of time studying egamebook and its architecture, and I learned a lot from it even though that particular summer project was never released. 

My approach separates the software into three parts. The module is the content of the adventure itself, the story of Dagger Mountain; each scene is a GDScript file. The next part is the rules engine, which is manifest in an Adventure object. Each scene is given a reference to an Adventure object, and the module tells it to do things like show text, modify attributes, or present a series of choices to the player. Internally, the rules engine generates events corresponding to these interactions, posting them to the event queue. The final part is the presentation layer, which subscribes to the event queue: it dequeues events, processes them, and notifies the rules engine when it is complete. The code is all free, so feel free to look at the prelude scene for an example of how this works.
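A minimal sketch of that three-part separation, written in plain JavaScript rather than GDScript. The class, method, and event names here are illustrative stand-ins, not the actual Dagger Mountain API:

```javascript
// Rules engine: the module talks only to this object, and it translates
// those calls into events on a queue rather than touching the UI directly.
class Adventure {
  constructor() { this.queue = []; }
  showText(text) { this.queue.push({ type: "text", text }); }
  modifyAttribute(name, delta) { this.queue.push({ type: "attribute", name, delta }); }
  presentChoices(options) { this.queue.push({ type: "choices", options }); }
}

// Presentation layer: subscribes to the queue, dequeues events, and
// processes each one (here, by delegating to a handler callback).
function drain(adventure, handler) {
  while (adventure.queue.length > 0) {
    handler(adventure.queue.shift());
  }
}

// Module layer: story content expressed as calls on the Adventure object.
const adventure = new Adventure();
adventure.showText("The storm thickens as you climb.");
adventure.modifyAttribute("determined", -1);
adventure.presentChoices(["Press on", "Find shelter"]);

const log = [];
drain(adventure, (event) => log.push(event.type));
```

The payoff of this indirection is that the module never knows whether an event becomes a label, an animation, or a test assertion; in the real game, the presentation layer also notifies the rules engine when each event finishes, which this sketch omits.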

I decided early in the project that I would repurpose GDScript as my narrative scripting language rather than create an independent data format that would be interpreted. The primary reason for this decision was the pressures of time: GDScript is already a scripting language, so using its support for functions and conditionals would be faster than writing my own. This is true, but I hadn't considered all of the costs at the time. The game runs through function calls in the module layer, which is nice for terseness but actually makes it hard to test in a modular fashion. I am sure that if I had used TDD, I would have had a more testable architecture. I would much rather have test coverage of the whole state space of the game; instead, I have to hope that my manual testing was adequate.

I did add integration tests near the end because of the need to await basically every call in the module layer. Missing a single instance will break the player experience. I wrote a test that reads through the module layer and looks for cases where await is missing. It took a little time to get the test working, but it immediately found a case that I had missed, so that was worthwhile.
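The shape of such a test can be sketched in JavaScript (the actual test is GDScript, and the method names below are hypothetical stand-ins for the awaitable rules-engine calls):

```javascript
// Hypothetical list of rules-engine methods that must always be awaited.
const AWAITABLE = ["show_text", "present_choices", "modify_attributes"];

// Scan module source line by line and report any call to an awaitable
// method on a line that contains no `await` keyword. A line-based check
// is crude but catches the common case of a forgotten await.
function findMissingAwaits(source) {
  const problems = [];
  source.split("\n").forEach((line, i) => {
    for (const name of AWAITABLE) {
      const call = new RegExp(`(?:^|[^\\w.])${name}\\s*\\(`);
      if (call.test(line) && !/\bawait\b/.test(line)) {
        problems.push({ line: i + 1, name });
      }
    }
  });
  return problems;
}

const sample = [
  'await show_text("You reach the summit.")',
  'present_choices(["Fight", "Flee"])', // missing await
].join("\n");
const missing = findMissingAwaits(sample);
```

A real version would walk every script in the module layer and fail the test suite if the problem list is non-empty.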

Conclusions

I enjoyed building The Endless Storm of Dagger Mountain, and I hope you enjoy playing it. I think I will go and tweak some of that text with this morning's remaining coffee. Despite its small scope and shortcomings, I feel good about having built it. Not only does it explore digital PbtA in a way that I've been imagining for a few years, it also gave me an opportunity to do some creative writing and build more empathy for narrative designers. 

Regarding digital PbtA, I think the jury is still out: for every promise it holds, it brings a dramatic increase in content creation costs. For interactive fiction, using PbtA resolution requires writing an enormous amount of text, much of which will never be seen by players. It is, of course, not the same feeling as the give and take of a tabletop RPG. However, I can see opportunities for using this resolution system if there were more supporting systems. In a larger game, for example, one could put back in "reading the sitch" style actions that give clues to puzzles. I prefer losing attribute points over the abstraction of hit points, particularly for a narrative-focused game, but I think this would benefit from more explicit representation. Gaining statuses like "confused" or "twisted ankle" would help carry the narrative forward, but then these would be most meaningful if worked into the other systems or stories of the game. All that being said, I appreciate how the PbtA elements feel more like a description of a whole human person than do hit points, armor classes, and the six classic attributes.

Thanks for reading. Let me know what you think of the game!

Wednesday, June 5, 2024

Summer course revisions 2024: Game Programming (CS315)

It's time again for Summer Course Revisions. I spent this week focused on my Game Programming course, which is a junior-level elective for most students and required for the Game Design & Development concentration. I think it is one of my best courses, and I feel good about the general stability of it.

In preparation for revisions, I went back to my notes from last year, including my reflective blog post from last Fall and my notes from reading Grading for Growth. I also referenced my internal notes, which I keep in my course planning spreadsheet. The most important things I came across here were a reflection on the idea of using "more hurdles" instead of "higher hurdles" for specifications grading and the need to clean up how stars were earned on the final project to remove shortcuts. The latter is something I will have to consider later in Fall since, to address the former, I decided to make a dramatic change in the grading scheme.

This course is where I pioneered checklist-based grading, which I also wrote about in an academic paper. As my post from last Fall makes clear, though, something shifted in my teaching experience that led to significant frustrations with that approach. I suspect the causes are cultural rather than personal, but one still has to work within the system. I decided to try an alternative inspired by Joshua Bowman's work, which is documented in Grading for Growth. In particular, I am replacing the higher-hurdles specs approach with atomic, clear goals and multi-tiered resubmission. The overall structure of the semester will be the same, and I expect the primary student activity to be exactly the same; the changes are almost entirely in the activity systems around assessment.

I have rewritten the first three weeks' assignments as a proof of concept. For each, I removed the checklists and replaced them with an articulation of essential and auxiliary goals. The essential goals are always required for a satisfactory grade, and when there are auxiliary ones, a subset of them is required. Each goal is graded on a four-point scale. Successful has an obvious meaning. Needs minor revision is for the cases where it's mostly right, but something crucial needs to be addressed to show understanding. These minor revisions can be done within three days, accompanied by a short report explaining what went wrong. New attempt required is for cases where something critical is wrong; related to that is Incomplete, for work that is not done. These latter two require a more significant reworking by the student, and I've put in a throttle similar to what I use in CS222: one resubmission per week.

Concomitant with this change is a revision to course grades. I have written before and done several experiments regarding the assigning of course grades. One of the things I really liked about my old approach to Game Programming in particular was that it was easy to give each week equal contribution to the final grade. However, exercises are now being evaluated as either satisfactory or unsatisfactory, and it's not clear that this categorical data makes sense "averaged" with other data. I have put together a simple specification table, akin to what I did last Fall in CS222.
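One way to see why categorical data shouldn't be averaged is that a specification table is really just a threshold lookup over counts of satisfactory work. The thresholds and counts below are invented purely for illustration and are not the actual course policy:

```python
# Illustrative specification table: a course grade is determined by how
# many exercises reached "satisfactory," not by averaging scores.
# These numbers are made up for the sake of the example.
SPEC_TABLE = [
    ("A", 13),
    ("B", 11),
    ("C", 9),
    ("D", 7),
]

def course_grade(satisfactory_exercises):
    """Return the highest grade whose threshold is met."""
    for grade, minimum in SPEC_TABLE:
        if satisfactory_exercises >= minimum:
            return grade
    return "F"

print(course_grade(12))
```

The appeal of this shape is that it sidesteps the question of what a "75% satisfactory" would even mean: each tier is a plain statement of what a student has demonstrably done.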

I am hopeful that this approach will alleviate some of the frustration of students' mismanaging the checklist system. It narrows the number of things students need to think about, at the cost that each item is now slightly larger.

I have not written up policies for the midsemester exam and the final project yet, but my inkling is to pull out specific objectives in which each student must individually show competency. This would represent something much more like a transition from "higher hurdles" to "more hurdles," as long as I can make the hurdles roughly the same size. I am also considering dropping the team requirement from this course. Teamwork is common but not essential to game development. The students in the GDD concentration will have opportunities to work in teams in the production capstone sequence, where the students from other majors won't have teamwork experience anyway. I would rather my CS students enter that sequence with their individual skills up to snuff than have them already introduced to interdisciplinary teamwork concepts that the sequence will have to cover anyway.

The other major change for Game Programming is more technical. For years, I've maintained my own websites for my courses, and I've done that for three main reasons: it gives me complete control over the presentation and public nature of the content; it gives me reliability in case of the campus systems' going down; and I can use my software development knowledge to separate model and view. My system for representing, rendering, and downloading checklists was pretty robust, but its assumptions were woven through the whole course site. When I started reconfiguring my template to handle these changes, I ran into common Web development frustrations: changing dependencies, confusing errors from libraries, and CSS. I decided to pivot and just put all the content onto GitHub. This is what I did with my games capstone and preproduction classes last year as an experiment. It's not ideal, but it meets several of my needs.

You can find the current draft of my course plan at https://github.com/doctor-g/cs315Fa24 in case you want to take a look. As usual, the content is licensed under CC BY, so feel free to adapt it for your own teaching and learning purposes. I wanted to experiment with how Canvas might link to the individual exercises and their assessment goals, but Fall's schedule isn't loaded into Canvas yet, which points to yet another reason not to bind one's course planning to that platform.

UPDATE: After doing a bunch of work to get Fall's CS222 page working as a non-SPA Lit page, I've revised the Game Programming site to match. Markdown is great for many things, but it's bad for automation and for separating data from presentation. It leaves me hungry for a more eloquent text+programming environment.

Monday, June 3, 2024

Something like a latte

It's time for that recurring feature, "What is Paul drinking?"

The warm summer weather turned my mind toward coffee concentrate. I use a simple approach gleaned from online advice: fill a pitcher about a quarter to a third of the way with ground coffee, top it off with water, and let it sit on the counter for eight hours or so to brew, shaking occasionally. This gives a strong, dark liquid that is a great base for iced coffee, and I often have both decaf and regular in my fridge in season. It also works for a hot morning cup in cases where I am out of whole beans, which happened to be the case last week.

The last week has been quite mild, the high temperatures only being in the low 70s. It wasn't the right weather for iced coffee, so I wondered what else I could do with my concentrate. I came across a site describing how to make a latte at home with no fancy equipment, and this inspired me to try something like that myself. Turns out, I can make something at home that is a lot like the cappuccino I might order at the Bookmark Cafe.

Here's what I've been tinkering with:

  • Heat about two ounces of coffee concentrate in a coffee cup in the microwave for thirty seconds.
  • Heat just under six ounces of whole milk in a Ball jar in the microwave for a minute.
  • Froth up the milk using the fancy battery-powered handheld frother that, until recently, I didn't know we had in our kitchen utility drawer.
  • Slowly pour the milk into the coffee cup, which will leave the foam on top.
The result is quite nice. It's not that much more complicated than my usual French press coffee. It is pretty, and if your pants are really fancy, you can sprinkle cinnamon on top. The flavor and mouthfeel are pleasant, and I don't think I could tell you whether it was made with espresso or coffee concentrate. I call this a successful experiment, and there's a good chance I'll be making a decaf one this afternoon.

I normally don't sweeten my cappuccino, but I wanted to try that the last time I made one of these homemade lattes. I added some simple syrup and vanilla extract after combining everything, but I think if I did this again, I'd add it before frothing to eliminate stirring later.

I'm also working on a batch of mulberry mead, and I will have more to say about that later. In particular, I will probably say, "Don't put the bag of berries into the carboy because it will float to the top and cause headaches." 

An evening with Microscope

I heard about Microscope from Ben Robbins' interview on Justin Gary's podcast. It is a game about creating a history: the rules guide the players in the collaborative creation of the periods and events that make up a historical arc. I became intrigued, and it seemed like the kind of game one would have to play to follow a conversation about it. I borrowed a copy of the rulebook and convinced my wife and two elder sons to try it out with me.

The rulebook gives specific advice on how to introduce the game, and I appreciated being able to follow the script. Our first decision was the overall theme of the history, but we could not settle on one that we all liked. We agreed to take the first one of three that were recommended for players like us who weren't sure how to start: three nations are united as a single empire.

Our next step was to bookend history. One of my sons recommended that the end of the history should be that three nations, each on the back of a turtle, come together under one emperor. We then came up with the idea that at the beginning of history, there was one nation, on the back of the Great Mother Turtle, but she died, and the nation divided onto the backs of her three children.

As we got into the game, we created the history of three turtle-nations who lived in harmony until a blight caused scarcity of Turtle Orchids—the only food eaten by the giant turtles. The three turtle-nations separated to search for new sources, and their cultures evolved due to the different environments in which they found themselves. We never got into the details of how the turtle-nations came back together after this separation, especially not how they resolved religious differences that emerged, but we know they did, and that it was positive for them.

There were some rocky spots, as to be expected in any first play of an RPG where only one person has read the rules. The last page of the rulebook is a convenient rules summary, but Robbins has not provided this as a downloadable player aid. I feel like it would have helped the players—including me—keep some of the terminology and sequencing right. 

One scene did not go particularly well. Scenes answer particular questions in history, and this one was supposed to answer the question, "What monsters attacked Medium Turtle that caused the society to become more militaristic?" It was only our third played scene, and we jumped into it with gusto. It didn't seem to go anywhere, though, as no one roleplayed an answer that satisfied the question. At one point, I just put the kibosh on the scene, suggesting that the answer seemed to be that we didn't know. This was a little unsatisfying, but so was the scene at that point. 

I did some work afterward to better understand the rules for played scenes. The introductory advice that we followed had us start with a played scene, and that one had gone well. In re-reading that portion of the rulebook, though, I was reminded that playing the scene is a combination of narrative and dialogue. We had only been engaging in dialogue, and if I were to teach the game again, I would make sure to open a scene by using both. We also were too light with setting the stage, which is an explicit part of playing a scene: while we had established who and where we were, we had not established what we all knew and what happened prior. 

I came across two interesting resources during my post-play research. One is this rules cheat sheet created by Nicole van der Hoeven. It may not be the best way to introduce someone to the game, but it's a great summary of the rules. Reading it was a more convenient reminder of the core rules than re-reading the book itself, since the book necessarily combines rules and exposition. Seeing the topic list on van der Hoeven's site, I think I may spend some more time exploring her notes on other topics as well.

The other interesting resource I found was a recent blog post by Robbins himself. It presents alternate rules for scenes which I am sure would have given us a better experience even in our first play. Among the benefits of the revision is that it eliminates the need for "push" rules. These are the rules that allow players, during a played scene, to push back on something that someone has introduced into the world. They seemed necessary but secondary in the book, containing more details than I could hold in memory when teaching the game. They were to be deployed in reaction to play, which also meant that I did not want to review them while we were actually in the game. I am not just happy with the simplification of the scene rules, but I am also chuffed to see a designer improving a game he published over ten years ago.

In summary, I enjoyed my first play of Microscope, and I would like to play again, now having a better understanding of the rules and a handy revision thereof.  If I were teaching my game design class in Fall, I would likely bring this in as an example of an RPG. In an era where all of my students are at least aware of Dungeons & Dragons, it would be a great example of how "role-playing game" is bigger than that.

Monday, May 13, 2024

Letter to Sphere Province Games on the occasion of the launch of Mission Rovee

I shared a personal reflection about my work with Sphere Province Games at the launch party for Mission Rovee. At my college's request, I rewrote my comments in the form of an open letter. They have just published it as a featured blog from the College of Sciences and Humanities. You can read it here.

Thursday, April 25, 2024

Dante Alighieri on Social Media

In Canto 30 of The Divine Comedy, Dante lingers in the eighth circle of Hell, watching the damned insult and attack each other. Virgil, as the voice of reason and wisdom, calls him out on this foolishness. Dante repents, and Virgil responds,

"Never forget that I am always by you
should it occur again, as we walk on,
that we find ourselves where others of this crew
fall to such petty wrangling and upbraiding.
The wish to hear such baseness is degrading."

(John Ciardi translation)

Thursday, April 11, 2024

Using C# with Rider in Godot Engine on Ubuntu

I was inspired by the announcement of Slay the Spire 2 and Casey Yano's description of his experience with Godot Engine to investigate the C# bindings in Godot Engine. I've been using Godot Engine for years but only scripted it with GDScript. Like Yano, I prefer statically typed languages over dynamic ones, so this seemed worth a shot. I was introduced to Rider when I was doing C++ in Unreal Engine, and I found it to be an amazing IDE. This, combined with the availability of free academic licenses for people like me, made that my first stop for trying Godot's C# side.

Unfortunately, some of the official documentation had me going in unproductive directions. That's why I am taking a moment here to share my quick notes about the experience. The most important thing I learned was not to bother with Mono: it is being phased out. If I had to do it all again from scratch, I would do something like the following.

  • Install dotnet SDK using Microsoft's feed. I used version 8.0 and that seemed fine.
  • Download and install JetBrains Rider. I did this with snap, which is how I've installed Android Studio for my Flutter work.
  • The first time you run Rider, go to the settings and install the "Godot Support" plug-in.
  • Of course, make sure you have the .NET version of Godot Engine, and tell Godot Engine to use Rider as its external "dotnet" editor.
That's it. Make a Godot Engine project, make some scene, and set it as the main scene to run. Then open the project in Rider, and everything else just works.
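For reference, the setup steps above might look like the following on Ubuntu. Treat this as a sketch: exact package names and repositories change over time, so check Microsoft's and JetBrains' current instructions before running anything.

```shell
# 1. Install the .NET 8 SDK. On recent Ubuntu releases this package is in
#    the Ubuntu archive; otherwise, add Microsoft's package feed first.
sudo apt-get update
sudo apt-get install -y dotnet-sdk-8.0

# 2. Install JetBrains Rider via snap (classic confinement is required).
sudo snap install rider --classic

# 3. Confirm the SDK is on the PATH; Godot's .NET build needs to find it.
dotnet --list-sdks
```

After this, the remaining configuration happens inside the tools themselves: the "Godot Support" plug-in in Rider's settings, and Rider as the external dotnet editor in Godot's editor settings.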

This was my trial case. At the time of this writing, it is the entirety of the C# code I have written for Godot Engine.


 using Godot;

 namespace RiderTest;

 // Minimal smoke test: wait two seconds after the scene loads, then print.
 public partial class World : Node2D
 {
   public override async void _Ready()
   {
     base._Ready();
     // Await the timeout signal of a one-shot scene-tree timer.
     await ToSignal(GetTree().CreateTimer(2.0), Timer.SignalName.Timeout);
     GD.Print("Did it!");
   }
 }

Monday, April 8, 2024

From Asset Forge to Mixamo to Godot Engine

Here are the steps I followed to get a 3D character from Asset Forge (2.4.1 Deluxe) into Mixamo and from there into Godot Engine (4.2.1). These are notes I took to help me remember my particular process; see the update at the bottom for my recommendations.

End Result

Make a thing in Asset Forge. I'd never done this before, so it was a little rocky at first. One of the first things I learned was that I could increase the UI scale in the preferences, which was important since I could not make out the body parts in the character section. The legs and hips don't align with the default snap size, but I discovered that the toolbar widget, right of the two squares, adjusts this amount. Dropping it down to 0.1 allowed me to get the legs both into the hips, although it was tedious to click through the values rather than be able to type them. Once I dropped an arm in place, I had to look up how to mirror the copy for the other side. This is done with the widgets in the top-right of the toolbar, choosing an axis and then mirroring around the selected one (or 'M' as an accelerator).

Export from Asset Forge to FBX. "Merge blocks" needs to be enabled, and the character should be in T-pose, as per the docs.

Import into Mixamo. This was quite easy. For my test model, I changed the Skeleton LOD down to "No fingers."

Export from Mixamo. Select an animation that you want, then choose Download. Make sure you grab this first one "with skin."

Bring that downloaded model into Godot Engine and duplicate it. Name the duplicate after the character (e.g. "bald_guy.fbx"). The original one will be the one from which we'll get the animation, and the copy will be the one from which we'll get the rigged mesh. This is an optional step, but I think it makes things a bit easier to manage. 

For any other animations you want from Mixamo, download them one at a time. You can get these without the skins, since you'll be applying them to the character you already downloaded. Bring this all into Godot Engine.

In Godot Engine, double-click the character fbx ("bald_guy.fbx" in my example above) to get the advanced import options. In the Scene root, you can disable importing animations. In the Actions popup, extract the materials and save these someplace convenient, such as a materials folder. This will make it easy to fix some material settings, which is important since they will all come in as metallic, and you probably don't want that.

Now, we can bring in all the animations as animation libraries. Select all the relevant FBX files from the filesystem view (in my case, all of them except "bald_guy.fbx"), then click on the Import tab. Switch from Scene to Animation Library, then click Reimport. If any of these are looping animations, open them individually, go to the mixamo_com animation entry, and select the appropriate loop mode.

All the pieces are now in place. Create a 3D Scene, and drag your character ("bald_guy") into it to instantiate it. Select the node and enable editable children. Now, you can get to the AnimationPlayer node, and under Animation, choose to Manage Animations. Load each of your animations as its own library. Notice that the first animation, the one embedded with the character, will be listed under the name mixamo_com, and all the other animations will be called AnimationName/mixamo_com. The reason we duplicated that initial fbx above was to make it so that the animation name would be sensible here, since we cannot edit it.

From my initial explorations, this approach is robust in the face of needing to change elements of the model. For example, if you tweak the model in Asset Forge, then push it up to Mixamo, rig it, bring it back down, and reimport it, your animations are still stable. I am surprised that I haven't had to even reload the libraries into the animation player.

A relevant disadvantage, though, is that you cannot add your own animations to the animation player. 

Note that I tried the Character Animation Combiner, but every time I did so, I lost my textures. I also watched a video about combining animations in Blender, but I haven't tried that technique yet. That approach looks like it could make things a little simpler once in the Engine, particularly to rename the animations in a canonical way, but I also like that I can do this without having to round-trip through yet another tool.

Here's a proof of concept created in Godot Engine where tapping a key transitions between a confident strut and some serious dance moves.

Simple transitions for confident boogie

UPDATE: Since taking my initial notes, I tried the approach described in the aforelinked video from FinePointCGI. All things considered, I think that approach is actually simpler than my Blender-avoidance technique. Being able to rename the animations is helpful, as I expected. Having all the animations stored in one .blend file, which can be imported directly into Godot Engine, also saves on cognitive load when looking over the filesystem. I could not have taken his approach without doing the rest of my experimentation, though, coming to understand Asset Forge and Mixamo along the way.

Verdict: Make the model in Asset Forge, upload to Mixamo, let it generate a rig, download all the animations you want (leaving skins on), bring all these into Blender, remove the excess armatures, rename the animations using the action editor of the animation view, save it as a Blender project, import that into Godot Engine, and use the advanced import editor to update looping configurations.

Saturday, March 9, 2024

Painting ISS Vanguard

It has been a long time since I finished a full set of miniatures. I have been painting off and on, but it hasn't been complete sets. For example, my sons and I have painted a few Frosthaven and Oathsworn figures but not all of them. I have some Massive Darkness 2 figures primed that have been sitting on a box awaiting inspiration for some time. 

In any case, I'm here to break the streak with ISS Vanguard. My older boys and I really enjoyed Etherfields. It came with a promotional comic about ISS Vanguard that piqued my interest. Somehow, I heard about how Awaken Realms had re-opened the pledge manager for the game between Wave 1 and Wave 2, so I jumped in. I received my copy many weeks ago, but I knew I did not want to bust it open until we finished Oathsworn. We are now two chapters away from doing so, and so I have finished this core set of figures just in time.

ISS Vanguard Box Art

Most of the things I paint are based on concept art, and I like to match the colors of the figures to the artwork. This is especially important in board games so that figures are easily distinguishable. The eight human figures in ISS Vanguard don't have any published concept art, however, so I had to come up with a different way to handle them. Awaken Realms offers a service where they "sundrop" miniatures, which essentially means that they are given a high-quality wash. Their approach makes it clear that the figures are in pairs, two for each of the four playable sections: security, recon, science, and engineering. I liked the idea of painting them in matching pairs, and searching for inspiration online revealed that many other painters did as well. This Reddit post was my favorite, where the painter featured the distinct section colors on each model but otherwise used a high-contrast scheme with white armor and dark detailing. Coincidentally, a friend shared a video on Facebook last night called "White plastic and blinking lights: the sci fi toys of the late 1970s and early 1980s." I didn't watch the whole thing, but it did get me reflecting on why I believed that white armor with bright colors should match a science fiction setting.

The figures were slightly frustrating to paint. There are many fiddly details on the armor, but not all of it is meaningful. It seems like the kind of thing that would look great when sundropping because that leaves it as monochromatic detail without having to be specific about what pieces logically or thematically connect. I used the aforementioned Reddit post regularly to try to plan out where I wanted splashes of color. There are a few parts that I would consider recoloring if I had the paints on hand, but a lot of the colors were custom mixes; it wasn't worth the risk of having a bad match to recolor it.

I used zenithal priming from the airbrush to prep the figures. I then painted all of them with a slightly warm off-white color, mostly white with a dot of grey and of buff. I used a wash over the whole figure to darken the recesses, then hit the highlights with the off-white armor color.

Let's look at the figures in pairs.

Engineering Section

All miniature painters know that yellow is a challenging color. Fortunately, a white undercoat made it manageable. The one on the left could probably use slightly more yellow, but I do like how it looks in isolation, and one will never have both of these out in the same mission anyway.

Security Section

I thought a lot about whether the little "pet" on the left should be white like the armor or a different color for contrast. I ended up keeping it white to suggest that it's made of the same stuff as the bulk of the armor. 

I like the poses of these two. These figures all make good use of scenic bases. Part of me prefers blank bases, since I can then decide whether or not I want to add features and suggest that the characters are in particular settings, as I did most elaborately in my Temple of Elemental Evil set... whose images sadly seem to have been eaten by a grue, in a horrible example of why you should not trust "the cloud." Here, however, we can see that a pose like that recon figure on the right would really not be possible any other way. The engagement with the scenery makes it worthwhile, whereas the engineering figures' scenery feels more like it's in the way.

Recon Section

The Recon Section also has wonderful, dynamic poses. I was worried that the smoky jet trail of the one on the right might be too much, but I think it turned out fine. I chose yellow for the flowery thing on the left figure in part to complement the dark blue of the strap and mask details and partially so that it has similar colors to the jetpack character.

Science Section

The yellow figures may have taken the most time because of the trouble getting yellow to be bright enough, but the Science Section was awfully close because of all the stuff in their scenes. I had similar thoughts about the claw arm as I did about the security section's pet, and I ended up going the same way here: if white plastic is what they're using to build lightweight rigid armor, then let's use it for the claw arm and the pets, too.

The alien biomatter being picked up by the one on the right looked fungous, so I picked out some colors inspired by that. Of course, a giant mushroom here on earth would not also have green leaves sticking out of it. 

ISS Vanguard (?)

The last figure in the box is a big space station. I presume it is the titular ISS Vanguard, but the parts of the rulebook that I have read don't actually reference it at all. It's not clear to me whether it is used in play or not. I wish it were, though, since that would have given me some idea of how much effort I should spend painting it.

As with the human characters, I looked around online and found a few ideas for painting this piece. I kept the "nearly white with spots of color" motif. One of the challenges here is that the way a space station would be lit is quite different from how an away team would be, but I didn't want to paint it so starkly. I ended up using a cold off-white here to differentiate it subtly from the warm off-white of the characters. A wash deepened some of the recesses, then some highlights and spot colors. It's fine. I waffled a bit on whether to just paint over the silly translucent bit, but I chose against it in part because I have no idea if it is significant to the story. Who knows, maybe the campaign plot hinges on understanding that people are using pure translucent blue as a power source? I wanted the blue to match the beautiful tone used on the box cover, but I didn't quite get it.  It's not purple enough, but it does match the translucent parts.

All Eight Characters

Thanks for checking out the photos and the story here. I'll include some more individual pictures below for people who want to see more detail, including the backs. 

Thursday, February 15, 2024

Reaping the benefits of automated integration testing in game development

This academic year, I am working on a research and development project: a game to teach middle-school and early high-school youth about paths to STEM careers. I have a small team, and we are funded by the Indiana Space Grant Consortium. It's been a rewarding project that I hope to write more about later.

In the game, the player goes through four years of high school, interacting with a small cast of characters. We are designing narrative events based on real and fictional stories around how people get interested in STEM. Here is an example of how the project looked this morning:

This vignette is defined by a script that encodes all of the text, the options the player has, the options' effects, and whether the encounter is specific to a character, location, and year. 

We settled on the overall look and feel several months ago, and in that discussion, we recognized that there was a danger in the design: if the number of lines of text in the options buttons (in the lower right) was too high, the UI would break down. That is, we needed to be sure that none of the stories ever had so many options, or too much text, that the buttons wouldn't fit in their allocated space.

The team already had integration tests configured to ensure that the scripts were formatted correctly. For example, our game engine expects narrative elements to be either strings or arrays of strings, so we have a test that ensures this is the case. The tests are run as pre-commit hooks as well as on the CI server before a build. My original suggestion was to develop a heuristic that would tell us if the text was likely too long, but my student research assistant took a different tack: he used our unit testing framework's ability to test the actual in-game layout to ensure that no story's text would overrun our allocated space.
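The string-or-array-of-strings check can be sketched as follows. This is a hypothetical reconstruction of the idea rather than the project's actual test code, and the story fields are invented for illustration:

```python
# Sketch of a script-format check: a narrative element must be either a
# string or a non-empty list of strings. Field names here are made up.

def is_valid_narrative(value):
    """Return True if value is a string or a non-empty list of strings."""
    if isinstance(value, str):
        return True
    return (isinstance(value, list)
            and len(value) > 0
            and all(isinstance(item, str) for item in value))

story = {
    "text": ["You consider skipping math class.", "Your friend shrugs."],
    "prompt": "What do you do?",
}

# A pre-commit hook or CI job would run this over every story file.
assert all(is_valid_narrative(v) for v in story.values())
print("story format OK")
```

Checks like this are cheap to write and catch malformed content at commit time, long before a narrative designer's typo becomes a runtime surprise.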

In yesterday's meeting, the team's art specialist pointed out that the bottom-left corner of the UI would look better if the inner blue panel were rounded. She mentioned that doing so would also require moving the player stats panel up and over a little so that it didn't poke the rounded corner. I knew how to do this, so I worked on it this morning. It's a small and worthwhile improvement: a cleaner UI with just a little bit of configuration. 

I ran the game locally to make sure it looked right, and it did. Satisfied with my contribution, I typed up my commit message and then was surprised to see the tests fail. How could that be, when I had not changed any logic of the program? Looking at the output, I saw that it was the line-length integration test that had failed, specifically on the "skip math" story. I loaded that one up to take a look. Sure enough, the 10-pixel change in the stat block's position had changed the line-wrapping in this one particular story. Here's how it looked:


Notice how the stat block is no longer formatted correctly: it has been stretched vertically because the white buttons next to it have exceeded their allocated space. 

This is an unmitigated win for automated testing. Who knows if or when we would have found this defect by manual testing? We have a major event coming up on Monday where we will be demonstrating the game, and it would have been embarrassing to have this come up then. Not only does this show the benefit of automated testing, it also is a humbling story of how my heuristic approach likely would not have caught this error, but the student's more rigorous approach did.

I tweaked the "skip math" story text, and you can see the result below. This particular story can come from any character in any location, and so this time, it's Steven in the cafeteria instead of Hilda in the classroom.

We will be formally launching the project before the end of the semester. It will be free, open source, and playable in the browser.