Monday, May 13, 2024

Letter to Sphere Province Games on the occasion of the launch of Mission Rovee

I shared a personal reflection about my work with Sphere Province Games at the launch party for Mission Rovee. At my college's request, I rewrote my comments in the form of an open letter. They have just published it as a featured blog from the College of Sciences and Humanities. You can read it here.

Thursday, April 25, 2024

Dante Alighieri on Social Media

In Canto 30 of The Divine Comedy, Dante lingers in the eighth circle of Hell, watching the damned insult and attack each other. Virgil, as the voice of reason and wisdom, calls him out on this foolishness. Dante repents, and Virgil responds,

"Never forget that I am always by you
should it occur again, as we walk on,
that we find ourselves where others of this crew
fall to such petty wrangling and upbraiding.
The wish to hear such baseness is degrading."

(John Ciardi translation)

Thursday, April 11, 2024

Using C# with Rider in Godot Engine on Ubuntu

I was inspired by the announcement of Slay the Spire 2 and Casey Yano's description of his experience with Godot Engine to investigate the C# bindings in Godot Engine. I've been using Godot Engine for years but only scripted it with GDScript. Like Yano, I prefer statically typed languages over dynamic ones, so this seemed worth a shot. I was introduced to Rider when I was doing C++ in Unreal Engine, and I found it to be an amazing IDE. This, combined with the availability of free academic licenses for people like me, made that my first stop for trying Godot's C# side.

Unfortunately, some of the official documentation had me going in unproductive directions. That's why I am taking a moment here to share my quick notes about the experience. The most important thing I learned was not to bother with Mono: it is being phased out. If I had to do it all again from scratch, I would do something like the following.

  • Install dotnet SDK using Microsoft's feed. I used version 8.0 and that seemed fine.
  • Download and install JetBrains Rider. I did this with snap, which is how I've installed Android Studio for my Flutter work.
  • The first time you run Rider, go to the settings and install the "Godot Support" plug-in.
  • Of course, make sure you have the .NET version of Godot Engine, and tell Godot Engine to use Rider as its external "dotnet" editor.
That's it. Make a Godot Engine project, make some scene, and set it as the main scene to run. Then open the project in Rider, and everything else just works.

This was my trial case. At the time of this writing, it is the entirety of the C# code I have written for Godot Engine.

 using Godot;

 namespace RiderTest;

 public partial class World : Node2D
 {
     public override async void _Ready()
     {
         await ToSignal(GetTree().CreateTimer(2.0), Timer.SignalName.Timeout);
         GD.Print("Did it!");
     }
 }

Monday, April 8, 2024

From Asset Forge to Mixamo to Godot Engine

Here are the steps I followed to get a 3D character from Asset Forge (2.4.1 Deluxe) into Mixamo and from there into Godot Engine (4.2.1). These are notes I took to help me remember my particular process; see the update at the bottom for my recommendations.

End Result

Make a thing in Asset Forge. I'd never done this before, so it was a little rocky at first. One of the first things I learned was that I could increase the UI scale in the preferences, which was important since I could not make out the body parts in the character section. The legs and hips don't align with the default snap size, but I discovered that the toolbar widget to the right of the two squares adjusts this amount. Dropping it down to 0.1 allowed me to get both legs into the hips, although it was tedious to click through the values rather than be able to type them. Once I dropped an arm in place, I had to look up how to mirror the copy for the other side. This is done with the widgets in the top-right of the toolbar, choosing an axis and then mirroring around the selected one (or 'M' as an accelerator).

Export from Asset Forge to FBX. "Merge blocks" needs to be enabled, and the character should be in T-pose, as per the docs.

Import into Mixamo. This was quite easy. For my test model, I changed the Skeleton LOD down to "No fingers."

Export from Mixamo. Select an animation that you want, then choose Download. Make sure you grab this first one "with skin."

Bring that downloaded model into Godot Engine and duplicate it. Name the duplicate after the character (e.g. "bald_guy.fbx"). The original one will be the one from which we'll get the animation, and the copy will be the one from which we'll get the rigged mesh. This is an optional step, but I think it makes things a bit easier to manage. 

For any other animations you want from Mixamo, download them one at a time. You can get these without the skins, since you'll be applying them to the character you already downloaded. Bring this all into Godot Engine.

In Godot Engine, double-click the character fbx ("bald_guy.fbx" in my example above) to get the advanced import options. In the Scene root, you can disable importing animations. In the Actions popup, extract the materials and save these someplace convenient, such as a materials folder. This will make it easy to fix some material settings, which is important since they will all come in as metallic, and you probably don't want that.

Now, we can bring in all the animations as animation libraries. Select all the relevant FBX files from the filesystem view (in my case, all of them except "bald_guy.fbx"), then click on the Import tab. Switch from Scene to Animation Library, then click Reimport. If any of these are looping animations, open them individually, go to the mixamo_com animation entry, and select the appropriate loop mode.

All the pieces are now in place. Create a 3D Scene, and drag your character ("bald_guy") into it to instantiate it. Select the node and enable editable children. Now, you can get to the AnimationPlayer node, and under Animation, choose to Manage Animations. Load each of your animations as its own library. Notice that the first animation, the one embedded with the character, will be listed under the name mixamo_com, and all the other animations will be called AnimationName/mixamo_com. The reason we duplicated that initial fbx above was to make it so that the animation name would be sensible here, since we cannot edit it.
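As a point of reference, playing these animations from script follows the same naming scheme. Here is a minimal C# sketch; the node path and library name ("bald_guy", "Dance") are hypothetical examples from my setup and will differ in your project.

```csharp
using Godot;

// Hypothetical sketch: node paths, input action, and library names are
// illustrative, not part of any particular project.
public partial class CharacterController : Node3D
{
    private AnimationPlayer _anim;

    public override void _Ready()
    {
        // The AnimationPlayer lives inside the instanced character scene,
        // which is why we enabled editable children in the editor.
        _anim = GetNode<AnimationPlayer>("bald_guy/AnimationPlayer");

        // The animation embedded with the character is named "mixamo_com".
        _anim.Play("mixamo_com");
    }

    public override void _Input(InputEvent @event)
    {
        // Animations loaded as libraries are named "LibraryName/mixamo_com".
        if (@event.IsActionPressed("ui_accept"))
            _anim.Play("Dance/mixamo_com");
    }
}
```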

From my initial explorations, this approach is robust in the face of needing to change elements of the model. For example, if you tweak the model in Asset Forge, then push it up to Mixamo, rig it, bring it back down, and reimport it, your animations are still stable. I am surprised that I haven't even had to reload the libraries into the animation player.

A relevant disadvantage, though, is that you cannot add your own animations to the animation player. 

Note that I tried the Character Animation Combiner, but every time I did so, I lost my textures. I also watched a video about combining animations in Blender, but I haven't tried that technique yet. That approach looks like it could make things a little simpler once in the Engine, particularly to rename the animations in a canonical way, but I also like that I can do this without having to round-trip through yet another tool.

Here's a proof of concept created in Godot Engine where tapping a key transitions between a confident strut and some serious dance moves.

Simple transitions for confident boogie

UPDATE: Since taking my initial notes, I tried the approach described in the aforelinked video from FinePointCGI. All things considered, I think that approach is actually simpler than my Blender-avoidance technique. Being able to rename the animations is helpful, as I expected. Having all the animations stored in one .blend file, which can be imported directly into Godot Engine, also saves on cognitive load when looking over the filesystem. I could not have taken his approach without doing the rest of my experimentation, though, coming to understand Asset Forge and Mixamo along the way.

Verdict: Make the model in Asset Forge, upload to Mixamo, let it generate a rig, download all the animations you want (leaving skins on), bring all these into Blender, remove the excess armatures, rename the animations using the action editor of the animation view, save it as a Blender project, import that into Godot Engine, and use the advanced import editor to update looping configurations.

Saturday, March 9, 2024

Painting ISS Vanguard

It has been a long time since I finished a full set of miniatures. I have been painting off and on, but it hasn't been complete sets. For example, my sons and I have painted a few Frosthaven and Oathsworn figures but not all of them. I have some Massive Darkness 2 figures primed that have been sitting on a box awaiting inspiration for some time. 

In any case, I'm here to break the streak with ISS Vanguard. My older boys and I really enjoyed Etherfields. It came with a promotional comic about ISS Vanguard that piqued my interest. Somehow, I heard about how Awaken Realms had re-opened the pledge manager for the game between Wave 1 and Wave 2, so I jumped in. I received my copy many weeks ago, but I knew I did not want to bust it open until we finished Oathsworn. We are now two chapters away from doing so, and so I have finished this core set of figures just in time.

ISS Vanguard Box Art

Most of the things I paint are based on concept art, and I like to match the colors of the figures to the artwork. This is especially important in board games so that figures are easily distinguishable. The eight human figures in ISS Vanguard don't have any published concept art, however, so I had to come up with a different way to handle them. Awaken Realms offers a service where they "sundrop" miniatures, which essentially means that they are given a high-quality wash. Their approach makes it clear that the figures are in pairs, two for each of the four playable sections: security, recon, science, and engineering. I liked the idea of painting them in matching pairs, and searching for inspiration online revealed that many other painters did as well. This Reddit post was my favorite, where the painter featured the distinct section colors on each model but otherwise used a high-contrast scheme with white armor and dark detailing. Coincidentally, a friend shared a video on Facebook last night called, "White plastic and blinking lights: the sci fi toys of the late 1970s and early 1980s." I didn't watch the whole thing, but it did get me reflecting on why I believed that white armor with bright colors should match a science fiction setting.

The figures were slightly frustrating to paint. There are many fiddly details on the armor, but not all of it is meaningful. It seems like the kind of thing that would look great sundropped, since that leaves it as monochromatic detail without having to be specific about which pieces logically or thematically connect. I used the aforementioned Reddit post regularly to plan out where I wanted splashes of color. There are a few parts that I would consider recoloring if I had the paints on hand, but a lot of the colors were custom mixes; it wasn't worth the risk of a bad match to recolor them.

I used zenithal priming from the airbrush to prep the figures. I then painted all of them with a slightly warm off-white color, mostly white with a dot of grey and of buff. I used a wash over the whole figure to darken the recesses, then hit the highlights with the off-white armor color.

Let's look at the figures in pairs.

Engineering Section

All miniature painters know that yellow is a challenging color. Fortunately, a white undercoat made it manageable. The one on the left could probably use slightly more yellow, but I do like how it looks in isolation, and one will never have both of these out in the same mission anyway.

Security Section

I thought a lot about whether the little "pet" on the left should be white like the armor or a different color for contrast. I ended up keeping it white to suggest that it's made of the same stuff as the bulk of the armor. 

I like the poses of these two. These figures all make good use of scenic bases. Part of me prefers blank bases, since I can then decide whether or not I want to add features and suggest that the characters are in particular settings, as I did most elaborately in my Temple of Elemental Evil set... whose images sadly seem to have been eaten by a grue, in a horrible example of why you should not trust "the cloud." Here, however, we can see that a pose like that recon figure on the right would not really be possible any other way. The engagement with the scenery makes the base worthwhile in a way that the engineering figures' bases don't match; theirs feel more like they're in the way.

Recon Section

The Recon Section also has wonderful, dynamic poses. I was worried that the smokey jet trail of the one on the right might be too much, but I think it turned out fine. I chose yellow for the flowery thing on the left figure in part to complement the dark blue of the strap and mask details and in part so that it has similar colors to the jetpack character.

Science Section

The yellow figures may have taken the most time because of the troubles getting yellow to be bright enough, but the Science Section was awfully close because of all the stuff in their scenes. I had some similar thoughts about the claw arm as the security section's pet, and I ended up going the same way here: if white plastic is what they're using to build lightweight rigid armor, then let's use it for the claw arm and the pets, too.

The alien biomatter being picked up by the one on the right looked fungous, so I picked out some colors inspired by that. Of course, a giant mushroom here on earth would not also have green leaves sticking out of it. 

ISS Vanguard (?)

The last figure in the box is a big space station. I presume it is the titular ISS Vanguard, but the parts of the rulebook that I have read don't actually reference it at all. It's not clear to me if this is used in play or not. I wish it were, though, since that would have given me some idea of how much effort I should spend painting it.

As with the human characters, I looked around online and found a few ideas for painting this piece. I kept the "nearly white with spots of color" motif. One of the challenges here is that the way a space station would be lit is quite different from how an away team would be, but I didn't want to paint it so starkly. I ended up using a cold off-white here to differentiate it subtly from the warm off-white of the characters. A wash deepened some of the recesses, followed by some highlights and spot colors. It's fine. I waffled a bit on whether to just paint over the silly translucent bit, but I chose against it in part because I have no idea if it is significant to the story. Who knows, maybe the campaign plot hinges on understanding that people are using pure translucent blue as a power source? I wanted the blue to match the beautiful tone used on the box cover, but I didn't quite get it. It's not purple enough, but it does match the translucent parts.

All Eight Characters

Thanks for checking out the photos and the story here. I'll include some more individual pictures below for people who want to see more detail, including the backs. 

Thursday, February 15, 2024

Reaping the benefits of automated integration testing in game development

This academic year, I am working on a research and development project: a game to teach middle-school and early high-school youth about paths to STEM careers. I have a small team, and we are funded by the Indiana Space Grant Consortium. It's been a rewarding project that I hope to write more about later.

In the game, the player goes through four years of high school, interacting with a small cast of characters. We are designing narrative events based on real and fictional stories around how people get interested in STEM. Here is an example of how the project looked this morning:

This vignette is defined by a script that encodes all of the text, the options the player has, the options' effects, and whether the encounter is specific to a character, location, and year. 

We settled on the overall look and feel several months ago, and in that discussion, we recognized that there was a danger in the design: if the number of lines of text in the options buttons (in the lower right) was too high, the UI would break down. That is, we needed to be sure that none of the stories ever had so many options, or too much text, that the buttons wouldn't fit in their allocated space.

The team already had integration tests configured to ensure that the scripts were formatted correctly. For example, our game engine expects narrative elements to be either strings or arrays of strings, so we have a test that ensures this is the case. The tests are run as pre-commit hooks as well as on the CI server before a build. My original suggestion was to develop a heuristic that would tell us if the text was likely too long, but my student research assistant took a different tack: he used our unit testing framework's ability to test the actual in-game layout to ensure that no story's text would overrun our allocated space.
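To give a concrete flavor of the format check (the layout test itself needs the engine running, so I won't sketch that here), the string-or-array-of-strings rule looks something like the following in C#. This is an illustrative sketch under my own naming, not our actual test code; our scripts and framework differ.

```csharp
using System.Text.Json;

// Illustrative sketch: validate that a narrative element is either a
// string or an array of strings, per the script format described above.
static class ScriptChecks
{
    public static bool IsValidNarrativeElement(JsonElement element)
    {
        if (element.ValueKind == JsonValueKind.String)
            return true;
        if (element.ValueKind == JsonValueKind.Array)
        {
            // Every item in the array must itself be a string.
            foreach (var item in element.EnumerateArray())
                if (item.ValueKind != JsonValueKind.String)
                    return false;
            return true;
        }
        return false;
    }
}
```

A test like this can then iterate over every narrative element in every story script and fail the commit if any element violates the rule.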

In yesterday's meeting, the team's art specialist pointed out that the bottom-left corner of the UI would look better if the inner blue panel were rounded. She mentioned that doing so would also require moving the player stats panel up and over a little so that it didn't poke the rounded corner. I knew how to do this, so I worked on it this morning. It's a small and worthwhile improvement: a cleaner UI with just a little bit of configuration. 

I ran the game locally to make sure it looked right, and it did. Satisfied with my contribution, I typed up my commit message and then was surprised to see the tests fail. How could that be, when I had not changed any logic of the program? Looking at the output, I saw that it was the line-length integration test that had failed, specifically on the "skip math" story. I loaded that one up to take a look. Sure enough, the 10-pixel change in the stat block's position had changed the line-wrapping in this one particular story. Here's how it looked:

Notice how the stat block is no longer formatted correctly: it has been stretched vertically because the white buttons next to it have exceeded their allocated space. 

This is an unmitigated win for automated testing. Who knows if or when we would have found this defect by manual testing? We have a major event coming up on Monday where we will be demonstrating the game, and it would have been embarrassing to have this come up then. Not only does this show the benefit of automated testing, it also is a humbling story of how my heuristic approach likely would not have caught this error, but the student's more rigorous approach did.

I tweaked the "skip math" story text, and you can see the result below. This particular story can come from any character in any location, and so this time, it's Steven in the cafeteria instead of Hilda in the classroom.

We will be formally launching the project before the end of the semester. It will be free, open source, and playable in the browser.

Friday, February 9, 2024

Tales of Preproduction: Refining the prototyping procedure

I am teaching the Game Preproduction class for the second time this semester, and this time I am joined by Antonio Sanders as a team-teacher from the School of Art. There are already a lot of interesting things happening now that the class is half Computer Science majors and half Animation majors. We have also extended the class time to a "studio" duration, so we meet twice a week for three hours per meeting instead of the 75 minutes I had with my inaugural group last year.

Given that quick summary of our context, I want to share a significant change that we made to the prototyping process from last year. Last year, the team adjusted the schedule because we hadn't dedicated enough ideation time to prototyping, and so this year, we set aside five days for it. Each day, the students are supposed to bring in a prototype that answers a design question. I remember this also being challenging last year, and it wasn't until late in that process that we remembered the seven questions that Lemarchand poses about prototypes in A Playful Production Process.

In an effort to get the students thinking more critically about their prototypes, we have required them to write short prototype reports that address Lemarchand's seven questions. The last of the questions, which Lemarchand himself typesets in bold to show its importance, is, "What question does this prototype answer?" What the reports help reveal, which was harder to see last year, is cases where the questions themselves were either malformed or unanswerable. That is, students are going into prototyping without a good idea of what prototyping is. Several times, I've seen students show their prototypes, and when I ask what design question they answer, the students have to look it up in their reports. This is pretty strong evidence that the questions were developed post hoc. What's most troubling is that, after we have completed four of the planned five rounds, these problems are still rampant.

Early in the process, my teaching partner suggested students think about design questions in the form "Is X Y?" where X is a capability being prototyped and Y is a design goal. For example, "Is holding the jump button down to fly giving the player a sense of freedom?" While this heuristic proved helpful, a lot of students struggled with it: in part, I think, because they didn't understand that it was only a heuristic, and in part because they hadn't practiced the analysis skills required to pull a design question out of an inspiration. If I were to use this again, I'd follow the obvious-in-retrospect need to rename those variables to something like "Does this player action produce this design goal?" (Unfortunately, the discussion of design goals comes later in the book, so maybe even this idea is too fuzzy for the students.)

Many of the questions that students want to pursue are actually research questions. I mean this in both the colloquial and the academic senses. A question like, "Does adding a sudden sound make the player scared when they see the monster?" is obviously answered in the affirmative: one need only look at games that induce jump-scares to see that this is effective. Questions like, "Do timers increase player stress?" are simple design truisms that are not worth prototyping. In yesterday's class, I tried to explain to the students that if the question is generic, then it's a research question, and that design questions are always about specifics. In science, we approach general questions through specific experiments that attempt to answer them; in design, we answer specific questions directly.

Reflecting on these problems, it becomes clear that the earlier parts of the semester were not goal-directed enough. Students acknowledged after our in-class brainstorming session that they were not brainstorming game ideas (but that's a topic for another post). When the students did research, much of it was also not goal-directed. Now, in prototyping, it is easier to see what students are interested in, and we can point out to them that their interests and issues can and should be addressed by blue-sky ideation or by research. However, we haven't baked that into these first five weeks. Put another way, we took a waterfall approach to ideation, whereas perhaps next year we should try an iterative one.

We're in the process of collecting summaries of all the students' prototypes. I put together a form that uses this template for students to self-describe prototypes that are viable for forming teams around:

This game will be a GENRE/TYPE where the player CORE MECHANISM to GOAL/THEME. 

I'm eager to see if this was a helpful hook for the students. I will have to ask them about it on Tuesday and then see if it's something we can use with next year's cohort.