Friday, July 26, 2019

Summer 2019 Course Revisions

My main focus this summer was my first commercial game project, but I have kept in the back of my head all summer that I had some serious work to do to get ready for Fall semester. In fact, I was feeling a bit stressed about it a few weeks ago and took some time off of Kaiju Kaboom to sketch out plans for my game programming and game design classes. My third class for Fall is scheduled to be CS445/545 Human-Computer Interaction, but it is right on the edge of being underenrolled; if it doesn't make minimum enrollment, the dean's office will cut it and I'll be deployed somewhere else with very little notice. Still, because my time to work on course prep is drawing to a close, I devoted the day to sorting out as much of that course as I can.

In today's blog post, I'll give some highlights for my three courses. This will be shorter than in some previous years, so I'm condensing it all into one.

CS315: Game Programming

I'm excited to be teaching game programming again, since this year we actually did get new machines in the lab—machines that are capable of running UE4. Last Fall's course went well, and so I am keeping most of the plan as-is, but trying to keep a pedagogic eye on how I can use in-class examples and workshops to drive some of the lessons home. I expect that we will do three to four mini-projects followed by a larger team project. This year, I have dropped the achievements, since I think the course is already quite full of places where students can make meaningful decisions about what to pursue. I am keeping both the specifications grading and the project reports, both of which served their purposes last year. I have been tempted to set up some kind of team role system or other accountability system for the final project, to prevent the case where one student carries the rest, but I am still unsure how to do this; perhaps I should turn that question around to the students and have them contribute to setting the rules.

Here is the draft course plan. Only the first of the mini-projects is posted, but I expect to re-use my progression from last Fall, where we went essentially from 1D to 2D to 3D across three projects.

CS439: Introduction to Game Design

I do not have an immersive learning project lined up for this academic year. Instead, I am using the year to try to intentionally explore how I can integrate some of the work I've done through immersive learning into formalized Computer Science department offerings. One step in this direction is offering a version of my game design course—which I have taught for several years as an honors colloquium—as a Computer Science elective that anyone can take. The easiest way to do this was with our "seminar" course, although this is not ideal for marketing the course, since it still shows up in the catalog as a 400-level CS elective. Still, I look forward to teaching this and seeing how the audience compares to the honors colloquium.

One way that I have made this "computer science-y" is to require our intro programming course as a prerequisite. This is not because I expect to do much programming, but rather because I want to be able to draw upon metaphors of computational thinking when looking at game systems design. This seemed like a good idea at the time, but the more I have worked on the course, the more I have questioned it. Indeed, in my rough sketches of how I might propose this as a formal service course in my department, I have strongly considered dropping any prerequisite.

As for the course structure, here is the draft course plan. It is based strongly on what I have done in previous years. Even though the examples in the free online text we use are showing their age, I like both the presentation and the price. Once students build a core vocabulary, they can make use of the exhaustive supplemental information that is discoverable online. Without a community partner via immersive learning, the students will be working on projects of their own design without external constraints, which I haven't done in a class like this since roughly 2008. I'm eager to see what they pursue. As before, we'll spend the first half of the semester studying fundamentals and then the second half of the semester building projects. Some of the students who have enrolled are ones with whom I really enjoy working, and so I'm looking forward to spending time with them again too.

CS445/545: Human-Computer Interaction

Last academic year, I taught this course both semesters in collaboration with the David Owsley Museum of Art. I had a fruitful meeting with their education director several weeks ago as we debriefed the experience. I am glad to say that we are continuing our collaboration, but we are narrowing the focus toward one specific problem: helping visitors navigate the physical museum. This means that my students won't have to do as much problem discovery, but I think that's OK. They really struggled with the difference between finding a legitimate problem and inventing a problem and then justifying their work around it. I think this new focus will help them get into the solution design part of the course, which is really more important for our single, elective course on HCI.

Knowing that this will be a relatively low-enrollment class allows me to treat it as a studio class. We will start with some common readings and structured exercises, but then I would like to move quickly into tackling this navigability problem, using my familiar tactics of just-in-time teaching and reflective practice to create a meaningful learning experience. The draft course plan only lays out activities for the first three weeks or so of class, after which I can work with the students to assess our situation and move forward as needed. It does mean there is something of a hole where the grading policy would normally be, and I hope that this does not cause the students any undue stress. My plan is to work with them to develop a methodology with assessment embedded in it, which I think they will enjoy and learn from.

A word about the sites

Careful readers may have noticed that my course web sites have undergone a visual overhaul. This is related to my learning lit-element, as I wrote about earlier this summer. Whereas my sites were previously based on the Polymer Starter Kit, now I am using the PWA Starter Kit prerelease. I had to do a bit of finagling to get it to work on our departmental Apache server, but once that was done, I could easily replicate it across the three sites.

Tuesday, July 23, 2019

Kaiju Kaboom!

I am pleased to announce the release of Kaiju Kaboom, my first commercial game project. You can find it on Google Play, exclusively playable on Daydream.

Most summers, I set my own work and creative goals rather than teach classes or work on grant-funded projects. During the Spring semester, I decided that one of my summer goals would be to create a game from scratch and release it commercially. The outcomes of this plan are objectively measurable; as of last night, I have met them, and I feel good about it.

The game

The game itself is an expansion of the concept that my son and I developed for Global Game Jam. Inspired in part by Terror in Meeple City, the player takes on the role of a kaiju who returns to its island home, only to find it infested with people. Of course, the people are meeples, the buildings are made from wooden blocks, and you are a miniature giant monster, but this both adds to the charm and simplifies the asset development for a novice modeler. The biggest change to the game is moving it to mobile VR via Google Daydream, but I also added randomized levels, meeples that fight back, and four different kaiju powers, three of which are unlocked through the high score system.

I spent some time around finals week building a proof of concept to ensure that the game would be enjoyable and within scope, and having had a positive experience with that, I devoted the summer to it. I did not use any formal task-tracking tools. Instead, I kept a pad of paper on my desk where I would write down features to explore, sketch geometric or software solutions, or record defects to address. I worked roughly eight to ten hours each day, sometimes up to twelve, plus about half a day on Saturdays, with a few gaps for family trips. I showed an early build to some friends and family around Memorial Day, and this confirmed to me that the core gameplay was really enjoyable.

The tech

Kaiju Kaboom was built using Unreal Engine 4. I started the project in Blueprint and initially just added C++ for the parts that were faster to write in textual code than they were to prototype in Blueprint. In retrospect, there are parts of the core game loop that I left in Blueprint which I probably should have done in C++, because the Blueprint can get hard to follow. I don't think there's any real performance hit from using Blueprint here; the real problem is that Blueprint makes it harder to control modularity, so it's hard to intuit the dependencies after walking away from a subsystem for a while.

Getting up and running on Daydream was not much trouble, and the documentation is good. However, the Google VR branch of UE4 is clearly no longer being maintained. This led to a problem as I was getting ready to release, in which the game crashed on a friend's device. After scouring the web, reading semi-related threads, and trying various approaches, I was able to finagle a build system that worked for both of us. I continued to do my regular feature development in UE4.22.3. For releases, however, I downloaded and built UE4.22 from source, replacing its GoogleVR libraries with those from the 4.20 release—the most recent—of the googlevr-unreal fork. My friend also had some troubles that I thought might be due to texture formats, so the build includes both ASTC and ETC2 formats, even though only the former is needed for my own Pixel 2.

The null business plan

At this point, the thoughtful reader may ask, "Why did you target Google Daydream?" The simple answer is that I have one, and it seemed like fun, both as a technical challenge and based on that proof of concept I built in early May. Note that mine was a commercial project not because I wanted to make money but because I wanted to see what was involved in making it a commercial project. I know literally one other person who has a Google Daydream setup: my friend Chris, who was my lone tester. If I can sell ten copies, I'll buy some nice beer. I'm in the blessed and enviable position where I can invest a summer in learning and improving my skills without it having to be economically profitable. As I alluded to before, another benefit of Daydream is that because it's low-powered for VR, I could get away with my own limited asset-creation skills, whereas I know I could not make something by myself in two months that would be competitive on, say, the Vive.

I did borrow an Oculus Go from a friend, in hopes that it would be easy to release on both platforms. Unfortunately, the logic of my character was tightly coupled with the GoogleVR components, and I didn't see a clean way to separate these. I would consider buying one of these $200 headsets and doing a port if I had any belief that my little hobby project could make at least that much back plus porting time, but right now, I don't think I can put that much more effort into it, since I need to transition into "preparing for Fall semester" mode.

Over the last two or three days, as I've geared up for the public release, I have wondered whether I should have planned a more modern release model, where the first part of the game is free and people who like it can pay to unlock the rest. One reason for doing this would have been to explore both the libraries and the platform support for DLC and licensing, which would be interesting. I decided to stick with my original plan, augmenting it slightly by releasing the game under the GPL. This means that if someone buys the game, they get access to the source code as well, and they are free to learn from it just as I did. I don't know if that will have any real impact, but it feels like the right thing to do, given my situation.

The learning

Originally, I had it in my head that I would try to make all the assets of the game from scratch in true artisan fashion. Along the way, I did use a few public domain sound effects, and as my available time quickly ran out, I added CC-BY music from indie- and jam-favorite Kevin MacLeod. I have written two other posts detailing specific things I learned through this project: importing Blender animations into UE4 and using UE4 Live Coding for rapid unit testing. I did not keep a rigorous log of other things I learned, and some things I had to learn twice because they did not stick in my head. Here, however, I am going to try a brain dump, in part so that I can come back later when I inevitably need to re-learn something.

Blender

  • UV Unwrapping
    • I found this video in particular to explain well how to unwrap and texture paint, a technique I used to mark out regions of a texture for manual painting in Gimp.
  • Normal maps
    • The technique involves modeling high-poly and low-poly versions in the same file, exporting the low-poly one for UE4 (by selecting it and exporting only the selection), and exporting the normal map of the high-poly one to apply to it. I used this on an early version of the blimp.
  • Smooth shading instead of normal maps
    • An easier way to do what I wanted with normal maps was to simply select Smooth Shading from the Tools menu in Blender, and to make sure this was exported to FBX.
  • Boolean modifiers
    • These modifiers enable constructive solid geometry; for example, the union of a plane and a sphere was used to generate the blimp's fins.
  • Keyframes in video editing
    • Just as explained in this video, I had been doing gamma cross fade effects, but now I can do more robust effects by keyframing. Right-click on a property, insert keyframe. I'm still not sure how to see all the keyframes and move them around, but this was good enough for me to compose the scaling and fading effects of the trailer.

Gimp

  • Filters→Render contains all kinds of neat stuff to play with visual effects. I used the Lava and Cell Noise filters to generate quick but effective textures for the fireball and acid spit spheres.
  • Multiple monochrome images can be packed into the RGBA channels of a single image using the Colors→Components→Compose feature. Make each image into a separate layer, and then this tool allows you to directly compose the layers into the RGBA channels.
  • Two ways to add shadows behind text:
    • A "soft" shadow can be had by copying a text layer, darkening it, and then blurring it. I did this at some point in the past, though I cannot remember where.
    • A harder shadow is possible by using Filters→Light and Shadow→Drop Shadow.

UE4

  • AI vs Physics
    • An actor cannot be driven both by a behavior tree and by physics simulation. Of course not, he says in retrospect, that would make no sense.
    • There are actually two meeple actor types in the game: AIMeeple and PhysicsMeeple. AIMeeple is controlled by one of two behavior trees, depending on whether or not it is armed. When an AIMeeple is hit, it converts in place to a PhysicsMeeple; a sketch of that conversion appears after this list.
  • AnimGraph
    • I had done a little with AnimGraphs for an unreleased tech demo last Fall, but very little of that stuck in my head, in part because I may very well have been doing things wrong. For Kaiju Kaboom, AnimGraphs are used in a more conventional way, driving the animation states of both AI and Physics meeples.
  • Landscape Materials
    • The landscape has sand and grass layers, and it fades toward white with higher altitude.
  • Cascade Particles
    • I was hoping to learn Niagara, but it is not supported on Daydream. Still, I was able to learn enough of Cascade to make my own simple particle effects, such as the fireball explosions and acid spit splash.
  • Following a Spline
    • It's a fundamental technique in level building, but I haven't really done much level building before. The blimp's flight path is specified by a closed spline.
  • Efficient Materials
    • I learned some more about how to balance GPU and CPU processing. For example, my original implementation of the blimp's propellers involved using a rotating movement component for each. After watching some videos (possibly one of Tharle VFX's shader maths videos), I moved this to be computed by a shader using world position offsets.
    • Several of my materials use customized UVs so that they are computed per vertex instead of per pixel.
    • I have a few "master materials" and use a lot of material instance constants. I understand this to be more efficient, although I didn't go so far as to measure the difference.
  • Visualizing Shader Complexity
    • As mentioned above, I haven't really done a lot of level design in UE4, so I've never really needed to inspect my scenes for performance problems. On this project, I tinkered some with the different editor view modes such as shader complexity, in order to find areas that were not performing how I expected. Turns out, everything was performing as I expected, but it was still neat to see this.
  • Unit tests
    • I knew it was possible to write unit tests for C++ code in UE4, but I had never done it before. TDD seemed like the best way to approach my high score table: unlike experimental gameplay code, it had very well-defined rules. Here is the source code for the unit test. The testing library assumes all the tests are in one method, so I used macros to provide a fluent layer on top of that assumption; a sketch of the idea appears after this list. This is not as robust as something like chai, but it's much easier to read than the alternative.
    • I did not have a continuous integration system, so I had to run the tests by hand, which means that I rarely ran them. If I were working on a team on a larger-scale project, I would definitely invest in CI.
  • Functional tests
    • As with unit tests, I have known for some time that it was possible to do automated testing through the UE4 session frontend, but I never took the time to really figure it out. In this project, I created a test level containing four automation tests in a 2x2 format to ensure that falling columns and floors damage AI meeples and physics meeples.
    • In my original approach, I was spawning the relevant actors in the PrepareTest event of the Blueprint. Later I realized that it was much more effective for me to create these actors as child actors within the construction script, since this allowed me to see and manipulate them in the level. 
    • The result is fun to watch, but the approach was somewhat hamstrung by the fact that I was not using continuous integration, as mentioned above.
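
To make the AI-vs-physics conversion concrete, here is a minimal sketch of the idea. Only the AIMeeple/PhysicsMeeple split comes from the game itself; the function name, the PhysicsMeepleClass property, and the ragdoll details are illustrative stand-ins, not the actual source (which, as mentioned above, is released under the GPL).

    // AIMeeple.cpp (sketch) -- names beyond AIMeeple/PhysicsMeeple are
    // hypothetical placeholders; the real implementation is in the
    // game's GPL'd source.
    #include "AIMeeple.h"
    #include "PhysicsMeeple.h"
    #include "Components/SkeletalMeshComponent.h"
    #include "Engine/World.h"

    void AAIMeeple::ConvertToPhysicsMeeple()
    {
        UWorld* World = GetWorld();
        if (World == nullptr)
        {
            return;
        }

        FActorSpawnParameters Params;
        Params.SpawnCollisionHandlingOverride =
            ESpawnActorCollisionHandlingMethod::AlwaysSpawn;

        // Spawn the physics-driven twin at this meeple's exact transform
        // so the swap is invisible to the player. PhysicsMeepleClass is
        // assumed to be a TSubclassOf<APhysicsMeeple> property set in
        // the editor.
        APhysicsMeeple* Ragdoll = World->SpawnActor<APhysicsMeeple>(
            PhysicsMeepleClass, GetActorTransform(), Params);

        if (Ragdoll != nullptr)
        {
            // Hand the body over to the physics simulation and retire
            // the behavior-tree-driven original.
            Ragdoll->GetMesh()->SetSimulatePhysics(true);
            Destroy();
        }
    }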
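
Similarly, here is a sketch of what I mean by a fluent macro layer over the automation framework's one-method assumption. This is not the actual test file (that is linked from the bullet above); the macro names and the toy FHighScoreTable are made up for illustration.

    // HighScoreTableTest.cpp (sketch) -- the macros and the toy table
    // are illustrative; the real tests are linked above.
    #include "CoreMinimal.h"
    #include "Misc/AutomationTest.h"

    // Hypothetical system under test: a top-ten table sorted descending.
    struct FHighScoreTable
    {
        TArray<int32> Scores;

        void Add(int32 Score)
        {
            Scores.Add(Score);
            Scores.Sort([](int32 A, int32 B) { return A > B; });
            if (Scores.Num() > 10)
            {
                Scores.SetNum(10);
            }
        }
    };

    IMPLEMENT_SIMPLE_AUTOMATION_TEST(FHighScoreTableTest,
        "KaijuKaboom.HighScoreTable",
        EAutomationTestFlags::ApplicationContextMask |
        EAutomationTestFlags::ProductFilter)

    // The framework funnels every assertion through one RunTest method,
    // so a thin macro layer lets each check read like a sentence.
    #define SCENARIO(Text)     AddInfo(TEXT(Text))
    #define EXPECT(What, Cond) TestTrue(TEXT(What), (Cond))

    bool FHighScoreTableTest::RunTest(const FString& Parameters)
    {
        FHighScoreTable Table;

        SCENARIO("A new table starts empty");
        EXPECT("no scores yet", Table.Scores.Num() == 0);

        SCENARIO("Scores are kept in descending order");
        Table.Add(100);
        Table.Add(250);
        EXPECT("highest score first", Table.Scores[0] == 250);

        return true;
    }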

Wrapping Up

It seems like there's no end of good ideas that I did not have time to put into the game, ranging from minor quality-of-life improvements to major new features. The spirit of this project is inspired by Jeff Vogel from Spiderweb Software, who taught me this wonderful expression: It's better than good—it's good enough! I'm pleased to have this project more-or-less wrapped up for the summer. I have started doing some course prep, but I have more of that and other university-related business to take care of this summer, as well as family obligations. 

I've also done much of this work with an injured middle finger on my right hand, which, combined with the intensity of this project, has left me well behind on my summer painting goals. Now that Kaiju Kaboom is out in the wild, it may be time to clean up the office and get the paints out.

Thanks for reading! If you know of someone with a Daydream who you think might enjoy throwing boulders at meeples, please let them know about the game. That is, pretty much, my marketing plan.

Thursday, July 4, 2019

UE4 Live Coding for Unit Testing

TL;DR: Use the new Live Coding feature in UE4.22 as a workaround for the fact that automation unit tests cannot be Hot Reloaded.

My goal for the day was to write a high score system for my summer project, which I am creating in Unreal Engine 4.22. As I started thinking about the requirements, I realized that TDD would be a good approach. Of course, I've spent more hours diving into how to make this work than I would have spent on a brute force approach, but I'm hoping I'll recoup the investment in the future.

The Automation Technical Guide provided enough scaffolding for me to write a single unit test, as sketched below. I then took some time to pull this out into its own module—this seemed like a good idea, although developing with multiple modules is something else I had never done. Orfeas Eleftheriou's blog post was instrumental in pulling my unit tests into their own module.
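
For reference, such a test is tiny. This sketch follows the guide's scaffolding; the test name and the deliberately failing body are placeholders rather than my actual code.

    // A minimal automation test, per the Automation Technical Guide's
    // scaffolding. "Project.HighScores.Smoke" is a placeholder name.
    #include "CoreMinimal.h"
    #include "Misc/AutomationTest.h"

    IMPLEMENT_SIMPLE_AUTOMATION_TEST(FHighScoreSmokeTest,
        "Project.HighScores.Smoke",
        EAutomationTestFlags::ApplicationContextMask |
        EAutomationTestFlags::ProductFilter)

    bool FHighScoreSmokeTest::RunTest(const FString& Parameters)
    {
        // Deliberately failing to start, in good TDD fashion. Flipping
        // this to true is exactly the kind of one-line change that, as
        // described below, confirmed Live Coding picks up test edits.
        return false;
    }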

I remembered reading months ago that unit tests in UE4 are not Hot Reloaded, and this is confirmed as expected behavior in UE-25350. The frustrating fallout is that the editor has to be reloaded in order to see changes made in the C++ test implementation. This slow and tedious process is anathema to good TDD rhythm. I came across Vhite Rabbit's workaround, but I could not get it to jibe with the modular decomposition I gleaned from Eleftheriou.

Then I remembered that 4.22 shipped with an experimental new Live Coding feature, which promises to allow changes from C++ to be brought into a running game session, as long as they are not structural changes. I wondered, if this works for running games, would it work for unit tests?

The happy answer here is Yes. I had to restart the editor because Live Coding cannot be used after any Hot Reloading. The editor opens with a separate window that seems to be managing the Live Coding feature. I went into my unit test and turned its simple "return false" into "return true", hit the magic Ctrl-Alt-F11 combo, waited just a few seconds for the Live Coding system to run, and then re-ran my test. Sure enough, now the test passes.

I made essentially no progress on the high score feature itself after the morning's work, but hopefully by documenting my findings here, I can help others move forward in their unit testing adventures.

Wednesday, June 26, 2019

Painting The 7th Continent: What Goes Up, Must Come Down

I backed the expansion to The 7th Continent even though we had not gotten the base game back to the table since shortly after I painted up the tiny miniatures. The second curse we played just wasn't very good: it provided no indication of where to go next, so we had no real incentive to go in any particular direction. That killed our motivation, but I still remembered our narrow loss in the first curse as great fun. I hoped that an injection of new content from the expansion would give us a boost.

The What Goes Up, Must Come Down expansion includes three new characters and a goat. Here they are, all painted up. These are tiny miniatures, and I just wanted them at a reasonable tabletop quality. They were brush-primed in grey, then given base coats, a wash, and highlights.

Tall guy in front

Short guy in front
The expansion also came with a barge as well as the hot air balloon evoked by the title. Here's the barge:
Tell them...
Large Barge sent ya
For the hot air balloon, I noticed that there were visible seams in the side, so I filled those with Milliput, primed in grey, and painted it up.

Hot Air Balloon Basket
I picked the color of the diamond shape on the front to go with the cardboard balloon that slots into that X. In fact, I was explaining this to my wife when I pulled out the cardboard balloon piece to illustrate the assembly. In my mind, the balloon pieces simply pushed into the X.
Uh oh.
Turns out, that's not the case. The basket—which arrived assembled—was supposed to be pulled apart and then snapped together again around the cardboard balloon. A thread on Board Game Geek mentions instructions in the rulebook that I don't remember seeing, but I probably just blipped over it.

Well, now I was in a pickle. Two options: (a) try to break apart the painted basket, reassemble it around the cardboard balloon as intended, and then patch the paint job; or (b) cut the balloon, stick the basket on, and glue the balloon back together.

Time for surgery

The ol' razor saw
The razor saw did the trick

Better dry fit... not like it will make much difference now.
Post-Op
Studio Pic
The white glue seems to be holding fine. One of the advantages of making this a permanent fixture is that I could glue the crossbeams together for extra strength. The seam is clearly visible if you pick up the piece and tilt it, but on the table, it's completely hidden by the basket.

I thought it would be fun to tell the story of my error and repair in pictures, and maybe it will be useful should any other zealous painter also neglect to dry fit before painting.

I look forward to getting the game back to the table. I came across the designers' recommendations for quest order, and it's been long enough since our last run—and my wife and I will add a son or two to the team—so I think we'll start back at the beginning. 

Thanks for reading, and happy exploring!

Sunday, May 26, 2019

Building an online randomizer app for Thunderstone Quest, and what was learned along the way

I have enjoyed playing Thunderstone Quest and am glad I painted the minis for it. For some time I have had an itch to make a tool to assist in generating random adventures. There is one out there already, but it has some frustrating defects, such as the selection of heroes not always matching the constraint that there's one per class. I talked to the developer about it and tried sorting through the implementation, but neither of us made any headway.

The other day, I received my To the Barricades expansion, which is the final component to my Kickstarter pledge. We've only had it to the table once, and we enjoyed it despite the inevitable rockiness of first plays. This really got my wheels turning about building a new online randomizer—so I did.

Thunderstone Quest Randomizer: https://doctor-g.github.io/tsqr/

Source Code: https://github.com/doctor-g/tsqr

I worked on this from Thursday night through Sunday morning. It's not going to win any graphic design awards, but it has all the functionality that I wanted from it. The rest of this blog post contains a few things I learned while building it. I will focus on two things: jq and lit-element.

A friend mentioned jq to me several weeks ago as a powerful command-line tool for manipulating JSON. The other randomizer already included a transcription of most of the Thunderstone Quest cards, but it was not in a format that I wanted. This is exactly the problem jq is built for. I don't know quite what I expected, but it wasn't what I found: jq's system of generators and filters was mind-blowing. It took me a while to wrap my head around it, and even after several hours I feel a bit uncertain of my grasp of it. I ended up crafting a filter like this to do the transformation:

 jq '[to_entries[] | .value | to_entries[] | {"Name": .key} + (.value | del(.Cost,.Types,.Keywords,.Races,.Summary,.Alert,.Light,.Battle,.Spoils,.Special,."After Battle",.Reward,.Entry)) | select(.Category)]' cards.json   

The whole command is wrapped in square brackets because I want the result as an array. The next few filters let me wade through the deeply nested structure of the original format. Then, I pushed each object's key into the rest of the object as a new "Name" field, deleting all the fields from the original source that I did not need. Finally, I dropped the handful of cards in the source data that had no Category specifier. Writing this command took several hours of pushing and prodding, but I'm glad I got it figured out.

Although I had pulled out much of the bad structure of the original format, I still had a lot of redundancy. I decided to grin and bear it until I reached a point where I wanted to add metadata about quests. In my first pass, quests were only represented as fields in objects: each card had a "Quest" key with a value listing the name of its quest. I wanted to pull that out so that I could describe a Quest more systematically, as having a code (such as "Q2") and as being part of a set (such as the Champion-level Kickstarter pledge). To do so, I needed to pull out each of the types of cards into their own lists and put these under the corresponding Quest. Unfortunately, I could not figure out how to automate this process for an arbitrary number of categories, and so I did some nasty copy-pasting of commands, manually entering what I would rather have parameterized. Get ready to cringe, because here it is:

 jq '[group_by(.Quest) [] | {"Quest": .[0].Quest, "Heroes": [.[] | select(.Category=="Heroes") | del(.Quest,.Category)], "Items": [.[] | select(.Category=="Items") | del(.Quest,.Category)], "Spells": [.[] | select(.Category=="Spells") | del(.Quest,.Category)], "Weapons": [.[] | select(.Category=="Weapons") | del(.Quest,.Category)], "Guardians": [.[] | select(.Category=="Guardians") | del(.Quest,.Category)], "Dungeon Rooms": [.[] | select(.Category=="Dungeon Rooms") | del(.Quest,.Category)], "Monsters": [.[] | select(.Category=="Monsters") | del(.Quest,.Category)]}]' cards.json  

If any readers know jq better than I do and see how to parameterize the management of each of those card categories, I'd love to know. Fortunately, this is a one-time transformation, since after doing this one, I had to manually enter the data for the new Barricades-level quests. There should be no reason for me to ever have to run this particular transformation again.

I have had an itch to learn Flutter, and I was excited to hear that there will be support for compiling Flutter apps for the Web. However, that's still in a technology preview state, so I decided instead to build my randomizer using Polymer. I've been using Polymer for a few years now, and it's a fascinating system, allowing the development of custom components with two-way data binding. At some point in the last year, I came across LitElement and lit-html, which provide very useful and terse expressions for binding values and also for iterating over arrays—syntactically much nicer than Polymer's dom-repeat. I decided I'd spend some more time with lit-element in my Polymer-based solution.

I ran into a problem writing an element to filter quests, so that users could choose to get cards only from the sets they own. Making the element show all the quests with checkboxes was fine, and I could track their states from within the element, but the values were not reflecting back to the main app that held the filter element. This puzzled me for some time, until I ended up just moving that logic from the nested element into the top-level app: not a particularly good design decision, but a pragmatic one. Some time later, I was looking for lit-element help when I came across this excellent post by James Garbutt on the relationship between Polymer 3 and lit-element. This particular portion blindsided me:
Bindings are one-way in Lit, seeing as they are simply passed down in JavaScript expressions. There is no concept of two-way binding or pushing new values upwards in the tree.
I had been conceiving lit-element as just a way to use some cool lit-html features within a conventional Polymer ecosystem. Garbutt goes on to helpfully add, "Instead of two-way bindings, you can now make use of events or a state store." Events are familiar, of course, but a real part of Polymer's appeal to me was that I could get elegant two-way binding instead of tedious event plumbing.

Now, though, I have to show some of my ignorance, because I had to ask, "What is a state store?" With this term in hand, I discovered the work-in-progress PWA Starter Kit, which combines lit-element with Redux. "Redux" has the shape of a buzzword I would have overheard somewhere on the Internet, but I couldn't have told you what it is. I downloaded the PWA Starter Kit and started fiddling around with it, keeping the docs open beside it. This looked really interesting and exciting as a way to start building off of what I've learned using Polymer... but at this point I was hip deep in a project that I wanted to get done before the weekend was over. I put Redux and the PWA Starter Kit out of my mind, armed myself with the knowledge that lit-element is not what I thought it was, and I went back to the randomizer. I did end up keeping the card filter logic in the top-level application element, where it clutters things up but still works.

I jumped into this project with a relatively unstructured goal. I had an intuition of what I wanted, but I didn't do any sketches or write any specifications. I had a few regression defects along the way that made me wonder if I should have used TDD, but I think the whole project was really in the "experimental programming" mode. Making it free and online means that other people can gain some benefit from my work, but if I were to rebuild it from scratch, I would certainly be more careful about it. Indeed, it's tempting to rebuild it using Redux for state management, but this was already a four-day distraction from my main summer side project.

Incidentally, the randomizer itself is a progressive web app published on GitHub Pages. I was able to repurpose the approach I took for Elixer, my scoring application for Call to Adventure, which I don't think I ever wrote about here. That one is also an open source project hosted on GitHub. It took me a lot more time to make that a full-fledged PWA, but the second time around with the Thunderstone Quest randomizer was much faster.

Thursday, May 23, 2019

Importing Blender animations into UE4

Last Fall, I worked out how to create simple animations in Blender and import them into UE4, using separate files for the mesh and the animations. I intended to make a tutorial video about it, in part so that I would remember the steps. Alas, I postponed that video for long enough that I forgot all the tricks, and so this morning, I had to sort it all out again. I'm going to jot my notes here on the blog in case I forget again between now and creating the video.

The steps assume you already have a properly rigged mesh with an animation action created and selected in the dope sheet. Make sure you rename the armature from "Armature" to something else, otherwise the scale of the animations will be wonky, as described in TooManyDemons' answer here.

To export the mesh, from the FBX exporter, choose to export the Armature and the Mesh, but in the Animation tab, make sure nothing is selected. Export that to a file named something like model.fbx.

Exporting the animation is a bit trickier. I found good advice here. Make sure the desired animation is the only action shown in the NLA Editor, and push it down onto the stack. From the FBX exporter, select only Armature, and in the Animation tab, select everything except All Actions. Deselect all the options under Armature as well. Export this to something like model_boogie.fbx.

This allows you to import the model and its animations independently within UE4, although they can still be in the same .blend file.

Other notes that I will likely forget:

  • When adding new actions in Blender, tap the 'F' button to save the animation even if it has no users.
  • To delete an action, hold Shift while tapping the 'X'. This marks it with a zero in the popup but doesn't actually remove it unless the file is reopened.
Now I just have to remember next Fall that I wrote this note on my blog...

Saturday, May 4, 2019

Reflecting on the Spring 2019 CS445 Human-Computer Interaction Class

Regular readers may recall that I was given the Spring 2019 HCI course to teach on rather short notice, so I made only a few structural changes between it and the Fall 2018 section. The most relevant to this post are the increased attention to software architecture and the switch to specifications grading. I also gave the teams nominally more time for their final project, but not enough that it was noticeable from my point of view. We retained our collaboration with the David Owsley Museum of Art (DOMA) and the overall theme that student teams would identify and address real problems they face. Yesterday, I shared my sending-forth message to the students, and today, I would like to share my reflection on the semester's experience. Feel free to reference the course page, which provides the policies, procedures, assignments, and assessments for the semester.

I used a similar approach to specifications grading as in the Fall 2018 Game Programming course, in which there were discrete criteria for each level of grade. I added a separate category of criteria for the project reports, which were designed to provide the process documentation corresponding to the technical artifacts produced. As before, students had to submit a self-assessment along with their source code and report, with the self-assessment consisting of a checklist of the criteria that were met. Unlike the game programming class, where there was rarely disagreement between the students and me about whether a criterion was met, there was a lot of friction this semester. This was especially the case on the final project's two iterations. As I wrote to several students in my formal feedback, I have serious doubts that many of the teams honestly conducted a self-evaluation at all. Consider, for example, one of the most frequently missed criteria, which asked students to explain how their projects manifested particular design principles from Don Norman's The Design of Everyday Things. Teams submitted a list of examples with no explanations of them. It seems to me that if a team sat together and worked through the checklist, as I expected them to, someone would have said, "Have we explained this?" I don't think they had anything approximating such a discussion: I think they surveyed the checklist, said "Good enough," and marked the box. That is, I think they defaulted to the "hope for points" model rather than the "ensure success through unambiguous choices" model. Of course, the idea of self-assessment is not to save me grading time but to foster reflective practice and remove ambiguity. When students do it honestly, it does save me grading time, and when it is done dishonestly, I suspect that students might still learn something about the importance of self-reflection. I need to think about what I might change in future uses of specifications grading to get around this.

An honest student approached me near the end of the semester, in part to share what he claimed was the voice of many students who were frustrated with the specifications grading system. I explained to him that the goal of the system is to remove ambiguity, both for students and for assessors. I honestly never got a good explanation of what precisely he or other students did not like about it, except that there were standards at all. I think the status quo is that students believe they start a project with full credit, and then I take away points for mistakes. One of the things I like about specifications grading is that it follows my contrary philosophy, which is that students start with nothing and must earn their credit. I think it is this idea, not specifications grading in particular, that students are upset about, because it holds them accountable for demonstrating understanding to earn credit. The fact that I get complaints about grading regardless of the scheme I use is probably testament to students' pushing back against having expectations at all rather than to the particulars of the system. However, foreshadowing some of what is to come below, part of why a subset of students complains is likely that I actually draw upon knowledge they should have from prerequisite courses—knowledge that they may not actually have.

In the first half of the semester, I used a running demo project ("archdemo") to demonstrate some ideas of how to separate the layers of a user-facing software system. In the previous semester, I had done something similar, but using a context separate from our class collaboration with DOMA. Many students that semester did badly with the "warm-up" project, and so in order to help with on-boarding and consistency, archdemo showed a sample use of the DOMA data via the ContentDM database. The resulting application was called "Naïve Search," named thus because it didn't really solve any reasonable search problem: it just showed how to separate the layers of a system. While this worked in the short term, I think it also caused problems as students perceived more value in the example than it was meant to have. It was never intended as a template, but only an example of very specific course concepts.

One of the changes I made from Fall 2018 was that I required final project teams to use a subset copy of the ContentDM database in their projects. My intention here was that each team would have to demonstrate that they could separate the layers of a user-facing software system, regardless of what creative direction they wanted to take the project. The result, however, was that nearly every project looked a lot like archdemo with an added bell or whistle. Last semester, we had a broad range of concepts on a plethora of platforms; this semester, it was dullsville, as the teams just added some minor idea to archdemo. One team even consistently referred to their solution as "an improvement over Naïve Search," despite my repeatedly telling them in their formal feedback that this was not even close to our goal. I have no doubt that our partners at DOMA were uninspired by this semester's projects, although we have not had our wrap-up meeting yet. I would be remiss not to mention one exception, which was a clever interactive map that tied into the database in interesting ways despite the tight project timeline; those guys really nailed it, so if you're on that team and reading this, kudos to you.

Throughout the semester, we returned to five principles of design brought up in Don Norman's The Design of Everyday Things: affordances, signifiers, mapping, feedback, and conceptual models. Despite this being a theme of the class, there is scant evidence that students understood or applied these principles. Instead, my professor's eye tells me that they designed whatever they wanted and then tried to shoehorn those designs into these principles, justifying their work after the fact. Although they had several assignments and much formative feedback about these principles, students continued to show misunderstandings through the final exam.

What was missing? I believe a big part of it was that they didn't follow my advice. This is exemplified with one key example: taking notes. The course plan says explicitly that students should always have their notebooks available for taking notes and jotting questions, and furthermore, that they should not have their distraction machines (laptops and phones) in their way during class discussions. Taking a friend's advice, I even made a first-day quiz in which students had to answer questions about this aspect of the class. Yet, very quickly (and for some, instantaneously), their old habits took over, and I would stand in front of class looking at the backs of open laptops rather than faces. Almost no students took any notes on any of our discussions, and as if to drive the nail into the coffin, some of them only got out their pens when I wrote something on the board. Even if they had a glimmer of understanding about affordances and signifiers during class discussion, there is no way that they held on to it fifteen minutes after class unless they actually expended the effort to do so.

I wrote on Facebook the other day about how I was feeling conflicting emotions about this class. On one hand, I am unsympathetic that they did not learn the material, because they chose not to follow my advice on how to do so. The advice is not complicated: it primarily involves reading and taking notes. At the same time, though, I pity the students, because I think a large majority of them—if not all of them—know neither how to read for understanding nor how to take notes while reading or discussing. It raises the question, "Where does the buck stop?" If I get undergraduates in my upper-division elective Computer Science courses who lack these skills, is it my responsibility to teach them, or just to assess what is in the master syllabus?

There is a related puzzle, which was foreshadowed in my sending-forth message to the class. An uncomfortably large proportion of the class showed very little proficiency in fundamental programming skills. When I brought this up in honest, private conversation with trusted undergraduates, they showed no surprise: they said that it was fairly easy, and common, for students to "cheat" on assignments. This manifests in two ways: either copying the work of a peer and submitting it as their own, or stringing together bits and pieces of code found online. Neither approach forces the learner to confront the useful struggles required to build firm understanding. It reminds me of the advice from Make it Stick that I wrote about last December, and of how, in its absence, students really don't know how to learn or what it even means to learn.

Going a little further, I witnessed a curious phenomenon several times during a guest presentation by a CS alumnus and successful professional. The speaker is currently in a position with a lot of creative flexibility, and he has his own team of programmers to implement parts of what he designs. However, many students seemed to misunderstand his story, thinking that this meant one could just "have ideas" and tell others to program them. They missed the part where he worked on rather tedious programming tasks for ten years to prove his capability, vision, and leadership. Instead, they rejoiced, saying things like, "I cannot program, but I want to tell programmers what to do, so now I see that there's a job for me!" This sentiment was shared primarily among CS minors despite their having taken at least three programming courses in the prerequisite chain to this course.

These students don't seem to see a connection between the HCI goals of the class and the fundamental skills of software development. It appeared to me that these students were not being tripped up by the accidental complexity of software development (such as the placement of braces or the quirks of a UI framework) but rather by its essential characteristics, which include precision and sequential reasoning. How I frame HCI as a Computer Scientist is essentially that it is user-centered precision and sequential reasoning. What happened on the students' final project teams seemed to be that those who had programming skill were relegated to doing persistence- and model-layer data manipulation tasks—required tasks for the program to work at all—while those who could not program worked on the UI. The result is that the UIs were badly conceived and executed, because those working on them couldn't conceive of the problem as requiring precision and sequential reasoning. Part of my evidence for this manifestation is the difference between teams' paper prototypes and their final products. Every team decided upon a paper prototype that was developed from a user-centered design process, but practically no team's final product looks at all like their prototype. Instead, their products looked like the archdemo sample, but with a few more widgets added via SceneBuilder. One could say they did what they could rather than what they wanted, or more pointedly, they decided to fail conservatively rather than succeed differently—which was exactly one of the human failure modes we discussed in class.

A student confided in me that, when he signed up for the course, he expected he would learn how to design a good user interface. By this, he meant that there would be some thing I could teach him that would suddenly make him good at it—a silver bullet. He pointed out that some students seemed to think that it was all in the tools: if they learned the tools, they would be able to make good UIs. I am grateful that he took the time to share this with me. I asked him if, after studying this topic for fifteen weeks, he understood why I could not meet his desires, why I could not dump ideas into his head that would suddenly make him good at UI design. He indicated that he did understand it, but he also sounded disappointed.

As I wrote about yesterday, I had forgotten about the emotionally powerful reflection session I had with my students at the end of the Fall 2018 HCI course. I didn't schedule for it this semester, and so it didn't happen. I think it would have helped all the students to frame their difficulties and challenges within their authentic context: yes they struggled, but they did so because what we are doing is legitimately hard.

The puzzles I face in considering how to change the course for Fall 2019 are significant. I expect to be able to meet this week with representatives from DOMA to talk about their take on the experience. Our primary contact is their Director of Education, Tania Said, and she has intimated that she would like to see the class work together to produce something with more staying power rather than a series of prototypes. This makes me a bit nervous, given my previous experiences with entire classes taking on a single project, but there may be a possibility to set it up like competing consultancies rather than trying to run a whole-class team. One of the reasons I wrote this up now is that it has helped me serialize and articulate some of my thoughts in preparation for meeting with her. I hope that conversation will help me turn some of these reflections into actionable course plans for Fall. As always, I expect I will be able to share my summer planning activity in a blog post in the coming months. Until then, I think it may be time to spin up my summer project.