Friday, July 26, 2019

Summer 2019 Course Revisions

My main focus this summer was my first commercial game project, but I have always kept in the back of my head that I had some serious work to do to get ready for Fall semester. In fact, I was feeling a bit stressed about it a few weeks ago and took some time away from Kaiju Kaboom to sketch out plans for my game programming and game design classes. My third class for Fall is scheduled to be CS445/545 Human-Computer Interaction, but it is right on the edge of being underenrolled; if it doesn't make minimum enrollment, the dean's office will cut it and I'll be deployed somewhere else with very little notice. However, because my time to work on course prep is drawing to a close, I devoted the day to sorting out as much of that course as I could.

In today's blog post, I'll give some highlights for my three courses. This will be shorter than in some previous years, so I'm condensing it all into one.

CS315: Game Programming

I'm excited to be teaching game programming again, since this year we actually did get new machines in the lab—machines that are capable of running UE4. Last Fall's course went well, and so I am keeping most of the plan as-is, but trying to keep a pedagogic eye on how I can use in-class examples and workshops to drive some of the lessons home. I expect that we will do three to four mini-projects followed by a larger team project. This year, I have dropped the achievements, since I think the course is already quite full of places where students can make meaningful decisions about what to pursue. I am keeping both the specifications grading and the project reports, both of which served their purposes last year. I have been tempted to set up some kind of team role system or other accountability system for the final project, to prevent the case where one student carries the rest, but I am still unsure how to do this; perhaps I should turn that question around to the students and have them contribute to setting the rules.

Here is the draft course plan. Only the first of the mini-projects is posted, but I expect to re-use my progression from last Fall, where we went essentially from 1D to 2D to 3D across three projects.

CS439: Introduction to Game Design

I do not have an immersive learning project lined up for this academic year. Instead, I am using the year to try to intentionally explore how I can integrate some of the work I've done through immersive learning into formalized Computer Science department offerings. One step in this direction is offering a version of my game design course—which I have taught for several years as an honors colloquium—as a Computer Science elective that anyone can take. The easiest way to do this was with our "seminar" course, although this is not ideal for marketing the course, since it still shows up in the catalog as a 400-level CS elective. Still, I look forward to teaching this and seeing how the audience compares to the honors colloquium.

One way that I have made this "computer science-y" is to require our intro programming course as a prerequisite. This is not because I expect to do much programming, but rather because I want to be able to draw upon metaphors of computational thinking when looking at games system design. This seemed like a good idea at the time, but I have questioned this the more I've worked on it. Indeed, in my random sketching of how I would consider proposing this as a formal service course in my department, I've strongly considered dropping any prerequisite.

As for the course structure, here is the draft course plan. It is based strongly on what I have done in previous years. Even though the examples in the free online text we use are showing their age, I like both the presentation and the price. Once students build a core vocabulary, they can make use of the exhaustive supplemental information that is discoverable online. Without a community partner via immersive learning, the students will be working on projects of their own design without external constraints, which I haven't done in a class like this since roughly 2008. I'm eager to see what they pursue. As before, we'll spend the first half of the semester studying fundamentals and then the second half of the semester building projects. Some of the students who have enrolled are ones with whom I really enjoy working, and so I'm looking forward to spending time with them again too.

CS445/545: Human-Computer Interaction

Last academic year, I taught this course both semesters in collaboration with the David Owsley Museum of Art. I had a fruitful meeting with their education director several weeks ago as we debriefed the experience. I am glad to say that we are continuing our collaboration, but we are narrowing the focus to one specific problem: helping visitors navigate the physical museum. This means that my students won't have to do so much problem discovery, but I think that's OK. They really struggled with the difference between finding a legitimate problem and inventing a problem and then justifying their work. I think this new focus will help them get into the solution design part of the course, which is really more important for our single, elective course on HCI.

Knowing that this will be a relatively low-enrollment class allows me to treat it as a studio class. We will start with some common readings and structured exercises, but then I would like to move quickly into tackling this navigability problem, using my familiar tactics of just-in-time teaching and reflective practice to have a meaningful learning experience. The draft course plan only lays out activities for the first three weeks or so of class, after which I can work with the students to assess our situation and move forward as needed. It does mean there is kind of a hole around the grading policy of the course, and I hope that this does not cause the students any undue stress. My plan is to work with them to develop a methodology that embeds assessments into it, which I think they will enjoy and learn from.

A word about the sites

Careful readers may have noticed that my course web sites have undergone a visual overhaul. This is related to my learning lit-element, as I wrote about earlier this summer. Whereas my sites were previously based on the polymer starter kit, now I am using the PWA starter kit prerelease. I had to do a bit of finagling to get it to work on our departmental Apache server, but once that was done, I could easily replicate it across the three sites.

Tuesday, July 23, 2019

Kaiju Kaboom!

I am pleased to announce the release of Kaiju Kaboom, my first commercial game project. You can find it on Google Play, exclusively playable on Daydream.

Most summers, I set my own work and creative goals rather than teach classes or work on grant-funded projects. During the Spring semester, I decided that one of my summer goals would be to create a game from scratch and release it commercially. The outcomes of this plan are objectively measurable, and as of last night, I have met them, and I feel good about it.

The game

The game itself is an expansion of the concept that my son and I developed for Global Game Jam. Inspired in part by Terror in Meeple City, the player takes on the role of a kaiju who returns to its island home, only to find it infested with people. Of course, the people are meeples, the buildings are made from wooden blocks, and you are a miniature giant monster, but this both adds to the charm and simplifies the asset development for a novice modeler. The biggest change to the game is moving it to mobile VR via Google Daydream, but I also added randomized levels, meeples that fight back, and four different kaiju powers, three of which are unlocked through the high score system.

I spent some time around finals week building a proof of concept to ensure that the game would be enjoyable and within scope, and having a positive experience with that, I devoted the summer to it. I did not use any formal task tracking tools. Instead, I have a pad of paper on my desk where I would either write down features to explore, sketch geometric or software solutions, or record defects to address. I worked roughly eight to ten hours each day, sometimes up to twelve, and about half days on Saturday, with a few gaps for family trips. I showed an early build to some friends and family around Memorial Day, and this confirmed to me that the core gameplay was really enjoyable.

The tech

Kaiju Kaboom was built using Unreal Engine 4. I started the project in Blueprint and initially just added C++ for the parts that were faster to write in textual code than they were to prototype in Blueprint. In retrospect, there are parts of the core game loop that I left in Blueprint which I probably should have done in C++, because the Blueprint can get hard to follow. I don't think there's any real performance hit from using Blueprint here; the real problem is that Blueprint makes it harder to control modularity, so it's hard to intuit the dependencies after walking away from a subsystem for a while.
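
For a sense of what that split looks like in practice, here is a minimal sketch of exposing a C++ helper to Blueprint through a Blueprint function library, the standard UE4 mechanism for this. The class, file, and function names are hypothetical stand-ins, not taken from the Kaiju Kaboom source.

// ScoringFunctionLibrary.h -- hypothetical example, not from Kaiju Kaboom.
#pragma once

#include "CoreMinimal.h"
#include "Kismet/BlueprintFunctionLibrary.h"
#include "ScoringFunctionLibrary.generated.h"

UCLASS()
class UScoringFunctionLibrary : public UBlueprintFunctionLibrary
{
    GENERATED_BODY()

public:
    // A pure node callable from any Blueprint graph, so the core loop can
    // stay in Blueprint while the fiddly arithmetic lives in C++.
    UFUNCTION(BlueprintPure, Category = "Scoring")
    static int32 ComputeSmashScore(int32 MeeplesSquashed, int32 BuildingsToppled, float ComboMultiplier)
    {
        const int32 BaseScore = MeeplesSquashed * 10 + BuildingsToppled * 50;
        return FMath::RoundToInt(BaseScore * ComboMultiplier);
    }
};

Because it is declared BlueprintPure, the function shows up as a pure node in the Blueprint palette, so graphs can call it without wiring execution pins.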

Getting up and running on Daydream was not much trouble, and the documentation is good. However, the Google VR branch of UE4 is clearly no longer being maintained. This led to a problem as I was getting ready to release, in which the game crashed on a friend's device. Scouring the web, reading semi-related threads, and trying various approaches, I was able to finagle a build system that worked for both of us. I continued to do my regular feature development in UE4.22.3. For releases, however, I downloaded and built UE4.22 from source, replacing its GoogleVR libraries with those from the 4.20 release (the most recent) of the googlevr-unreal fork. My friend also had some troubles that I thought might be due to texture formats, so the build includes both ASTC and ETC2 formats, even though only the former is needed for my own Pixel 2.

The null business plan

At this point, the thoughtful reader may ask, "Why did you target Google Daydream?" The simple answer is that I have one and it seemed like fun, both as a technical challenge and based on the proof of concept I built in early May. Note that mine was a commercial project not because I wanted to make money but because I wanted to see what was involved in making it a commercial project. I know literally one other person who has a Google Daydream setup: my friend Chris, who was my lone tester. If I can sell ten copies, I'll buy some nice beer. I'm in the blessed and enviable position where I can invest a summer in learning and improving my skills without it having to be economically profitable. As I alluded to before, another benefit of Daydream is that because it's low-powered for VR, I could get away with my own limited asset-creation skills, whereas I know I could not make something by myself in two months that would be competitive for, say, the Vive.

I did borrow an Oculus Go from a friend, in hopes that it would be easy to release on both platforms. Unfortunately, the logic of my character was tightly coupled with the GoogleVR components, and I didn't see a clean way to separate these. I would consider buying one of these $200 headsets and doing a port if I had any belief that my little hobby project could make at least that much back plus porting time, but right now, I don't think I can put that much more effort into it, since I need to transition into "preparing for Fall semester" mode.

The last two or three days, as I've geared up for the public release, I have thought that perhaps I should have planned on doing a more modern release, where the first part of the game is free, and people who like it can pay to unlock the rest. One reason for doing this is so that I could explore both the libraries and the platform support for DLC and licensing, which would be interesting. I decided to stick with my original plan, augmenting it slightly by releasing the game under the GPL. This means that if someone buys the game, they get access to the source code as well, and they are free to learn from it just as I did. I don't know if that will have any real impact, but it feels like the right thing to do, given my situation.

The learning

Originally, I had it in my head that I would try to make all the assets of the game from scratch in true artisan fashion. Along the way, I did use a few public domain sound effects, and as my available time quickly ran out, I added CC-BY music from indie- and jam-favorite Kevin MacLeod. I have written two other posts to detail some specifics about things I've learned through this project: importing Blender animations into UE4 and using UE4 LiveCoding for rapid unit testing. I did not keep a rigorous log of other things I've learned, and some things I had to learn twice because they did not stick in my head. Here, however, I am going to try a brain dump, in part so that I can come back here later when I need to re-learn something again.

Blender

  • UV Unwrapping
    • I found this video in particular to explain well how to unwrap and texture paint, a technique I used to mark out regions of a texture for manual painting in Gimp.
  • Normal maps
    • The technique involves modeling high-poly and low-poly models in the same file, exporting the low-poly one (by selecting it and exporting only the selection) for UE4, and baking the normal map from the high-poly one to use on the low-poly model. I used this on an early version of the blimp.
  • Smooth shading instead of normal maps
    • An easier way to do what I wanted with normal maps was to simply select Smooth Shading from the Tools menu in Blender, and to make sure this was exported to FBX.
  • Boolean modifiers
    • These modifiers enable constructive solid geometry; for example, the union of a plane and a sphere was used to generate the blimp's fins.
  • Keyframes in video editing
    • Just as explained in this video, I had been doing gamma cross fade effects, but now I can do more robust effects by keyframing. Right-click on a property, insert keyframe. I'm still not sure how to see all the keyframes and move them around, but this was good enough for me to compose the scaling and fading effects of the trailer.

Gimp

  • Filters→Render contains all kinds of neat stuff to play with visual effects. I used the Lava and Cell Noise filters to generate quick but effective textures for the fireball and acid spit spheres.
  • Multiple monochrome images can be packed into the RGBA channels of a single image using the Colors→Components→Compose feature. Make each image into a separate layer, and then this tool allows you to directly compose the layers into the RGBA channels.
  • Two ways to add shadows behind text:
    • A "soft" shadow can be had by copying a text layer, darkening it, and then blurring it. I did this at some point in the past, though I cannot remember where.
    • A harder shadow is possible by using Filters→Light and Shadow→Drop Shadow.

UE4

  • AI vs Physics
    • An actor cannot be driven both by a behavior tree and by physics simulation. Of course not, he says in retrospect, that would make no sense.
    • There are actually two meeple actor types in the game: AIMeeple and PhysicsMeeple. AIMeeple is controlled by one of two behavior trees, depending on whether it is armed or not. When hit, it converts in-place to a PhysicsMeeple; a rough sketch of this conversion appears after this list.
  • AnimGraph
    • I had done a little with AnimGraphs for an unreleased tech demo last Fall, but very little of that stuck in my head, in part because I may very well have been doing things wrong. For Kaiju Kaboom, AnimGraphs are used in a more conventional way, driving the animation states of both AI and Physics meeples.
  • Landscape Materials
    • The landscape has sand and grass layers, and it fades toward white with higher altitude.
  • Cascade Particles
    • I was hoping to learn Niagara, but it is not supported on Daydream. Still, I was able to learn enough of Cascade to make my own simple particle effects, such as the fireball explosions and acid spit splash.
  • Following a Spline
    • It's a fundamental technique in level building, but I haven't really done much level building before. The blimp's flight path is specified by a closed spline; a sketch of following a spline from C++ appears after this list.
  • Efficient Materials
    • I learned some more about how to balance GPU and CPU processing. For example, my original implementation of the blimp's propellers involved using a rotating movement component for each. After watching some videos (possibly one of Tharle VFX's shader maths videos), I moved this to be computed by a shader using world position offsets.
    • Several of my materials use customized UVs so that they are computed per vertex instead of per pixel.
    • I have a few "master materials" and use a lot of material instance constants. I understand this to be more efficient, although I didn't go so far as to measure the difference.
  • Visualizing Shader Complexity
    • As mentioned above, I haven't really done a lot of level design in UE4, so I've never really needed to inspect my scenes for performance problems. On this project, I tinkered some with the different editor view modes such as shader complexity, in order to find areas that were not performing how I expected. Turns out, everything was performing as I expected, but it was still neat to see this.
  • Unit tests
    • I knew it was possible to write unit tests for C++ code in UE4, but I had never done it before. TDD seemed like the best way to approach my high score table: unlike experimental gameplay code, it had very well-defined rules. Here is the source code for the unit test. The testing library assumes all the tests are in one method, and so I have used macros to give a fluent layer on top of that assumption. This is not as robust as something like chai, but it's much easier to read than the alternative.
    • I did not have a continuous integration system, so I had to run the tests by hand, which means that I rarely ran them. If I were working on a team on a larger-scale project, I would definitely invest in CI.
  • Functional tests
    • As with unit tests, I have known for some time that it was possible to do automated testing through the UE4 session frontend, but I never took the time to really figure it out. In this project, I created a test level containing four automation tests in a 2x2 format to ensure that falling columns and floors damage AI meeples and physics meeples.
    • In my original approach, I was spawning the relevant actors in the PrepareTest event of the Blueprint. Later I realized that it was much more effective for me to create these actors as child actors within the construction script, since this allowed me to see and manipulate them in the level. 
    • The result is fun to watch, but the approach was somewhat hamstrung by the fact that I was not using continuous integration, as mentioned above.
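
To make the AIMeeple-to-PhysicsMeeple conversion above concrete, here is a rough sketch of how an in-place swap can look in C++. The two class names follow the ones above, but the base class, the OnHitByDebris hook, and the PhysicsMeepleClass property are illustrative assumptions, not the game's actual code.

// AIMeeple.h (sketch) -- the hook and property names are hypothetical.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Character.h"
#include "AIMeeple.generated.h"

class APhysicsMeeple;

UCLASS()
class AAIMeeple : public ACharacter
{
    GENERATED_BODY()

public:
    // The physics-simulated actor class to swap in when this meeple is hit.
    UPROPERTY(EditDefaultsOnly, Category = "Meeple")
    TSubclassOf<APhysicsMeeple> PhysicsMeepleClass;

    // Hypothetical hook called when debris or a kaiju power hits this meeple.
    void OnHitByDebris();
};

// AIMeeple.cpp (sketch)
#include "AIMeeple.h"
#include "PhysicsMeeple.h"

void AAIMeeple::OnHitByDebris()
{
    UWorld* World = GetWorld();
    if (!World || !PhysicsMeepleClass)
    {
        return;
    }

    // Spawn the physics-driven replacement at this actor's transform.
    FActorSpawnParameters SpawnParams;
    SpawnParams.SpawnCollisionHandlingOverride = ESpawnActorCollisionHandlingMethod::AlwaysSpawn;
    APhysicsMeeple* Replacement =
        World->SpawnActor<APhysicsMeeple>(PhysicsMeepleClass, GetActorTransform(), SpawnParams);

    if (Replacement)
    {
        // Once physics takes over, the AI-driven version is no longer needed.
        Destroy();
    }
}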
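
In the same spirit, here is a minimal sketch of following a closed spline from C++, along the lines of the blimp's flight path. ABlimp, FlightPathActor, Speed, and DistanceAlongSpline are illustrative names under assumed structure; the real blimp's implementation is not shown here.

// Blimp.h (sketch) -- all names here are illustrative, not from the game.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Blimp.generated.h"

UCLASS()
class ABlimp : public AActor
{
    GENERATED_BODY()

public:
    ABlimp();
    virtual void Tick(float DeltaSeconds) override;

    // A level actor that owns the closed USplineComponent to follow.
    UPROPERTY(EditInstanceOnly, Category = "Flight")
    AActor* FlightPathActor = nullptr;

    // Flight speed in Unreal units per second.
    UPROPERTY(EditAnywhere, Category = "Flight")
    float Speed = 500.0f;

private:
    float DistanceAlongSpline = 0.0f;
};

// Blimp.cpp (sketch)
#include "Blimp.h"
#include "Components/SplineComponent.h"

ABlimp::ABlimp()
{
    PrimaryActorTick.bCanEverTick = true;
}

void ABlimp::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    USplineComponent* FlightPath =
        FlightPathActor ? FlightPathActor->FindComponentByClass<USplineComponent>() : nullptr;
    if (!FlightPath)
    {
        return;
    }

    // Advance along the spline and wrap around, since the path is a closed loop.
    DistanceAlongSpline = FMath::Fmod(
        DistanceAlongSpline + Speed * DeltaSeconds, FlightPath->GetSplineLength());

    const FVector Location = FlightPath->GetLocationAtDistanceAlongSpline(
        DistanceAlongSpline, ESplineCoordinateSpace::World);
    const FRotator Rotation = FlightPath->GetRotationAtDistanceAlongSpline(
        DistanceAlongSpline, ESplineCoordinateSpace::World);
    SetActorLocationAndRotation(Location, Rotation);
}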

Wrapping Up

It seems like there's no end of good ideas that I did not have time to put into the game, ranging from minor quality-of-life improvements to major new features. The spirit of this project is inspired by Jeff Vogel from Spiderweb Software, who taught me this wonderful expression: It's better than good—it's good enough! I'm pleased to have this project more-or-less wrapped up for the summer. I have started doing some course prep, but I have more of that and other university-related business to take care of this summer, as well as family obligations. 

I've also done much of this work with an injured middle finger on my right hand, which, combined with the intensity of this project, has left me well behind on my summer painting goals. Now that Kaiju Kaboom is out in the wild, it may be time to clean up the office and get the paints out.

Thanks for reading! If you know of someone with a Daydream who you think might enjoy throwing boulders at meeples, please let them know about the game. That is, pretty much, my marketing plan.

Thursday, July 4, 2019

UE4 Live Coding for Unit Testing

TL;DR: Use the new Live Coding feature in UE4.22 as a workaround for the fact that automation unit tests cannot be Hot Reloaded.

My goal for the day was to write a high score system for my summer project, which I am creating in Unreal Engine 4.22. As I started thinking about the requirements, I realized that TDD would be a good approach. Of course, I've spent more hours diving into how to make this work than I would have spent on a brute force approach, but I'm hoping I'll recoup the investment in the future.

The Automation Technical Guide provided enough scaffolding for me to write a single unit test. I then took some time to pull this out into its own module—this seemed like a good idea, although developing with multiple modules is something else I had never done. Orfeas Eleftheriou's blog post was instrumental in helping me pull my unit tests into their own module.
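
For context, a minimal "simple" automation test looks roughly like the sketch below. The test name, the scores being sorted, and the assertions are placeholders for illustration, not my actual high score tests.

// HighScoreTableTest.cpp (sketch) -- placeholder test, not my actual code.
#include "CoreMinimal.h"
#include "Misc/AutomationTest.h"

#if WITH_DEV_AUTOMATION_TESTS

IMPLEMENT_SIMPLE_AUTOMATION_TEST(
    FHighScoreTableTest,
    "KaijuKaboom.HighScoreTable.SortsScoresDescending",
    EAutomationTestFlags::ApplicationContextMask | EAutomationTestFlags::ProductFilter)

bool FHighScoreTableTest::RunTest(const FString& Parameters)
{
    // Placeholder logic standing in for the real high score table under test.
    TArray<int32> Scores = { 40, 100, 70 };
    Scores.Sort([](int32 A, int32 B) { return A > B; });

    TestEqual(TEXT("Highest score comes first"), Scores[0], 100);
    TestEqual(TEXT("Lowest score comes last"), Scores[2], 40);
    return true;
}

#endif // WITH_DEV_AUTOMATION_TESTS

Tests declared this way show up in the Session Frontend's Automation tab, which is where I run them.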

I remembered reading months ago that unit tests in UE4 are not Hot Reloaded, and this is confirmed as expected behavior in UE-25350. The frustrating fallout of this is that the editor has to be reloaded in order to see changes made in the C++ test implementation. This slow and tedious process is anathema to good TDD rhythm. I came across Vhite Rabbit's workaround, but I could not get it to jibe with the modular decomposition I gleaned from Eleftheriou.

Then I remembered that 4.22 shipped with an experimental new Live Coding feature, which promises to allow changes from C++ to be brought into a running game session, as long as they are not structural changes. I wondered, if this works for running games, would it work for unit tests?

The happy answer here is Yes. I had to restart the editor because Live Coding cannot be used after any Hot Reloading. The editor opens with a separate window that seems to be managing the Live Coding feature. I went into my unit test and turned its simple "return false" into "return true", hit the magic Ctrl-Alt-F11 combo, waited just a few seconds for the Live Coding system to run, and then re-ran my test. Sure enough, now the test passes.

I have made essentially no progress on a high score feature after the morning's work, but hopefully, by documenting my findings here, I can help others move forward in their unit testing adventures.