Thursday, September 25, 2014

Quiz Games and Learning Taxonomies

A few days ago, I posted about the SOLO Taxonomy and my desire to incorporate its ideas into educational game design. Yesterday's serious game design class provided just such an opportunity.

This week was my students' first opportunity to present game concepts. Over the coming weeks, they will be presenting additional concepts as well as multiple iterations of prototypes, as I described in my course planning post. This was their first real opportunity to show something original, and most of them chose to make games based around the International Space Station, a topic of particular interest to the class' partner, The Children's Museum of Indianapolis. The designs varied in genre, scope, and clarity, as is to be expected from a first pass. One particular design was essentially a series of questions, where each question was worth a point. That is, this design was a quiz, with no interesting or relevant decisions being made by the player.

I told the students how, in general, I don't have much interest in quiz-style games. Coincidentally, they had just read The Education Arcade whitepaper, “Moving Learning Games Forward,” and I was able to draw on some of the ideas in that whitepaper to justify my indifference. Looking at quiz-style games from the perspective of SOLO, they clearly fall into either the unistructural or multistructural levels of the taxonomy. That is, either the player is recalling a single idea (identify or name) or sets of ideas (list, enumerate). SOLO makes it clear that these games exist at the lower level of learning activity. By contrast, a game that requires analysis or application of domain knowledge would place the learning activity at the relational level, and a game that requires invention or theory-building would be at the extended abstract level. These levels inherently reflect what Burgun calls endogenously meaningful ambiguous decision-making, which I find to be a useful heuristic for evaluating whether or not a game concept will be worth prototyping.

It seems to me that identifying quiz games as low-level learning is just as easy with Bloom's Taxonomy of the Cognitive Domain: quiz games are usually situated at the Remember or possibly the Understand level, the very lowest levels of the taxonomy. The applicability of SOLO vs. Bloom seems to get murkier once one adds meaningful choices to the game. This gives me a direction in which to consider these taxonomies as I move forward with this semester's serious game design efforts.

Wednesday, September 24, 2014

Configuring Java development on Linux

I run Linux Mint on my workstation and my laptop, and so my students see me using it in the classroom and in my office. A few students have dipped their toes into GNU/Linux this semester, but configuring a useful and robust Java development environment can be challenging for someone new to the Linux ecosystem. I am writing this blog post to document how I set up my environment, and why I do it this way. I will assume a Debian-based Linux distribution, although I have used this approach on RPM-based distros as well.

The first step is to get rid of the default OpenJDK installation. I sympathize with the goals of the OpenJDK community and of free software in general, but at the same time, I am content to run Oracle's JDK for development purposes. Instructions for doing this depend on your distribution, and a quick search will likely show something like this:

sudo apt-get purge openjdk*
sudo apt-get purge icedtea-* openjdk-*

It probably doesn't matter if you keep the OpenJDK installed, but since we're not using it, we may as well clean it out.

In the bad old days, I would install all my Java libraries into /usr/local, and then I would curse myself over getting the permissions right. These days, I just put them in an apps folder off of my home directory—that is, at /home/pvg/apps. If you're very new to *nix, note that the tilde (~) is used to represent your home directory, so I also refer to this directory as ~/apps.

The next step is to download the JDK from Oracle. I will assume we're using Java SE 7, release 40, just for the sake of discussion. The approach is the same for any release, and in fact, it allows for parallel installation of different releases. I extract the JDK into ~/apps/jdk1.7.0_40 using standard archiving tools. Then, I set up a symbolic link in ~/apps to point to this particular release of the JDK. From the ~/apps directory, it looks like this:

ln -s jdk1.7.0_40 jdk1.7.0

Now, I can reference ~/apps/jdk1.7.0 and it goes immediately to release 40. If I want to update my Java version, I simply extract the new release into ~/apps and update the symbolic link—no other changes are required.
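For the curious, here is the whole swap sketched in a scratch directory rather than a real ~/apps; the jdk1.7.0_45 directory is a hypothetical newer release, used only to illustrate repointing the link. The -n flag on the second ln tells GNU ln to replace the existing link itself rather than follow it into the directory:

```shell
# Scratch-directory sketch of the symlink scheme (paths are illustrative).
rm -rf /tmp/apps-demo && mkdir -p /tmp/apps-demo && cd /tmp/apps-demo
mkdir -p jdk1.7.0_40/bin jdk1.7.0_45/bin   # stand-ins for two extracted JDKs
ln -s jdk1.7.0_40 jdk1.7.0                 # initial link: release 40
readlink jdk1.7.0                          # prints jdk1.7.0_40
ln -sfn jdk1.7.0_45 jdk1.7.0               # repoint after extracting release 45
readlink jdk1.7.0                          # prints jdk1.7.0_45
```

Because JAVA_HOME and the path both go through the link, nothing else needs to change when the link moves.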

Making this JDK accessible from the command line just requires a few tweaks to the .bashrc file. This is a hidden file in your home directory; this means you won't see it with a standard ls command, but you can if you do ls -a. Since I use emacs, I can open the file with emacs .bashrc from my home directory; vim or another editor would work similarly, except not quite as well because they are not emacs. What we'll do is add two export commands, one to specify JAVA_HOME and another to tweak the path. I put them at the bottom of my .bashrc, along with a comment reminding myself what I was doing. The result looks like this:

# Use the JDK in ~/apps
export JAVA_HOME=/home/pvg/apps/jdk1.7.0
export PATH=$JAVA_HOME/bin:$PATH

You will need to open a new console or source the .bashrc file to see the change. Now, if you type the command java -version, you should see that you're running JDK 1.7.0 Release 40 (in my example). Note that you can use the which command to determine which executable is being run, so which java should return /home/pvg/apps/jdk1.7.0/bin/java.

My installation strategy for Maven, Ant, and Eclipse is similar. This gives me fine-grained control over which version I have installed, and I can hop between versions by simply tweaking my path. For example, in my ~/apps directory I have apache-maven-3.1.0, and then in ~/.bashrc I have added the following lines:

export MAVEN_HOME=/home/pvg/apps/apache-maven-3.1.0
export PATH=$MAVEN_HOME/bin:$PATH

I keep a link to Eclipse on my desktop for ease of access. There is a trick to making this work in KDE; I don't know if this is applicable to other desktop environments. In my Eclipse configuration file (~/apps/eclipse/eclipse.ini), I have specified the Java VM that should be used to start Eclipse. The two lines look like this:

-vm
/home/pvg/apps/jdk1.7.0/bin/java

These go before the -vmargs flag that is probably already there.

I hope this is useful information for my students and others who are experimenting with configuring GNU/Linux systems for Java development. If you have other tips or approaches, feel free to leave them in the comments.

Tuesday, September 16, 2014

Bloom's Taxonomy vs. SOLO for serious game design

I think the first time I came across Bloom's Taxonomy for the Cognitive Domain was in my first two years as a faculty member, when I was just dipping my toes into the science of learning. I have written about Bloom's taxonomy before, including my preferred presentation of it:

I have found this presentation useful in teaching and practicing game design. Many games have a learning structure that follows the lower three levels. First, you are given some command that you must remember, such as "press right to move right" or "press B to jump." Then, you are given some context in which to understand the effect these commands have on the world: run to the right and the screen begins to scroll, then continue running and fall into a pit, forcing you to start again. Now, you have the context to apply what you understood, combining running and jumping to get over the pit.

I contend that most games don't really go beyond that. I hesitate to say that recognizing the pit as an obstacle constitutes analysis or that combining running and jumping is any meaningful synthesis. Nor do most games teach the player to evaluate the game against the rest of their mental models. The modern phenomenon of in-game "crafting" is similarly contained in the lower half of the taxonomy: remember that oil can be combined with flames to create an explosion, understand that this explosion hurts enemies, and you can apply this to defeat the enemy du jour. My contention is that games are designed experiences, and that players are remembering, understanding, and applying the constraints designed for them. Even Minecraft, with all its cultural importance, mostly has kids understanding and applying the rules of the world to build interesting things. Certainly, a few users recognize that the pieces they have been given can be used to synthesize something new, such as building circuits out of redstone, but my observation is that these are a minority—and the people who follow tutorials to build copies are back to remembering, understanding, and applying someone else's design.

This use of Bloom's taxonomy is useful for game design thought experiments and for discussion, but it wasn't until GLS 2013 that I found out that many teacher educators are teaching Bloom's taxonomy as dogma, not as a useful sounding board. A poster session was presenting an alternative taxonomy, one that used the same labels but put analysis and synthesis closer to the bottom, using these to describe the kinds of tinkering users do with digital technology. (Clearly, they are using different interpretations of these labels than I am, but that's intellectual freedom for you.) This alternative was based on anecdotes and observations, much like Bloom's original, but this leads to a problem: Bloom's taxonomy is presented as a predictive, scientific model, but as far as I can tell, it is not empirical. In fact, cognitive science tells us that the human brain does not actually follow the steps presented in either of these taxonomies.

Reading Hattie and Yates' Visible Learning and the Science of How We Learn, which was recommended on Grant Wiggins' informative and inspiring blog, I was reminded of the fact that Bloom's Taxonomy does not represent a modern understanding of learning. The book introduced a different model, one that was first defined in 1982 but that I had never encountered before. It is Biggs and Collis' SOLO Taxonomy, where "SOLO" stands for Structure of the Observed Learning Outcome. It is summarized in this image, which is hosted on Biggs' site:

Hattie and Yates conveniently summarize the taxonomy—one idea, many ideas, relate the ideas, extend the ideas—and point out that the first two deal with surface knowing while the latter two deal with deeper knowing. The figure points out that each level of the taxonomy is associated with key observations that can be aligned with assessments. For example, if a student can list key elements of a domain but cannot apply, justify, or criticize them, you could conclude they are at the multistructural ("many ideas") level of SOLO. It strikes me that this has the potential to be powerful in my teaching, and I look forward to incorporating it.

So, how can SOLO contribute to an understanding of game design? It seems we run into the same limitations that hinder game-based learning, primarily those of transfer. Notice that the extended abstract level of SOLO explicitly refers to generalization to a new domain. It's true that I can learn how to jump over pits or destroy goblins with flaming oil, but this knowledge is locked away in the affordances of the game. This perspective is taken from Linderoth's work, particularly "Why gamers don't learn more," which applies the ecological theory of development to explain why learning from games does not transfer.

If nothing else, the SOLO Taxonomy can provide both a target for serious games and guidance toward assessments. Given a content area and the desire to create a game to teach it, I can target a specific level within SOLO. For example, if I only want players to emerge with surface-level knowledge, I might target the multistructural level, but if I wanted players to be able to connect the content to something else they know, I would need to target extended abstract. Then, I can reference the key words from the corresponding level of the taxonomy, and use these to define an assessment of whether or not the game worked. In fact, it strikes me that one could also take key words from the adjacent levels, and use this to detect extremes. As my game design course is wrapping up the preliminaries and moving into game concepts, I will try to create an opportunity to try this.

Monday, September 1, 2014

Learning from Failure—as a game mechanism

One of my favorite insights from the scholarship of teaching and learning is that, essentially, all learning is learning from failure. Every time a person encounters a mismatch between a mental model and reality, it is an opportunity to learn. I have been working for several years to incorporate this idea into my teaching, notably in the expansion of the CS222 project from two to three increments, specifically because it gives teams more chances for safe failure. Indeed, one of my favorite descriptions of university is that it is a "safe fail" environment, where students can fail in order to learn, without the economic cost it would take outside of academia.

It was only recently that I thought about what this phenomenon means in terms of game design, and in role-playing games in particular. Character advancement is a common component of RPGs, giving the player the opportunity to increase the skills and capabilities of his character. Such advancement is generally limited through in-game resource management, such as accumulating threshold values of experience points or by earning sufficient skill points to expend on new abilities. Dungeons and Dragons set the precedent whereby experience points are earned by overcoming obstacles, most often through combat but (with a good DM) also by other means. This has become the de facto standard and can be seen in all manner of modern games, including computer RPGs and RPG-inspired boardgames: success earns points that are used to gain skill.

Recently, I came across The Zorcerer of Zo (ZoZ), a role-playing game by Chad Underkoffler published by Atomic Sock Monkey. In fact, I have had a copy for almost a year, having bought it in a Family-Friendly RPG Bundle of Holding, but it wasn't until a few weeks ago that I read it. The game is based on Baum's Oz series, and since my two older boys and I are currently on the tenth book, we are quite familiar with the setting.

ZoZ uses the "good parts" variant of Underkoffler's Prose Descriptive Qualities (PDQ) system, the full version of which is described in a free document from Atomic Sock Monkey. A critical aspect of the system is that it eschews conventional attributes, skills, and inventory for qualities such as "world-traveler," "small," or "afraid of cats." Each quality is ranked on a scale that runs upward from Poor [-2] through Average [0] to Good [+2] and beyond, with each rank giving an adjustment to the 2d6 roll used for all conflicts. Again, for full detail, check that free PDF. It was a lot of fun to create ZoZ characters with my sons using this approach, since most of the time was spent describing the character's background and interests. One is a talking mouse from an island of merchants, who is in fact a small world-traveler who is afraid of cats. The other is a rockman warrior from a far-away island, who, since he does not need to breathe, simply walked through the ocean to Zo after hearing that it was a nice place to visit. There are no classes or races, and the setting lends itself to this kind of creative storytelling: it doesn't matter that there were no rock men in the Oz books; as long as my son wanted to make one, it was easy to create.

It is the character advancement system of ZoZ that got me thinking about the backwards nature of the conventional RPG systems. When a character encounters a conflict with a chance of failure, the player rolls 2d6 against a target number. If the roll meets or exceeds the target, the character succeeds. If not, the character fails and gains a Learning Point, and a player can improve his character by later spending these points. Consider how this matches what we know about how the brain works: when everything goes well, we actually learn very little, but when we make mistakes and reflect on them, our skills and knowledge improve.
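The whole loop is small enough to sketch as a script. This is my reading of the mechanic, not Underkoffler's text: 2d6 plus the quality's rank modifier against a target number, with a miss banking a Learning Point. The function and variable names are my own inventions.

```shell
# Sketch of a PDQ-style conflict roll (my reading of the rules, not official).
learning_points=0
roll_check() {
  local modifier=$1 target=$2
  # 2d6 plus the quality's rank adjustment
  local roll=$(( (RANDOM % 6 + 1) + (RANDOM % 6 + 1) + modifier ))
  if [ "$roll" -ge "$target" ]; then
    echo "success ($roll vs $target)"
  else
    learning_points=$((learning_points + 1))
    echo "failure ($roll vs $target): gain a Learning Point"
  fi
}
roll_check 2 7   # e.g., a Good [+2] quality against a moderate target
```

Note where the reward lands: success just succeeds, while failure feeds the advancement counter, which is exactly the inversion of the usual XP-for-victory scheme.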

It should not be missed that earning a Learning Point when missing a roll takes a lot of the sting away from failure. Although you did not get the result you wanted, you still get something: the opportunity to learn. This is a true yet countercultural statement. Conventional wisdom is that failure is bad, and this is a dangerous meme that is hard to overcome. Nowhere is it so endemic and unquestioned as in formal schooling environments, the very places whose mission it is (or should be) to instill a love of learning—and a love of learning necessitates tolerance for, if not embracing of, failure.

A player is still rewarded for success, of course, but it is done through the narrative. The use of narrative as a feedback mechanism is cleverly addressed in Koster's essay, "Narrative is not a game mechanic," which I recommend to anyone interested in weaving games and authored narratives together. In ZoZ, players can earn Hero Points for their characters by taking especially brave or noble actions. Hero Points are not used for character advancement, but rather to shift the story in the players' favor such as by getting hints, trading in favors, or getting a one-time boost to a roll—a sort of Oz Karma.

I have been impressed with Zorcerer of Zo and enjoyed playing it with my family. It has inspired me to consider how I might incorporate learning from failure as a game mechanism in my own designs, and more generally, how I might take more ideas from my scholarship and apply them to my game designs. This semester, I will be engaging in an experiment in public game design in concert with my honors colloquium on serious game design, and I expect to use this blog for that purpose. Watch this space for further announcements and designs, and as always, feel free to share your thoughts in the comments section.

Thursday, August 28, 2014

Impressions of Android Wear with the LG G Watch

I attended a GDG Muncie meeting over the summer where I was lucky enough to win an LG G Android Wear watch. The longer version of the story is that the organizer was trying to determine the best way to generate random numbers for the lottery, and I suggested one of my very favorite sites on the Internet. He went to the site and generated my number: the system works!

I normally wear a watch—a Skagen titanium watch, to be precise. It is ultralight and quick to don or remove, both of which I consider to be great benefits. This is my second watch of this model, in fact, after having smashed the face of one on vacation several years ago. For those who don't know, I don't have a cell phone plan: the Nexus 4 I carry everywhere is used strictly as a pocket Wi-Fi device. Hence, my watch is not a fashion accessory; it serves an important function, and being so light, it does so innocuously.

My LG G arrived a few weeks after the GDG meeting, and I decided to wear it around the house for a few days. It is clunky and heavy, especially in contrast to my usual watch. I'm not hip to the technical terms for watchbands, but while the band is functional, it is fiddly, so it takes a few moments to put on or take off. Again, I am not interested in the piece as a fashion statement per se, but I think the picture shows how my rather thin hands and wrists are dominated by this piece of black technology.

It was easy enough to set up and sync with my Nexus 4. I like that the watch face can be configured to show the time, the date, and some ambient information, such as the temperature. I was afraid that the notifications system would be distracting, but I find it no less distracting than my pocket device, really. When I want to know whether I have new email, for example, I simply check. When I am in a situation where I don't want to be interrupted, I am not generally checking my watch for the time anyway: I am either in a situation where I don't care about the time (writing) or there's a clock readily available (meetings, teaching).

Given that I'm the type to turn off notifications and avoid distractions anyway, I also haven't found it to be that useful: almost any time I have used it, I could have about as easily used my pocket device instead. Perhaps that's due to immaturity of the platform, but I suspect it has more to do with me not being in the target demographic. All the same, it is kind of fun to check messages on my watch while walking down the hallway to the men's room. It doesn't feel any less isolated or rude than carrying a phone and checking messages in the same situation, but it does leave hands a bit more free. Probably the feature I use most on the watch, besides the time and date, is the view of what appointment is coming next.

I do have a major complaint with the email authoring feature. It does feel very futuristic to talk to your wristwatch and have it send a message to someone. However, it is set up so that you narrate your brief message, and then the watch shows you what it recognized and sends it right away. The two times I've tried this, the speech recognition was terrible, but I had no opportunity to stop it before it sent—I was left with that awful feeling of having just sent a nonsensical message. In my opinion, it really needs a 2–3 second confirmation period in which one can stop the process.

Another usability failure on the watch arises from the context in which it is used, and in particular, I wonder if the designers considered users with small children. When my toddler sits on my lap or when I pick him up, his arms reach exactly to my wrists, and he tends to fiddle with whatever is there—a smartwatch, for example. It's a bad feeling to be sitting happily with a child in lap and then suddenly realize that he may have knocked messages out of your inbox. The device really needs a hardware switch that turns on or off the touch-sensitivity, or it should come with a warning, "Not for parents of young children." I've started wearing it only on days when I am working from campus, and around the house, I stick with my Skagen and pocket-device.

Preliminary conclusion: It's a fun toy with a few good uses and a few usability problems. I would not buy one, but I am happy to tinker with one. I do have an idea for an app that I may experiment with in the next few days, but that depends on how the semester gets rolling.

UPDATE (8/29): A crazy thing happened this morning, the day after I posted my review. I checked my messages while walking down the hallway to work, and a colleague sent me a question that could be answered either "yes" or "no." I figured, how badly could the speech recognition mangle that? I hit "Reply" on my watch, and the voice recognition screen came up with the "Speak now" prompt. Then I swiped or tapped or something... I am not really sure what I did, which itself is interesting... and I got a "Yes/No" dialog. I hit the "Yes" button and an email was sent with exactly that content.

That is neat. I need to wait for someone else to send me an email that I can answer in one word and try that again.

Wednesday, August 20, 2014

Screencasting on Linux Mint 17

Sometimes I post philosophical essays, and sometimes I describe important teaching experiences, and sometimes I reflect on my miniature painting hobby. Today, however, I go to that much more pragmatic use of the blog: writing down how I actually got screencasting to work on my work machine.

I have a Dell Precision T3600 running Linux Mint 17. I switched from KUbuntu to Mint on my work machines last year, when I had some trouble with hardware recognition. Since I've been using KDE for over a decade (from Mandrake to Mandriva to KUbuntu), I use the KDE distribution of Mint as well. It seems this puts me in the minority, as most of the Q&A I see online assumes one is running something newfangled.

I have USB Logitech headphones that I used for the screencast. After a bit of fighting with my mixer, I was able to get the microphone to work: the system wants to default to the hardware ports, even when there's nothing connected there. Audacity makes it very easy to test the sound configuration, and so I was sure that the microphone was working.

However, getting that microphone to work through screencast software was another problem entirely. I had no luck with the old standard recordmydesktop at all: video capture was fine, but the audio came up with nothing. A bit of searching revealed some newer applications I had never heard of. Vokoscreen had a reasonable user-interface with several configuration options, and I was confident enough to record an 11-minute take. Unfortunately, upon playing back, the audio was terribly choppy. I spent quite a bit of time fiddling with the framerate and pulseaudio settings trying to fix this, since otherwise Vokoscreen was convenient to use, but a tutorial with no audio is hardly a tutorial at all. Since the mic was working fine in Audacity, I inferred that it was a software problem, not a hardware problem.

I switched over to Kazam, and this ended up doing the trick. It allowed me to select an area of the screen where I could hop between Eclipse, Chromium, and the console, and the audio was captured with no trouble. The default file format uploaded flawlessly to YouTube, and I was able to share the video with my students. I had actually tried Kazam before Vokoscreen, but it wasn't working—turns out it was because I had the mixer settings for my mic much too low, and they needed to be at almost 100%. (Recordmydesktop still did not work with this fix, incidentally.)

The screencast itself is just an explanation of how to set up a PlayN project and hook it up to a Mercurial repository using the Computer Science Department's Redmine server. Hopefully next time I want to do an 11-minute screencast, it will take less than two hours of tinkering.

Friday, August 8, 2014

Painting Drizzt, Part 2: Heroes, Villains, and Big Monsters

This is the second part of my series of posts in which I reflect on painting the miniatures from Dungeons & Dragons: The Legend of Drizzt Board Game. In Part 1, I described my experiments with a few different techniques and ended with descriptions of some of the unique characters. The Drizzt set is the second collection of miniatures I have painted since my 20-year hiatus from the hobby, the first collection being Mice & Mystics—a painting experience documented in my January post.

I make sure to take pictures of all the miniatures I have finished, and I often take work-in-progress shots of interesting or challenging pieces as well. Using my Android phone, these get automatically backed up to Google+, where I write my notes about techniques and colors. I also frequently use the image editing tools in Google+ to do some white balancing, and I find the "Lift" option combined with increased brightness makes up for my budget camera and lighting setup. (My only criticism is that these editing tools do not work in Linux, and so I have to run in Windows just to take good painting notes.) I mention this in part because I have had the Drizzt figures complete since May and have begun my next painting project; I am glad I took the notes, as they remind me of my focus at the time.

I left off my last post with this picture, the first step of an experiment with priming miniatures in white or black:

Let's revisit these characters in the painting process, starting with Guenhwyvar. Prior to painting her, I came across a post (maybe this one, though I do not remember so well now) explaining that one rarely paints in pure black. Guenhwyvar then is my first experiment in non-black black. The base coat is almost black with a bit of purple, and the drybrushed highlights are greys tinted purple. The base also has hints of purple, a color that was chosen in part to match my plans for Drizzt himself. The only pure black on this model is in the pupils and the roof of the mouth. I am pleased with the result.

The other black-primed figure was Yochlol, whom I decided to build up in layers following the technique described by Dr. Faust. This figure also marks a change in my work lightbulb: I switched to a Cool White Ecobulb (1170 lumens, 4100K), which gives much better light although it also generates quite a bit more heat. Yochlol is really just a series of layers from yellow-brown up to yellow-white, as can be seen in this little montage. I remember at the time feeling a bit silly covering the whole model in each subsequent layer, as opposed to a lighter base coat and using a wash to get in the cracks. However, it was a good experience to practice layering on this model, given that it's basically a blob of slime. If I could do it again, I would probably have it be a bit less brown, but I'm still pleased with the smoothness of the result.

For Athrogate, I tried to model his color scheme after a wild boar, inspired by his pet boar, Snort. I remember being happy with his base colors but thinking he was rather dull, and I was nervous to add highlights. I am glad I faced my fear and added the highlights, because I think it turned out great—and it's another novice fear eliminated!

Next up is Artemis Entreri, whom it seems we face every time we play a random-villain game. I was never really happy with his face, which came out kind of splotchy. This was also the first miniature I did after buying some glaze medium. I tried giving his vampiric dagger a red glaze, but it came out comically pink and was painted over. On the cloak, there was too much contrast between the highlights and shadows, and so I tried a glaze there, but I think I used too much medium. The result was an accidental "clothy" effect, which is not horrible but also not what I wanted. Long after working on this figure, I came across some tips on how to get cloth effects with sponge painting, and I think that's what I will try next time I approach a cape like this. All told, Entreri is passable, but not great.

Looking back at those four, it's hard to tell that the primer made any difference at all in the final model. Artemis Entreri does look darker than the rest, but he was also supposed to look darker, so I cannot say the primer was a major issue. My experience was that white lends itself to a painting sequence of mid-tone, wash, and highlight, while black works well for building up from dark tones (although I usually end up needing a pin wash to bring out the contrast at the end in this technique anyway). Based on this, I moved forward with white primer for the rest of the figures in this set.

I had been waffling with respect to the sequence of basing and priming, and I had also been experimenting with different primers. One thing I tried was basing first (with a mix of three sizes of ballast held down with thinned white glue) and then priming with my Vallejo White Surface Primer, which I brush on. I took this picture to show how it cracks after drying and shrinking (white minis on right), although the final effect isn't too bad (Entreri on left).

This next picture shows the streakiness of brushing on this primer, which I found annoying although, as you'll see, it did not seem to affect the final paint job.

That inspired me to do some more reading about brush-on primers. I think I mentioned before that I was unsatisfied with my spraypaint experience when working on the Mice & Mystics figures, in part because of weather restrictions. After watching this video about using gesso as a primer, I thought I would give it a try.

Nice video, eh? Damned lies, I say. I bought some white Liquitex Gesso and tried it on Bruenor.

Looks like he fell in a pool of chalk dust. It took a bit of scrubbing to get him cleaned up again. (A note from the future: I have recently returned to the gesso in my priming experiments, and it seems to work well when put on in thin layers, avoiding the "glop it on" advice from the video's article. However, I also realized that the gesso I have is leaving noticeable texture on flat areas of my models. It doesn't look too bad on armor, but it is not what I wanted. I also realized that I have the "Basics" line of Liquitex gesso, and I wonder if the particulate is less finely ground for the cheaper line. Sadly, I have yet to find any information to confirm or deny this; otherwise, I would consider buying a new bottle of the higher-quality stuff—but that's not worth the gamble to me right now.)

Here is the finished Regis, who turned out pretty well.

Cattie-Brie was an interesting model. It looked like the miniature was based on this iconic painting of the character:

However, I didn't really want my son's first experience with a human female adventurer to be a leather bikini, and also, the arms had a "puffiness" that implied sleeves. I decided to give her a purple shirt, to go well with the green cloak and red hair. The blue gem on her sword is designed to bring out the blue in her eyes. I think the figure turned out well, especially considering that the miniature itself lacked a lot of definition.

I used a similar color scheme on Cattie-Brie as on her adopted father, Bruenor Battlehammer. This figure was my first to use my gold paint, and I followed some of the advice given in this article. I am quite pleased with the result, and I think it might be my favorite one from the whole collection.

Here's Wulfgar, on whom I got to practice drybrushing fur texture, layering skin tones, and 1980's heroic blonde hair.

"Hey," you ask, "Where's the star of the show?" The last hero I painted was Drizzt himself, painted to match the color scheme on the game's box. Turns out, though, that it was kind of a bad figure. The sword in his left hand and the cloth down his front are molded all the way back to his cape, and I found him generally uninteresting. He has the same problem as the other drow from the set: he's just plain dark. Oh well, they can't all be Bruenors.

With all the small figures done, I was left with only big monsters. Some of these had rather significant gaps: it looks like they were cast in multiple pieces and hastily assembled. I picked up some Milliput and decided to try my hand at both filling gaps and crafting some more interesting terrain. Following the instructions, I kneaded the two colors of epoxy together, and then formed some rocky bases for the trolls and dragon and filled some gaps in the balor. I found out much later that my Milliput is probably too old: both rolls are discolored and chalky. Still, it worked well enough for this purpose, but if I were to do any more serious modeling, I should probably discard it and get some more workable putty.

The two feral trolls were tedious to paint. Their skin lacked meaningful texture aside from very shallow muscle shapes and a few warts. I was also painting them with a number 2 brush, and by the time I got the skin done, I was tired of painting them. The end result is somewhat uninteresting, but certainly passable for tabletop quality. In fact, looking back over my photos, I have two sets marked "final." After having them sitting on my desk for a few days, I decided they needed more highlights, so I touched them up and re-varnished. This one has some sloppiness on his right pectoral area, but this was a serious case of diminishing returns. They still look intimidating no matter who they face. I am glad I added the lumps to the ground, just for a bit of visual interest.

Shimmergloom was much more fun, an exercise in shades of grey. The highlights were added to each scale individually, and I think they add a lot of depth.

The last figure of the set was Errtu the Balor, and calling this a "miniature" seems like an abuse of the term. He is huge. I ran into the same problem as with the trolls: it took an enormous amount of paint and time just to do uninteresting things like cover his wings with a solid color, and he's mostly monochromatic. I decided to do most of the shading with washes for expediency's sake. 

When I finally got to the weapons, I got my second wind. According to the game rules, he has a flaming whip and a lightning sword. A Hot Lead article got me thinking about how to do the fire whip in a realistic way. Indeed, before reading the article, I was about to make the exact mistake he points out: working the fire up to white at the tips instead of keeping the white at the hottest point. The sword is a "lightning sword," and I took inspiration from a BoLS article on painting science fiction power weapons. Errtu's sword was not a smooth surface, however, so I decided to try to make the edges look like lightning strikes and the rest like clouds. Never having seen a real lightning sword in action, I figured this would be reasonable, and I am happy with the result.

Writing this up, I realized that if you haven't played the game, you might not have a sense of the balor's scale, so I took this shot of Errtu about to consume the soul of Artemis Entreri.

One of the reasons I put off finishing this blog series (despite having finished the painting months ago) was that I had hoped to get some in-game photographs in good lighting conditions. However, we have only played the game once since I finished painting the figures, and that was mostly because my brother was visiting. It looked great, but my son and I had sort of "played out" this game already. 

I learned my lesson: paint the figures before playing the game! I picked up Dungeons & Dragons: Wrath of Ashardalon, which is from the same series as Legend of Drizzt, and I have enjoyed painting those figures. That, however, is a story for another blog post.

Thanks for reading! As always, feel free to leave comments below.