Tuesday, September 16, 2014

Bloom's Taxonomy vs. SOLO for serious game design

I think the first time I came across Bloom's Taxonomy for the Cognitive Domain was in my first two years as a faculty member, when I was just dipping my toes into the science of learning. I have written about Bloom's taxonomy before, including my preferred presentation of it:


I have found this presentation useful in teaching and practicing game design. Many games have a learning structure that follows the lower three levels. First, you are given some command that you must remember, such as "press right to move right" or "press B to jump." Then, you are given some context in which to understand the effect these commands have on the world: run to the right and the screen begins to scroll, then continue running and fall into a pit, forcing you to start again. Now, you have the context to apply what you understood, combining running and jumping to get over the pit.

I contend that most games don't really go beyond that. I hesitate to say that recognizing the pit as an obstacle constitutes analysis or that combining running and jumping is any meaningful synthesis. Most games do not teach the player how to evaluate the game against the rest of their mental models, either. The modern phenomenon of in-game "crafting" is similarly contained in the lower half of the taxonomy: remember that oil can be combined with flames to create an explosion, understand that this explosion hurts enemies, and you can apply this to defeat the enemy du jour. My contention is that games are designed experiences, and that players are remembering, understanding, and applying the constraints designed for them. Even Minecraft, with all its cultural importance, mostly has kids understanding and applying the rules of the world to build interesting things. Certainly, a few users recognize that the pieces they have been given can be used to synthesize something new, such as building circuits out of redstone, but my observation is that these are a minority—and when people follow tutorials to build copies, they are back to remembering, understanding, and applying someone else's design.

This reading of Bloom's taxonomy is useful for game design thought experiments and for discussion, but it wasn't until GLS 2013 that I found out that many teacher educators are teaching Bloom's taxonomy as dogma, not as a useful sounding board. A poster session presented an alternative taxonomy, one that used the same labels but put analysis and synthesis closer to the bottom, using these to describe the kinds of tinkering users do with digital technology. (Clearly, they are using different interpretations of these labels than I am, but that's intellectual freedom for you.) This alternative was based on anecdotes and observations, much like Bloom's original, but this leads to a problem: Bloom's taxonomy is presented as a predictive, scientific model, but as far as I can tell, it is not empirical. In fact, cognitive science tells us that the human brain does not actually follow the steps presented in either of these taxonomies.

Reading Hattie and Yates' Visible Learning and the Science of How We Learn, which was recommended on Grant Wiggins' informative and inspiring blog, I was reminded of the fact that Bloom's Taxonomy does not represent a modern understanding of learning. The book introduced a different model, one that was first defined in 1982 but that I had never encountered before. It is Biggs and Collis' SOLO Taxonomy, where "SOLO" stands for Structure of the Observed Learning Outcome. It is summarized in this image, which is hosted on Biggs' site:


Hattie and Yates conveniently summarize the taxonomy—one idea, many ideas, relate the ideas, extend the ideas—and point out that the first two deal with surface knowing while the latter two deal with deeper knowing. The figure points out that each level of the taxonomy is associated with key observations that can be aligned with assessments. For example, if a student can list key elements of a domain but cannot apply, justify, or criticize them, you could conclude they are at the multistructural ("many ideas") level of SOLO. It strikes me that this has the potential to be powerful in my teaching, and I look forward to incorporating it.

So, how can SOLO contribute to an understanding of game design? It seems we run into the same limitations that hinder game-based learning, primarily those of transfer. Notice that the extended abstract level of SOLO explicitly refers to generalization to a new domain. It's true that I can learn how to jump over pits or destroy goblins with flaming oil, but this knowledge is locked away in the affordances of the game. This perspective is taken from Linderoth's work, particularly "Why gamers don't learn more," which applies the ecological theory of development to explain why learning from games does not transfer.

If nothing else, the SOLO Taxonomy can provide both a target for serious games and guidance toward assessments. Given a content area and the desire to create a game to teach it, I can target a specific level within SOLO. For example, if I only want players to emerge with surface-level knowledge, I might target the multistructural level, but if I wanted players to be able to connect the content to something else they know, I would need to target extended abstract. Then, I can reference the key words from the corresponding level of the taxonomy, and use these to define an assessment of whether or not the game worked. In fact, it strikes me that one could also take key words from the adjacent levels, and use this to detect extremes. As my game design course is wrapping up the preliminaries and moving into game concepts, I will try to create an opportunity to try this.
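To make this concrete for myself, here is a minimal sketch of what "reference the key words from the corresponding level" might look like as data. The verbs are paraphrased from the usual SOLO diagrams rather than quoted from Biggs, and the class and field names are my own, so treat the whole thing as a placeholder rather than an authoritative encoding of the taxonomy.

    import java.util.List;
    import java.util.Map;

    public class SoloAssessment {
        // Approximate key verbs for each SOLO level; see Biggs' site for the real list.
        static final Map<String, List<String>> KEY_VERBS = Map.of(
                "unistructural",     List.of("identify", "name", "follow a simple procedure"),
                "multistructural",   List.of("describe", "list", "combine"),
                "relational",        List.of("compare", "apply", "analyse", "explain causes"),
                "extended abstract", List.of("generalize", "theorize", "hypothesize", "reflect"));

        public static void main(String[] args) {
            // Targeting the multistructural level? Build assessment prompts around its verbs,
            // and borrow verbs from the adjacent levels to detect over- or undershooting.
            KEY_VERBS.get("multistructural").forEach(verb ->
                    System.out.println("Ask players to " + verb + " the game's content."));
        }
    }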

Monday, September 1, 2014

Learning from Failure—as a game mechanism

One of my favorite insights from the scholarship of teaching and learning is that, essentially, all learning is learning from failure. Every time a person encounters a mismatch between a mental model and reality, it is an opportunity to learn. I have been working for several years to incorporate this idea into my teaching, notably in the expansion of the CS222 project from two to three increments, specifically because it gives teams more chances for safe failure. Indeed, one of my favorite descriptions of university is that it is a "safe fail" environment, where students can fail in order to learn, without the economic cost it would take outside of academia.

It was only recently that I thought about what this phenomenon means in terms of game design, and in role-playing games in particular. Character advancement is a common component of RPGs, giving the player the opportunity to increase the skills and capabilities of his character. Such advancement is generally limited through in-game resource management, such as accumulating threshold values of experience points or by earning sufficient skill points to expend on new abilities. Dungeons and Dragons set the precedent whereby experience points are earned by overcoming obstacles, most often through combat but (with a good DM) also by other means. This has become the de facto standard and can be seen in all manner of modern games, including computer RPGs and RPG-inspired boardgames: success earns points that are used to gain skill.


Recently, I came across The Zorcerer of Zo (ZoZ), a role-playing game by Chad Underkoffler published by Atomic Sock Monkey. In fact, I have had a copy for almost a year, having bought it in a Family-Friendly RPG Bundle of Holding, but it wasn't until a few weeks ago that I read it. The game is based on Baum's Oz series, and since my two older boys and I are currently on the tenth book, we are quite familiar with the setting.

ZoZ uses the "good parts" variant of Underkoffler's Prose Descriptive Qualities (PDQ) system, the full version of which is described in a free document from Atomic Sock Monkey. A critical aspect of the system is that it eschews conventional attributes, skills, and inventory for qualities such as "world-traveler," "small," or "afraid of cats." Each quality is ranked on a scale that starts at Poor [-2] and runs up through Average [0], Good [+2], and beyond, with each rank giving an adjustment to the 2d6 roll used for all conflicts. Again, for full detail, check that free PDF. It was a lot of fun to create ZoZ characters with my sons using this approach, since most of the time was spent describing the character's background and interests. One is a talking mouse from an island of merchants, who is in fact a small world-traveler who is afraid of cats. The other is a rockman warrior from a far-away island, who, since he does not need to breathe, simply walked through the ocean to Zo after hearing that it was a nice place to visit. There are no classes or races, and the setting lends itself to this kind of creative storytelling: it doesn't matter that there were no rock men in the Oz books; my son wanted to make one, and it was easy to create.

It is the character advancement system of ZoZ that got me thinking about the backwards nature of the conventional RPG systems. When a character encounters a conflict with a chance of failure, the player rolls 2d6 against a target number. If the roll meets or exceeds the target, the character succeeds. If not, the character fails and gains a Learning Point, and a player can improve his character by later spending these points. Consider how this matches what we know about how the brain works: when everything goes well, we actually learn very little, but when we make mistakes and reflect on them, our skills and knowledge improve.
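To see the contrast with the experience-points tradition, here is a minimal sketch of a single ZoZ-style conflict as I read the rules above. The class and method names are my own, and the quality modifier and target number are made-up example values, not anything from the published book.

    import java.util.Random;

    public class ConflictRoll {
        private static final Random DICE = new Random();

        // Roll 2d6 and add the rank of the relevant quality (Poor -2, Average 0, Good +2, ...).
        static int roll2d6(int qualityModifier) {
            return (DICE.nextInt(6) + 1) + (DICE.nextInt(6) + 1) + qualityModifier;
        }

        public static void main(String[] args) {
            int qualityModifier = +2;   // a "Good [+2]" quality that applies to the conflict
            int targetNumber = 9;       // difficulty set by the GM (hypothetical value)
            int learningPoints = 0;

            if (roll2d6(qualityModifier) >= targetNumber) {
                System.out.println("Success: the story moves the character's way.");
            } else {
                learningPoints++;       // failure is where advancement comes from
                System.out.println("Failure, but gain a Learning Point (total: " + learningPoints + ")");
            }
        }
    }

Note that the only thing the dice decide is whether the character succeeds; the advancement currency flows exclusively from the failure branch.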

It should not be missed that earning a Learning Point when missing a roll takes a lot of the sting away from failure. Although you did not get the result you wanted, you still get something: the opportunity to learn. This is a true yet countercultural statement. Conventional wisdom is that failure is bad, and this is a dangerous meme that is hard to overcome. Nowhere is it so endemic and unquestioned as in formal schooling environments, the very places whose mission it is (or should be) to instill a love of learning—and a love of learning necessitates tolerance for, if not embracing of, failure.

A player is still rewarded for success, of course, but it is done through the narrative. The use of narrative as a feedback mechanism is cleverly addressed in Koster's essay, "Narrative is not a game mechanic," which I recommend to anyone interested in weaving games and authored narratives together. In ZoZ, players can earn Hero Points for their characters by taking especially brave or noble actions. Hero Points are not used for character advancement, but rather to shift the story in the players' favor such as by getting hints, trading in favors, or getting a one-time boost to a roll—a sort of Oz Karma.

I have been impressed with Zorcerer of Zo and enjoyed playing it with my family. It has inspired me to consider how I might incorporate learning from failure as a game mechanism in my own designs, and more generally, how I might take more ideas from my scholarship and apply them to my game designs. This semester, I will be engaging in an experiment in public game design in concert with my honors colloquium on serious game design, and I expect to use this blog for that purpose. Watch this space for further announcements and designs, and as always, feel free to share your thoughts in the comments section.

Thursday, August 28, 2014

Impressions of Android Wear with the LG G Watch

I attended a GDG Muncie meeting over the Summer where I was lucky enough to win an LG G Android Wear watch. The longer version of the story is that the organizer was trying to determine the best way to generate random numbers for the lottery, and I suggested one of my very favorite sites on the Internet, random.org. He went to the site and generated my number: the system works!

I normally wear a watch—a Skagen titanium watch, to be precise. It is ultralight and quick to don or remove, both of which I consider to be great benefits. This is my second watch of this model, in fact, after having smashed the face of one on vacation several years ago. For those who don't know, I don't have a cell phone plan: the Nexus 4 I carry everywhere is used strictly as a pocket Wi-Fi device. Hence, my watch is not a fashion accessory; it serves an important function, and being so light, it does so innocuously.


My LG G arrived a few weeks after the GDG meeting, and I decided to wear it around the house for a few days. It is clunky and heavy, especially in contrast to my usual watch. I'm not hip to the technical terms for watchbands, but while this band is functional, it is fiddly enough that it takes a few moments to put on or take off. Again, I am not interested in the piece as a fashion statement per se, but I think the picture shows how my rather thin hands and wrists are dominated by this piece of black technology.


It was easy enough to set up and sync with my Nexus 4. I like that the watch face can be configured to show the time, the date, and some ambient information, such as the temperature. I was afraid that the notifications system would be distracting, but I find it no more distracting than my pocket device, really. When I want to know whether I have new email, for example, I simply check. When I am in a situation where I don't want to be interrupted, I am not generally checking my watch for the time anyway: I am either in a situation where I don't care about the time (writing) or there's a clock readily available (meetings, teaching).

Given that I'm the type to turn off notifications and avoid distractions anyway, I also haven't found it to be that useful: almost any time I have used it, I could have about as easily used my pocket device instead. Perhaps that's due to the immaturity of the platform, but I suspect it has more to do with me not being in the target demographic. All the same, it is kind of fun to check messages on my watch while walking down the hallway to the men's room. It doesn't feel any less isolated or rude than carrying a phone and checking messages in the same situation, but it does leave my hands a bit more free. Probably the feature I use most on the watch, besides time and date, is the view of what appointment is coming next.

I do have a major complaint with the email authoring feature. It does feel very futuristic to talk to your wristwatch and have it send a message to someone. However, it is set up so that you narrate your brief message, and then the watch shows you what it recognized and sends it right away. The two times I've tried this, the speech recognition was terrible, but I had no opportunity to stop it before it sent—I was left with that awful feeling of having just sent a nonsensical message. In my opinion, it really needs a 2–3 second confirmation period in which one can stop the process.
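For what it's worth, here is a minimal sketch of the kind of confirmation window I have in mind. This is plain Java rather than the actual Android Wear API; sendMessage, the three-second delay, and the cancel call are all stand-ins for whatever the platform would really provide.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;

    public class DelayedSend {
        private static final ScheduledExecutorService SCHEDULER =
                Executors.newSingleThreadScheduledExecutor();

        // Queue the recognized text; it only goes out if nobody cancels within the window.
        static ScheduledFuture<?> queueMessage(String text) {
            return SCHEDULER.schedule(() -> sendMessage(text), 3, TimeUnit.SECONDS);
        }

        static void sendMessage(String text) {
            System.out.println("Sent: " + text);    // stand-in for the real email send
        }

        public static void main(String[] args) throws InterruptedException {
            ScheduledFuture<?> pending = queueMessage("Yes");
            // If the recognized text looks wrong, a tap would call pending.cancel(false) here.
            Thread.sleep(4000);                     // let the demo window elapse
            SCHEDULER.shutdown();
        }
    }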

Another usability failure on the watch arises from the context in which it is used, and in particular, I wonder if the designers considered users with small children. When my toddler sits on my lap or when I pick him up, his arms reach exactly to my wrists, and he tends to fiddle with whatever is there—a smartwatch, for example. It's a bad feeling to be sitting happily with a child in your lap and then suddenly realize that he may have knocked messages out of your inbox. The device really needs a hardware switch that turns the touch sensitivity on or off, or it should come with a warning: "Not for parents of young children." I've started wearing it only on days when I am working from campus, and around the house, I stick with my Skagen and pocket device.

Preliminary conclusion: It's a fun toy with a few good uses and a few usability problems. I would not buy one, but I am happy to tinker with one. I do have an idea for an app that I may experiment with in the next few days, but that depends on how the semester gets rolling.

UPDATE (8/29): A crazy thing happened this morning, the day after I posted my review. I checked my messages while walking down the hallway to work, and a colleague sent me a question that could be answered either "yes" or "no." I figured, how badly could the speech recognition mangle that? I hit "Reply" on my watch, and the voice recognition screen came up with the "Speak now" prompt. Then I swiped or tapped or something... I am not really sure what I did, which itself is interesting... and I got a "Yes/No" dialog. I hit the "Yes" button and an email was sent with exactly that content.

That is neat. I need to wait for someone else to send me an email that I can answer in one word and try that again.

Wednesday, August 20, 2014

Screencasting on Linux Mint 17

Sometimes I post philosophical essays, and sometimes I describe important teaching experiences, and sometimes I reflect on my miniature painting hobby. Today, however, I go to that much more pragmatic use of the blog: writing down how I actually got screencasting to work on my work machine.

I have a Dell Precision T3600 running Linux Mint 17. I switched from KUbuntu to Mint on my work machines last year, when I had some trouble with hardware recognition. Since I've been using KDE for over a decade (from Mandrake to Mandriva to KUbuntu), I use the KDE distribution of Mint as well. It seems this puts me in the minority, as most of the Q&A I see online assumes one is running something newfangled.

I have USB Logitech headphones that I used for the screencast. After a bit of fighting with my mixer, I was able to get the microphone to work: the system wants to default to the hardware ports, even when there's nothing connected there. Audacity makes it very easy to test the sound configuration, and so I was sure that the microphone was working.

However, getting that microphone to work through screencast software was another problem entirely. I had no luck with the old standard recordmydesktop at all: video capture was fine, but the audio came up with nothing. A bit of searching revealed some newer applications I had never heard of. Vokoscreen had a reasonable user interface with several configuration options, and I was confident enough to record an 11-minute take. Unfortunately, on playback the audio was terribly choppy. I spent quite a bit of time fiddling with the framerate and pulseaudio settings trying to fix this, since otherwise Vokoscreen was convenient to use, but a tutorial with no audio is hardly a tutorial at all. Since the mic was working fine in Audacity, I inferred that it was a software problem, not a hardware problem.

I switched over to Kazam, and this ended up doing the trick. It allowed me to select an area of the screen where I could hop between Eclipse, Chromium, and the console, and the audio was captured with no trouble. The default file format uploaded flawlessly to YouTube, and I was able to share the video with my students. I had actually tried Kazam before Vokoscreen, but it wasn't working—turns out it was because I had the mixer settings for my mic much too low, and they needed to be at almost 100%. (Recordmydesktop still did not work with this fix, incidentally.)

The screencast itself is just an explanation of how to set up a PlayN project and hook it up to a Mercurial repository using the Computer Science Department's Redmine server. Hopefully next time I want to do an 11-minute screencast, it will take less than two hours of tinkering.

Friday, August 8, 2014

Painting Drizzt, Part 2: Heroes, Villains, and Big Monsters

This is the second part of my series of posts in which I reflect on painting the miniatures from Dungeons & Dragons: The Legend of Drizzt Board Game. In Part 1, I described my experiments with a few different techniques and ended with descriptions of some of the unique characters. The Drizzt set is the second collection of miniatures I have painted since my 20-year hiatus from the hobby, the first collection being Mice & Mystics—a painting experience documented in my January post.

I make sure to take pictures of all the miniatures I have finished, and I often take work-in-progress shots of interesting or challenging pieces as well. Using my Android phone, these get automatically backed up to Google+, where I write my notes about techniques and colors. I also frequently use the image editing tools in Google+ to do some white balancing, and I find the "Lift" option combined with increased brightness makes up for my budget camera and lighting setup. (My only criticism is that these editing tools do not work in Linux, and so I have to run in Windows just to take good painting notes.) I mention this in part because I have had the Drizzt figures complete since May and have begun my next painting project; I am glad I took the notes, as they remind me of my focus at the time.

I left off my last post with this picture, the first step of an experiment with priming miniatures in white or black:


Let's revisit these characters in the painting process, starting with Guenhwyvar. Prior to painting her, I came across a post (maybe this one, though I do not remember so well now) explaining that one rarely paints in pure black. Guenhwyvar then is my first experiment in non-black black. The base coat is almost black with a bit of purple, and the drybrushed highlights are greys tinted purple. The base also has hints of purple, a color that was chosen in part to match my plans for Drizzt himself. The only pure black on this model is in the pupils and the roof of the mouth. I am pleased with the result.


The other black-primed figure was Yochlol, who I decided to build up in layers following the technique described by Dr. Faust. This figure also marks a change in my work lightbulb, which I switched to a Cool White Ecobulb (1170 lumens, 4100K); it gives much better light, although it also generates quite a bit more heat. Yochlol is really just a series of layers from yellow-brown up to yellow-white, as can be seen in the little montage. I remember at the time feeling a bit silly covering the whole model in each subsequent layer, as opposed to using a lighter base coat and a wash to get in the cracks. However, it was a good experience to practice layering on this model, given that it's basically a blob of slime. If I could do it again, I would probably have it be a bit less brown, but I'm still pleased with the smoothness of the result.

For Athrogate, I tried to model his color scheme after a wild boar, inspired by his pet boar, Snort. I remember being happy with his base colors but thinking he was rather dull, and I was nervous to add highlights. I am glad I faced my fear and added the highlights, because I think it turned out great—and it's another novice fear eliminated!

Next up is Artemis Entreri, whom it seems we face every time we play a random-villain game. I was never really happy with his face, which came out kind of splotchy. This was also the first miniature I did after buying some glaze medium. I tried giving his vampiric dagger a red glaze, but it came out comically pink and was painted over. On the cloak, there was too much contrast between the highlights and shadows, and so I tried a glaze there, but I think I used too much medium. The result was an accidental "clothy" effect, which is not horrible but also not what I wanted. Long after working on this figure, I came across some tips on how to get cloth effects with sponge painting, and I think that's what I will try next time I approach a cape like this. All told, Entreri is passable, but not great.


Looking back at those four, it's hard to tell that the primer made any difference at all in the final model. Artemis Entreri does look darker than the rest, but he was also supposed to look darker, so I cannot say the primer was a major issue. My experience was that white lends itself to a painting sequence of mid-tone, wash, and highlight, while black works well for building up from dark tones (although I usually end up needing a pin wash to bring out the contrast at the end in this technique anyway). Based on this, I moved forward with white primer for the rest of the figures in this set.

I had been waffling with respect to the sequence of basing and priming, and I had also been experimenting with different primers. One thing I tried was basing first (with a mix of three sizes of ballast held down with thinned white glue) and then priming with my Vallejo White Surface Primer, which I brush on. I took this picture to show how it cracks after drying and shrinking (white minis on right), although the final effect isn't too bad (Entreri on left).


This next picture shows the streakiness of brushing on this primer, which I found annoying although, as you'll see, it did not seem to affect the final paint job.


That inspired me to do some more reading about brush-on primers. I think I mentioned before that I was unsatisfied with my spraypaint experience when working on the Mice & Mystics figures, in part because of weather restrictions. After watching this video about using gesso as a primer, I thought I would give it a try.



Nice video, eh? Damned lies, I say. I bought some white Liquitex Gesso and tried it on Bruenor.


Looks like he fell in a pool of chalk dust. It took a bit of scrubbing to get him cleaned up again. (A note from the future: I have recently returned to the gesso in my priming experiments, and it seems to work well when put on in thin layers, avoiding the "glop it on" advice from the video's article. However, I also realized that the gesso I have is leaving noticeable texture on flat areas of my models. It doesn't look too bad on armor, but it is not what I wanted. I also realized that I have the "Basics" line of Liquitex gesso, and I wonder if the particulate is less finely ground for the cheaper line. Sadly, I have yet to find any information to confirm or deny this; otherwise, I would consider buying a new bottle of the higher-quality stuff—but that's not worth the gamble to me right now.)

Here is the finished Regis, who turned out pretty well.


Cattie-Brie was an interesting model. It looked like the miniature was based on this iconic painting of the character:

However, I didn't really want my son's first experience with a human female adventurer to be a leather bikini, and also, the arms had a "puffiness" that implied sleeves. I decided to give her a purple shirt, to go well with the green cloak and red hair. The blue gem on her sword is designed to bring out the blue in her eyes. I think the figure turned out well, especially considering that the miniature itself lacked a lot of definition.


I used a similar color scheme on Cattie-Brie as on her adopted father, Bruenor Battlehammer. This figure was my first to use my gold paint, and I followed some of the advice given in this article. I am quite pleased with the result, and I think it might be my favorite one from the whole collection.


Here's Wulfgar, on whom I got to practice drybrushing fur texture, layering skin tones, and 1980's heroic blonde hair.

"Hey," you ask, "Where's the star of the show?" The last hero I painted was Drizzt himself, painted to match the color scheme on the game's box. Turns out, though, that it was kind of a bad figure. The sword in his left hand and the cloth down his front are molded all the way back to his cape, and I found him generally uninteresting. He has the same problem as the other drow from the set: he's just plain dark. Oh well, they can't all be Bruenors.

With all the small figures done, I was left with only big monsters. Some of these had rather significant gaps: looks like they were cast in multiple pieces and hastily assembled. I picked up some Milliput and decided to try my hand at both filling gaps and crafting some more interesting terrain. Following the instructions, I worked together the two colors of epoxy, and then formed some rocky bases for the trolls and dragon and filled some gaps in the balor. I found out much later that my Milliput is probably too old: both rolls are discolored and chalky. Still, it worked well enough for this purpose, but if I were to do any more serious modeling, I should probably discard it and get some more workable putty.


The two feral trolls were tedious to paint. Their skin lacked meaningful texture aside from very shallow muscle shapes and a few warts. I also was painting them using a number 2 brush, and by the time I got the skin done, I was tired of painting them. The end result is somewhat uninteresting, but certainly passable for tabletop quality. In fact, looking back over my photos, I have two sets marked "final." After having them sitting on my desk for a few days, I decided they needed more highlights, so I touched them up and re-varnished. This one has some sloppiness on his right pectoral area, but this was a serious case of diminishing returns. They still look intimidating no matter who they face. I am glad I added the lumps to the ground, just for a bit of visual interest.

Shimmergloom was much more fun, an exercise in shades of grey. The highlights were added to each scale individually, and I think they add a lot of depth.


The last figure of the set was Errtu the Balor, and calling this a "miniature" seems like an abuse of the term. He is huge. I ran into the same problem as with the trolls: it took an enormous amount of paint and time just to do uninteresting things like cover his wings with a solid color, and he's mostly monochromatic. I decided to do most of the shading with washes for expediency's sake. 

When I finally got to the weapons, I got my second wind. According to the game rules, he has a flaming whip and a lightning sword. A Hot Lead article got me thinking about how to do the fire whip in a realistic way. Indeed, before reading the article, I was about to make the exact mistake he points out: working the fire up to white instead of keeping the white at the hottest point. The sword is a "lightning sword," and I took inspiration from a BoLS article on painting science fiction power weapons. Errtu's sword was not a smooth surface, however, and I decided to try to make the edges look like the lightning strikes and the rest like clouds. Never having seen a real lightning sword in action, I figured this would be reasonable, and I am happy with the result.


Writing this up, I realized that if you haven't played the game, you might not have a sense of scale of the balor, so I took this shot of Errtu about to consume the soul of Artemis Entreri.


One of the reasons I put off finishing this blog series (despite having finished the painting months ago) was that I had hoped to get some in-game photographs in good lighting conditions. However, we have only played the game once since I finished painting the figures, and that was mostly because my brother was visiting. It looked great, but my son and I had sort of "played out" this game already. 


I learned my lesson: paint the figures before playing the game! I picked up Dungeons & Dragons: Wrath of Ashardalon, which is from the same series as Legend of Drizzt, and I have enjoyed painting those figures. That, however, is a story for another blog post.

Thanks for reading! As always, feel free to leave comments below.

Wednesday, August 6, 2014

Revising Courses, Summer 2014: Game Programming

I returned home from a fantastic two-week vacation and formalized my ideas for this Fall's Game Programming course. The course is CS315/515—a cross-listed course for Computer Science undergraduates and graduate students. Although I have taught game programming many times and in many forms, this is my first time teaching it as a "regular" course with full enrollment. I have previously either had low enough enrollments to treat the course as a seminar, or I have had special projects by which I was able to restrict enrollments or require an application process. This Fall, I will have a full room of about thirty students. This is too many to run a seminar or studio model, and so I needed to set up something more structured.

The undergraduates have CS222 as a prerequisite and so will come in knowing a bit about working in teams, code quality standards, human-computer interaction, version control, and requirements analysis. I decided to build on this by borrowing from the structure of CS222 itself: I will have the students doing individual work the first few weeks as we work through the basics, and then the majority of the semester will be spent on team projects built in multiple iterations.

For each of the first five weeks, I will use Mondays to share new ideas, Wednesdays to workshop with the students, and Fridays for the students to show off what they did. The sequence of topics is image manipulation, animation, collision detection and processing, user interfaces and interaction, and finally, a quick introduction to the entity system architecture for game development. With so many students, we won't have time for each person to give a formal presentation to the class, so instead it will run like a poster session. I decided to award some course credit for completing peer evaluations during this time in order to foster communication. I anticipate that this interaction will help the students find teams with similar interests for the major semester project.
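To give a taste of the level I am aiming for in those first five weeks, here is a minimal sketch of one of the topics: axis-aligned bounding box collision detection. It is generic Java rather than anything PlayN-specific, and the class and field names are my own.

    public class Box {
        float x, y, width, height;

        Box(float x, float y, float width, float height) {
            this.x = x; this.y = y; this.width = width; this.height = height;
        }

        // Two axis-aligned boxes overlap exactly when they overlap on both axes.
        boolean intersects(Box other) {
            return x < other.x + other.width && other.x < x + width
                && y < other.y + other.height && other.y < y + height;
        }

        public static void main(String[] args) {
            Box player = new Box(0, 0, 16, 16);
            Box pit = new Box(12, 0, 32, 16);
            System.out.println(player.intersects(pit));   // true: time to jump
        }
    }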

The project itself will be to recreate a game for HTML5 using PlayN, which has recently been my platform of choice. We won't be spending any significant time on game design, and so I am pushing students toward classics from the golden age of arcade games. Of course, few of these students will have lived through the arcade era or the 8-bit era. To get around problems of over-ambitious designs, I am requiring the students to submit both a game concept and a game proposal. These will be in Tim Ryan's format, which I have used many times in the past for game concept documents but never for game proposals. I expect I may get proposals drawn from more modern casual genres, such as match-three, but I need to ensure that they don't think they can create Chrono Trigger in ten weeks.

I wanted to include peer evaluations during the project portion, but I was afraid of creating a mountain of paperwork. Blackboard's peer assessment support is insufficient for what I want, which is to take my peer evaluation rubric and automate it. In my search for any useful Blackboard tool support, a conversation thread informed me of TEAMMATES, which looks like it will fill the bill. My account was activated this morning and I tinkered with it for a few minutes. It looks like it will easily let me input my groups, enter my rubric, and then assign it for both self-evaluation and group-peer-evaluation. Students do not need to register on the site: they are sent emails with individualized URLs for their forms, based on the schedule specified for the assessment. I will try TEAMMATES in CS315/515 as well as CS222, and I'll be sure to write something about the experience once I get it rolling. As in CS222, I am planning to give each team member a grade for each iteration, and this grade will be the minimum of the increment evaluation and their peer evaluation—an approach that I hope will ensure all team members are contributing and that the work is of high quality.

This finishes my series of summer course revisions (Part 1, Part 2), and I feel ready for the semester. Having my planning done gives me some time to catch up on my painting and do a few more fun summer activities with the family—both of which are good things. Interested readers are welcome to take a look at the CS315/515 course description, and as always, I welcome your comments.

Wednesday, June 4, 2014

Revising courses, Summer 2014: Advanced Programming

I finished revisions on CS222, my advanced programming course. As I wrote about last summer, this was one of the courses that I radically altered last academic year. The 2013–2014 version used achievement-oriented assessment, a two-week warm-up project, a nine-week programming project, and reflective writing. I enjoyed teaching it in this format, and while I believe it was fair, I also think it may have been too scattershot. The best students did amazing work, but many students struggled to balance so many responsibilities coming from a single class. With a bit of sadness, I have decided to drop the reflection essays. They were always a bit ancillary: although they tied to the themes of the class, they did not share the same kind of project-orientation as the rest of the work.

Dropping the requirement for separate reflections has allowed me to expand the writing requirements within the achievements themselves. I will be keeping the policy that allows students to resubmit achievements, which I had not allowed on the reflections due to the overwhelming amount of my time this would have required. Focusing my grading efforts on the achievements, and seeing them as the primary non-programming writing artifacts, I will be able to give more direct feedback and, hopefully, see students improve their writing more visibly.

Similar to my game design course revision, I have pulled the achievement system out of the first few weeks of the class. A gentler introduction to the themes of the class will be had from a more conventional approach, with assigned reading, in-class activities, and small practice assignments. The achievements system will kick in during the two-week project, which precedes the major, nine-week project.

Last year, I probably came the closest I ever will to my hypothetical "your grade is the lowest grade you earn all semester" grading scheme. It was a good teaching experience, and it formalized—in the course structure—the idea that a good computer scientist needs to be balanced. However, it did lead to some stress and heartache at the end of one of the semesters. I cannot say much about it publicly, but the anonymous version is that a team did a crackerjack job on a project but had not met the requirements for a passing grade in other areas. This experience forced me to reflect on whether or not this grading scheme really reflected what I believed and valued. Of course, every semester is an experiment, and so I will try something different this year.

In the Fall, students will be able to earn up to a C grade by simply doing the final project reasonably well. If a student gets up to a C-level grade in the final project, that's all he or she needs to get that grade in the course. This sounds "average" to me: if you can work together with a group of near-strangers, understand the rules of Clean Code, and make a passable project, you can get through the course. (Majors need a C− or better to move on in our curriculum—a policy I disagree with, but that's for another day.) This will permit achievements to actually represent something beyond the usual, something "good". So, a student must complete ten achievements in order to get a B-level grade. If a student wishes to show that he or she is "excellent," an A-level grade can be earned by additionally completing a quest. I am not sure that's the right word, but it's better than "meta-achievement," which I used last year. Some quests are completed by finishing coherent sets of achievements, but I have used this to also open up new opportunities. One quest is completed by having the final project be a contribution to an open source project, and another is completed by working with a real client. I am hopeful that these will be enticing to the students and encourage them, from the beginning, to take the initiative and do something ambitious. Even if they take a risk and fail, they won't "fail" the course, since they can always fall back on a lower—but still passing—grade.
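Here is a minimal sketch of how I think of that ladder. The method name and thresholds are my shorthand, it assumes the final project is at least C quality, and it deliberately ignores what happens below that, since that case is not the point here.

    public class GradeLadder {
        // Assuming the final project is done reasonably well (C quality or better),
        // achievements and quests move the grade up from there.
        static String gradeBand(int achievementsCompleted, boolean questCompleted) {
            if (achievementsCompleted >= 10 && questCompleted) return "A-level";
            if (achievementsCompleted >= 10) return "B-level";
            return "C-level";
        }

        public static void main(String[] args) {
            System.out.println(gradeBand(10, true));    // A-level
            System.out.println(gradeBand(10, false));   // B-level
            System.out.println(gradeBand(4, false));    // C-level: the project alone carries it
        }
    }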

Last year, I did not include formal peer evaluations into the grades as I had in the past. It was still clear to me who was contributing to projects and who was not. However, I will have about twice as many students in the Fall as last Spring, and an order of magnitude more projects to monitor, so I am bringing back peer evaluations. The peer evaluation form comes from a 2011 blog post, and I have described precisely how it fits into the grading scheme on the final project specification. In the past, I have sometimes forgotten to tell students ahead of time how each iteration is graded, but this time I have laid out all the details. I have occasionally not counted the first iteration at all, since they are sometimes pretty rough. This year, I will be using a quadratic series: first iteration has a weight of one, second iteration has a weight of four, and third iteration has a weight of nine. This should ensure that students still do an honest job on the first iteration while also reflecting that they should be getting better at working together as time goes on.
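Here is a minimal sketch of how that quadratic weighting plays out; the iteration grades below are made-up placeholders on a four-point scale.

    public class ProjectGrade {
        // Iteration i (zero-indexed) gets weight (i+1) squared: 1, 4, 9, ...
        static double weightedProjectGrade(double[] iterationGrades) {
            double total = 0, weightSum = 0;
            for (int i = 0; i < iterationGrades.length; i++) {
                double weight = (i + 1) * (i + 1);
                total += weight * iterationGrades[i];
                weightSum += weight;
            }
            return total / weightSum;
        }

        public static void main(String[] args) {
            // A rough first iteration barely dents a strong finish.
            double[] grades = {2.0, 3.0, 4.0};
            System.out.println(weightedProjectGrade(grades));   // (2 + 12 + 36) / 14 ≈ 3.57
        }
    }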

This Fall marks an important change in the population of my students. In Fall 2013, the foundations curriculum committee enacted a change to our introductory programming sequence. It had previously been taught in Java both semesters, using a conventional approach (read that with a bit of a sneer). Last academic year, our students started in Python using the Media Computing approach, following that with a more conventional course introducing data structures using Java. The Spring CS2 course was a bit rocky, as is to be expected with any such shift in a curriculum. I would expect the students coming into CS222 in the Fall to be particularly weak with Java, although this will become less of a problem as the wrinkles in CS2 get ironed out. Time will tell what kind of remediation I will have to do. The truth is that most of these students have been programming for such a short time that many harbor deep misunderstandings about essential programming ideas when they come into CS222. The good news is that by having them work in teams on a real project, they can learn from each other and be motivated to do better. Indeed, this is one of the main outcomes of CS222, so that these students can then move on from our foundations courses and be productive in the rest of the curriculum.

Here are some links for reference: