Wednesday, May 6, 2015

The Story of Collaboration Station

I had an amazing team this past Spring—Space Monkey Studio—and we built an original educational game called Collaboration Station. The rest of the post is my reflection on the semester's activity.

Title Screen
This past academic year, I was fortunate to receive internal immersive learning funding to undertake a project in educational game design and development. My friends at The Children's Museum of Indianapolis served as community partners, and we agreed that it would be interesting to theme games around the International Space Station. In particular, we were interested in how the science of the ISS can be expressed in games that are accessible to kids.

I followed the same format as the last two years, where in the Fall, I taught an Honors Colloquium on educational game design, and in the Spring, I led a game production studio course. The colloquium introduced the students to fundamentals of game design and learning science, and each student created prototypes of original educational games. Teaching the colloquium gave me the opportunity to think about these game designs as well, and so I was able to use some of my own creative processes as a case study: that is, I was engaged in the same authentic learning and design activities as the students, and so I could share some of the tricks and techniques that help me. I had originally planned to share my iterative designs online through this blog, but other work took priority over that; I would still like to experiment in public design at some point in the future, but it wasn't in the cards this time around.

Memory Game: The first minigame
It's worth mentioning that this colloquium was one of the best I have led. It helps that I get a little better at it each time, of course. This group quickly caught on to the rhythm of activities and presentations. Many of the final projects ended up looking a lot like the example games I had students play, but this always happens. For most of the students, it is their first and their last encounter with game design, and so one should not be surprised if the results are relatively simple. The biggest challenge for me in that class is balancing the desire to introduce a broad range of game genres vs spending time on deeper discussions of analysis and design.

During the colloquium, I assembled a few of my ideas with some of the students' ideas to develop the core concept that would become Collaboration Station. In fact, that name was used by one of my students for a completely different project, but it turned out to be a good fit for what we were doing. Here's the introduction section from the game concept document:
ISS Mission is a local-network cooperative asymmetric digital game in which each player takes the role of an astronaut on the International Space Station. Players are faced with authentic ISS missions and have to delegate responsibilities among themselves. If the players are successful, they unlock more challenging missions. If the players fail, public support for science drops and the communists win.
The document cites Space Cadets and Puzzle Pirates as inspirations; not mentioned in the list, but also a critical inspiration, was SpaceTeam.

Sliding Tile Puzzle Game: Gets harder the more you play
I recruited a team of ten undergraduates to participate in the six credit-hour Spring Game Studio course. The team comprised six Computer Science majors (one a dual major with History), one English major, two Art majors (Animation), and one Music major (Music Media Production). The tech side of the team was relatively young in the major, but I knew that the networking side of the application was going to require some heavy lifting, so I decided to do some tech experiments during Winter Break with the goal of having the basic networking back-end completed. I started with WiFiP2P, which the documentation certainly makes sound appealing, and we began production with it as our target network layer. However, after seemingly endless problems, we switched to Bluetooth, which was much more consistent and reliable. In fact, one of the biggest learning moments for me was reverse-engineering how SpaceTeam determines which local devices are running its service: the short answer is very clever use of Bluetooth device renaming.
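To give a flavor of that trick, here is a minimal sketch of the approach on Android. The "CS::" marker and the class itself are invented for this post; SpaceTeam's actual scheme surely differs in its details.

    // Sketch: advertise a service by renaming the Bluetooth adapter, and
    // discover peers by scanning for names that carry the same marker.
    import android.bluetooth.BluetoothAdapter;
    import android.bluetooth.BluetoothDevice;
    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;

    public class NameBeacon {
        private static final String MARKER = "CS::"; // hypothetical service tag

        /** Embed the marker in our device name so that peers can spot us. */
        public static void advertise(BluetoothAdapter adapter, String room) {
            adapter.setName(MARKER + room + "::" + adapter.getName());
            adapter.startDiscovery();
        }

        /** Receiver for BluetoothDevice.ACTION_FOUND that watches for the marker. */
        public static BroadcastReceiver peerFinder() {
            return new BroadcastReceiver() {
                @Override
                public void onReceive(Context context, Intent intent) {
                    BluetoothDevice device =
                            intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);
                    String name = device.getName();
                    if (name != null && name.startsWith(MARKER)) {
                        // Found a peer advertising the service; connect here.
                    }
                }
            };
        }
    }

The beauty of the trick is that the device name comes along for free with ordinary Bluetooth discovery, so no connection has to be established just to find out who is out there.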

I'm getting ahead of myself. "Networking is hard" was a theme during the semester, but I want to go back to early January and the first time the team got together. Of the ten, four had taken my colloquium in the Fall, which I believe is the most crossover of any group. I really liked these four: they were reliable, intelligent, and funny, and so I was very glad that they applied.

Sequence-Matching Game: The one with the best sound effects
The past two years, I have tried to lead the studio students through a rapid design process. This led to several problems, which I probably mentioned in my lengthy post-mortems. The two biggest problems are these: the students' designs are necessarily amateurish, and students become disenfranchised when their designs are not chosen to move forward. This year, I decided to tackle this problem by taking on the lead designer role myself. I came in with my "ISS Mission" idea, which had already been vetted and discussed in the colloquium as well as shared with the partners at The Children's Museum. Although this took some creative control from the students, it also meant that we could accelerate our production schedule, which, after the previous two years' experience, was worth the sacrifice.

Our schedule for the fifteen-week semester, then, was to have one week of orientation followed by seven two-week sprints. The orientation week served to help the team understand some of the fundamentals of what we were trying to accomplish, both in terms of educational game design and development methodology. I had been using Buffalo as one of my ice-breakers, and it's a great example because it's fun—it feels like a "normal" party game—but it's also a product of a research lab and has been shown to reduce prejudice in players. However, half the team had already seen Buffalo in the Fall and so they knew the twist, and in looking for something else, a friend recommended Two Rooms and a Boom. I used a print-and-play version of the game, and it was perfect for the job: it got everyone talking to each other, learning their names, working together as well as you can in a social deduction game, and laughing. I will definitely use this again.

Tile Rotation Game: The only one I am good at
For production, I fell back on my old standards: the principles of Crystal Clear combined with elements of Scrum, adapted for use in an academic studio context. I have a paper in press about the academic studio and how I use it to balance the needs of production with the traditional values of academia; that paper will appear soon in Transactions on Computing Education. One of the primary elements I use to frame our work is essential questions, taken from McTighe and Wiggins' Understanding by Design framework. I have been using EQs in all of my courses over the last few years, and I find them to be a powerful centering and focusing technique. For this Spring studio's work, I chose these four questions:
  • How do multidisciplinary teams coordinate activity to create original software products?
  • How does our immersive learning context—creating an original educational video game in collaboration with The Children's Museum—affect our software development practices?
  • Why and how do visual elements, audio elements, source code, and writing interact in the process of game development?
  • Why and how does the cooperative game principle impact us?
At the end of each iteration, the students wrote reflective essays in which they tied their experiences back to these questions. I asked them to write about specific instances rather than generalities, although some of them needed multiple reminders of this. As I am sure I've mentioned before, I have noticed that my students are much more comfortable writing unsupported generalizations than taking ownership of their writing and addressing the challenges of specifics. However, most of them, when pressed, come to realize that the learning comes from the specifics.

Regarding that last EQ, I have been thinking a lot about the cooperative game principle lately, and at some point I need to make time to write out my thoughts about the strange loops between it, game design, and education. It was certainly the one least selected by students for writing about, but it helped me to focus my own perspective. I had thought about writing my own EQ-centered essays at the end of each sprint as well, but between sprints I also write sprint retrospective reports and curate the product backlog, and these two things took up all of the time I could allocate to this work.

Hold each other accountable for writing unit tests
As I was working on my TOCE article last summer, I was forced to return to a challenging conversation I had with an academic a few years ago. I was talking about my studio-oriented teaching, and I compared it to Lave and Wenger's communities of practice. He asked, "Centered on what?" The question was jarring to me, and it encouraged me to re-evaluate what the studio means from a CoP perspective. I realized that the best game development studio experiences I have had at Ball State have been those that have centered on my practice. That is—trying not to sound megalomaniacal—students did best when they could see how I work, when I could explain myself and guide them, and when I was actively contributing to the project. Based on this finding, I made sure that I was not just a mentor to the project this year, but also a bona fide team member. I think the students appreciated working with me in this capacity, although it's impossible for them to compare this experience to other teams' lived experience. My decision to be an active team member did lead to some internal conflicts between when I should be leading by example and when I should be letting them fail gracefully. At the meso level, decisions such as "When should I pass the keyboard?" in pair programming could become challenging, as I knew I could dump out working code faster than the students, but at some point, I needed to intentionally slow down production by switching to the navigator's role.

A reminder to change partners when pair programming

I read some advice a long time ago that said that you should not start by naming the team, because when you first start, you are not yet a team. This advice has served me well. After our first sprint, the team spent some time figuring out who we really were. The team centered on Space Monkey Studio, and one of our artists put together a snazzy team logo.

Team Logo (non-Kosmo version)
Shortly thereafter, one of the team members brought in a stuffed monkey in a space suit—certainly the coolest contribution to a team since someone brought in Computer Engineer Barbie for the Morgan's Raid project. The monkey was named "Kosmo," and like Computer Engineer Barbie, he was perched on a side of the whiteboard where anyone could contribute captions.

Kosmo gets a name
The rhythm of the semester was quite good. The first sprint went well, and it was designed to be a relatively easy win. The second sprint was where, in my feedback, I pushed back on the students a bit for being lax in their commitments. We had a good rapport, and they took this as the formative feedback it was intended to be, and the rest of the sprints went very well. Even Sprint Six, which, as you can see from the burndown chart, left some work incomplete, was a managed failure: the team recognized what went wrong, and we very quickly righted ourselves.
Seven Burndown Charts
Collaboration Station was developed using PlayN, an excellent open source library for game development that allows for cross-compilation to Android, iOS, and Java desktop. From the beginning, we agreed to focus on Android, since we only had one semester and we wanted to keep our scope limited. However, the game relies on Android devices with Bluetooth, which introduces two problems for automated testing: you cannot simulate Bluetooth within the Android emulator, and deploying to physical devices is slow. I developed a clever workaround: the network layer is abstracted so that multiple instances of the game can run on the desktop and communicate over a local socket, whereas on Android, the game uses Bluetooth. The reason it is a socket rather than another IPC solution is that the original networking layer used WiFiP2P, which is based on traditional socket communication, so this was a relatively easy abstraction to build in; a sketch of the idea follows below. I would like to add iOS support in the future, and that's something for which I am currently seeking funding.
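As a sketch of the shape of this abstraction (the names here are illustrative, not our actual interfaces), the game code talks to a small connection interface, and each platform supplies its own implementation:

    // Illustrative sketch of a platform-abstracted network layer; the names
    // are invented for this post, not the actual Collaboration Station code.
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    interface MessageListener {
        void onMessage(String message);
    }

    interface GameConnection {
        void send(String message);
        void setListener(MessageListener listener);
        void close();
    }

    /** Desktop implementation: peer instances talk over a local TCP socket. */
    public class SocketConnection implements GameConnection {
        private final Socket socket;
        private final PrintWriter out;
        private volatile MessageListener listener;

        public SocketConnection(String host, int port) throws IOException {
            socket = new Socket(host, port);
            out = new PrintWriter(socket.getOutputStream(), true);
            final BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            // Pump incoming lines to the listener on a background thread.
            new Thread(new Runnable() {
                public void run() {
                    try {
                        for (String line; (line = in.readLine()) != null; ) {
                            MessageListener l = listener;
                            if (l != null) l.onMessage(line);
                        }
                    } catch (IOException ignored) { /* connection closed */ }
                }
            }).start();
        }

        @Override public void send(String message) { out.println(message); }
        @Override public void setListener(MessageListener l) { listener = l; }
        @Override public void close() {
            try { socket.close(); } catch (IOException ignored) { }
        }
    }

An Android-side class implements the same interface over a Bluetooth socket, so everything above the connection layer is oblivious to which transport it is using.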

I am a firm believer in the power of food to bring people together. Talking about this with the team, one member pointed out that eating together specifically encourages safety: if we are sharing food together, then we must be with people who can be trusted. This team ate like no other, and while many people did their fair share to bring treats, one in particular went above and beyond. The picture below shows our snack table near the start of the semester; by the end of the semester, it was a bounty of sweet and sustaining food—and some of us even knew about the secret stash of emergency Girl Scout Cookies in the cabinet. I brought down my Keurig from the office, which a few team members enjoyed. Others were tea drinkers, and so they brought in the Bunn on the left for heating up water. Our main-treat-contributor kept the team in constant supply of K-cups, teabags, and filtered water, in addition to the rest of the comestibles.
The snack table, with inspirational poster above
Early in the semester, the team identified several learning objectives for the game, drawing these from state standards for 4th through 8th grades. Our intention was to use these objectives to derive the minigames and narrative content. However, early playtesting revealed something unexpected: the players by and large did not know anything about the space program at all. They did not know what astronauts did besides "float around," nor did they know that the International Space Station was a real thing. Inspired by this finding, the team revised our vision away from the content-oriented state standards and toward broader goals: that the ISS is real, and that astronauts do real work there—still emphasizing the cooperative nature of space expeditions. One of the concrete actions this prompted was to replace the hand-drawn images of our introduction screens with actual photographs of astronauts and the ISS, to drive home the idea that although the game is fictional, its setting is not.
A scene from our introduction, only slightly altered from the original photograph
The game evolved and grew during production. We started with a single minigame, the memory game. In keeping with the agile spirit of regular delivery, we built a very loose narrative around the original version, where the player had to clean up the experiment area of the ISS. The images were drawn from experiments on the ISS. Over the course of the following twelve weeks, the story, all of the art, and almost all of the code within this minigame would be replaced, as the team learned to work together and to interpret player feedback. We added a sliding tile puzzle next. About two-thirds of the way into the semester, we hit stride, and we added the two final minigames: a tile rotation puzzle and a sequence-matching game. This gave us two minigames themed around the science of the ISS and two around the maintenance of the ISS; in the main narrative, then, the players must do both in order to succeed across the three scenarios that comprise the expedition. Other features got smoothed out during production as well, aided by the transition from WiFiP2P to Bluetooth mentioned earlier. Early builds allowed the player to specify their own name, but this was dropped in favor of the current approach, which maps players to countries.
Original Welcome Screen (Game Version 0.1)
I missed having someone dedicated to social media as I did last year; we have not built up a following around Collaboration Station, so even though we launched a few days ago, we still have very few installations. However, this team did have someone with a natural talent for local outreach and event planning. I find such work to be stressful, but students to whom I have delegated such work have often dropped the ball. This particular team member really shined here: we had hands-down the best presentations at the Ball State Student Symposium and the Butler URC, and our launch party at the Charles W. Brown Planetarium was a thing of beauty. I find myself wondering how I will recruit someone to do this kind of work in the future. Past attempts to recruit someone with a marketing, PR, and outreach focus have never worked out well, but twice now I have been lucky to have such people end up on the teams regardless of my efforts. I suppose luck is one of my skills.
In conclusion, I am proud of Space Monkey Studio and the work we did together, and I am also happy with the structural changes I made this semester. Taking the role of lead designer allowed us to start production earlier and removed the possibility of students becoming disenfranchised when their own designs were not chosen over their peers'. Using short, two-week sprints helped us keep a regular rhythm during the semester. Having a team member dedicated to social events and outreach took a lot of that pressure off of me. My being an active team member meant that we could take on a game with significant technical challenges despite the relative inexperience of the undergraduates.

Thanks for reading, or at least for flipping through the pictures! Check out the game on the Play Store, check out the project Web site, and feel free to leave a note in the comments.

Tuesday, May 5, 2015

What We Learned in CS222, Spring 2015 Edition

In the Fall, I had some nagging doubts about the balance between individual learning and collective learning of fundamental Clean Code concepts in CS222, and so for the Spring, I made a significant change to the first few weeks of the course. I kept the overall rhythm: three weeks of warm-up, a two-week project, and a nine-week project delivered in three iterations. In those first three weeks, I added weekly assignments designed to help students keep pace with the reading and to assess their learning of the same.

I recognize that the ideas presented in the book can be rather challenging, and many students deceive themselves into thinking they understand it when, in fact, they cannot apply the concepts in programming. I do not blame them for this, but rather, I see it as one of the big challenges of CS222 to level the playing field; for example, a student who comes in with relatively little understanding of what a subroutine does still needs to emerge with a stronger understanding of functional decomposition, that a method should do just one thing without side-effects. Hence, I decided to use a mastery learning approach where the three assignments are graded on an S/U basis—either you showed that you understood the assignment or not—and you can turn in one assignment for evaluation or re-evaluation each week of the semester.
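To make that concrete, here is an invented example (not actual student code) of the kind of decomposition I want students to be able to produce:

    // Before: one method parses, validates, and reports -- three jobs at once.
    static void handleScore(String input) {
        int score = Integer.parseInt(input.trim());
        if (score < 0 || score > 100) {
            System.out.println("Invalid score");
        } else {
            System.out.println("Grade: " + (score >= 60 ? "pass" : "fail"));
        }
    }

    // After: each method does one thing, and only the caller touches the console.
    static int parseScore(String input) {
        return Integer.parseInt(input.trim());
    }

    static boolean isValidScore(int score) {
        return score >= 0 && score <= 100;
    }

    static String gradeFor(int score) {
        return score >= 60 ? "pass" : "fail";
    }

None of the three small methods prints anything or changes any state, which makes each of them trivially testable.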

I knew that this would increase my grading load, but I was also excited at the prospect of incorporating mastery learning into the course. This model should, in theory, allow me to address my concerns that some students were "slipping through" the course by having their partners do most of the heavy lifting on the nine-week project. However, in order for the assessments to be valid, I needed differentiation: if everyone was evaluating the same code, then it would be too easy for the unscrupulous or the panicked to copy a solution from a classmate. Hence, I reused an approach from previous semesters, asking each student to evaluate code that they themselves had written in an earlier course. I also offered the option of evaluating open source project code instead, although this ran into the standard problems (which I guess I had forgotten about): these students don't know what open source means, nor how to differentiate a real project from throwaway examples on the Web. The only result was that some students wasted weeks writing about low-quality, non-open-source tutorial code they found through random searches.

The last piece of the puzzle was how to ensure that all students actually earned their satisfactory grade on the three assignments by the end of the semester. I decided to make this overt and clear: if you don't complete all three assignments, you cannot earn higher than a D in the course. These assignments, after all, represent that you personally understand the required course content, in particular the use of good names, clear methods that do one thing, readable source code, and the single responsibility principle.

These good intentions led to some stressful times for both me and the students in the Spring. The most challenging impediment was the initial low quality of the code that students brought with them from their previous semester. Most students had relatively little trouble fixing names once they understood the standard conventions (methods should be verb phrases, for example), but beyond that, there were much deeper problems. Their past projects were rife with side-effects, long methods, mixing of model and view, forced garbage comments (such as "// end of for loop"), and no evidence of object-oriented decomposition: everything was static and public. Add to this the fact that the students by and large did not care about the code—neither now nor in the past—and you get rather a difficult slog through the assignments. The third assignment was essentially to refactor the code to follow SRP, but in most cases this required touching literally every line of code. Kudos to those students who got through it, and I know that there were some students who got through the assignments with a much richer understanding of course content as well as much improved metacognitive and code-reading skills. However, this came at a cost of unreasonable time spent by both them and me: there must be a way to achieve these learning and assessment goals with less pain.
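For the flavor of that refactoring, here is an invented miniature (not actual student code), where a class that both stores quiz state and formats it for display gets split along its two responsibilities:

    // Before: one class stores quiz state AND formats it for display.
    class Quiz {
        private int correct, total;
        void record(boolean wasCorrect) { total++; if (wasCorrect) correct++; }
        String render() { return correct + "/" + total + " correct"; } // view logic
    }

    // After: the model knows nothing about presentation.
    class QuizModel {
        private int correct, total;
        void record(boolean wasCorrect) { total++; if (wasCorrect) correct++; }
        int getCorrect() { return correct; }
        int getTotal() { return total; }
    }

    class QuizView {
        String render(QuizModel quiz) {
            return quiz.getCorrect() + "/" + quiz.getTotal() + " correct";
        }
    }

In the students' real projects, of course, the responsibilities were tangled across hundreds of lines rather than ten, which is what made the assignment such a slog.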

At the end of the semester, I dropped the D-cap grading policy and instead instituted a quantum deduction for each assignment not yet completed, and I combined this with an offer for students who were "stuck" on the assignments to send me alternative evidence that they learned the material. Only one student took this approach, but what he did was brilliant: he restated the assignments, pointed out what intellectual work they required, and explained how he had shown his ability to do this within the final project. Right there, we have an example of what I mean: he showed me that he understood specific elements by clearly and proficiently writing about them.

One idea for fixing this part of the semester is to reuse some past game studio project code and give this to CS222 students for evaluation. Unlike their own past code, these projects were ostensibly written to follow Clean Code from the beginning, and so there should be fewer fundamental problems. I am not sure of a fair way to break up a project across thirty sophomores, and doing it randomly could end up with a struggling student inheriting some very bad code. However, this would have the advantage that they would be dealing with real project code that was written by students only one or two years their senior.

During the rest of the semester, I reused my achievement-based grading system as before, where earning achievements unlocks higher grades. I took away the "quest" system for simplicity, but I found myself missing the concept of leveled achievements. I am not yet sure if I will bring that back in the Fall. One piece I have spent some time thinking about is how the achievement system could use more peer evaluation. Reading about digital badges, I know that a common successful practice is to allow anyone to sign a badge, but that the perceived quality of the badge is related to the signer's authority and expertise. I would like to do something similar, allowing the students to grant achievements to each other, which would provide incentives for them to look at and think about each others' work. This could also serve as a high-pass filter, where students only show me their work if it has been vetted by a peer. I spend an unfortunate amount of time writing feedback of the form, "This method is not a verb phrase," and if I can turn that pain into a peer learning opportunity, all the better for it.

In the Fall, I am scheduled to teach two sections of CS222, which I have not done before. I need to think a bit this summer about what that means. Often I come to class with rough lesson plans but end up reacting more to what students are asking about or struggling with. With only one section, it doesn't matter if I shuffle the order of topics based on this interaction, but with two sections, this sounds like a recipe for a headache. I wonder if I should take a more studio-oriented approach, spending more time in class helping students with activities rather than modeling and moderating discussions. I have tinkered with producing videos to model behavior (a story for another blog post), but I find this to be very stale: so much of my in-class modeling is really an interactive performance, feeding off of the verbal and physical feedback of the crowd. If I drop a new keyboard shortcut in a live session, for example, I can see the perplexity and astonishment of the crowd and then talk about not just what I did, but how I did it. If I record a video, it's much more automatic and looks like magic, without students' being able to raise their specific questions at just the right time. On the other hand, the video does make it easier for students to rewind and hear me explain some theoretical ideas more than once instead of relying on their notes or, more likely, ephemeral memory.

I don't regret the changes I made this semester, but they certainly had some unintended consequences. I felt like I had some difficulty "reading" the class, perhaps because it was 8AM and so had more spotty attendance than usual. However, I felt that once we hit stride, I really got to know the teams, and I am proud of their final projects. As usual, there were ambitious projects that had to be scaled back, but really, all of them ended up successful.

At the end of the semester, the students rated these items as being the most important to them:
  • Test-Driven Development [11 votes]
  • Don't Repeat Yourself [11 votes]
  • Working on a team [10 votes]
  • Single-Responsibility Principle [10 votes]
Looking at the rest of the 112 items that they listed, there are specific practices identified which make up these top four, such as naming methods with verbs, respecting teammates' perspectives, and model-view separation. As the students came up to place their votes with stickers, a student asked if he could put a sticker on me, since he thought I was the most important part of the class. This was a nice compliment, and I don't think it had ever come up before in teaching this course. As another student heard this, he said that he wanted to put a sticker on my mug for the same reason, but he figured the mug was too sacred an artifact.

I will be revisiting CS222 this summer as I prepare for Fall, but for now, it's time to change gears. There are still many stories from the Spring semester that I would like to reflect on and share here. One of my early-summer tasks is to prioritize the rest of my summer tasks, and I need to be conscientious about making time for reflective writing. In the meantime, however, it's time for some breakfast. Thanks for reading, and as always, feel free to leave a note in the comments.

Friday, May 1, 2015

Getting practice at programming

Around the time that grades are posted at the end of the semester, I tend to get a few emails from my CS222 students, asking what they can do to get more practice with programming. The questions often come from students who got through the first two programming courses without really understanding some core concepts, and these deficiencies start rearing their heads during the nine-week project of CS222. This is a great awakening for the students, and I applaud them for the desire to invest personal time between semesters to improve their skills.

Also, I recently came across—though now cannot figure out where—the idea that we each have a limited number of keystrokes in us before we die. (The concept seems to be tied to keysleft.com, although I only found that today in trying to figure out who I saw writing about it.) From this perspective, all those emails I've sent to students over the years seem like a bad investment, not because it didn't help that one student, but because it could only have helped that one. This is related to this semester's lack-of-blogging guilt, which I'm sure I will be able to write about in a few days.

Without further ado, let's get to advice for students (roughly sophomores) who want to get some practice.

First, if your main problem in CS222 was with Test-Driven Development, then the next step is easy: read Test-Driven Development: By Example. Then, read it again. The book is deceptively simple, but it contains the seeds for cognitive transformation. Then, the next time you want to write any kind of code, hold yourself accountable to TDD.
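If it helps to see the shape of the discipline, a first red-green cycle looks something like this (an invented example using JUnit 4):

    // Step 1 (red): write a failing test that states what you wish were true.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class RomanNumeralTest {
        @Test
        public void oneIsI() {
            assertEquals("I", RomanNumeral.fromInt(1));
        }
    }

    // Step 2 (green): write the least code that makes the test pass.
    class RomanNumeral {
        static String fromInt(int n) {
            return "I"; // deliberately naive; the next test forces generalization
        }
    }

The discipline is in the rhythm: no production code without a failing test first, then refactor once the bar is green.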

If you feel like your Java skills are just not up to snuff, spend some time with The Java Tutorials. The great thing about that site is that you can easily pick the area you want to learn about: the basic language, essential parts of the standard API, building a simple user interface in Swing, and so on. Another advantage of these official tutorials from Oracle is that they are precise and consistent in their use of language and terminology, and this can help you build your technical communication skills. Yes, you can do a random Web search for any of these, but Oracle's collection is well edited and presented.

A few sites have emerged in the last few years dedicated to helping programmers hone their skills with canned problems. Project Euler is one of the most famous, and I know students and alumni who really enjoy the challenges and the community. They make great training exercises if you are interested in programming competitions, such as the ones at the annual CCSC Midwest Conference or the ACM International Collegiate Programming Contest. Personally, I find them a bit too much like homework, but your mileage may vary.

A different take on this kind of exercise is the Code Kata. The idea behind this is that you get to be a better programmer through intentional exercises, exercises that both help you remember what you know and push you to extend your skills.

A talented alumnus recommended cyber-dojo.org, which includes a wide selection of exercises and a community of reviewers. A slightly more aggressive model appears at codefights.com, which I have not tried but have heard recommended by another alumnus.

There are myriad open source projects on the Web. Sometimes it can be difficult to see how to get involved, and I see that many people mistakenly think that "get involved" necessarily means writing production code. This article at SmartBear paints a more accurate picture of how these communities tend to work and how you can contribute, regardless of your programming skill level.

One of the best ways to motivate yourself is to build a thing you care about. Whatever your interest is, you can write programs for it. Games? Figure skating? Home security? Soccer? Whatever it is, think of what you can do at a very small scale, and build it. Keeping the scale small is important, because actually completing a project is much more difficult than just tinkering around with it. I do a lot of work with game programming, and many students come to Computer Science because of an interest in video games. Many of these same students fall into the trap of thinking that the first game they program should be their grand vision for a new strategic multiplayer role-playing 4X roguelike. Don't do it. Start by recreating the classics, like Asteroids or Space Invaders. Keep the scope small, so that you can get the learning that only comes from finishing.

No matter what you do, do it with reflective practice. Think about what you are doing and why you are doing it. Write about it and talk to people about it. Find a community where you can share your findings and ask questions. This will help you develop your metacognitive skills. If you find that you want to learn more about how to learn, I recommend Pragmatic Thinking and Learning.

I hope this list has been helpful. If you have more favorite links, books, and tricks, feel free to leave them in the comments.

Thursday, March 5, 2015

Painting Imperial Assault, Part 1: The Non-Uniques

I've had my eyes on Descent since before I got back into painting because of the quality of the miniatures. I still have yet to play, but it's a popular fantasy romp. When I heard about Imperial Assault, which is essentially bugfixed Descent + Star Wars, I think we all knew it was a matter of when and not if. It made a good Christmas gift to myself, and I think I started in on the miniatures the day after.


After cleaning up the miniatures, my next step was to think about basing. The tiles in Imperial Assault come in three fundamental types: badlands, forest, and indoor. It took some careful paint mixing to get the shades right, but I was able to match the badlands and forest ground color fairly well.


Even while I worked on it, though, I had some doubts. The game rules include the concept of "deployment groups," so for example, the nine Stormtrooper miniatures are deployed in groups of three. I thought that basing would be a good way to visually distinguish the groups. The more I got into it, however, the more I worried about a visual-story dissonance: badlands troopers would look kind of silly deployed in an imperial base, for example. I turned to Facebook to ask for some feedback, and two of my artist friends quickly responded with, essentially, "I'd just paint them black." It hurt a bit to undo the work that I had done, but I am also acutely aware of the sunk cost fallacy. On the positive side, this gave me the occasion to buy some Simple Green and try stripping. It worked well enough on the Imperial Assault miniatures and some other hard plastic ones that my sons wanted to repaint, but I was surprised at how poorly it worked on the handful of Bones miniatures we also threw in.


For the Stormtroopers, I followed the approach recommended on this BGG thread, namely, using a light basecoat and then highlighting the contours with greylining. All but one of the Stormtroopers was primed white, and after enough layers, you cannot really tell who was my black-prime test. The sequence of layers was a bit frustrating: I did the black after the greylining, but inevitably, I got black outside of where it should be, since it's all in recessed areas. The color I needed to clean this up was, essentially, the greylining shade. I switched the order for the gunners, as described below. I colored the shoulders of the units in order to distinguish between deployment groups. Painting nine nigh-identical Stormtroopers was a bit tedious, but I think they look great assembled together.


The probe droids started with a straightforward metallic drybrush on black primer. The two in the back were done together and were originally identical. The bronze color was achieved through two layers of ink, which did exactly what I hoped it would do. I had only recently built up my collection of inks, and it's been fun for me to experiment with inks alone as well as mixing inks and acrylic paints. I wasn't sure how to do the lenses, so I again followed the advice on the BGG thread, which is actually an old technique used for painting fantasy gems. I sat on the third probe droid for some time, trying to determine how to make it visually distinct from the other two. I ended up just giving it lighter tones all around, which, as you can see, is sufficient.


I am glad I am not the only one who is unclear on the pronunciation of "Trandoshan," but here they are. (I've fallen into the tran-DOH-shun camp, following on the lines that they are from planet Dosh. Thanks, Internet!) I did the yellow pair first and the brown-green ones a few weeks later. If anyone asks "How many layers of yellow did it take to paint these miniatures?" the answer is "One less than infinity." Clearly, the ones on the left are painted to look like Bossk, who, incidentally, is wearing an unaltered High-Altitude Windak High Pressure Suit designed for the Royal Air Force in 1962. (Thanks again, Internet!) I am pleased with the contrast between the smooth cloth suit and the rough, lizard-like skin. The photo does not really show it, but I also pulled some tricks with varnish here: the skin and metal bits are done with Liquitex Matte Varnish, which has more of a satin finish, and the cloth is done with Model Masters Acryl, which is flat/matte. The eyes and inside of the mouth are gloss varnished, and the effect is subtle but pleasing on the table.

If you look carefully, you'll see that the white flak jacket is painted slightly differently on the two deployment groups. On the right, I used the greylining technique as done on the Stormtroopers to bring out the contrast a bit more, whereas the left was done with pure layering. I prefer the ones on the right, as there's a bit more contrast. I have been thinking about contrast a lot lately and wondering if I should include it more deliberately in my painting. I find that I tend toward more gentle gradients that look good under my painting light, where sharper contrast may help the miniature read better from across the table. The Trandoshans are a good example of this. All the yellows on the left two are very close to each other, even though I started with a medium brown and worked up from there. With the two on the right, I tried more deliberately to separate shades and highlights, to what I believe is good effect. Of course, the colors are so vastly different that it might just be a matter of apples and oranges.

One last note on the Trandoshans: the yellow ones were the first in this batch where I was working deliberately with blacks. As I've mentioned before, black and white are challenging because you cannot shade black and you cannot highlight white. Yet, many of these Star Wars minis feature blacks and whites, Stormtroopers being the classic example. For the Trandoshans, I was careful to use shades of grey to highlight the backpack and cuffs, while trying to make them still "read" as black. This leads into the next set:


These Imperial Officers are wearing black suits, but again, they were painted in all shades of grey, with pure black only used in the deepest recesses and achieved with black ink. They are deployed individually, and so each had to be visually distinguishable from the rest. Since gloves are optional for Stormtrooper officers (wow, Internet, you have a lot of Star Wars information), I decided to have one gloved officer; his hair is also a mix of browns and greys, trying to make him look middle-aged, although the fact you cannot tell from the photo says it probably wasn't worth the effort. The Imperial Assault card art features Imperial Officers with trim goatees, so I decided to put facial hair on one officer to make him stand out. It works, but it also feels kind of strange to put a beard on a military officer. I don't remember seeing any facial hair throughout the Death Star. Anyway, it matches the card art, so we'll take it for now. I am pleased with how, again, the uniforms look black but still contain discernible highlights.


I believe it was while working on the Royal Guard that my wife pointed out how monochromatic the villains are. I suppose this is reflected in the story as well: in the movies, these guys are scary-looking soldiers who don't actually do much of anything. Red can be a tricky color to paint, but I am really happy with the result here. The shade layer is a mix of red and brown, going up to a main highlight color of flat red. For additional highlights, I used light shades of orange, but even knowing they're there, you cannot really see them: they simply look like highlights. It's a bit hard to tell in the photo here, but the tips of their staffs are different, to make it clear that they are in two deployment groups of two. I also did some varnish tricks here, doing the helmets in the satiny Liquitex Matte Varnish and the robes in Model Masters Acryl flat. I was rather disappointed, however, to see how the Acryl muted the vibrancy of the reds. It's not just a matter of the helmets being more shiny: the actual tone of the red was taken down by the flat varnish. In future figures, after doing a bit of research, I've been thinning the Acryl with a little water. It's hard to say if this has changed anything significantly, however, since no other figure has two sections that would otherwise be the same color but use the different varnishes.


These nexu are the only real monsters in the set, and so I was looking forward to painting them as a break from humanoids. Knowing there would certainly be some lore about them on the Internet, I looked them up, and I was disappointed to see that they come from the dreaded prequel trilogy. As much as I love painting, I wasn't sure I wanted to melt them down and mold them into some kind of baby rancors.

The fur is base coat, ink wash, and a few layers of drybrushing, with the stripes painted in sepia ink. My intention here was to try to shade the color of the fur rather than paint a stripe over it, and I think that worked. It probably would also have worked with regular paint, of course, but since it worked on one, I did it on the other. (Actually, my first pass didn't go quite as well. The stripes were too far apart, and I tried adding highlights just to them. I ended up repainting them entirely to what you see in the photo.) The tails were done with the same basic approach I used about a year ago with my Mice and Mystics minis, using a flesh tone and a brown wash. The spines and claws were done in very dark browns, with varying varnishes again used to make the spines, eyes, and inside of the mouth shine. The mouth itself is done in dark purples, but looking at it here, it probably could have been brighter. Still, I'm happy with it, as it leaves the comically large mouth looking cavernous.


The Silliest Name in Imperial Assault Award goes to the E-Web Engineers. I ended up priming the gunner himself white while the gun, generator, and base were primed black. The gun was originally just layers of metallic drybrushing, but this looked drab after finishing the gunner, so I went back over several areas in the brighter steel color seen here. I have generally been working with black primer for the past several months, in part because I've been following a lot of the techniques from Dr. Faust's Painting Clinic. With these guys, I was reminded of how annoying white primer can be: it's easy to miss a seam between two colors, and so you get white spots or lines in what should be the darkest recesses. Unlike the Stormtroopers, where I used a basecoat and painted in the shadows, these were layered up from shade layer to highlight layer. It probably took longer per figure, but there were only two. As with the Stormtroopers, it was a good exercise in painting both blacks and whites. In case you're curious, these miniatures are deployed individually, and the only visual distinction between them is the readout on top of the generator: the one on the left has green details and the one on the right has red.

That's it for the non-unique Imperial Assault miniatures. Next up will be the unique heroes and villains, who are cleaned and primed but not yet started. I'll post a report when they're done. Who knows, maybe some day I'll even play the game.

Thursday, February 12, 2015

On Measurement in Computer Science

In a chance encounter last semester, I befriended an undergraduate honors student who was not involved in any of my courses. She has become enamored of the notion of measurement—what it is and what it means. She asked me to contribute an essay on measurement in Computer Science for a collection that will become her honors thesis. It was an honor to be asked, and although I had plenty of inspiration, I found the actual writing to be difficult. After a few discarded drafts, I decided to focus on the paradox of the measurable and the unmeasurable within Computer Science, while trying to stay apart from political debate (we need more Computer Scientists!) or research loggerheads (agile vs waterfall). Without further ado, this is the final draft I sent to her yesterday, although I wonder if I may tweak it a bit before she publishes it.

The discrete and formal measurements of Computer Science reflect its foundations in mathematics and electrical engineering. Computer programs are composed of unambiguously parsed statements and expressions, which are compiled into atomic hardware directives. Information is transmitted over networks by measuring electronic pulses and radio waves, interpreting these as bits and bytes. Computational theory tells us that all computable problems belong to a single space-time hierarchy and that some problems cannot be computed at all. This leads us to the biggest question of Computer Science—the question of whether P=NP—which is fascinating not only because we do not know the answer, but because we also do not know if it is answerable.

These discretely measurable aspects of the discipline complement a more humanistic side. Creativity and ingenuity are required in determining what line of code to write, what proof technique to use, or what software architecture to follow. First attempts almost always fail, and so we learn to approach problems iteratively, using formal and informal heuristics to determine whether we are moving closer to a desirable solution. That is, we recognize that doing Computer Science is a learning process, that building a system is the process of learning how to build the system. Unlike in engineering and architecture, where models are built for others to manufacture, the computer scientist deals directly with the elements that compose the artifact. This creates a rapid feedback loop for the reflective practitioner, giving rise to the model of expert computer scientist as master craftsman.

Building any nontrivial software system requires a multidisciplinary collaboration, where the tools of the Computer Scientist are combined with approaches from the social sciences, humanities, and the arts. Research consistently shows that teams with strong communication practices produce better results; how to actually conduct these measurements is a perennial research question. Martin Fowler—a recognized industry leader in software development—has gone so far as to claim that productivity cannot be measured at all, and that any attempted measurements will only produce misinformation.

My own scholarship involves building original educational video games with multidisciplinary teams of undergraduates. When interviewed about what most contributed to their successes, teams inevitably split their answers between the measurable and the non-measurable. They find that formal processes and communication patterns, quantitative feedback, and technical structure give them the confidence to build and revise complex software systems. At the same time, they recognize that empathy has the most dramatic effect on both team productivity and the quality of the product. These two sides, the scientific and the humanistic, are not opposed to each other: they form a synergy that can propel a team to success or, in their absence, drag a team to a near standstill.

This harmonious balance of the measurable and the unmeasurable characterizes Computer Science. Alistair Cockburn captures this paradox in his cooperative game principle in which he defines software development as a cooperative game of invention and communication. Both kinds of moves are necessary for the team to create a solution of value to stakeholders and, thereby, to win the game. A winning team must make measurable and unmeasurable moves, creating technical and artistic artifacts while continuing to build empathy, all the while reflecting on how they are learning through this process.



Friday, February 6, 2015

Spring 2015 Game Studio project announcement: Collaboration Station

It's been a busy time here, and it seems I missed January entirely on my blog. One of the reasons for the tight schedule is that I am once again mentoring a major immersive learning project, working with a multidisciplinary team of ten undergraduates to create an original educational game. We're partnering with the Children's Museum of Indianapolis, who are helping to provide feedback as we work on Collaboration Station, a cooperative game about the International Space Station.

Our social media presence just went live, so please visit the team's blog and check out our Twitter feed. Right now, there's just an introduction to the project, but you can expect more content in the coming weeks.

Tuesday, December 23, 2014

CS222 Fall 2014: What we learned, and where we're going

The final exam for my CS222 class once again featured the construction of a list of what the students learned (or more properly, what they think they learned). They compiled a list of 111 items, and each student was asked to pick their top seven. Once all the votes were tallied, there was the most consensus around these four items:

  • Test-Driven Development [16 votes]
  • JUnit testing [12 votes]
  • Make realistic goals [12 votes]
  • You can learn a lot from failures [10 votes]

Transcribing the list is like a semester-in-review. There are some very powerful ideas that the students brought up, many of which only had one or two votes. Examples include "Nobody owns code in a group project," "Dirty code is easy to write and hard to maintain," "Be wary of second-order ignorance," and "Donuts solve most transgressions." I could tell stories about each one of these—and I know that if I don't write down the stories, the details will be lost to the sands of time. However, I also know that this is going to be a long post, so I will have to leave out some of these details. It's worth noting that some of the items in the list also embody lingering confusion. I just write down what the students say during this exercise, only asking for clarifications. Still, my teacher-sense was tingling when students offered items like "Instantiating variables," which shows a misunderstanding of terminology, semantics, or both.

The students were asked to write about what experiences led them to learn one of these items. I believe everybody chose to write about learning from failure, which turned out to be a good theme for the semester, as I talk about more below. One of the students, in his closing essay, pointed out that "make realistic goals" and "learn from failure" are the kind of thing you'd expect to hear from a weekend leadership workshop at a hotel, which I thought was an astute observation and brought a smile to my face. He himself acknowledged that these items are easy to say and hard to follow, and I found it uplifting to read about how so many students had transformative experiences around personal and team failures during the semester. My hope is that they integrate these lessons into better practices as they move forward in college and then in their lives as alumni.

Taking a step back from the end of the semester, let me set the context for this particular class. About two years ago, the Foundations Curriculum Committee approved a change to our introductory Computer Science class. We decided to adopt the combination of Media Computing, Pair Programming, and Peer Instruction that proved so successful at San Diego. The CS222 course that I have been working on for a few years comes two semesters after this course, and so this Fall I had my first cohort of students from this change. The change of context went along with a change of language, so whereas previously my students had two introductory semesters in Java, this group had a semester of Python and a semester of Java.

I was a bit surprised that these students did not appear to be any better or any worse prepared for CS222 with respect to programming maturity. As always, they could generally hit the keys until programs kind of worked. Some still struggled with basic control structures, there was almost uniform misuse of technical terminology, and there was practically no understanding of either object-oriented programming or its syntactic elements. I suppose this means that our first round of data points to the change being a success, since I did not notice any real change in preparation, yet more students are sticking with the major. (Then again, it could just be the economy.)

I think I had slightly overestimated their understanding of Java in the early part of the semester. My intention was to give some warm-up exercises, but I think these were neither formal enough nor scaffolded enough. There were several days where I posed a challenge and said, "Try this for next time!" but—with shades of when I tried making the final project not worth course credit—because it wasn't collected and graded, I do not think many people really tried. For the Spring, I have integrated three formal assignments, one per week before the two-week project. Because these assignments are intended to prepare students for the coming weeks of activity, I have decided to adopt a mastery learning approach here: students have to do the assignments until they are correct. (As I write this, I realize that there may be a hole here: right now, I have it set up so that the assignments must be correct to get higher than a 'D' in the course, but this means students might put them off. I think I will have to revise that before Spring, to actually stop them from moving on in the course somehow until the assignments are done, or put a strict time limit on how long they have to submit a reasonable revision.)

The two-week project in the Fall was a good experience, although I think the students didn't realize it until a few weeks afterward. I gave them a good technical challenge, involving processing data from NPR's RSS feeds, and Clean Code with TDD was required. Of course, many students did not start the project when they should have, but more importantly, I don't think that anybody actually followed the requirements. Several students had projects that ostensibly worked, and they were proud of these, but they were horribly written. I was honest in the grading, which I think many of the students had never experienced before either. Many of them panicked, but then I pointed out to them that the project had no contribution to their final grade at all. This allowed us to talk honestly about the difference between requirement and suggestion, and it forced them to rethink their own processes. In their end-of-semester essays, many students came back to this as one of the most important experiences of the course—a real eye-opener. I am fairly certain this contributed to "learn from failure" being one of the top items of consensus in the final exam.

I realized too late in the semester that I had a problem with the achievement-based grading system. I had designed several interesting quests, one of which a student had to complete in order to unlock A-level grades. One of them, "Clean Coder," was designed to be the easiest: it required the completion of several achievements related to Clean Code, which was required for the project anyway. However, it looked like it was harder, because it had more steps than the others. The other quests required fewer steps because I knew that students would be doing Clean Code activities as well. Sadly, the students didn't think this through, and I did not convey it clearly enough, with the result that nobody pursued the Clean Code achievements. Not coincidentally, many teams struggled with fundamental Clean Code ideas in their projects.

I also encountered something new this semester, which could actually be novel although I suspect it was previously under my radar. As in previous semesters, I allowed collaborative achievement submissions, since much of the work is intended to be done by the team collectively. However, it came to my attention that a few teams were assigning an "achievements person" who became responsible for doing the achievement work while the rest did other things. This is quite a reasonable division of labor, but it's not at all what I intended.

Because of the quest difficulties and the unintended division of labor, I made several changes to the achievement-based assessment system for the Spring. All achievement claims will now be made by individuals, which helps me ensure that I know each student earns their keep. I also reduced the number required to unlock different grade levels. However, the total workload should be about the same, as I am bringing in end-of-iteration reflection essays. Several semesters ago, I required both achievements and reflection essays, but I found this to be too much work. I find myself wanting students to tie their experiences more to our essential questions, and so I'm pulling the end-of-iteration essay idea from my studio courses into CS222. I think it should be a good fit here, although I am dreading the return of the inevitable "This feels like a writing course!" evaluations from students who don't recognize the epistemic value of writing.

I have also completely removed the quest construct. Many of the items that were quests are now simply achievements. The quests are fun and provide a nice narrative on top of the achievements ("I am becoming an engineer!" "I care about user-centered design!"), but they also bound the students too early to a specific path: a team who decided on one quest could not feasibly change paths to take another, which is unfortunate since the whole thing was really designed to be inspirational, not constricting. In the Spring, then, there will be no other barrier between B-level and A-level grades aside from the number of achievements. Realistically, it's the project grade that most influences final grades anyway.

Before winter break is over, I plan to make a checklist for the final project iterations. I don't know if students actually will read it or use it, but maybe it will help those teams who are working diligently but suffering from second-order ignorance. Common items that teams forget before the submission include removing compiler warnings, tagging the repository, and checking out their project onto a clean machine to ensure that all of their dependencies are properly configured or documented.

I incorporated self- and peer-evaluations into each iteration in the Fall, and these provided a throttle on how many points an individual could earn from a collective team project. The design, obviously, is to catch freeloaders. I used a similar system in my game programming course, where I also asked students to reflect on it explicitly, and that group was pretty evenly split between people who liked it, those who didn't, and the neutral. Although I did not ask students to reflect on these evaluations explicitly in CS222, I was surprised that they just didn't come up in most students' writings, and where they did, the response was very positive. There were some teams where I think the evaluations even helped students to air some difficulties early and get over them. I still need to find a better way to help students recognize that, given a rubric, all I want is an honest response. I did hear students talking about giving each other "good" evaluations, and I tried to convince them that it wasn't about "good" and "bad", but about honest feedback. Perhaps it's an intractable problem because these evaluations contribute to a grade, and so inevitably students pragmatically see "good" and "bad" as having priority over team cohesion.

The course evaluations for Fall were nice to read, as the students praised the areas of the course that I think went well, and they provided constructive criticism in places where things were rocky. At least two students described the achievement-based system as feeling like it was "still in beta," but both were quite forgiving of this as well. I think drawing on video game metaphors here helped me, as students recognize that "beta" incorporates their feedback for the betterment of future players. Despite generally high praise for my course, I cannot get out of my head the fact that more than one student wrote something like, "Don't believe the people who say you are a monster." Two used the word "monster" specifically. These were all couched in grateful and kind ways, from students encouraging me to continue to be honest in my critical feedback. I suppose I must have a mixed reputation. I know I shouldn't let it get to me, but I cannot help but wish to be a fly on the wall.

This was a really fun group of students, and I think we did a lot of good work together. I hope they carry these big ideas of software development and craftsmanship with them as they move on. In my last presentation to them, I reminded them that I spent the last fifteen weeks holding them accountable to high professional standards, and that in the coming semesters, they will be responsible for holding themselves and each other accountable. I hope they do.

Some references: