Monday, October 20, 2014

Managing Student Peer- and Self-Evaluations using Google Forms

I have written before about the rubric I use for students' self- and peer-evaluation. Here is the latest version that I have deployed this semester, copied from my 2011 post.

Each category is scored from 3 (best) down to 0 (worst):

Commitment
3: Attended all scheduled team meetings or notified the team of absence.
2: Missed team meetings, with notifications, with enough regularity to be problematic.
1: Missed one or more team meetings without notifying the team.
0: Regularly missed team meetings without notifying the team.

Participation
3: Contributed to project planning, implementation, testing, and presentations.
2: Did not contribute to one of the following: project planning, implementation, testing, and presentations.
1: Did not contribute to two of the following: planning, implementation, testing, presentation.
0: Did not contribute to three or more of the following: planning, implementation, testing, presentation.

Communication
3: Clear reports on what has been accomplished, what is in progress, and what stands in the way, thereby facilitating progress.
2: Sometimes is unclear about what has been done, what is in progress, and what stands in the way, creating minor impediments to progress.
1: Is regularly unclear about what has been done, what is in progress, and what stands in the way, creating significant impediments to progress.
0: Communication patterns directly disrupt team progress.

Technical contributions
3: High quality technical contributions that facilitate the success of the team.
2: High quality technical contributions that do not directly facilitate the team's success.
1: Low quality technical contributions that frequently require redress by other team members.
0: Low quality technical contributions that inhibit success.

Attitude and Leadership
3: Listens to, shares with, and supports the efforts of others, and actively tries to keep the team together.
2: Listens to, shares with, and supports the efforts of others.
1: Frequently fails to listen, share, or support teammates.
0: Displays an antagonism that inhibits team success.


I have been conducting the evaluations on paper and transcribing the data into a spreadsheet for analysis. I realized last academic year that the analog form was preventing me from using self- and peer-evaluations more frequently. I suspect that if teams had to give and receive this feedback more regularly, more teams would be able to identify and mitigate collaboration problems. I decided to require self- and peer-evaluations for each iteration of my Game Programming and Advanced Programming courses, in part to push me to explore a streamlined, digital solution.

My first stop was Blackboard, which is required at my university at least for entering students' final grades. I found no support for the kind of group-restricted peer evaluations that interest me.

Enter Google Forms. If you have not used it before, it is an easy-to-use tool for creating forms, distributing them, and analyzing the results via integration with Google Sheets. I created a template that converts the tabular rubric into a series of five questions on a zero-to-three scale. To use the form for a particular class, I copy the list of student identifiers from Blackboard and paste them into the first two questions—the selection of the evaluator and the subject under evaluation. I could not find a way to make the template publicly readable without also making it publicly editable, so if you want to try the form yourself, contact me and I'll share it with you via Google Docs; you can then make your own copy of it.

Once the students have completed the form, I use a pivot table to summarize the results. Specifically, I add a row for the subject of evaluation ("evaluatee"), then add values for the five rubric categories, summarizing by median. The pivot table is itself a sheet, and so I can add a new column that sums the medians to reach a student's contribution score.
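For example, if a student's five category medians came out to 3, 3, 2, 3, and 3, the contribution score in that new column would be 14 out of a possible 15.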

Getting this information back to the student is still a manual process. My current approach is to make a new column in Blackboard's grade book and then sort the grade book table by student identifier to match the sequence in the pivot table. I step through, student by student, entering the contribution score and, in the comments field, the five medians in a simple, space-separated format. While this is somewhat tedious, it does force me to look at each line and identify whether I should intervene with a team or an individual.

The form cannot validate that each student completes it exactly once for each peer. Another quick pivot table makes this easy to verify, however: add a row for each evaluator and a column for each evaluatee, add the evaluatee data, and summarize by "COUNTA". Anyone who has submitted more than one evaluation of the same person will stand out in the table, and each row's total should equal the number of people on that student's team. This technique allowed me to recognize that a few students submitted multiple evaluations of a partner, presumably by mistake, since the duplicate submissions matched.
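For example, in a four-person team, each evaluator's row should total 4, and any cell containing 2 or more indicates a duplicate evaluation of that person.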

This approach presumes that students will be honest in identifying themselves, but the whole self- and peer-evaluation system already presumes honesty. I talk with my students about this: I assume they will be honest with me because I have been honest with them.

Hopefully these notes will be useful to others who are thinking about how to conduct peer evaluations. If nothing else, this will help me remember how I handled the first iteration so that I can replicate it in the second!

Thursday, September 25, 2014

Quiz Games and Learning Taxonomies

A few days ago, I posted about the SOLO Taxonomy and my desire to incorporate its ideas into educational game design. Yesterday's serious game design class provided just such an opportunity.

This week was my students' first opportunity to present game concepts. Over the coming weeks, they will be presenting additional concepts as well as multiple iterations of prototypes, as I described in my course planning post. It was their first real chance to show something original, and most of them chose to make games based around the International Space Station, a topic of particular interest to the class' partner, The Children's Museum of Indianapolis. The designs varied in genre, scope, and clarity, as is to be expected from a first pass. One particular design was essentially a series of questions, each worth a point. That is, it was a quiz, with no interesting or relevant decisions being made by the player.

I told the students that, in general, I don't have much interest in quiz-style games. Coincidentally, they had just read The Education Arcade whitepaper, “Moving Learning Games Forward,” and I was able to draw on some of the ideas in that whitepaper to justify my indifference. Looking at quiz-style games from the perspective of SOLO, they clearly fall into either the unistructural or multistructural levels of the taxonomy. That is, either the player is recalling a single idea (identify or name) or sets of ideas (list, enumerate). SOLO makes it clear that these games sit at the lower levels of learning activity. By contrast, a game that requires analysis or application of domain knowledge would place the learning activity at the relational level, and a game that requires invention or theory-building would be at the extended abstract level. These higher levels inherently reflect what Burgun calls endogenously meaningful ambiguous decision-making, which I find to be a useful heuristic for evaluating whether or not a game concept will be worth prototyping.

It seems to me that identifying quiz games as low-level learning is just as easy with Bloom's Taxonomy of the Cognitive Domain: quiz games are usually situated at the Remember or possibly the Understand level, the very lowest levels of the taxonomy. The applicability of SOLO vs. Bloom seems to get murkier once one adds meaningful choices to the game. This gives me a direction in which to consider these taxonomies as I move forward with this semester's serious game design efforts.

Wednesday, September 24, 2014

Configuring Java development on Linux

I run Linux Mint on my workstation and my laptop, and so my students see me using it in the classroom and in my office. A few students have dipped their toes into GNU/Linux this semester, but configuring a useful and robust Java development environment can be challenging for someone new to the Linux ecosystem. I am writing this blog post to document how I set up my environment, and why I do it this way. I will assume a Debian-based Linux distribution, although I have used this approach on RPM-based distros as well.

The first step is to get rid of the default OpenJDK installation. I sympathize with the goals of the OpenJDK community and of free software in general, but at the same time, I am content to run Oracle's JDK for development purposes. Instructions for doing this depend on your distribution, and a quick search will likely show something like this:

sudo apt-get purge openjdk*
sudo apt-get purge icedtea-* openjdk-*


It probably doesn't matter if you keep the OpenJDK installed, but since we're not using it, we may as well clean it out.

In the bad old days, I would install all my Java libraries into /usr/local, and then I would curse myself over getting the permissions right. These days, I just put them in an apps folder off of my home directory—that is, at /home/pvg/apps. If you're very new to *nix, note that the tilde (~) is used to represent your home directory, so I also refer to this directory as ~/apps.
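
Creating that folder takes a single command:

mkdir -p ~/apps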

The next step is to download the JDK from Oracle. I will assume we're using Java SE 7, release 40, just for the sake of discussion; the approach is the same for any release and, in fact, allows for parallel installation of different releases. I extract the JDK into ~/apps/jdk1.7.0_40 using standard archiving tools. Then I set up a symbolic link in ~/apps that points to this particular release of the JDK. From the ~/apps directory, it looks like this:

ln -s jdk1.7.0_40 jdk1.7.0

Now, I can reference ~/apps/jdk1.7.0 and it goes straight to release 40. If I want to update my Java version, I simply extract the new release into ~/apps and update the symbolic link—no other changes are required.
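
As a sketch of what such an update might look like, assuming a hypothetical release 45 downloaded from Oracle (the archive name will vary):

cd ~/apps
tar xzf ~/Downloads/jdk-7u45-linux-x64.tar.gz   # unpacks to jdk1.7.0_45
rm jdk1.7.0                                     # remove the old symbolic link
ln -s jdk1.7.0_45 jdk1.7.0                      # point the link at the new release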

Making this JDK accessible from the command line just requires a few tweaks to the .bashrc file. This is a hidden file in your home directory, which means you won't see it with a standard ls command, though ls -a will show it. Since I use emacs, I can open the file with emacs .bashrc from my home directory; vim or another editor would work similarly, except not quite as well because they are not emacs. We'll add two export commands, one to specify JAVA_HOME and another to tweak the path. I put them at the bottom of my .bashrc, along with a comment reminding myself what I was doing. The result looks like this:

# Use the JDK in ~/apps
export JAVA_HOME=/home/pvg/apps/jdk1.7.0
export PATH=$JAVA_HOME/bin:$PATH


You will need to open a new console or source the .bashrc file to see the change. Now, if you type the command java -version, you should see that you're running JDK 1.7.0 Release 40 (in my example). Note that you can use the which command to determine which executable is being run, so which java should return /home/pvg/apps/jdk1.7.0/bin/java.
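
Put together, the check looks like this:

source ~/.bashrc
java -version    # should report the Oracle JDK, 1.7.0_40 in this example
which java       # should print /home/pvg/apps/jdk1.7.0/bin/java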

My installation strategy for Maven, Ant, and Eclipse is similar. This gives me fine-grained control over which version I have installed, and I can hop between versions by simply tweaking my path. For example, in my ~/apps directory I have apache-maven-3.1.0, and then in ~/.bashrc I have added the following lines:

export MAVEN_HOME=/home/pvg/apps/apache-maven-3.1.0
export PATH=$MAVEN_HOME/bin:$PATH
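
As with the JDK, a quick check from a new shell confirms which copy is active:

which mvn     # should print /home/pvg/apps/apache-maven-3.1.0/bin/mvn
mvn -version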


I keep a link to Eclipse on my desktop for ease of access. There is a trick to making this work in KDE; I don't know if this is applicable to other desktop environments. In my Eclipse configuration file (~/apps/eclipse/eclipse.ini) I have added a setting that specifies the Java VM that should be used to start Eclipse. The two lines look like this:

-vm
/home/pvg/apps/jdk1.7.0/bin/java


These go before the -vmargs flag that is probably already there.
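
To make the ordering concrete, the relevant portion of eclipse.ini ends up looking something like this (the memory arguments after -vmargs are only illustrative; keep whatever your install already has):

-vm
/home/pvg/apps/jdk1.7.0/bin/java
-vmargs
-Xms40m
-Xmx512m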

I hope this is useful information for my students and others who are experimenting with configuring GNU/Linux systems for Java development. If you have other tips or approaches, feel free to leave them in the comments.

Tuesday, September 16, 2014

Bloom's Taxonomy vs. SOLO for serious game design

I think the first time I came across Bloom's Taxonomy for the Cognitive Domain was in my first two years as a faculty member, when I was just dipping my toes into the science of learning. I have written about Bloom's taxonomy before, including my preferred presentation of it:


I have found this presentation useful in teaching and practicing game design. Many games have a learning structure that follows the lower three levels. First, you are given some command that you must remember, such as "press right to move right" or "press B to jump." Then, you are given some context in which to understand the effect these commands have on the world: run to the right and the screen begins to scroll, then continue running and fall into a pit, forcing you to start again. Now, you have the context to apply what you understood, combining running and jumping to get over the pit.

I contend that most games don't really go beyond that. I hesitate to say that recognizing the pit as an obstacle constitutes analysis or that combining running and jumping is any meaningful synthesis. Nor do most games teach the player to evaluate the game against the rest of their mental models. The modern phenomenon of in-game "crafting" is similarly contained in the lower half of the taxonomy: remember that oil can be combined with flames to create an explosion, understand that this explosion hurts enemies, and you can apply this to defeat the enemy du jour. My contention is that games are designed experiences, and that players are remembering, understanding, and applying the constraints designed for them. Even Minecraft, with all its cultural importance, mostly has kids understanding and applying the rules of the world to build interesting things. Certainly, a few users recognize that the pieces they have been given can be used to synthesize something new, such as building circuits out of redstone, but my observation is that these are a minority—and the people who follow the tutorials to build copies are back to remembering, understanding, and applying someone else's design.

This reading of Bloom's taxonomy is handy for game design thought experiments and for discussion, but it wasn't until GLS 2013 that I found out that many teacher educators are teaching Bloom's taxonomy as dogma, not as a useful sounding board. A poster session presented an alternative taxonomy, one that used the same labels but put analysis and synthesis closer to the bottom, using these to describe the kinds of tinkering users do with digital technology. (Clearly, they are using different interpretations of these labels than I am, but that's intellectual freedom for you.) This alternative was based on anecdotes and observations, much like Bloom's original, and that points to a problem: Bloom's taxonomy is presented as a predictive, scientific model, but as far as I can tell, it is not empirical. In fact, cognitive science tells us that the human brain does not actually follow the steps presented in either of these taxonomies.

Reading Hattie and Yates' Visible Learning and the Science of How We Learn, which was recommended on Grant Wiggins' informative and inspiring blog, I was reminded of the fact that Bloom's Taxonomy does not represent a modern understanding of learning. The book introduced a different model, one that was first defined in 1982 but that I had never encountered before. It is Biggs and Collis' SOLO Taxonomy, where "SOLO" stands for Structure of the Observed Learning Outcome. It is summarized in this image, which is hosted on Biggs' site:


Hattie and Yates conveniently summarize the taxonomy—one idea, many ideas, relate the ideas, extend the ideas—and point out that the first two deal with surface knowing while the latter two deal with deeper knowing. The figure also shows that each level of the taxonomy is associated with key observations that can be aligned with assessments. For example, if a student can list key elements of a domain but cannot apply, justify, or criticize them, you could conclude they are at the multistructural ("many ideas") level of SOLO. It strikes me that this has the potential to be powerful in my teaching, and I look forward to incorporating it.

So, how can SOLO contribute to an understanding of game design? It seems we run into the same limitations that hinder game-based learning, primarily those of transfer. Notice that the extended abstract level of SOLO explicitly refers to generalization to a new domain. It's true that I can learn how to jump over pits or destroy goblins with flaming oil, but this knowledge is locked away in the affordances of the game. This perspective is taken from Linderoth's work, particularly "Why gamers don't learn more," which applies the ecological theory of development to explain why learning from games does not transfer.

If nothing else, the SOLO Taxonomy can provide both a target for serious games and guidance toward assessments. Given a content area and the desire to create a game to teach it, I can target a specific level within SOLO. For example, if I only want players to emerge with surface-level knowledge, I might target the multistructural level, but if I want players to be able to connect the content to something else they know, I would need to target extended abstract. Then, I can reference the key words from the corresponding level of the taxonomy and use these to define an assessment of whether or not the game worked. In fact, it strikes me that one could also take key words from the adjacent levels and use these to detect whether players land below or above the target. As my game design course wraps up the preliminaries and moves into game concepts, I will look for an opportunity to try this.

Monday, September 1, 2014

Learning from Failure—as a game mechanism

One of my favorite insights from the scholarship of teaching and learning is that, essentially, all learning is learning from failure. Every time a person encounters a mismatch between a mental model and reality, it is an opportunity to learn. I have been working for several years to incorporate this idea into my teaching, notably in the expansion of the CS222 project from two to three increments, specifically because it gives teams more chances for safe failure. Indeed, one of my favorite descriptions of university is that it is a "safe fail" environment, where students can fail in order to learn, without the economic cost that failure carries outside of academia.

It was only recently that I thought about what this phenomenon means in terms of game design, and in role-playing games in particular. Character advancement is a common component of RPGs, giving the player the opportunity to increase the skills and capabilities of his character. Such advancement is generally limited through in-game resource management, such as accumulating threshold values of experience points or by earning sufficient skill points to expend on new abilities. Dungeons and Dragons set the precedent whereby experience points are earned by overcoming obstacles, most often through combat but (with a good DM) also by other means. This has become the de facto standard and can be seen in all manner of modern games, including computer RPGs and RPG-inspired boardgames: success earns points that are used to gain skill.


Recently, I came across The Zorcerer of Zo (ZoZ), a role-playing game by Chad Underkoffler published by Atomic Sock Monkey. In fact, I have had a copy for almost a year, having bought it in a Family-Friendly RPG Bundle of Holding, but it wasn't until a few weeks ago that I read it. The game is based on Baum's Oz series, and since my two older boys and I are currently on the tenth book, we are quite familiar with the setting.

ZoZ uses the "good parts" variant of Underkoffler's Prose Descriptive Qualities (PDQ) system, the full version of which is described in a free document from Atomic Sock Monkey. A critical aspect of the system is that it eschews conventional attributes, skills, and inventory for qualities such as "world-traveler," "small," or "afraid of cats." Each quality is ranked on a scale that starts at Poor [-2] and runs up through Average [0] and Good [+2], with each rank giving an adjustment to the 2d6 roll used for all conflicts. Again, for full detail, check that free PDF. It was a lot of fun to create ZoZ characters with my sons using this approach, since most of the time was spent describing the character's background and interests. One is a talking mouse from an island of merchants, who is in fact a small world-traveler who is afraid of cats. The other is a rockman warrior from a far-away island, who, since he does not need to breathe, simply walked through the ocean to Zo after hearing that it was a nice place to visit. There are no classes or races, and the setting lends itself to this kind of creative storytelling: it doesn't matter that there were no rock men in the Oz books; as long as my son wanted to make one, it was easy to create.

It is the character advancement system of ZoZ that got me thinking about the backwards nature of the conventional RPG systems. When a character encounters a conflict with a chance of failure, the player rolls 2d6 against a target number. If the roll meets or exceeds the target, the character succeeds. If not, the character fails and gains a Learning Point, and a player can improve his character by later spending these points. Consider how this matches what we know about how the brain works: when everything goes well, we actually learn very little, but when we make mistakes and reflect on them, our skills and knowledge improve.

It should not be overlooked that earning a Learning Point on a failed roll takes a lot of the sting out of failure. Although you did not get the result you wanted, you still get something: the opportunity to learn. This is a true yet countercultural statement. Conventional wisdom is that failure is bad, and this is a dangerous meme that is hard to overcome. Nowhere is it so endemic and unquestioned as in formal schooling environments, the very places whose mission it is (or should be) to instill a love of learning—and a love of learning necessitates tolerance for, if not embracing of, failure.

A player is still rewarded for success, of course, but it is done through the narrative. The use of narrative as a feedback mechanism is cleverly addressed in Koster's essay, "Narrative is not a game mechanic," which I recommend to anyone interested in weaving games and authored narratives together. In ZoZ, players can earn Hero Points for their characters by taking especially brave or noble actions. Hero Points are not used for character advancement, but rather to shift the story in the players' favor such as by getting hints, trading in favors, or getting a one-time boost to a roll—a sort of Oz Karma.

I have been impressed with Zorcerer of Zo and enjoyed playing it with my family. It has inspired me to consider how I might incorporate learning from failure as a game mechanism in my own designs, and more generally, how I might take more ideas from my scholarship and apply them to my game designs. This semester, I will be engaging in an experiment in public game design in concert with my honors colloquium on serious game design, and I expect to use this blog for that purpose. Watch this space for further announcements and designs, and as always, feel free to share your thoughts in the comments section.

Thursday, August 28, 2014

Impressions of Android Wear with the LG G Watch

I attended a GDG Muncie meeting over the summer where I was lucky enough to win an LG G Android Wear watch. The longer version of the story is that the organizer was trying to determine the best way to generate random numbers for the lottery, and I suggested one of my very favorite sites on the Internet, random.org. He went to the site and generated my number: the system works!

I normally wear a watch—a Skagen titanium watch, to be precise. It is ultralight and quick to don or remove, both of which I consider to be great benefits. This is my second watch of this model, in fact, after having smashed the face of one on vacation several years ago. For those who don't know, I don't have a cell phone plan: the Nexus 4 I carry everywhere is used strictly as a pocket Wi-Fi device. Hence, my watch is not a fashion accessory; it serves an important function, and being so light, it does so unobtrusively.


My LG G arrived a few weeks after the GDG meeting, and I decided to wear it around the house for a few days. It is clunky and heavy, especially in contrast to my usual watch. I'm not hip to the technical terms for watchbands, but the band, while functional, is fiddly, so it takes a few moments to put on or take off. Again, I am not interested in the piece as a fashion statement per se, but I think the picture shows how my rather thin hands and wrists are dominated by this piece of black technology.


It was easy enough to set up and sync with my Nexus 4. I like that the watch face can be configured to show the time, the date, and some ambient information, such as the temperature. I was afraid that the notification system would be distracting, but I find it no more distracting than my pocket device, really. When I want to know whether I have new email, for example, I simply check. When I am in a situation where I don't want to be interrupted, I am not generally checking my watch for the time anyway: I am either in a situation where I don't care about the time (writing) or there's a clock readily available (meetings, teaching).

Given that I'm the type to turn off notifications and avoid distractions anyway, I also haven't found it to be that useful: almost any time I have used it, I could have about as easily used my pocket device instead. Perhaps that's due to the immaturity of the platform, but I suspect it has more to do with me not being in the target demographic. All the same, it is kind of fun to check messages on my watch while walking down the hallway to the men's room. It doesn't feel any less isolated or rude than carrying a phone and checking messages in the same situation, but it does leave my hands a bit more free. Probably the feature I use most on the watch, besides the time and date, is the view of my next appointment.

I do have a major complaint with the email authoring feature. It does feel very futuristic to talk to your wristwatch and have it send a message to someone. However, it is set up so that you narrate your brief message, and then the watch shows you what it recognized and sends it right away. The two times I've tried this, the speech recognition was terrible, but I had no opportunity to stop it before it sent—I was left with that awful feeling of having just sent a nonsensical message. In my opinion, it really needs a 2–3 second confirmation period in which one can stop the process.

Another usability failure on the watch arises from the context in which it is used, and in particular, I wonder if the designers considered users with small children. When my toddler sits on my lap or when I pick him up, his arms reach exactly to my wrists, and he tends to fiddle with whatever is there—a smartwatch, for example. It's a bad feeling to be sitting happily with a child in your lap and then suddenly realize that he may have knocked messages out of your inbox. The device really needs a hardware switch that turns the touch sensitivity on or off, or it should come with a warning: "Not for parents of young children." I've started wearing it only on days when I am working from campus, and around the house, I stick with my Skagen and pocket device.

Preliminary conclusion: It's a fun toy with a few good uses and a few usability problems. I would not buy one, but I am happy to tinker with one. I do have an idea for an app that I may experiment with in the next few days, but that depends on how the semester gets rolling.

UPDATE (8/29): A crazy thing happened this morning, the day after I posted my review. I checked my messages while walking down the hallway to work, and a colleague sent me a question that could be answered either "yes" or "no." I figured, how badly could the speech recognition mangle that? I hit "Reply" on my watch, and the voice recognition screen came up with the "Speak now" prompt. Then I swiped or tapped or something... I am not really sure what I did, which itself is interesting... and I got a "Yes/No" dialog. I hit the "Yes" button and an email was sent with exactly that content.

That is neat. I need to wait for someone else to send me an email that I can answer in one word and try that again.

Wednesday, August 20, 2014

Screencasting on Linux Mint 17

Sometimes I post philosophical essays, and sometimes I describe important teaching experiences, and sometimes I reflect on my miniature painting hobby. Today, however, I go to that much more pragmatic use of the blog: writing down how I actually got screencasting to work on my work machine.

I have a Dell Precision T3600 running Linux Mint 17. I switched from KUbuntu to Mint on my work machines last year, when I had some trouble with hardware recognition. Since I've been using KDE for over a decade (from Mandrake to Mandriva to KUbuntu), I use the KDE distribution of Mint as well. It seems this puts me in the minority, as most of the Q&A I see online assumes one is running something newfangled.

I have USB Logitech headphones that I used for the screencast. After a bit of fighting with my mixer, I was able to get the microphone to work: the system wants to default to the hardware ports, even when there's nothing connected there. Audacity makes it very easy to test the sound configuration, and so I was sure that the microphone was working.

However, getting that microphone to work through screencast software was another problem entirely. I had no luck with the old standard recordmydesktop at all: video capture was fine, but the audio came up with nothing. A bit of searching revealed some newer applications I had never heard of. Vokoscreen had a reasonable user interface with several configuration options, and I was confident enough to record an 11-minute take. Unfortunately, on playback, the audio was terribly choppy. I spent quite a bit of time fiddling with the framerate and PulseAudio settings trying to fix this, since otherwise Vokoscreen was convenient to use, but a tutorial with no audio is hardly a tutorial at all. Since the mic was working fine in Audacity, I inferred that it was a software problem, not a hardware problem.

I switched over to Kazam, and this ended up doing the trick. It allowed me to select an area of the screen where I could hop between Eclipse, Chromium, and the console, and the audio was captured with no trouble. The default file format uploaded flawlessly to YouTube, and I was able to share the video with my students. I had actually tried Kazam before Vokoscreen, but it wasn't working—turns out it was because I had the mixer settings for my mic much too low, and they needed to be at almost 100%. (Recordmydesktop still did not work with this fix, incidentally.)
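
For reference, if you would rather nudge the microphone level from the command line than through the GUI mixer, something like this should work under PulseAudio (the exact source name is a placeholder; the first command will show yours):

pactl list short sources                            # find the name of the headset's input
pactl set-source-volume <headset-source-name> 95%   # raise the capture level to near maximum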

The screencast itself is just an explanation of how to set up a PlayN project and hook it up to a Mercurial repository using the Computer Science Department's Redmine server. Hopefully next time I want to do an 11-minute screencast, it will take less than two hours of tinkering.