Thursday, October 25, 2018

Answering three student questions about Computer Science, Internships, and Industry Trends

One of my students sent me an email last week with three thoughtful questions in it. Following the principle of no-wasted-writing, I decided to share my answers here on my blog. The questions are in bold, and I did not edit them for posting here. I'm going to encourage the student to consider his own answers before reading mine, and you, dear reader, may try the same.

What really defines if a student successfully completes the undergraduate CS program at Ball State? I’m not talking about the program and university degree requirements, but rather if they got everything they are meant to out of the CS program. Is there a “tipping point”?

That's a great question, and there are a couple of different ways to answer it. I will start with the structural approach that is conventional in industrialized higher education.

For most of its history, the department operated with implicit objectives, but almost ten years ago, we formally agreed to a set of five. These were adapted from ABET's accreditation criteria, and they define what a student should be able to do after graduation. They are:
  1. Mathematical foundations, algorithmic principles and computing theory: Given a software task, the student will be able to choose an appropriate algorithm and the related data structures to efficiently implement that task. The student will be able to do an asymptotic analysis of the chosen algorithm to justify its time and space complexity. 
  2. Design and development skills: Given an appropriate set of software requirements, the student will be able to formulate a specification from which to design, implement and evaluate a computer-based system, process, component or program that satisfies the requirements. At a minimum, this must include the ability to design, implement and evaluate a software system using a high-level programming language, following the conventions for that language, and using good practices in software design to satisfy the given requirements.
  3. Ethics, professional conduct and lifelong learning: The student will behave consistently with the ACM Code of Ethics and Professional Conduct. This includes developing the capacity for lifelong learning in order to maintain professional competence.
  4. Communication skills: The student will be able to communicate effectively through oral presentations and written documentation, including well-organized presentation materials to explain software concepts to non-technically oriented clients, clearly written software specifications and requirements documents, design documents, and progress reports.
  5. Teamwork skills: In order to contribute effectively and meaningfully to any team development effort, each team member must be able to think independently and arrive at creative and correct solutions to problems. This includes the ability to find and evaluate existing solutions to similar problems and adapt them to solve new problems. Each team member must be able to communicate ideas effectively to other team members and cooperate with them in integrating the solutions of subproblems to accomplish the team's common goal.
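Not part of the objectives themselves, but here is a minimal sketch of the kind of judgment objective #1 describes (the task and names are my own invention): checking which query words appear in a document is O(q·n) when scanning a list, but roughly O(n + q) after building a set, since set membership tests are O(1) on average.

```python
# Same task, two data structures: find which query words occur in a document.

def found_words_list(words, queries):
    # List membership is O(n) per lookup, so O(q * n) overall.
    return [q for q in queries if q in words]

def found_words_set(words, queries):
    # Building the set is O(n); each lookup is O(1) on average,
    # so the whole task is roughly O(n + q).
    vocabulary = set(words)
    return [q for q in queries if q in vocabulary]

words = ["the", "quick", "brown", "fox"] * 1000
queries = ["fox", "dog", "quick"]
print(found_words_list(words, queries))  # ['fox', 'quick']
print(found_words_set(words, queries))   # ['fox', 'quick']
```

Both functions satisfy the requirement; justifying why the second scales better is the asymptotic-analysis half of the objective.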
In theory, then, a graduate should have all of these qualities, and our departmental assessment should tell us to what extent we are meeting our goals. We have done a fair job over the past several years of conducting mid-major assessment: after CS222 and CS224 (Advanced Programming and Algorithm Analysis), we use a formalized assessment process to determine if students appear to be making progress, particularly toward departmental objectives #1 and #2. For example, we were able to see no measurable difference in CS222 performance before and after our change in the intro course from a conventional CS1 approach in Java to a media-centric approach in Python, and we were able to push down some critical concepts earlier in the curriculum, like good naming and functional decomposition.

The upper division courses and capstone assessments have, unfortunately, been inadequate for any practical purpose. Put another way, the department agreed upon these objectives as guideposts, but we have not done due diligence to see if we are meeting them—much less making modifications to ease us toward better performance. I think a CS student in any program can look at this list and think of how conventional higher education privileges some aspects over others. Our own two-semester capstone course should be a proving ground for practically all of these, but we lack meaningful assessment data to draw defensible conclusions. Note that your particular question about a "tipping point" should be measurable if we had a clean mapping of these objectives to particular courses and curricular activities, but right now the best we have is anecdotes.

There's another, more holistic way to look at your question, and it tends to be the approach I favor despite its being arguably less measurable. Students who have taken my CS222 class should be familiar with the Dreyfus Model of Skill Acquisition. It is a descriptive model for any set of skills, where aptitude can be measured in categories of Novice, Advanced Beginner, Competent, Proficient, and Expert. Anyone can become a Novice with very little effort, and rising to Advanced Beginner generally takes some education and practice. However, a defining characteristic of Advanced Beginners is second-order ignorance. That is, they don't know what they don't know. This aligns with the psychological phenomenon of the Dunning-Kruger effect, where Advanced Beginners misclassify themselves as Experts. Overcoming second-order ignorance allows one to truly become Competent, which includes a measure of intellectual humility—a recognition of how much more there is to learn. That, then, is my shorter answer for what it means to earn a bachelor's degree in Computer Science: students should have climbed the Peak of Mount Stupid and come back down to recognize the need for lifetime learning.

Incidentally, I learned about the Dreyfus Model by reading Andy Hunt's Pragmatic Thinking and Learning. He and I had an email exchange about it, and we agreed that, broadly speaking, we should expect a bachelor's degree to produce Competent skills, a master's degree to produce Proficient skills, and a doctoral degree to produce Expert skills. In practice, I don't think that higher education has assured that this is the case, but I think it's a worthy consideration if nothing else.

What makes an internship worthwhile?

This is another great question. Let the record show that I have never had the responsibility of deciding whether or not a student internship is worthy of course credit, so I am speaking from a position of educated opinion and not (as above) as an academic trying to do a competent job.

The conventional structure of higher education is one of analysis: the breaking down of complex wholes into manageable parts. You can see this from course and textbook organization all the way up to administrative organization. It is a convenient model for many reasons, although it falls into the fallacy that my analysis should work for you. All analyses are really design projects that have to be evaluated for fitness. Turns out, grouping academics by similarity of department works well for peer evaluation in conventional situations. It doesn't always work though, such as with interdisciplinary problems like the game design and development work that I do: some of that work looks like Computer Science, some looks like humanities, some looks like communications, education, art, etc. Who should evaluate whether or not I am fulfilling my scholarly role in such cases? The administrative structure imposed by the analysis becomes dysfunctional.

Let me bring this back to internships. Higher education structures are pretty well ossified. There are a few calls to enhance multidisciplinary education, such as our own campus' laudable immersive learning projects, but at the end of the day, it's the administrative structure that determines your budget, and it's the budget that allows you to operate. So, a good internship is one that exposes a student to a different analysis—a different way of breaking down a complex world into workable parts. In a way, it's like the idea of having two majors or picking up a minor that is different from your major: learn to see things in a different way.

There's a secondary goal of an internship, which is to engage in legitimate peripheral participation in a field where you would like to work. There can be enormous advantages to this, but I think a lot of them deal with the property I mentioned above: seeing people doing their practice teaches you about the culture of that practice. Students, by and large, are watching faculty do our practice, but our practice is really strange. There are relatively few people who do the work of higher education—especially in higher education. [rimshot]

Anyway, that's what entered my head when I read your question. Take it for what it's worth. If I had the chair's job of evaluating whether or not an internship was worth major credit, I would probably have to deal with things like authentic mentorship and learning objectives, but I think I can get away with my more epistemological stance for now.

What are your thoughts on AWS (Amazon Web Services). It seems like a lot of companies are transitioning to AWS, but do you think this is what we are transiting to as an industry or just the latest bandwagon a lot of people are jumping on?

This is not in my field of technical expertise, though I've read some and talked to alumni and friends doing this kind of work. It seems to me that there are great gains to be had by adopting microservice architectures, from a maintenance point of view in particular. Distributed processing can lead to significant performance gains. Also, there's a very high initial cost of setting up a proprietary, globally-distributed network of reliable servers. If someone else (Amazon, in this case) has already done that work for you, then it seems reasonable to rely on their expertise here. So, it appears to me to be a natural transition rather than simply a bandwagon. The people I know who are doing it are being very careful, from a business and technology point of view. Of course, there are implications for security and accountability as well.

As for me and my projects, we choose Firebase.

That's the end of my responses. I hope that sharing them here was of interest to my readers. Feel free to leave a comment to share your thoughts, as always.

Thursday, October 18, 2018

Gaslands crafting, Round 2: Cars and Templates

We had so much fun making our first round of Gaslands cars, we decided to do another round—and this time, even my wife joined in! A friend of hers responded to our request for spare Matchbox and Hot Wheels cars, and she generously gave us a huge bin of them. As before, almost all the modifications are based on 3D-printed weapons and bits from Fun Board Games.

We'll start with my cars, which I can tell you a little about, and then we'll run through some of the other family-crafted cars.

I started with a pair of cars intended to be used as a Miyazaki team, using the sample team from page 38 of the rulebook. This team is noticeably different from the rest in that it has no shooting weapons: all the cans are spent on driver perks and performance cars except for one dropping weapon. I wanted a clear team color scheme for this pair of cars, and I ended up going with simple racing stripes rather than more fanciful embellishments. The cars were originally going to be called "One-Stripe" and "Two-Stripe", but I decided to go with "Ichi" and "Ni" instead. (Also, now that I upload the photos, I see they are a bit warm. I don't think I want to re-shoot all the cars, so I'll just leave them as-is. It's only repainted toy cars, after all.)


Honestly, I'm not happy with them. While painting them, I was eager to see what I could do with the stripes, and so I did not pay enough attention to highlighting the blue. I also wanted them to look sleek and new, so I did not do any weathering on them. The result is just bland. This is a shame, since it turns out they are really fun to play as a team. The stripes make it hard to go back and touch up the blue, so I would have to really repaint the whole body to revise it.

My other two cars are not really designed to go together; they are just a pair that explore different things I wanted to try. I've read about modelers and miniature kit-bashers talking about using styrene to make custom pieces, and so I finally bought myself some styrene sheets and tubes. I built the mortar on this truck by cutting up two machine guns from my 3D-printed set, using the back of one as the back of the mortar and the fronts as struts. The barrel is a styrene tube with accents created from—you guessed it—a slightly larger styrene tube.

I didn't have much of an idea of how to paint it, so I figured I'd try camouflage. The tone isn't quite right, but I like to think the gearhead who was painting it didn't know any better anyway.

This last car of mine was just having some fun with styrene blade accents. The spikes on the wheels were from the set I bought, but the hood blades were hand-crafted. I also tricked out the exhaust a bit, after seeing how my son (below) had done his. All of our cars are much more tame than some of the post-apocalyptic stuff that hobbyists produce; it might be fun on my next set to do lots of armor plating and mesh, but for now, I'm happy making them look good and getting them to the table.

My wife decided to join in this round of car creation. Here are hers:

She was cautious about shades and weathering, but I think she did a fine job. The complementary orange and blue works really well on the white car in particular. Funny story: I saw a work-in-progress of the bottom car and told her I was excited to see someone do a white car. Turns out, it was just primer and she was planning to paint it a different color. My wife has only painted a handful of miniatures with me, such as Valeros for our Pathfinder Adventure Card Game campaign, but I think she has a real talent for it. Maybe we will do more couples' painting someday.

Next up are the cars by #1 Son (11), who had developed a Mishkin team and wanted them to have a uniform look.

In our first painting session, he had done the cars and the windshields in nigh-identical blacks, and I don't remember now if I commented on that or not. I think he did a great job bringing out the windshields, making them appear a different material from the rest of the car. He did ask me about lightning as he was getting into it, and I suggested a layer of white first, so that the blue would pop. I think he did a nice job with this, layering in multiple blues. Also, the edge highlighting on his cars is honestly better than my blue team.

The other funny story here is that I saw which cars he was working on, and he referred to them as "performance cars", which are a specific designation in Gaslands. I laughed and pointed out that these Matchbox cars were not sports cars, they were just sedans. He seemed to not really know the difference, but I thought about it, and it's not like we talk about cars. The rulebook doesn't have pictures. How else would someone infer what the rules were talking about? Anyway, I am not sure if it was in response to my comment or not, but he went in and added the extra exhaust pipes, making them out of nested styrene tubes similar to my mortar. I am pretty sure his idea for this technique came from my working on the mortar, but I ended up liking his exhaust pipes so much that I copied it on my orange car above (sans nested tubes).

Now we get to #2 Son (8), who was making a Slime team. He hasn't really read the rules, but he liked the idea of a team called Slime that focuses on causing chaos. His team is called "The Exploding Sharks," and here they are:

Notice the shark-tooth motif on the hoods. He was intentionally going for bold, crazy colors here. I do wonder what his cars might look like if he was trying to do a serious paint job. Sounds to me like we need to carve out more family painting time.

#3 Son (5) got involved as well, and I was really surprised that his cars were much less gun-encrusted than the first round.

It's kind of hard to look at them and make sense out of what he was trying to do, but it's all very deliberate. Indeed, it's interesting to watch him and his younger brother work with such attention and focus, yet without any clear plan. It is as if all the focus is going into an understanding of the materials as they are, rather than any concern for the outcome-as-desired. Maybe I'm making that up or projecting, I don't know, but it makes me curious about children's art education philosophy.

Finally, #4 Son (3) was happy to work with us to build and paint some cars as well.

Personally, I love the machine gun going sideways off the bus. We're putting machine guns on cars, so why not?

Here are a few pieces of terrain that I put together as well.

The top two are simply large templates cut from black craft foam and grey felt for oil slicks and smoke, respectively. The other two are mines and caltrops and required a bit more work. My wife picked up several small containers of beets at the grocery store, and the tops had a section of plain, clear plastic that looked just right for Gaslands' small template. The mines are buttons whose holes I filled with Milliput and then painted to look dirty and metallic. If you look closely, you can see that there's still some unfortunate putty texture to the top, and this was after I added several layers of gloss varnish to try to smooth out the surface. Good enough for tabletop work, but not as nice as I had hoped for.

The caltrops were the most labor intensive, but also the most fun. I took unused staples and clipped them in half with my wire cutters. This leaves two nice L-shaped bits, and so I glued these together with Goop. That was tedious, because I had to hold these tiny little asterisks still while the glue cured, but it was a good excuse to listen to a podcast. Once that was done, I decided that I wanted them to have a painted look rather than a raw metal look, so I primed them black and painted them a gunmetal color, with a wash to add grime and shade. Once that was done, my plan was to glue them to my small beet-top templates, but holding them next to the cars, I thought the scale looked wrong. I went back with my wire cutters and clipped off about a quarter of each leg. The resulting caltrops are what you see here, superglued to the templates.

The cars were actually completed sometime in early September or late August, judging from the dates on my photos. I started writing this post weeks ago, but despite my claim to the contrary, I was thinking about reshooting all the cars under better lighting conditions. This means that the cars have been all over my painting desk since, and the semester has been beating me down, so I haven't done any painting since then. I have some pieces primed that I'm just not excited about, but I happened to receive a shipment earlier this week that is giving me some excitement to paint; watch this blog for future details.

My Miyazaki team has faced down my sons' Mishkin and Slime teams in two rounds of a planned three-round escalating tournament. So far, Mishkin has won each time, so the scores are 10-6-2. Essentially, Mishkin's tournament victory is a fait accompli, but we're still having fun. The escalating part of the season—where you add ten cans to each team between games—is the least interesting part to me, but I like the tournament format with random scenario selection. Turns out Mishkin can really crank out the points in Saturday Night Live.

Thanks for reading. As I said, the semester has been wearing me down, but I think we're transitioning into a quieter period, with all my conferences behind me and students switching into project mode. I have a research protocol that a colleague and I are working on that still needs to be submitted to IRB, and there's still plenty of work to do on our curriculum. I also just found out that I'll be teaching a new course next semester: our graduate-level introduction to software engineering. It's not what I would have chosen for myself, but I'll do my best to make it good. Expect a tilt toward agility.

Sunday, October 14, 2018

Notes from Meaningful Play 2018

One of many effective learning techniques that my students don't use is transcribing their notes. I was reminded of this when I took many pages of notes at the 2018 Meaningful Play conference. I decided it might be fun to share my list of odds and ends here. Perhaps there will be something here that the interested reader might learn from, or at the very least, it gives me a chance to create a digital archive that I can return to.

Tracy Fullerton mentioned a book called Wanderlust: A History of Walking. That could be interesting.

She also mentioned Situational Game Design, whose abstract claims that it is about analyzing games as player experiences rather than systems. This could also be good, and it makes me wonder how it compares to Chris Bateman's writings about games as being constituted of player practices, and to the systems approaches that I tend to like from folks like Koster, Cook, and Burgun.

Fullerton was making a case that games should be about interesting situations, and that this should include meditative play. She gave an example of a meditation-reward system in Walden: having Thoreau pause and look over a scenic setting rewarded the player with a little narration. To me, this raises the question: who is meditating, the player or the character? She gave several examples of games that included this kind of mechanism, but there weren't any compelling counterexamples given. It left me with a sense that maybe I didn't understand her, or maybe she assumed the listeners were already on the same wavelength. A particular call was made to move "beyond mechanics, beyond systems" and toward "a series of meaningful situations" in game design. The problem I have with this is that having Thoreau narrate his meditative experiences is system design. I didn't have these thoughts fully assembled in time for the Q&A, and I did not have a chance to talk with her later in the conference. I appreciate her sharing her work-in-progress ideas about where to take game design. That is, after all, a lot of what I do here.

Well Played has a CFP about intergenerational gameplay, and the idea is for the intergenerational players to also coauthor an article about the experience. I'm thinking of trying this with my sons, probably the oldest one.

Ann Arbor is known as a fairy town, with little hidden fairy doors throughout the city. People geocache with these doors too. I need to tell my colleagues at Minnetrista about this, in case they don't know about it already.

The city of Brussels has a tour based on comic art. I need to send this to Easel Monster.

Is reading Finnegans Wake worth it? Paul Darvasi says so, and he's one of the coolest scholars I know. He also said you can't "read" it, but rather you "study" it, time and time again. It sounds fascinating; maybe it's like the English literature version of Gödel, Escher, Bach.

I heard someone refer to "gamification" as "punish by reward", referring to how it uses extrinsic motivations to diminish actual learning results. I need to keep that in mind for this workshop I've been asked to run at BSU about game-based learning.

Eric Zimmerman seemed to me to be separating games, systems, and play in his keynote. Part of what gave this away was a slide with those three words on it, and that he talked about them as separate. I had a hard time with this, since I see them as not just interconnected, but essentially the same—at least when done well. I asked Darvasi about this and he pointed me to Zimmerman's Ludic Century manifesto, which covers the same ground and, fortunately, is shorter than Finnegans Wake. As with my earlier notes, I don't have all my ducks quite in a row here yet, but let me share the gist of it. This is informed, without a doubt, by Cockburn's Heart of Agile philosophy. There are systems in nature that are studied by natural scientists. Other systems arise from human behavior, and the intentional ones are studied by the sciences of the artificial. A good designer must recognize that the system they design fits into a bigger system, and that people have experiences before and after, and sort of parallel to, their systems. These systems are like software: we design them as specifications, but they have dynamic behaviors. We design them statically and hope for the dynamic behavior we want. To design intentionally for humans requires accounting for human nature, which is playful; put another way, if you design for humans and you don't account for their playfulness, you are not doing good design. I think this is the right idea but I need to work on the articulation. I am very grateful to Andrew Peterson for listening to me as I tried to sort out my thoughts about this, and who constructively disagreed with me, and Mars Ashton, who said essentially, "Yes, it's all just design."

One of the best presentations I saw was by Sandra Danilovic. She organized a game jam for people with a variety of disabilities and welcomed them to make games about their disabilities if they desired, and she conducted a qualitative study around the event. One of her findings she called logopoiesis, which was essentially about the healing power of computational thinking (as I understand it from the presentation). I asked her if this particular factor was tied into the fact that these people were making games versus, say, film or poetry. Turns out she herself had a background in a variety of arts, and she said two particular cases in her study dealt with the challenges of problem solving through programming and with the meditative act of arranging pixel art. These things she found in her analysis to be separable from the other characteristics. I thought this was fascinating: there is a lot of rhetoric about the power of computational thinking (more rhetoric than empiricism, but maybe that's unavoidable), and I have said for years that computing is really a new liberal art. This was the first time, though, that I have come across the therapeutic value of it, as related to its poesis (the making of a thing that did not exist before—a definition I had to check because it's not in my usual lexicon).

I learned about the game Night in the Woods, which sounded quite interesting. The more I heard about it, though, it also started sounding more and more nihilistic. I'm not sure if it should go on my to-play list or not. The speaker also recommended Wandersong, which I also didn't know much about. I seem to be behind the times on trendy indie games. Heck, I just finished Bard's Tale IV, a sequel to a game from thirty years ago, so "behind the times" may actually be generous.

Someone recommended I check out Mark Rosewater's Ten Things Every Game Needs. They said it was a good summary of ideas, well articulated though not groundbreaking. I did not write down who made this recommendation and cannot remember now. I just scrolled through, and it looks reasonable.

I had a great conversation with Andrew Peterson during a "dinner break" when neither of us felt like going and getting dinner. He shared with me several interesting ideas he uses in his game design class. First, he has completely "flipped" the class. They do readings and preparation on their own, and class sessions are almost entirely devoted to teams working on their prototypes. The teams are randomly assigned themes from a brainstormed list. His students have a week 12 ship date, at which point they have to have their materials sent off to Game Crafter. The physical prototype that arrives is what he then grades. This front-loads the work into the earlier part of the semester and allows for more slack time for the students in the last three weeks, when their other classes tend to build to fever pitch. Peterson also mentioned that as an instructional designer and game designer, what he likes to do is ask faculty what they don't like to teach from a course: identify the variables, determine which can be turned into a game. This also might be useful for me as I start prepping for my own campus presentation on games in learning.

I met Chris Totten, who said the most perfect and quotable thing over breakfast: Games should be good. He is clearly a like-minded individual.

Kate Edwards supposedly has an excellent talk about imposter syndrome. I think this must be it. I was at a table with some very talented people, all of whom had stories about how they themselves had been touched by imposter syndrome and how they knew some of their heroes also did. This is interesting by itself, but one also mentioned how he teaches his game design students about imposter syndrome and the Dunning-Kruger effect. He has them do a short jam of sorts to show off their skills to their cohort, after which it is easy for people to feel outclassed. He introduces these topics then, and he reminded our table that students generally are unaware of these phenomena. It made me think, I should do something like this in many of my classes as well.

Henry Petroski has a book about the pencil. That sounds amazing.

The Death and Life of Great American Cities sounds like a great book about urban planning. Some colleagues were very excited about this work, enough to make me think it could be worth looking into.

A friend told me about a talk that Jesse Schell gave at GDC about grant funding in which he set a $50 bill ablaze. I was able to reach out to him for the link, and I look forward to watching it later.

A speaker happened to mention that he uses one-page design documents successfully in an upper-division game design and development class for Computer Science majors. This surprised me, since the times I've tried it, I've found the assignment confounded by my students' lack of visual communications and document design skills. It sounds like they are doing the designs on whiteboards, but they're also iterating on them during the semester. I sent out an email asking for more information last night, and this morning—as I continue this lengthy post—I have already received a response. They have their students start with whiteboard designs and make them progressively more refined during the semester. I also see that I think we were using different terms for the same thing. They are drawing upon David Osorio's style, where a "one-page" is more of a pitch document, whereas I use the term drawing upon Stone Librande's, where it replaces a design document. What they're doing in Osorio's style, which incorporates images and two-dimensional design, I have my students do using Tim Ryan's concept document format, which privileges text.

One of the best talks I saw, both in terms of content and form, was by Jessica Hammer. Her work was inspired by research findings that students don't get better at giving feedback during the semester if the skill is not taught. Hammer presented her EOTA model for helping students give feedback to their peers after playtesting. "EOTA" is an acronym that walks through the kinds of feedback that should be given in sequence, the expansion being: Experience (I thought..., I felt...), Observations (I saw...), Theories (Therefore...), and Advice (You might...). She described a learning experience where the designer had to remain silent during the whole feedback process, although she amended that to include that the designer could say "Thank you." I look forward to reading her whole paper once the proceedings are up, because I think this is the kind of thing I can bring into many of my classes to help students learn how to give and receive feedback.

Another wonderful pedagogic structure she described was to have students write down their favorite snacks on a survey at the start of the semester. Then, when a team does really well, she can bring in those students' favorite treats in honor of their accomplishments. This is very clever, since it doesn't put anyone on the spot: the students who did well know it was them, and everyone gets to enjoy the treats. I need to keep this in mind as I'm planning next semester's courses, although maybe this will have to wait until Fall 2019 just due to the teaching I expect to be assigned for Spring.

A keynote speaker mentioned the concept of design fixation, which is when one gets caught thinking of an object only for its conventional purpose. I jot this down here just because it's a useful term that I'm not sure I would have thought of, had I been looking for it.

I met Derek Hansen, who is teaching cybersecurity and using games, so I just sent him some info about Social Startup Game, which Kaleb Stumbaugh and I created as a research and design project a few years ago. I had to look up where we presented our findings besides the S2ERC Showcase, and it turns out that was Meaningful Play 2016. Unfortunately, when you look for the proceedings of this conference, they are nowhere to be found. I have emailed the conference organizers a few times over the past two years to ask about it, in part because I am so happy with the evaluation that Kaleb and I conducted. Still, however, no paper in the proceedings. A copy of the paper is available on the project site, however, so interested readers can check that out. (I forgot it was there, so in fact I just rebuilt the paper from LaTeX sources to send to Hansen. Oh well.)

A speaker strongly recommended Kurt Vonnegut's Player Piano, which I am sure I've never read. Maybe that would be a good piece of fiction to offset all the non-fiction building up in my reading list.

A keynote was given by Katherine Isbister, whose work on wearable technology was not previously known to me. I am sending her name to some friends studying HCI.

A speaker brought up the idea that karaoke-style games can help people learn to correct their pitch. It made me wonder if there are good ones that my kids can use. Some of them clearly love to sing, but I find it hard to teach them to hear where the notes go.

Sissy's Magical Ponycorn Adventure. Gunman Taco Truck. Ian Schreiber has been kind enough to talk to me quite a bit about "fam-jams," or family game-jams, and these two games are great inspirations. I think I already wrote about how my oldest son made two complete games at the last Global Game Jam, whereas mine barely worked. It inspires me to find opportunities to do some more creative collaboration with my kids. Chief among his tips is that when working with your children, you should do what they say—not necessarily what they want, but what they say. Watch this blog for details.

I overheard a friend mention a book called something like "Just Keep Writing," and he said that although the book was about creative writing, it could just as easily be applied to games. I am not sure what the specific book was, but I'm sure he's right. It brings up a thought that came up all during the conference for me: I should make more.

In her keynote address Saturday morning, Diana Hughes from Age of Learning brought up mastery learning. This is the idea that students may not move forward in the curriculum until they have shown that they have mastered antecedent work. It struck me that this is essentially a tech tree or a skill tree in games: you cannot build the next thing until you build its predecessors.
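The analogy can be made concrete in a few lines of code. This is only a toy sketch of my own, with hypothetical skill names; it is not from Hughes's talk. A skill (or curriculum unit) can be started only when all of its direct prerequisites have been mastered, exactly like unlocking a node in a tech tree:

var prerequisites = {
    'variables': [],
    'functions': ['variables'],
    'loops': ['variables'],
    'recursion': ['loops', 'functions']
};

// A skill is unlocked only when every direct prerequisite is mastered.
function canStart(skill, mastered) {
    return prerequisites[skill].every(function (p) {
        return mastered.indexOf(p) !== -1;
    });
}

So canStart('recursion', ['loops']) is false, but it becomes true once 'functions' has also been mastered.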

I attended a workshop about The Agile Teacher. It is a game created to help teachers—particularly new teachers—to explore active learning techniques. The game presented each group with a context, ours being a mathematics seminar with 10 or more students. Each group presented their findings to the rest of the room. By design, groups were supposed to have a context in which no one at the table was an expert. The designers explained that they had seen cases where the one person at the table who is from that area would tend to dominate the conversation. That makes sense. However, I also noticed (and shared with the designers) a fascinating phenomenon: without pedagogical content knowledge, every group designed activities that represented stereotypical understandings of the domain. The groups that had drawn Math as their domain reverted to a high-school understanding of math as essentially computation. The groups that drew Science worked on techniques to help students memorize taxonomies. Computation and taxonomies are both parts of math and science, but they are ancillary. It's not exactly a flaw with their game, since it was doing what it was designed to do (namely, foster conversation), but it was an interesting phenomenon nonetheless. It made me think about how I have seen people at my institution pigeonhole other faculty because they think they understand the others' domain. Maybe it's an instance of good old, "You're a Computer Science Professor? Can you help me with my printer?"

The final session I attended was another workshop, this one given by the aforementioned Andrew Peterson about his game, enRolled. The game is designed for new college students, to get them thinking about the impacts of their decisions as students. His motivation was excellent: as an adjunct, he was handed a course where he was supposed to lecture new students about how to succeed, and he realized that lecturing about these topics was of limited value. Peterson then worked with his students to design the first incarnation of a game to convey similar ideas. The difference was that students would engage in meaningful and authentic discussion around how to codify things like drinking and studying as game mechanisms, whereas they were hesitant to do so in a dry lecture. Very clever. He said a few times, "The game sucks," pointing to its dependence on random events and lack of balance. However, he was also clear that it does what it was designed to do: foster important conversations. This is an interesting dovetailing into my previous notes, reflections, and conversations around what design really is. In this case, Peterson has a playful conversation generator that happens to be a game, and so its fitness function is not about balance but about authenticity of emergent conversation. The game can be purchased one-off from The Game Crafter, but he also mentioned that he hopes to run a Kickstarter to print many copies at once and therefore reduce the price.

There was one particular story that Andrew shared with me that I want to capture here, so that I can re-tell it to my game design students. I hope that he doesn't mind my doing so. When he was working with his students on enRolled, the task was to determine the relative negative points of drinking and drugs. Andrew started with the idea that drugs would be worth -10 and alcohol, -2. His students, however, disagreed, justifying their position by the prevalence of alcohol abuse. Very few of them knew someone who dropped out of school because of drugs; they all knew several who had dropped out due to alcohol. What a great example of how game design gave rise to new insights and conversations!

That's the end of my notes. I've shared here almost everything within my pocket notebook, just skipping some of my notes about questions to ask presenters that turned out to be not worth sharing here. Other interesting things happened during the conference as well, but my goal here was not to write a conference report but only to assemble the thoughts that I wrote in my pocket notebook. (Variations on these notebooks have been in my pocket ever since reading Pragmatic Thinking and Learning, by the way.) I am tired out from the conference and feeling a little apprehension at the coming week, knowing that I now have to catch up on a backlog of tasks. If you have made it this far through my notes, know that I'm happy to discuss any of these ideas with you in the comments or future communications. Many thanks to the organizers of the conference for such an inspiring event, and thanks to those attendees who shared their knowledge, wisdom, stories, and advice.

Saturday, October 6, 2018

Custom validation and scoring of the Creative Achievement Questionnaire in Qualtrics

The Creative Achievement Questionnaire (CAQ) is an instrument developed by Shelley Carson to measure a person's creative achievement. I came across it years ago when doing some pilot studies on the impact of creative achievement on learning Computer Science. In fact, for years I have been pointing nascent scholars to the paper by Carson et al. as an exemplar of careful attention to reliability and validity in social science.

A colleague and I thought it would be interesting to revisit some research questions around creativity and computer science education. I'll leave a discussion of the full study design for another time. For today, I want to focus on how I was able to adapt the CAQ for delivery in Qualtrics. Last time I used the CAQ, we deployed it on paper and scored by hand, which was fine for a small number of participants; now, we want to scale it upward. Additionally, we want each participant to know their CAQ score at the end. It took several hours of working with Qualtrics to figure out a way to do this.

The CAQ has several sections that look like this:
A. Visual Arts (painting, sculpture)
_0. I have no training or recognized talent in this area.
_1. I have taken lessons in this area.
_2. People have commented on my talent in this area.
_3. I have won a prize or prizes at a juried art show.
_4. I have had a showing of my work in a gallery.
_5. I have sold a piece of my work.
_6. My work has been critiqued in local publications.
*_7. My work has been critiqued in national publications.
The idea is that you mark the numbers that are applicable to you. Scoring a section in the simple case is easy: simply sum up the numbers of the items. For example, if you have taken painting lessons (1) and people have commented on your painting talent (2) then your score for this section is 3. If that's all there were to it, it would be simple to use Qualtrics' embedded scoring system, which allows for a numeric value to go with each option. The trick, however, is that asterisk on the last line: that means that you have to mark the space with the number of times this has happened and then multiply that by seven. So, if you have had your work critiqued in national publications twice, that contributes 14 to your CAQ score.
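Before getting into the Qualtrics mechanics, the scoring rule itself is easy to sketch in JavaScript. The function name and the input shape here are my own invention for illustration, not anything from the CAQ or Qualtrics: each checked item carries its value, and a count that is 1 for ordinary items and the respondent's written-in count for starred items.

// Sketch of the CAQ section scoring rule: sum the values of the
// checked items; a starred item contributes its value multiplied
// by the count the respondent wrote in.
function scoreSection(checkedItems) {
    // checkedItems: array of {value: n, count: c}
    return checkedItems.reduce(function (sum, item) {
        return sum + item.value * item.count;
    }, 0);
}

For example, painting lessons (value 1) plus comments on talent (value 2) scores 3, while being critiqued in national publications twice scores 7 × 2 = 14.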

I assembled a survey that looks like this:
The editor made it easy to add the text field to a checkbox, but what I need to be able to do is figure out what number was in the textbox and what numeric entry it is in the list of options, and then also to store that product—along with the sum of the other values—into an embedded variable.

I found the documentation for the Javascript API to be a bit opaque, in part because it assumes you already know their internal vocabulary for surveys. Matt Bloomfield's Medium article helped me to make some more sense out of how custom scripting in Qualtrics works. Using Chrome's developer tools, I was able to poke around in the preview view of the survey and find that the text field above has HTML like this:

<input aria-describedby="QR~QID7~8~VALIDATION" class="TextEntryBox InputText QR-QID7-8-TEXT QWatchTimer" data-runtime-textvalue="runtime.Choices.8.Text" id="QR~QID7~8~TEXT" name="QR~QID7~8~TEXT" style="width: 150px;" title="My work has been critiqued in national publications this many times:" type="text" value="" />

That id looks interesting, doesn't it? Its parent looks like this:

<label for="QR~QID7~8" id="QID7-8-label" class="MultipleAnswer" data-runtime-class-q-checked="runtime.Choices.8.Selected"><span>My work has been critiqued in national publications this many times:</span></label>

It appears that the text field shares an id prefix with its parent, and that I can identify a text field by looking for the suffix "~TEXT". Additional experimentation confirmed that the number just before that is the sequence number of the option within the question: this particular item is the eighth option, since CAQ items are numbered in true computer science fashion as 0–7.

After much experimentation, I was able to come up with one reusable Javascript function to score each section of the CAQ. It makes use of the fact that jQuery is built in but unconventionally invoked through a variable named, appropriately, jQuery. This allows me to search for an id by suffix, which I did not previously know was possible. This function was added to the "header" section of Qualtrics Look and Feel for the survey to ensure that it was included on each page.

 function scoreCaqSection(section, context) {  
  var selections = context.getSelectedChoices();  
  var score = 0;  
  var questionId = context.getQuestionInfo().QuestionID;  
  for (var i = 0; i < selections.length; i++) {  
      var selector = '[id$="' + questionId + '~' + selections[i] + '~TEXT"]';  
      // Check if there is a text entry associated with this option  
      var textField = jQuery(selector);  
      if (textField.val()) {  
       // A starred item: its value is multiplied by the count entered  
       score += (selections[i] - 1) * textField.val();  
      } else {  
       score += selections[i] - 1;  
      }  
  }  
  Qualtrics.SurveyEngine.setEmbeddedData('CAQ.' + section, score);  
  console.log('Score for ' + section + ': ' + score);  
 }  

The last line of the script is setting an embedded data variable, so by the end of the 10 parts of the CAQ, there will be embedded variables called CAQ.A, CAQ.B, etc., that hold the CAQ component scores. Each individual portion of the CAQ can therefore include a simple custom script like this:

 Qualtrics.SurveyEngine.addOnPageSubmit(function(type) {  
      scoreCaqSection('A', this);  
 });  

The call indicates what section of the CAQ is being scored, which allows for proper storage of the partial score, as well as the question object itself. The final page of the survey then can use this awkward embedded expression computation to get the final score:

$e{e://Field/CAQ.A + e://Field/CAQ.B + e://Field/CAQ.C + e://Field/CAQ.D + e://Field/CAQ.E + e://Field/CAQ.F + e://Field/CAQ.G + e://Field/CAQ.H + e://Field/CAQ.I + e://Field/CAQ.J}

Yes, I did work for a little while on automating that expression to contain a loop instead of hardcoded values, but I realized I was spending an order of magnitude more time on that than by just typing them all in.
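For what it's worth, the hardcoded expression is trivial to generate with a few lines of ordinary JavaScript run outside of Qualtrics; this is just a convenience sketch, not a Qualtrics feature:

// Generate the embedded-data expression for sections A through J,
// rather than typing out each e://Field reference by hand.
var sections = 'ABCDEFGHIJ'.split('');
var expression = '$e{' + sections.map(function (s) {
    return 'e://Field/CAQ.' + s;
}).join(' + ') + '}';
console.log(expression);

Pasting the printed string into the final page gives exactly the expression above.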

The final piece of the puzzle was to ensure that what the users enter into the text fields is actually a number. Qualtrics provides custom validation options, but I was surprised that it does not have a built-in option to check that a field is a number. I needed to combine this with the idea that a number needed to be in the field only if the option was checked. Qualtrics will ensure that if you type in a box, the option is checked, but not the other way around. The custom validation for part A therefore looks like this:

This is checking that either the option is checked and there is a valid number in the field, or the option is not selected. Most of the cases are fairly simple, but the portion with multiple starred items requires more individual steps. We currently have a GA stress-testing our Qualtrics implementation to ensure that the validation and computation are correct, and this frees us up to work some more on the IRB protocol.
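The actual check is configured through Qualtrics' validation interface rather than in code, but the logic it expresses can be written out as a small predicate (the function name and shape are mine, for illustration only): the field passes if the option is unchecked, or if it is checked and holds a whole number.

// Validation logic: checked implies the text field holds a
// non-negative whole number; unchecked always passes.
function fieldIsValid(isChecked, text) {
    return !isChecked || /^\d+$/.test(text);
}

So an unchecked starred item passes with an empty field, a checked item with "2" passes, and a checked item with "twice" or an empty field fails.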

This was my first experience with Qualtrics, since I usually go for the much simpler Google Forms when I need to collect some data. It clearly has some more advanced features, and the custom scripting option is powerful, although it was slow going for me to make sense out of it. There's relatively little chatter in the forums I could find about adding custom Javascript, and the perfunctory nature of the API documentation makes me think that this is not a feature frequently used by Qualtrics' customers. Since it took me several hours of work to get this far, I wanted to take a few minutes this morning to write up what I have and what I learned. Perhaps then, next time future me needs to remember how to do this, I can simply end up back here and figure out what I used to know. It wouldn't be the first time my blog has helped me with that.

Friday, October 5, 2018

Specifications grading and wiggle room

I have been using specifications grading in my game programming class. I just finished evaluating my class' third mini-project, and now I am encountering some interesting problems. I want to share them here, in part to be able to review my notes later, and in part to see if my readers have any thoughts or advice.

First, a very brief background. Specifications Grading is the name of a technique popularized in the eponymous book by Linda Nilson. I actually used this approach several years ago in CS222 without giving it a clever name, but I abandoned it when I had a student tell me that he gave up on the course material because he knew he could not get to the level he wanted. That was just one case, though, and as my plans for Game Programming make clear, I decided to give it another shot. You can find all the specifications for the four Game Programming Mini-Projects on the projects page of the course site.

In the first two rounds of mini-projects, I had some cases where I ran into subtle problems with the specifications. One specification requires that students follow our established naming conventions. The problem arises: what should I do if a student misnames one asset? Is one violation enough to say the whole specification was failed? Clearly not, but how many then? I am thinking about recasting such elements into something like "No more than two violations of the style guide." The problem then is, of course, that I have to actually look for them and keep track, doing the kind of bookkeeping that good specifications should make unnecessary.

A similar problem came up in the project report requirements, where I require that students describe the third-party assets they used and the licenses under which they are using them. It turns out that my students were either unbelievably lazy about this or really didn't understand the requirement. After the first Mini-Project, I encouraged them to take this more seriously; after the second Mini-Project, I realized I needed to intervene. I gave a class presentation about IP and licensing, including specific examples of violations from student work. I thought they should have learned this before a junior-level elective in college, but maybe they didn't; however, even in the third Mini-Project, I had students doing this wrong. I made the project report a low-grade specification: a student needs to have a well-formatted project report in order to get a C, but "well-formatted" includes all this licensing info. According to the specification, students who do not track the licenses properly should get a D. Is that right? Maybe, maybe not. I am thinking of separating the specifications for the project report to make this more generous to the students who really haven't yet internalized concepts of intellectual property.

Another area where students are having trouble is in working with Perforce. I sympathize: it took me some time to make sense out of it, and I had the benefit of working with many kinds of version control systems. It doesn't help that Perforce's nomenclature of "depot" and "workspace" is idiosyncratic. Having a working version of the project in our Perforce depot is a D-level requirement: fail that, and you fail the project. However, many students demoed games in class that are clearly not what they submitted. I am torn on this one: it's a clear, explicit D-level criterion that "Project is correctly configured on the depot so that a new client provides a runnable game." Students are turning in project reports with that box checked, but I doubt they have actually verified that this is the case. Indeed, one student even submitted a project report with that box checked and emailed me to say that he had trouble with the depot before submission. Hence, while I am sympathetic to the frustrations of learning new version control systems, I have very little tolerance for carelessness and none for deception.

If you look over the specifications, you will see that they simply define levels D, C, B, and A. In the report, students are supposed to tell me what grade they earned. My intention behind this was twofold: first, it would make them doublecheck the specifications and reflect on what they have done, and second, it would make my grading easier. I am surprised by how many students, in their reports, will make a claim like "I earned a B+" or "I earned at least a C because I worked hard on this." Neither of these is in line with the specifications system at all. It's really not clear to me where they get these ideas—whether they are reading into the rules something that's not there or, possibly more likely, they are not reading the rules at all.

My friend and colleague David Largent has been using Nilson-style Specifications Grading for several semesters, and so I look forward to picking his brain about some of these issues. He deploys a system called "Oops Bits" wherein students can get another chance if they misunderstand or misrepresent a specification, but I don't know much more about the system than that. I am thinking of sending out an email to my class to give them some kind of timeboxed period in which they can deploy an "Oops Bit," e.g., if they realized that the game they demoed was not properly submitted to the depot. The obvious negative here is that then there's no lesson really learned: I have to grade their work again, when they already claimed in their reports to have met that criterion.

As an aside, I require that the project reports themselves be written in either HTML or Markdown. I am surprised how few students are fluent with one or the other of these. Like understanding intellectual property, it seems to have become a major unexpected learning objective of the class that students understand plain text formats. I provided the students with a Markdown report template for Mini-Project 1, and yet many of them manage to create non-standard or nonsensical reports. I wonder if I can easily modify the Javascript that creates the specifications articulation on the course Web site to automatically generate downloadable Markdown templates for each separate project, which would potentially reduce students' copy-paste hassle and, ideally, provide a more convenient way for students to fill in the blanks and meet the report specs correctly.
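Generating such a template would not take much. As a rough, entirely hypothetical sketch (the function, the specification format, and the checkbox style are all made up, not the actual course-site code), something like this could turn a list of specifications into a downloadable Markdown report skeleton:

// Hypothetical sketch: build a Markdown report template from a
// project name and a list of specification descriptions.
function markdownTemplate(projectName, specs) {
    var lines = ['# ' + projectName + ' Project Report', ''];
    specs.forEach(function (spec) {
        lines.push('- [ ] ' + spec);  // checkbox for each specification
    });
    return lines.join('\n');
}

Students would then download a template with one checkbox per specification and fill in the blanks, rather than copy-pasting from the course site.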