Wednesday, May 16, 2018

What would I do in a semester? What would I do in a year?

This evening, I have the pleasure of attending a meeting to discuss the state and future of the Virginia Ball Center for Creative Inquiry at Ball State University. Long-time readers will remember that I was a fellow in Spring 2012, mentoring the multidisciplinary undergraduate team Root Beer Float Studio, who created Museum Assistant: Design an Exhibit. We transformed the upper floor of a mansion into an independent game development studio with a unique academic mission. This was the most important and defining experience of my career as a professor; there's a real sense in which everything I have done since then has been an attempt or experiment in recreating parts of the truly immersive VBC experience. In preparation for tonight's meeting, I was sent the recently-completed external evaluation of the center. Reading this report reminded me how superlative the VBC is: it is truly exceptional, unique in higher education. I have been telling people since then that if I could run projects like that every semester, that's what I would do.

Along with the external evaluation, I was sent this question to consider:
If you were given an entire semester, or even an entire academic year, to teach/work on/investigate anything with students, what would it be? What would you do with your time? Why?
My first reaction when I saw this question was, "I would do what I did in Spring 2012, but I would do it even better." The overall scheme from 2012 was a good one: I designed a one-week intensive introduction to educational game design, and we moved very rapidly into an agile approach for building and testing prototypes. With the scheduling freedom that the VBC provides, we were able to couple our production work with reading groups and academic inquiry. I brought in speakers from across campus to talk about game design, games journalism, and storytelling. I spent most of my time embedded with the team, mixing roles of mentor, coach, and producer. Since 2012, I have learned even more about all of these roles, which is why I think I could do it even better now than I did then.

I went back and read the question again, and that's when I saw the "or even an entire academic year" clause. In one semester, a student at the VBC might earn 15 credit hours, something like a minor in game design and development. This gives a good academic-conceptual bounding box for what students might be able to do after completing the semester: they should have a firm foundation, but they should not necessarily be expected to develop proficiency, maybe not even competency, let alone expertise (drawing upon the Dreyfus model definitions). In two semesters, that's potentially 30 credit hours, and it starts to look like a small major. There could be real opportunities not only to make grand mistakes, but to learn from them and then do something even better. I believe in the university as a "safe fail" environment—it's good that students can learn through their failures here, because that's essentially why they are here. With more time, though, you increase the chances of making something with extrinsic value. Whether that means widespread dissemination, commercialization, or formation of LLCs, I don't know. Fundamentally, if I had a year at the VBC, I would follow the same kind of pattern I did before, but the scale and scope would be roughly doubled.

The question leads into my broader ideas about how higher education could be reformed. Fundamentally, the best part of undergraduate education is inquisitive students working with active scholars in interesting contexts. Everything else is artifice. A colleague and I wrote about this in an internal report some ten years ago: in order to innovate, the university needs a skunkworks where students and faculty can engage in collaborative reflective practice. This kind of talk makes the bureaucrats nervous, but their concerns, though legitimate, are accidental to education: credit-hours, accreditation, core distributions, majors, etc. By contrast, the essential issue is guiding students to recognize truth and beauty. That's where I like to spend my effort.

In the next two days, I have the Security & Software Engineering Research Center Showcase, the VBC dinner, and a Google I/O Extended meetup. Should be a thought-provoking few days of "break"!

Thursday, May 10, 2018

A Reflection on Spring 2018 Human-Computer Interaction (CS345/545)

I started a narrative approach to a CS345/545 (Human-Computer Interaction) reflection yesterday, and it came out really negative. It was honest, but too negative—that's no way to be. I'm going to try again today, but with a different format, and see if I can make it both shorter and more constructive. Let's pull a trick from Sprint Retrospectives and start with...

What Went Well

Controlling scope. There's a lot that could be covered in an intro HCI class, and the conventional textbook approach sacrifices depth for breadth. Put another way, it sacrifices understanding for recognition. I wanted my course to center on a few fundamental principles, and ours ended up being Don Norman's Seven Principles of Design (from Design of Everyday Things: Revised and Expanded) and the Double Diamond design model. We also reviewed the importance of model-view separation and layered software architectures, although in no more detail than I would cover in CS222. I had hoped to have more time to talk about software architectural issues, but seeing the students struggle with the other topics, I pulled back on this.
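For readers who haven't been through CS222, here is a minimal sketch of what I mean by model-view separation, in plain Java. The Counter and CounterLabel classes are mine, invented for illustration; they are not from any course project.

    // Model: pure domain logic, with no UI imports at all.
    class Counter {
        private int value = 0;
        public void increment() { value++; }
        public int getValue() { return value; }
    }

    // View: presentation only. It reads from the model; the model never
    // knows the view exists, so either can change independently.
    class CounterLabel {
        private final Counter model;
        public CounterLabel(Counter model) { this.model = model; }
        public String render() { return "Count: " + model.getValue(); }
    }

The payoff, as I tell my students, is that the model can be tested without any UI running at all.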

Focus on principles. Similar to the point above, I had to remind myself several times during the course that it is not really about how to design a user interface, but about the principles of human-computer interaction. There's a difference here, I believe: we could spend a lot of time on issues like font choice, spacing, and the use of tools to aid in design. We didn't, though, which meant we could talk at a higher level of abstraction and not get lost in pixels and palettes.

Allow for failure the first time. The students completed a small project before Spring Break, a project that was essentially a small version of what we would do after break. It made them put the design principles into play within the double diamond context. They almost all did badly, from an objective point of view, but this was a success from a pedagogic point of view. This showed them that it's a different thing entirely to claim understanding vs. to apply knowledge in context.

Socratic Day. There was one day where I was feeling quite frustrated about my students' inability to show empathy for each other and for me, and so I ran about twenty minutes of class via the Socratic method, starting with the question, "What do you think I see?" We touched on a lot of interesting ground here. Interestingly, they did not really come up with the answer I had in mind, which was "The backs of laptops and the tops of heads." I don't think I've ever gone Full Socratic (tm) on my students before, but it's something I need to keep in mind, especially if I am feeling upset or disoriented.

The "A" Group. Despite my frustrations with the course, there was one team of guys who attended practically every meeting, most of them completing all assignments satisfactorily, paid attention and asked questions during class, followed instructions and applied what they read during class activities, and produced a good and thoughtful final project. None of them had significantly different prior experience from the rest of the class, and not all of them earned stellar grades in the prerequisite courses. This tells me that what I asked the students to do was on target for those students who were on point, if you don't mind mixing metaphors.

Many small assignments. I set up an aggressive schedule of reading and crafted in-class activities to support them. I needed to make sure students were keeping up with the pace, so I set up a series of assignments to be done before each class for the first half of the semester. This worked in terms of keeping people together: I could tell that almost all the participants in class had done the preparatory work. During Spring Break, as I reflected on what I had seen in the first half of the semester, I carried this model over to the second half as well: when there was a day that I needed students to have something particular prepared, I set up an assignment for it. The assignments were graded rather generously by an undergraduate grader, but that generosity was fine since the assignments were more about keeping up than mastery.

I learned. I think I mentioned in my course planning post that I was wonderfully surprised by the revisions in the new edition of DOET. One piece in particular that stood out, as someone interested in methodologies, was the double diamond model. I had never deployed that myself, so I figured I would use the semester to try to understand it. I gave a wrap-up presentation in the final week of classes where I explained my understanding of and frustration with the model, putting it in contrast with Scrum and my spin on George Kembel's design thinking framework. I actually started planning out a blog post called, "The Double Diamond is Malarkey," and in doing reading and preparation for that post, I came across a different visualization than the UK Design Council's—this one from ThoughtWorks.
All the pieces fit together for me now: using this model, the iterative and incremental software development approach sits within the second diamond entirely. At first, I rejected this, since my predilection is to consider each iteration anew, with the possibility of pivoting on the problem completely. Then I realized, however, that this is exactly how I have been running my immersive learning projects! I use one semester with the Honors College to figure out what problem we can actually solve, and then that input is given to the Agile, cyclic development model of the Spring Studio course. Hooray for reflective writing!

What can be improved

One of the biggest surprises of the semester is that I was assigned to teach CS345/545 again in the Fall instead of a section of CS222. This means I have the opportunity to improve the course right away, while the ideas are fresh in my head—an opportunity for which I am grateful. Expect a "Summer Planning" post in the next few months as I sort things out. In the meantime, here are some things I can improve for next time.

Stow laptops or GTFO. That is, put your laptops away or get thine fanny out. Those blasted distraction machines are ruining our students. Attendance is not required for my class, a fact I reminded them of many times. People are clearly engaged in something else, thinking that if they sit in class they will magically collect knowledge. It's ridiculous, it's infantile, and I'm done with it. The lingering question is whether I want to incentivize the use of paper notes. For example, I could offer a grade or something like an achievement for using paper notes, or I could ask them to keep a design log in their notebooks. I need to think about the logistics of this still, but one thing's for sure: the laptops are going away.

A quick related thought: I had one guy this semester who, when I asked him and his chatty colleagues to close their laptops and join the group, did not, and instead sat in the back smugly with his laptop clearly open. The question, then, is what should I do in such a case? I don't want to play power games; that's just more juvenile nonsense that doesn't belong in the classroom. I am thinking of making it policy that I will simply leave if the rules aren't followed, which then makes it a matter of social pressure. I'm not sure how that will play out, but I feel like I need a plan so that I react appropriately.

Iterate on the final project. Now that I have a better understanding of the Double Diamond, I want to bring that out in class by having students complete short technical iterations within the context of the bigger design project. This will give them a valuable opportunity to assemble and test an artifact and get feedback about it, from both me and potential end-users. It seems simple enough, but getting this to fit into the calendar may be tricky. It's possible that a small project may not be necessary if instead we allow iteration within a bigger project.

External partnerships. Many years ago when I taught HCI with a focus on mobile app development, one of the best parts of the class was setting students up with external consultants. These were not clients but rather alumni, friends, and generous strangers who agreed to give students feedback on their work. You know how it goes, teachers: you can say the same thing a hundred times, but sometimes students won't hear it until it comes from someone else. This past semester, we were on track to have an interesting community partner for all the projects, but this fell apart in a sea of bureaucracy and red tape. As a result, the student projects were a bit "fakey". This had the immediate result that most (if not all) of the students did not conduct authentic evaluation at the end of the project. Many asked friends and family to evaluate their work, which is absolutely the worst thing to do. Setting up real partnerships would help here, as there would be someone else with skin in the game besides the students—someone with different objectives, not just getting a grade.

More check-ins on the principles. As part of the small and large projects, student teams had to submit project reports, both a draft and a final. The project reports are where students had to explain how their projects manifested Norman's seven principles. What I saw was, by and large, rationalization rather than principle-informed design. That is, students explained decisions they had already made, situating these within the principles, but it's pretty clear to me that they did not consider the principles before or while making the decisions—only after. I designed a final exam question to help students tease these ideas apart, but students who did a poor job in their project reports also misinterpreted the question itself and provided similarly superficial or unjustifiable responses. I should be able to craft additional discussions, assignments, or activities that help students frame their works-in-progress within the principles, which I hope will lead to a better understanding of them.

What still puzzles me

Graders. Since I knew I would have so many assignments, and it was going to be a busy semester, the department hired an undergraduate grader for me. She was a good student whom I have worked with before and whom I trust. However, she could not attend classes, so she had a real outsider's view of what was going on. It's still a blind spot to me whether there were opportunities to give feedback to my students that I missed because she was handling the day-to-day assignments. I asked her to report anomalies to me, which she did for most of the semester and which led to some interventions, but this died off as the semester's pressure built.

Bad UIs and Lack of User-Centeredness. As I mentioned above, we focused on principles, but the fact is that some students developed some truly hideous interface designs. Some of these were bad because of design decisions that the students made, and these carried on into nonsensical UI choices; others were bad because the layout was just silly. A lot of students used JavaFX and SceneBuilder with the mistaken idea that because they have a tool to lay out elements, they must be doing it right—a myopic, developer-centered rather than user-centered perspective. The question for me, then, is whether there is a modicum of UI design knowledge that I can help students acquire that would actually help here. My intuition says "no", that if they don't have taste they cannot develop it in the middle of an already-packed semester. My intuition has been wrong before, though. The bigger question is how to get them to focus consistently and enthusiastically on the users. I am thinking of bringing in something like task modeling from Software for Use, which I had good luck with years ago, although that book is now comically dated, with examples drawn from Windows 95.

Empathy. I wrote earlier about a particular example of how one of my students failed to show empathy, but I think this is a bigger problem—as in, a really, really big problem. If you're a junior or senior in college, and you don't know how to build empathy, what in tarnation has been going on in your core curriculum and pre-college experience? What's the point of studying history, culture, or language if you cannot put yourself into someone else's perspective? If you get to an upper-level elective on HCI, and you do not know how to have empathy for others, is it actually possible to learn it? Is it my responsibility to teach it?

There's a related and troubling problem here regarding student disabilities. The university rule is that students with disabilities should be reviewed by the Disability Services office, who will then develop accommodations for the student; faculty are given a form that indicates what accommodations are considered reasonable. In my experience, most of these are "Extra time on tests" or "Can take tests in a distraction-free environment." That's all well and good, but that office doesn't actually provide me the information I need about the disabilities that impact the work that I do. If I had a form that said, "Cannot empathize," well, I would know not to count such an assignment against the student—but as far as I know, that office has never produced such a form. It puts me into a bad situation where I have to guess at student disability, despite my having no training or expertise in this area. Yikes. There's a sense in which the lawyers are on my side: if an autistic student sued the university because they failed my course due to unsatisfactory completion of an empathy-related assignment, the university could say, "They didn't have an accommodation on file for their disability." That doesn't change the fact that such a form seems impossible to have been filed in the first place. Maybe I'll check in with Disability Services over the Summer to chat about such cases.

Wrapping up

This post went much better than the previous one. It has helped me articulate some of my ideas that can feed forward into my planning process for Fall semester. Thanks for reading! As always, if you have questions or ideas, feel free to share them in the comments section below.

Wednesday, May 9, 2018

Collaboration Station Enhanced

I am just wrapping up work on an INSGC-sponsored enhancement to Collaboration Station. Regular readers may remember the game as the main product of my Spring 2015 Game Studio course—an educational multiplayer game about the International Space Station. The game was very impressive for a multidisciplinary undergraduate game studio project, but there were a few things I have wanted to clean up. INSGC agreed to fund a project with three major goals: to align the look and feel of the game more closely with the International Space Station; to improve the quality of materials available to teachers; and to bring the game to iOS in addition to Android. Since last summer, then, I've been spending an enormous amount of effort on...
The game is currently available on the Google Play Store, and we should be within days of having it on the App Store as it winds its way through the approval process. I wanted to record a few of my thoughts about the process here, including technical notes and personal notes, because hey, this is my blog for reflective practice, after all. What did you expect? Miniatures?

Project Overview

The original Collaboration Station was written in Java using PlayN. We used Bluetooth for local multiplayer, developing our own protocol by reverse-engineering some of the tricks used in SpaceTeam. In particular, we figured out how they had used Bluetooth device name mangling to figure out who else was playing the game in local range. This worked fine for our Android version of Collaboration Station, but we hit a barrier when we tried to build the iOS version, since the particular Bluetooth function we needed to call was blocked by the operating system. (I'm still not sure how SpaceTeam got around that one.) For Enhanced, I decided to go with Unreal Engine 4 and the GameDNA Realtime Database plug-in, which brings Firebase support to UE4. I learned about this integration as a result of some conversations at GDC 2016; given my experience with Firebase and the cross-platform builds facilitated by UE4, this seemed like a great match.
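Back to the Bluetooth trick for a moment, for the curious: here is the gist of the name mangling in Android-flavored Java. The "CS:" prefix and the joinGame helper are hypothetical stand-ins, a sketch of the idea rather than our actual protocol:

    import android.bluetooth.BluetoothAdapter;
    import android.bluetooth.BluetoothDevice;
    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.content.IntentFilter;

    // Requires the BLUETOOTH_ADMIN permission for setName() and startDiscovery().
    public class BluetoothRendezvous {
        private static final String PREFIX = "CS:"; // hypothetical game marker

        // Advertise: smuggle a short game payload into the device's Bluetooth name.
        public void advertise(String gameCode) {
            BluetoothAdapter.getDefaultAdapter().setName(PREFIX + gameCode);
        }

        // Discover: any nearby device whose name carries the prefix is in the game.
        public void discover(Context context) {
            BroadcastReceiver receiver = new BroadcastReceiver() {
                @Override
                public void onReceive(Context c, Intent intent) {
                    BluetoothDevice device =
                            intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);
                    String name = device.getName();
                    if (name != null && name.startsWith(PREFIX)) {
                        joinGame(name.substring(PREFIX.length())); // hypothetical
                    }
                }
            };
            context.registerReceiver(receiver, new IntentFilter(BluetoothDevice.ACTION_FOUND));
            BluetoothAdapter.getDefaultAdapter().startDiscovery();
        }

        private void joinGame(String gameCode) { /* hypothetical join logic */ }
    }

The elegance of the trick is that discovery requires no pairing and no connection: the name broadcast itself is the rendezvous channel.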

Building the core functionality and architecture of Collaboration Station Enhanced took longer than I expected. As I mentioned in my write-up of Fall's Game Programming class, I had hoped to have the basic pieces in place early in the Fall semester so that students could write their projects as ad hoc plug-ins to my system. In fact, I was only able to implement one sample minigame—memory—in the Fall and get roughly half of the multiplayer implementation working. Some of the delays came from the fact I was using a beta version of the Realtime Database plug-in, and I did have to work with the developers on some critical defects. I also ended up having to get the GameDNA Ultimate Mobile Kit plug-in to allow anonymous sign-in to Firebase; this may now be wrapped up in the other plug-in, but it was no great concern at the time since the budget had plenty of room for such expenses.
Experiment Sorting: Memory Minigame

Working with the Student Team

I hired three undergraduates through the grant to work with me in the Fall, using the job titles Game Artist, Game Programmer, and Game Educator. I put together a good team, but there's a sense in which I was not really ready for them for several weeks. I had hoped we could work together for sustained times during the week, but our schedules did not mesh well; there was only one one-hour block on MWF afternoons when we could all meet, so we settled on that. I put the Game Programmer on the task of writing the sequence puzzle ("Simon" game), but I didn't really have the underlying systems in place yet to piece everything together. It took him longer than I had estimated to finish this task, and so we never really had any painful merges. I gave the artist some odds and ends to work on, but we never had the right workflow. Technical problems in the studio led her to work elsewhere, and I didn't realize how badly this set us up: she was working mostly independently, and she never got into the workflow of being able to add things to the game herself. That is, she was off version control and outside of UE4. It wasn't until the very end of the project that I realized she was unable to run the game herself at all. A result of all this was that she spent a lot of time on assets that we just couldn't integrate, for technical and aesthetic reasons. The game educator was the one student who worked most independently, but in truth, I let him work perhaps too independently. We sought some feedback from education experts on campus, but he mostly developed the revised Web site and teacher resources by himself. Both of these things take careful design skills, and what he made is sufficient, but I think our lack of communication contributed to a lack of polish.

Looking back, I probably could have handled some of the communication and planning better, but there's not much I could have done to improve the rate at which I completed my tasks. The students did a reasonable job of keeping up with the tasks I could think of, but often I couldn't remember what I asked them to work on! Given that we didn't have much face-to-face time, tracking tasks in something like Trello or even just a spreadsheet could have helped. The main impediment, though, was that my mind was full enough of the work that I had to do on the project, plus my other professional obligations; it was just too much to try to remember where they were as well.
Exercise: Sequence Puzzle Minigame

Game Design 

We did accomplish two significant improvements to the game design. The previous sequence puzzle was based on four somewhat arbitrary ISS factoids, chosen mainly because we could develop good images and sounds for them. We revised this into an exercise minigame, where the sequence of actions represented three authentic exercises astronauts must perform in space. The other improvement was to the sliding tile game, which used to be a single image of a humanoid robot developed by NASA for use on the ISS, but it had no contextualization—a player would have no idea what they are looking at. Now, this game is "telescope alignment", and there are four random constellations chosen from the Southern Hemisphere. We hope to inspire the player to look up more information about them by showing constellations that will likely be unfamiliar.

One significant change between the original and the enhanced game is that the original divided the minigames into "science" and "maintenance" tasks, and each level had a goal based on how many of each kind of point were needed. In post-mortem discussions, we realized that this was somewhat artificial and didn't add to gameplay. In Collaboration Station Enhanced, the minigames all generate the same kind of points, and we rely on players' curiosity to try them all.
Task Selection Screen
The one improvement we didn't incorporate was better contextualization of all the minigames. My plan was to dress up the task selection screen with a short textual description and an animated demonstration for each minigame. As the semester charged toward its completion, however, we ended up having to do something much simpler here.

Technical Notes

One of my personal goals in working on this project was to get a better understanding of the C++ bindings for UE4 and their relationship to Blueprints. The memory minigame is a bit sloppy in this regard, but as I worked on other minigames, I had better ideas about how to separate concerns. My Game Programmer student never touched C++, but this is a specific area where, if we had been collocated, I would have pushed him. Managing the logic for generating and matching sequences is straightforward in traditional text-based programming and ridiculously messy by comparison in Blueprints.
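To make that contrast concrete, here is roughly what the heart of a "Simon"-style sequence puzzle amounts to in a traditional language. This is a plain-Java sketch of the idea, not the shipped UE4 code:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    /** Generate-and-match core of a "Simon"-style sequence puzzle. */
    public class SequencePuzzle {
        private final List<Integer> sequence = new ArrayList<>();
        private final Random random = new Random();
        private int nextIndex = 0;

        /** Extend the sequence by one random step and restart matching. */
        public void grow(int numChoices) {
            sequence.add(random.nextInt(numChoices));
            nextIndex = 0;
        }

        /** Returns true if the player's input matches the expected step. */
        public boolean match(int input) {
            if (sequence.get(nextIndex) == input) {
                nextIndex++;
                return true;
            }
            nextIndex = 0; // a miss restarts the round
            return false;
        }

        public boolean isRoundComplete() {
            return nextIndex == sequence.size();
        }
    }

Each of those one-line methods becomes a tangle of nodes and wires in Blueprints, and the visual graph grows far faster than the logic it expresses.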

Speaking of which, the networking code is inelegant at best. It is split among a handful of Blueprints, and the dependencies are not always clear. At one point I tried to separate the Firebase logic from the rest of the multiplayer logic, but I'm not sure where that boundary is any more. It works, but I'm not very happy about it. First, there are no unit tests: automated testing in Unreal Engine is not well supported—at least, not as I understand it—and this left me in a very slow write-deploy-test cycle. That made me afraid to refactor because of the time I would inevitably lose: the fear that leads to bad architecture. I suspect that, like the sequence puzzle logic, much of the multiplayer logic would have been much easier to express in C++. However, despite an email discussion claiming that the 1.0 release of the Realtime Database Plugin would have documented C++ support, it's still all Blueprints.

What is good about the networking code—besides the fact that it works, of course—is that we can run multiplayer games over the Internet without needing to collect any private information. Anonymous sign-in is used together with a clever game code system that my students and I designed. This should scale up to the number of users we expect. If the app went viral, it would collapse, but it can handle dozens of games at once for sure. Well, as "for sure" as one can be without actually trying it, of course. The downside to the Firebase networking back-end is that you need to be able to connect to it, unlike the Bluetooth approach, which doesn't require any other routing. On campus, we discovered that the normal, secure Wi-Fi is fine, but if we connect to the "bsu guest" network, something filters out the Firebase communications. This surprised me, since I thought the traffic was all simply tunneled through standard web ports, but we did not have the time to figure out where exactly the packets were being filtered. We are keeping both the original and the enhanced versions on the store, though, so if someone is stuck in a school network situation where they cannot alter the filters, they can always go back to the Bluetooth-based solution in the original.
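Backing up to that game code system for a moment: the idea, roughly, is that the host generates a short, human-readable code, the shared game state lives under that code's path in the Realtime Database, and joiners who signed in anonymously subscribe to the same path. Here is the generator side of the idea in Java terms; the alphabet and code length are illustrative choices, not necessarily what we shipped:

    import java.security.SecureRandom;

    public class GameCodes {
        // Skip easily confused characters like O/0 and I/1.
        private static final String ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789";
        private static final SecureRandom RANDOM = new SecureRandom();

        /** Produce a code like "K7PQ" that one player can read aloud to the group. */
        public static String newCode(int length) {
            StringBuilder code = new StringBuilder(length);
            for (int i = 0; i < length; i++) {
                code.append(ALPHABET.charAt(RANDOM.nextInt(ALPHABET.length())));
            }
            return code.toString();
        }
    }

Since the code is the only thing players exchange, no account names or personal data ever change hands.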

It was great to work on a project where all the assets already existed. I could focus on writing the logic and simply drop in the right pieces where they needed to go: backgrounds, sprite images, music, and sound effects, already made and ready to go. The time from programmer art to production art was very low. The only exception was the screens themselves, where I would generally whip up the simplest possible UMG widget to test the flow, and only really design the screen once the flow was settled. This worked well: I didn't lose a lot of time fiddling with placements before I knew a screen was needed.
Telescope Alignment: The Sliding Tile Puzzle Minigame

Time = Money

I could not have completed this enhancement project without funding from the INSGC. This was a relatively small grant of $15,000. INSGC requires at least 50:50 cost-share from the organization, and I had worked this out with my department and the university prior to submitting the grant; the total cost of the project on the books was roughly $30,000. (Obviously, the 50:50 minimum is interpreted by the university as a maximum as well.) It turns out that for various reasons, we didn't use all the student wage allotment from the grant, and I was able to borrow several tablets from another office at the university, which meant I had a few hundred dollars of supply budget leftover as well. The total amount spent by INSGC ended up being on the order of $11k.

Many times while working on the project, though, I wondered: what did it really cost? I spent some time last summer working on the project, but we can chalk that up to professional development and preparation. So, for this back-of-the-envelope computation, we'll consider only the academic year costs. For those who don't know, a faculty member's time is generally divided into quarters; my usual assignment is three units of teaching and one unit of research during any given semester. The college prefers I charge $12,500 for a course release, so let's use that as the cost of one course-worth of my time. I actually had negotiated a lesser rate for this project to allow it to fit within INSGC's financial constraints, but we're not computing the spent cost but the actual cost. In the Fall semester, I used almost all of my research time and a portion of my CS315 teaching time on this project, so let's call that $12,500 as a low-ball estimate. I worked on it several days over the Winter break as well, but I'll also put that in the "freebie" bin, even though the project would probably have failed without it. In the Spring, I spent practically all day (that is, roughly nine hours) on this project on Tuesdays and Thursdays. I also spent at least one to two hours Monday, Wednesday, and Friday afternoons. As an estimate, then, let's say I spent 3/5 of my week on this project, meaning roughly $60k in cost. I've also been working on it since the semester ended to wend our way through the iOS process, but again, let's skip that. This means the total cost of my attention on the project, ignoring other project costs, is about $72,500, or more than double what the official budget states the project to cost.

The university would normally charge around 40% indirect costs for research projects, so if I had provided an honest budget for an agency to cover all of the costs, I would have needed a grant of over $100,000. I asked for the maximum amount that INSGC would provide for a project, and that's less than 15% of the actual project cost. I don't know of any agency that would have funded such a modest project for its actual cost. The fact remains, though, that without the small grant, I would not have been able to do this project at all.

I share this to exemplify a point that I've made before: research loses money. The academic research enterprise forces us to grossly underestimate our costs or to misrepresent what we are doing. The one piece of information I do not have is what it would cost for a private enterprise to do this work. I would love to hear from someone knowledgeable about multiplayer game development what they would have charged to do the conversion. The general rule of thumb is "industry is more efficient," but I honestly don't know how my expertise and cost compare to the game development industry's. Of course, living in Muncie has its advantages: $100k goes a lot further here than on the West Coast.
Circuit Repair: Tile Rotation Puzzle

Wrapping Up

Collaboration Station Enhanced was a great project, and I'm glad to have worked on it. That said, I'll be glad to see the back of it—a lot of other things didn't get done while this dominated my mind for the last academic year. However, I learned an awful lot about UE4 development, things I'm sure I would not have learned without being hip-deep in a development project. This investment already paid off in the Spring 2018 Game Studio, which also used UE4 but to make a very different kind of game—expect a post about that in the coming days.

It was fun to revisit this project and rebuild it, but at the same time, I felt creatively constrained. After all, I was not really creating something new from my imagination, interest, and passion: I was recreating a collaborative work but without the original collaborators. I am hopeful that my next project might allow me to deploy these skills in a new context. I do have one such proposal in the works, and it does have a more realistic budget. 

I will close by expressing my public gratitude to several people and organizations who made this project possible: Ball State University's Entrepreneurial Learning Office and Immersive Learning program, for funding the original release of Collaboration Station; Indiana Space Grant Consortium (INSGC) for funding the enhancement project; the Computer Science Department, for not just approving the project but for providing valuable space, computing resources, and staff support that allowed us to succeed; Byron O'Conner at the Ball State University Digital Corps for helping me understand the Apple development and deployment models; Kyle Parker at Ball State for generously loaning several devices that we used for development and testing; and Perforce and Epic for providing academic-friendly licenses for their Helix Core and Unreal Engine technologies, respectively.

Saturday, April 14, 2018

On Interdisciplinarity

Author's note: I was asked to write an executive summary about interdisciplinarity for an internal report regarding our strategic planning process. I share it here for your consideration and comment.

Interdisciplinarity is the response to the observation that problems don’t obey traditional disciplinary boundaries. We organize our higher education structures around disciplinary boundaries for a variety of practical and justifiable reasons, but such structures make it easy for us—and more importantly, our students—to fall prey to the fallacy that we understand the whole because we understand the parts. The real problems of the 21st century, such as ethical use of digital data, education reform, global climate change, and post-work economics can only be addressed by exploring the intersections of traditional academic domains.

In 1968, Melvin Conway observed that organizations are constrained to create systems that reflect their own internal communication patterns. This is clearly manifest in the conventional curricular structures of higher education in general and Ball State University in particular. These conventions long predate our contemporary understanding of how people learn. We know that individuals learn by connecting new knowledge into an active network of prior knowledge. We know that context matters—context that includes the place, time, community, and content. We know that learning happens when students bring all their senses and skills to bear on problems that they are motivated to solve, in teams, in connection with a network of experts, with rapid and honest feedback. Most importantly, we know that the world our students already inhabit is constantly connected, containing ubiquitous and chaotic information. An interdisciplinary approach to higher education is therefore not merely an option: it is an ethical necessity for any who think deeply about our role as educators.

A corollary of Conway’s observation is that we can change how we create educational systems by altering how we communicate with each other, and this can point us toward an enlightened future for higher education. By enshrining interdisciplinarity in our university, we align ourselves and our students toward addressing significant contemporary problems. We have taken important and pioneering steps through programs such as the Virginia B. Ball Center for Creative Inquiry and the Immersive Learning program. However, these are pushed to the periphery of the student experience rather than the center. We can instead embrace the challenge of facing interdisciplinary problems—as scholars, in our teaching, research, and service—and the strategic plan can shine light onto our path.

Monday, April 2, 2018

Burning down hours, burning up coffee

My game development studio decided, at the end of Sprint 2, to start tracking how many cups of coffee they consume from our communal Keurig each day. Truly, this is an exceptional team in terms of coffee consumption, and they recognized this might be an interesting bit of data to track. They finished Sprint 3 last Friday, so today I tallied up the results on the coffee tracker and added a new vertical axis to the burndown chart:

The careful observer will note that I had a bit of trouble placing the dots, since we're measuring cups of coffee consumed on a different scale than hours burned down. Hours remaining are counted at the beginning of each MWF meeting, so that line starts at the total original estimate and steadily falls to zero. For coffee, we're counting cups consumed each day, including the first and last days.
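If I wanted to be fussy about placing those dots, it is just a linear rescaling between the two axes, since both share a baseline of zero. A trivial sketch, where maxCups is my guess at the team's ceiling rather than anything measured:

    // Map a cups-consumed value onto the hours axis so one chart can hold both.
    // Both axes start at zero, so only the ratio of the maxima matters.
    double cupsToHoursAxis(double cups, double maxCups, double maxHours) {
        return cups * (maxHours / maxCups);
    }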

To facilitate smoother tracking for Sprint 4, I added a new right vertical axis to my template:
Time will tell if the right vertical axis values are correct.

Wednesday, March 28, 2018

Students' preference for discussion over prototyping, despite instruction to the contrary

Monday afternoon was a perplexing one, forcing me to look back at my goals and direction for a variety of reasons. What kicked it off was my one o'clock meeting with my HCI class. After Spring Break, we started in on our final project, and given their surprising reaction to our pre-break mini-project, I decided we would use the final project to gain a better understanding of the double diamond approach rather than try to introduce a different model. Briefly, before break, we did a quick run through the double diamond; in evaluating their results, it was clear that the vast majority of students did not invest the time to understand the context, let alone to identify a real problem. Indeed, what seemed to happen is that they chose to do something they could do rather than trying to solve a real problem. That shook me pretty hard—hard enough that I realized I couldn't let that be their broken understanding of the process.

We spent a week on the discovery phase, and I pushed them out into the field to talk to real human beings. Based on this, they had to make a few empathy maps and personas. They then interpreted these into journey maps, almost all of which were identical—not because of academic dishonesty but because of collective myopia. From this, they identified the problems they would work on, and that brings us to Monday: the first day of the "develop" phase, in which we would review and practice making low-fidelity prototypes.

I started by asking the class to list the tools of low-fidelity prototyping. They quickly came up with paper, markers/pens/crayons, and PowerPoint, along with other lightweight drawing tools not specifically designed for prototyping but certainly amenable to it. Their next answer was people, which surprised me but I think is appropriate. I added whiteboards, and I pointed out that one of the students had previously deployed a system specifically for UI prototyping (uxpressia by name, but that's just one example among many).

I created a second column, which—since I had sort of backed myself into a taxonomic corner—we called the "Anti-Tools" of low-fidelity prototyping. What sorts of things should we avoid? One student quickly mentioned code, which I agreed is exactly right: avoid code until it's the best tool. Another mentioned templates, which at first I didn't understand, but as he explained it, he was really talking about locking yourself into particular approaches too early: a template is a reusable abstraction, but we don't know a priori that the given abstraction is appropriate. They paused here, and it was my chance to introduce two critical ideas; indeed, the primary reason for the exercise was for me to share these two points. I added brainstorming, which required me first to define the term—I am regularly frustrated by how students want to use it as a trendy synonym for "thinking." I briefly explained to them that brainstorming in a group will tend to push early convergence rather than divergence, and that the best approach is to just start making. The second one I put up was related: analysis paralysis and discussion. A kissing cousin of brainstorming, this is a trap I have seen practically every team I have mentored fall into: thinking that sitting and talking about a problem will help solve it. It won't. Primed by these observations, I returned to the positive column and suggested that timeboxing is one of the greatest tools of creative prototyping.

With that, they voted on a 15-minute timebox, I set the timer, and they got to work.

Or something that looked like work to them, anyway.

There was one group whose only discussion was about distributing index cards—I had told them a low-fi prototyping exercise was coming up, and so a few people brought supplies—and then they set to work, cutting, drawing, crumpling. Another group did a brief powwow before going in what appeared to be a similar direction, although that might be up for interpretation. The rest of the class, roughly 70%, engaged in discussion. One group took to the whiteboard to draw something like a flowchart, the rest sat in their clusters and discussed while they drew.

I observed all of this happening, of course, and as the timer kept ticking, I kept thinking, "Any moment now they will break and start actually prototyping, right?" At about ten minutes in, it was clear that this was not going to happen, so I wrote two questions on the front center board: What makes it a prototype? and What makes it a good prototype?

When the timer went off, I invited them to look at these questions. Honestly, I wasn't very hopeful in getting good answers, based on what I saw, but we went to the board anyway. In answer to the first, a student (from that first group) said, "It's testable." That's exactly right: a prototype has to be testable, otherwise it is something else. I pointed out a secondary part of the definition is that the prototype points to a possible future. I couldn't think of the word for this, but a student suggested, "portent." In retrospect, the word is "portentous," but still, that's a 10-cent word. We moved to the second question, and right away a student (from a different group) said, "It gives you good feedback." That's the right idea here too, in my opinion, although I tell you what I told them, that I prefer to frame it as, "It answers a design question."

This is interesting, isn't it? They seem to understand the theory. I asked them, then, how many of them had prototypes from the 15-minute timeboxed exercise? The students in that first group, they all raised their hands, and rightfully so. The only other hand that went up was from one student in a group of three. I asked them to dive into this: they had all worked together, on one artifact, in discussion, during the timebox, and only one of them characterized it as a prototype. I invited them to describe their disagreement, and one of them attempted to justify that what they made was a prototype because it had arrows and indications explaining to someone how it would work. It was clear to me, and I think to the rest of the class, that this was not a prototype at all, but some kind of sketch or schematic.

My next question to them was, "What did you notice that was different about how the groups worked?" They all seemed to recognize that the people who came up with prototypes used their fifteen minutes to create their prototypes, while everyone else engaged in discussion. I explained to them again how this phenomenon was something I had seen many times before, particularly on immersive learning teams, where I advise working on prototypes and instead, students talk to each other—at length, with no real output.

I ended class, then, with a challenge: first, that they actually follow the instructions I give; second, that perhaps they consider breaking their teams in half, with half doing timeboxed prototyping and half focusing on brainstorming and discussion, and compare the results at the end. Honestly, I would rather they do the first, but I'd be satisfied if they did the second.

As this post was bouncing around my head this morning, I realized that I have seen a similar phenomenon before, regularly, in my teaching. It is the phenomenon of CS222, where I tell the students that they need to start with CRC cards, then make a list of tasks, then use test-driven development to approach those tasks. From many years of teaching the course, my best estimate of what actually happens is that teams get together, talk a bit, and then start programming. This comes, in part, from student essays admitting to it. This inevitably leads to failure of the two-week project or the first iteration, and I provide vociferous feedback about what is wrong and how to improve. The "how to improve" is, essentially, to follow the steps. Yet, students don't. Even in the third iteration, I regularly have 20-40% of teams who are still not following the steps that I have laid out and that they know they will be evaluated on. Whether or not they ever read the requirements is moot here, since they all look at the feedback I provided and claim to be seeking to improve; yet, if we judge intention by results, the real intention seems to be maintaining the status quo rather than learning something new.

I am left with a burning question about how to push my role as a mentor. Should I be interrupting their 15-minute timebox to point out what I see, in the hopes of pushing them more quickly in the right direction, or do I need to let them make this little mistake so that they can learn from it? I am afraid of them treating me like an exam proctor: if only the proctor weren't here, we could just collude on the exam and get out of here. I have been thinking about how this applies in CS222 as well, and I have been thinking about having more formal check-in points; for example, if teams had to turn in their CRC cards two days into an iteration, it would show them that I mean business. However, it would also mean that they would do it because I was collecting it and not because they should in order to learn what is being discussed.

My next meeting with the HCI class is coming right up. I think I may begin class by asking them to share the processes they used to construct their prototypes, and perhaps I will push them a bit further into a root cause analysis, to consider why they didn't follow the instructions they were given.

Monday, March 19, 2018

You gotta put down the duckie

It has been an interesting semester in my undergraduate game design & development studio. I am sure I have already forgotten more than one story that I intended to blog about, but that's just how it goes sometimes. My teaching load is actually reduced by one this semester as I work with a small team on a re-release of Collaboration Station. An unexpected result has been that all of my research time (and more) has gone into that project, and so I haven't written as much as I would otherwise like about some of the amazing things happening this semester. However, something happened in my game development studio last week that I thought was particularly notable, and so I want to quickly frame the story and share what happened. [Edit: "Quickly", you know, for my writing. I've been working on this post for some time now...]

This semester's studio follows last semester's collaborative exploration with Minnetrista, and my team is working on a game that is fundamentally about finding fairies on Minnetrista's campus. We're building on a prototype that was created by a student in the Fall: she designed a single-player mobile game in which a player finds characters on the grounds. Her game was based on the theme of imagination and creativity, inviting the player to either accept or reject fantastic elements. I worked with Minnetrista staff and my student team to adapt this into a very different kind of game: we are designing an experience for groups rather than individuals. In particular, we envision groups of explorers led by a facilitator; we are making an app that the facilitator would use to help create a great experience for those they brought to Minnetrista. These visitor roles come from a combination of existing museum theory and the particular psychographic work of our partners at Minnetrista.

We captured some of these ideas in the Spring team's vision statement, which we have hanging on a very large poster in our studio:
We are making a geolocative, narrative-rich mobile app that helps facilitators engage with explorers at Minnetrista—an app that features the varied grounds of Minnetrista's campus and the early 20th-century fairies beloved of Elizabeth Ball. The app will bring people together to be creative and engage the group in imagination and reflection.
The wall-mounted vision statement
I want to take a moment to point out how very strange this design space is. One person has the app, but that person is using it to direct the experience of other people. It has taken my student team weeks to wrap their heads around this, and indeed, I think some still have not. Essentially, the story I want to tell is about how one student finally did.

For essentially the whole semester so far, we have been working on the experience of meeting a single fairy. I explained several times that we can sacrifice scope, but we cannot sacrifice quality: if we cannot make one good fairy-finding experience, then we cannot make three, or five, or ten. The team built a minimum viable product—a proof-of-concept to explore the technology stack, essentially—and we have completed two sprints. Unfortunately, each sprint, the team dropped the ball with respect to end-user playtesting; fortunately, I think they have finally learned their lesson! The point of this is that we had a fundamental design but no authentic testing of it.

This design involved having a group of people sing for the fairy at a particularly fantastic location, and the fairy would emerge in response to the singing to befriend the players. The team has considered using the microphone to respond to singing, using a timer, or relying on self-reported completion in order to know when the group was finished singing. Paper prototyping of this idea worked fairly well, although that's a story for another day (one of the many stories I want to capture if I can push other things off my plate long enough).
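Of those three options, the microphone is the most technically interesting. Here is a minimal sketch of how it might work on Android, treating sustained loudness as a proxy for singing; that proxy is my assumption for illustration, not code from our prototype:

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    /** Requires the RECORD_AUDIO permission; call start() before sampling. */
    public class SingingMeter {
        private static final int SAMPLE_RATE = 44100;
        private final int bufferSize = AudioRecord.getMinBufferSize(
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        private final AudioRecord recorder = new AudioRecord(
                MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);

        public void start() { recorder.startRecording(); }

        /** Root-mean-square loudness of one buffer; compare against a tuned threshold. */
        public double sampleLoudness() {
            short[] buffer = new short[bufferSize];
            int read = recorder.read(buffer, 0, buffer.length);
            double sumOfSquares = 0;
            for (int i = 0; i < read; i++) {
                sumOfSquares += (double) buffer[i] * buffer[i];
            }
            return Math.sqrt(sumOfSquares / Math.max(read, 1));
        }
    }

The game loop would then count how long the loudness stays above the threshold, and tuning that threshold is exactly the kind of thing that demands playtesting with real groups.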

On Wednesday of last week, I was sitting with a student who has been actively involved in much of the lightweight prototyping process. He was wrestling with the scenario and whether or not it met our goal that it would "engage the player in meeting" a fairy. This led him to an epiphany. He realized that perhaps the app was more like tabletop roleplaying games than like conventional videogames: the facilitator was the dungeon master, and their group was the party. This gave him a new lens to consider the problems of experience design—a new, useful metaphor for framing the process. It seemed to me he had hit a point where he could now productively move forward. The fact that this took half a semester for the student who has done the most prototyping and design work on the whole team is further feedback about the strangeness of the design space, and it's also based on this that I think many members of the team may still harbor unproductive understandings of what the vision statement actually means.

That brings us to Friday of last week, when I sat with two students as they worked on design—one of them being the student mentioned above, who had been sketching screens based on the "dungeon master" metaphor. For a variety of reasons, we were looking at the fundamental group activity, replacing singing with something more like dancing. We talked about skipping as a whimsical and fun activity, and as we tried to describe the scene, the question came up, "What is the facilitator doing?" One of the students explained that if they were at the park with their mother, and they were skipping but their mother was not, they would stop. How, then, do we get the facilitator to participate in the activity as well, so that the whole group is engaged together?

Hoots knows the answer.


You gotta put down the duckie if you wanna play the saxophone.

You gotta put down the smartphone if you wanna facilitate group enjoyment.

Our design space just got even weirder. We are now investigating and designing ways to encourage the facilitator to put their phones away and join their group in fun and creative activities. I pointed out that it was sort of like pulling the trick that Undertale popularized, where the game should react to the fact that it is closed. In fact, we don't just want to react to someone turning off their mobile phone screen: we want to encourage and reward it. We're moving forward on two fronts: incorporating putting away your phone as part of meeting the fairy, and also finding ways to feed-forward the idea that, yes, this app is aware of and responds to its being closed.
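The Spring studio is building in UE4, but the underlying mechanism is easy to see in Android terms. A sketch, assuming the screen turning off is a good-enough proxy for the phone being put away; the onPhonePutAway hook is hypothetical:

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.content.IntentFilter;

    /** Notices when the screen goes dark and for how long it stayed that way. */
    public class PutAwayDetector {
        private long screenOffAt = -1;

        private final BroadcastReceiver receiver = new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                if (Intent.ACTION_SCREEN_OFF.equals(intent.getAction())) {
                    screenOffAt = System.currentTimeMillis();
                } else if (Intent.ACTION_SCREEN_ON.equals(intent.getAction())
                        && screenOffAt > 0) {
                    long away = System.currentTimeMillis() - screenOffAt;
                    onPhonePutAway(away); // hypothetical hook: the fairy reacts here
                }
            }
        };

        // These actions only work with a receiver registered at runtime.
        public void register(Context context) {
            IntentFilter filter = new IntentFilter(Intent.ACTION_SCREEN_OFF);
            filter.addAction(Intent.ACTION_SCREEN_ON);
            context.registerReceiver(receiver, filter);
        }

        protected void onPhonePutAway(long millisAway) { /* reward goes here */ }
    }

The design work, of course, is in what happens inside that hook: the reward has to make putting the phone away feel like part of the magic, not a chore.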

Regular readers may remember that the end result of last year's game design & development studio was Spirits at Prairie Creek Park [game site, blog post]. That is a game in which groups of people go to different locations at Prairie Creek Park and engage in real-world sensory activities such as touching, listening, or smelling. In that game, one person holds the app and directs the others—likely children—in what to do. There is a 30-second countdown during the activity, after which the smartphone-wielder enters the data about what their group found. In our (admittedly limited) playtesting, we saw that the one with the smartphone would stand and watch the timer count down while their group engaged with the activity. At the time, we didn't talk about what that one person was doing, but you know what? Watching a timer is not really much fun. Nor did we recognize in our design analysis that there's an opportunity for the facilitator to improve the overall group activity by pocketing their smartphone and joining in. It is fascinating to me that we never noticed this, but then again, that team also had other problems to deal with, including some significant technical hurdles to overcome.

If you know of other games that explore this design space, please share in the comments. My team and I would be glad to see how others have approached it.

Thanks for reading!