Saturday, October 15, 2022

What it means to be human

I was invited to give a talk in a series about "What it means to be human." I was asked particularly to address what it means to be human in light of technology. My talk was given last night, and what follows is a serialization of the notes I used. These notes may be useful to others, and I expect they will be useful to future-me when I need to come back and reconstruct these thoughts.

What does it mean to be human in light of technology?

Let's start by considering "technology" colloquially. When someone says "technology," most people imagine modern computing technology such as smartphones. The wonders of these devices can lead someone to believe that with the right technology, anything is possible. It turns out that is not true. While this may be the most esoteric point in this essay, I hope that it starts us off on the right foot.

One of the most foundational results of theoretical computer science is that not everything is computable. Here is an example that doesn't require you to know anything about programming other than that it is a thing people can do. Suppose you wanted to write a program that takes, as input, another program, and your program tells you whether that other program eventually halts or runs forever. This is called the Halting Problem, and it is not computable. That doesn't mean that it's hard, or that no one has thought of a solution yet: it means that we can prove it cannot be done at all. This may sound familiar to mathematicians or to those who have read Hofstadter's classic Gödel, Escher, Bach: An Eternal Golden Braid. In particular, it resonates with Gödel's Incompleteness Theorem, which proves that for any sufficiently powerful system of mathematics (one capable of expressing arithmetic), there will be statements that are true but cannot be proved within that system. You can get a feel for this by considering the problem of evaluating the sentence, "This statement is false." In a way, Gödel demonstrated that you can say something like that in mathematics.
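To make the proof concrete, here is a minimal sketch of the classic contradiction in Python. The names halts and trouble are my own, hypothetical ones: halts is the impossible program we pretend to have, and feeding trouble to itself shows why we can't have it.

```python
def halts(program, program_input):
    """Hypothetical oracle: returns True if program(program_input)
    eventually halts, False if it runs forever. Assume, for the sake
    of contradiction, that someone has implemented it."""
    ...

def trouble(program):
    """Do the opposite of whatever the oracle predicts."""
    if halts(program, program):   # would this program halt, fed itself?
        while True:               # ...then loop forever instead
            pass
    else:
        return                    # ...otherwise halt immediately

# Now ask the fatal question: does trouble(trouble) halt?
# If halts(trouble, trouble) is True, then trouble(trouble) loops forever.
# If it is False, then trouble(trouble) returns at once.
# Either way the oracle is wrong, so no such halts can exist.
```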

The important point here is that technology cannot do everything. Although this is well known among academic computer scientists, it's not the kind of thing you hear from a marketing department or a pundit who is trying to convince you that they have a panacea. Remember the original marketing campaign for the iPad? It is magic. That's a compelling message to sell products, but it's also a lie. The iPad does what it was designed to do. So does the marketing department.

Taking a step back, we can observe that humans have always been "technological" in that we are makers and users of tools. Some tools improve our day-to-day living, such as toothbrushes that are comfortable to hold. Others have had an indelible impact on human civilization: writing, money, the printing press, the Internet. What has changed in modern times is the depth and complexity of the networks that support these tools. I was able to understand this more clearly thanks to my Internet friend Chris Bateman, who helped me collect these notes. In particular, I am drawing inspiration from some insights he shares in The Virtuous Cyborg, and I am giving an overview of some of the arguments that he explores more deeply in that text.

Consider the case of an 18th-century pioneer who moves out to Indiana to make a new life for himself. Let's say he needs a wooden mallet like one we might see on Townsends. Making the mallet requires having an axe. The axehead was likely procured from the blacksmith, who acquired the iron from someone who was in contact with the miners who dug it out of the earth. At the tip of the iceberg is a pioneer holding an axe, and if we delve down, we involve dozens of people. Now imagine that you buy a mallet at Lowe's: how many people are involved? At the tip of the iceberg, it's you and the mallet at the self-checkout. Start digging, and you'll find there are thousands, tens of thousands, probably millions of people involved in bringing you and that mallet together. Consider the data moving over networks, the power that makes it possible, the politics that ensure a supply of fuel, the logistics and roads, the manufacturing processes, the taxes collected at multiple levels, and everything else that makes a Big Box store possible.

Even something as simple as a mallet is enmeshed in an incomprehensibly complex network of causes and relationships. This is modern technology. We tend to dismiss this observation as mere trivia and treat the mallet as a morally neutral object. Turns out, it's not.

To understand this point, let me get out of the handyman's toolshed and into something more comfortable for me: digital technology. When anyone writes software, they do it with an intention. There is a purpose behind this work, and that purpose becomes embedded in the thing. In philosophical terms, the tools possess moral agency. This is a powerful idea whose implications still have my own mind spinning.

Don't confuse moral agency with free will. An agent is a thing that takes an active role in producing an effect. It doesn't have to "choose" to do this. Indeed, the argument I am advancing is that this happens necessarily and automatically. If I spend the weekend creating a game for a game jam, that thing has agency in producing an effect in the player. The same argument can be made for the mallet. Consider that if you have a mallet, pounding things becomes a reasonable answer to many problems. To put it in technical language, the tool reconfigures the moral space around us. Mallets push us toward pounding things. Facebook pushes us toward doomscrolling. (Once again, I am indebted to Bateman for some of his excellent examples, such as ultrasounds and drones, which produce significant effects in our moral decision-making.)

A story from this past week will illustrate my point. I was recently at the International Conference on Meaningful Play. One of the keynote speakers was Heidi Boisvert, who talked about her research into how people consume media. She records people as they watch screens, using cameras to capture their emotional responses. The reason she does this is so that her collaborators, who work across a broad range of technologies, can design media that is more effective at promoting a social justice agenda. For our purposes, it does not matter whether you agree with her goals or not. My point is that shifting moral considerations is the goal of this work. It is not a conspiracy theory: it is modern technology.

There are brilliant people working for media and technology companies whose job it is to diminish your moral agency, although they would probably say "improve engagement" instead. Consider, for example, the use of intermittent variable rewards. We have known since B. F. Skinner's time that intermittent rewards are better than regular rewards at hooking someone into a desired behavior. Slot machines are the example par excellence, but we see this technique across all of our apps and services. Keep scrolling through Facebook, Instagram, your news feed, or your Netflix recommendations: you never know if the next thing is going to give you a dopamine hit, but maybe it's the next one, or maybe it's the next one. This is not just a rewards card that gives you a cup of coffee after you buy ten. This is Jimmy John's giving you unpredictable surprise rewards, if only you'll buy just one more sandwich.
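As a toy illustration of the difference between these schedules (a sketch of my own, not anyone's actual reward system), here are a few lines of Python contrasting a fixed-ratio reward, the coffee card, with a variable-ratio reward that pays out at the same average rate but unpredictably, the slot machine:

```python
import random

def fixed_ratio_reward(action_count):
    """Predictable schedule: a reward on exactly every 10th action
    (the coffee punch card)."""
    return action_count % 10 == 0

def variable_ratio_reward():
    """Unpredictable schedule: each action has a 1-in-10 chance of a
    reward (the slot machine). Same long-run rate, different pull."""
    return random.random() < 0.1

# Both schedules pay out about 10% of the time over many actions,
# but only the variable one keeps whispering "maybe the next one."
actions = 10_000
fixed_hits = sum(fixed_ratio_reward(i) for i in range(1, actions + 1))
variable_hits = sum(variable_ratio_reward() for _ in range(actions))
print(f"fixed: {fixed_hits} rewards, variable: {variable_hits} rewards")
```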

It is important to remember that there are ethical ways to develop, deploy, and use technology as well. This is a recurring theme in my classes, where I try to help students understand the importance of empathy in the design process, of evaluating one's impact, and of taking responsibility to act ethically. More broadly, not every micropayment is unethical, and technology-mediated engagement with a supportive and creative community can be a boon. However, it may not be obvious which technology is pushing us toward ethical behavior and which is not. This is why it is important that we recognize our human failure modes. We all succumb to intermittent variable rewards. We all fall victim to cognitive biases such as confirmation bias and the fundamental attribution error. These, too, are part of being human. When we recognize that we have a nature, and we recognize that technology is not neutral, we position ourselves to consider how to live well with modern technology.

I conclude that our response must be to cultivate habitual and firm dispositions toward doing the good, lest the impact of technology overwhelm our capacity for moral agency. More succinctly, we should practice virtue. I believe that the traditional cardinal virtues of prudence, fortitude, temperance, and justice are up to the task. Temperance ensures that we do not spend inordinate time and resources in our relationship with technology. Fortitude gives us the courage to avoid immoral uses of technology regardless of their popularity. Justice ensures that all are given their due in the production, use, and distribution of technology. Prudence remains the queen of the moral virtues, the intellect informing the decisions that drive our actions.

Technology cannot do everything, and technology is not neutral. Yet, part of what it means to be human is to be integrated with technology. To live a good life requires that we be free to choose the right moral path, and we are best prepared for this if we arm ourselves with knowledge and cultivated virtue.
