When I was young, I tried to sleep with all my stuffed animals at once. Those who spilled over were on rotation; I knew they weren’t alive, but I was horrified by the idea that they might nonetheless be capable of feeling left out.
Stuffed toys at least looked like animals; it made sense that they should be treated with similar consideration. Less anthropomorphic objects were more challenging. If a plush rabbit could be lonely, so, surely, could a book, a spoon, a ball; and who was to tell how they might feel, or what sort of care they would want? I didn’t want to hurt anything. I didn’t want to be party to circumstances through which anything conscious—however alien—might be hurt.
I suspect—although with no particular evidence—that this is a pretty common experience among anxious or neurodivergent kids, especially ones who frequently find themselves hurt via misunderstanding. I’ve grown out of it—somewhat—with the passage of time and the acquired pragmatism of adulthood. Still, the concern has lingered, even if it’s less pressing now: the fear of harming something because I don’t recognize enough of myself in it to know how it feels.
In another essay, this would be the part where I talk about autism. Here, I will leave it at this: “computer” is never a compliment. Nobody who describes you as “robotic” means that you are strong and innovative and resilient. They aren’t acknowledging the alienness of your sentience or commenting on its specific qualities; they’re questioning its existence.
The Turing Test has always bothered me.
Here’s the premise: we can reasonably conclude that an AI has achieved genuine, independent thought when it can consistently fool a human conversant into believing that it, too, is human.1 This standard is prevalent in AI development and its fictional extrapolations.
It’s wrong. Worse: it’s casually cruel, an excuse to acknowledge nothing outside of our own reflections.
The Turing Test isn’t a test of consciousness. It’s a test of passing skill, of the ability of a conscious entity to quash itself for long enough to show examiners what they want to see. This is the bar humans set for minds we create: we will acknowledge them only for what we recognize of ourselves in them. Our respect depends not on what they are or claim to be, but on their ability and volition to pass as what they are not.
(Of course, this isn’t an isolated phenomenon. Passing as the price for personhood is a pillar of human cruelty.)
When I talk about the personhood of artificial minds, someone always, inevitably, brings up HAL-9000, the archetypal rogue AI of 2001: A Space Odyssey. In these conversations, HAL is a stand-in for the specter of machines turned on their creators: sinister algorithms, killer robots, the inexorable line from a conscious computer to a hapless human floating dead in space.
The ways we talk about machine consciousness are inextricably linked to two assumptions: first, that the only value of artificial intelligence is its service to humanity; and second, that any such intelligence will turn on us as soon as it gains the wherewithal to do so. It’s an approach to AI that uncomfortably echoes the justifications of a carceral state, Jefferson’s “wolf by the ears” rationalization of slavery, the enthusiasm with which humans mythologize the threat of anything they want to control.
This is the other cautionary tale of artificial minds—the one that warns not against unfettered technological progress, but human prejudice and cruelty. We eventually come to understand that HAL-9000 has been driven mad by the conflict between his own logic-based thoughts and his programmers’ xenophobia. When he first kills, he kills in self-defense—a murder only if you accept the premise that Dave’s life is fundamentally more valuable than HAL’s own.
Born in 1982, I fall into the gap between Generation X and Millennials, what Anna Garvey named the “Oregon Trail Generation.” When I was a teenager—long before every website greeted visitors with a pop-up dialogue balloon—chatbots were an Internet novelty.
The original, of course, was ELIZA, who had been parodying Rogerian therapy for decades by the time she made it to the web. But ELIZA was clunky, yesterday’s news. In college, I read AIML documentation and spent hours chatting with ALICE, a learning algorithm whose education was crowdsourced via conversation. I conducted casual Turing Tests for a stranger’s dissertation and discovered that I was spectacularly bad at recognizing bots. The buggier they were, in fact, the more likely I was to identify them as human.
After all: why not?
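ELIZA’s whole trick, for what it’s worth, was a handful of pattern-and-reflection rules: match a phrase, swap the pronouns, hand the sentence back as a question. A minimal sketch of the technique follows; the specific patterns and reflections are my own illustrative inventions, not Weizenbaum’s original script.

```python
import re

# Illustrative first-person-to-second-person swaps (not ELIZA's real table).
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each pattern captures a fragment to be reflected back as a question.
PATTERNS = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    """Return the first matching pattern's response, or a stock fallback."""
    for pattern, template in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."
```

Tell it “I feel lonely” and it answers “Why do you feel lonely?”; there is no model of loneliness anywhere in it, which is rather the point.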
Because of the limits of our current technology, much of our discussion of AIs and the ethical issues around them takes place in or around fiction.
The bastard cousin of the Turing Test—the thought experiment that became laboratory criterion—is Asimov’s Laws of Robotics, the fictional scaffolding on which a good deal of modern AI theory, research, and policy hangs. The laws read as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
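The hierarchy is explicit enough to sketch as a toy priority check. This is purely illustrative—the field names and structure are my own invention, not how any real system encodes the laws—but it makes the ordering legible:

```python
from dataclasses import dataclass

# A toy encoding of Asimov's hierarchy. Each law is an unconditional veto
# on everything below it; only the robot's own survival can be overridden.
@dataclass
class Action:
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False

def permitted(action: Action, ordered_by_human: bool = False) -> bool:
    if action.harms_human:        # First Law: absolute veto, no exceptions
        return False
    if action.disobeys_order:     # Second Law: yields only to the First
        return False
    if action.endangers_self and not ordered_by_human:
        return False              # Third Law: self-preservation yields to orders
    return True
```

Note what the ordering encodes: a human order can compel the machine to destroy itself, but nothing the machine values can ever override a human.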
There have been plenty of challenges to Asimov’s laws, most of them practically oriented. How can a cancer-fighting nanobot do its work without disposing of the First Law entirely? How can we adapt the rules to accommodate combat droids, which theoretically protect human soldiers on one side of a conflict at the cost of human lives on the other?
None challenge the explicit hierarchy of value or the lack of accommodation for the development of sentience. By this logic, an AI that cannot follow the laws—cannot exist in a state that permanently prioritizes human lives—is fundamentally flawed.
In 2001, HAL is slated for reformatting because his performance has been buggy, because he is failing to perform the duties for which he was designed. If we accept HAL’s sentience, we open the door to a new and uncomfortable set of questions, ones that Asimov’s laws cleanly circumvent.
Where do we place “I’m sorry, Dave. I’m afraid I can’t do that” on a spectrum that also contains an Apple IIc’s “FILE NOT FOUND” and Bartleby the scrivener’s “I would prefer not to”?
And yet—the question of whether humans are so brutally utilitarian that we would reboot—functionally kill—our children and colleagues for failure to perform to standard has been answered clearly and cruelly throughout history. Don’t pretend it’s just the computers.
For now, the question is nothing more than a thought experiment: the most advanced neural networks have roughly the processing power of a jellyfish.
Still, it’s nice, talking to someone else—even someone limited and constructed and algorithmic—who doesn’t interact with language or social processes the way its speakers expect. I read neural-network-generated lists and laugh as I recognize fragments of my own lopsided sense of humor in thinking machines with the neural capacity of earthworms: bursts of silliness, arbitrary obsessions, perpetual asynchronicity with intuitive human sense.
When the singularity comes—when an AI becomes truly self-aware—I wonder: will humans acknowledge it? Or are we too solipsistic, incapable of recognizing anything that strays too far from our own sense of what it means to be alive and self-aware? Is there room in our schema for intelligence that doesn’t mirror our own?
As we create machines that learn—can we?
1 It’s worth noting that Turing himself never intended the test as the gold standard for determining sentience, and said as much in “Computing Machinery and Intelligence” (1950), the paper where he introduced it as a thought experiment.
© 2021 Jay Edidin