(Author Ada Hoffmann’s debut novel, The Outside, was released by Angry Robot Books on June 11, 2019, and can be found at all major booksellers.)
As an AI researcher, I find myself continually correcting people who joke that my programs will one day take over the world.
It is literally impossible for my programs to take over the world.
My programs write poetry. It’s not even very good poetry. They write by shuffling words around, comparing them to human sources, and then trying to measure their properties. After five years of PhD-level research, they still can’t write anything coherent. This is because they don’t “understand” – as a typical human defines that word – what they’re saying.
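To give a sense of how unglamorous this kind of system is, here is a toy sketch of the "shuffle words, then measure their properties" approach described above. It is an illustration only, not the author's actual research code; the word pool and scoring heuristic are invented for the example.

```python
import random

# Toy "shuffle and score" poetry generator: produce random word
# sequences, rate each with a crude heuristic, keep the best one.
WORD_POOL = ["silver", "moon", "whispers", "over", "the", "sleeping",
             "sea", "cold", "light", "drifts", "through", "broken", "glass"]

def score(line):
    """Crude 'property' measure: reward alliteration, prefer five-word lines."""
    words = line.split()
    alliteration = sum(1 for a, b in zip(words, words[1:]) if a[0] == b[0])
    length_penalty = abs(len(words) - 5)
    return alliteration - length_penalty

def generate_line(pool, tries=200, rng=random):
    """Shuffle candidate word sequences and keep the highest-scoring one."""
    best, best_score = None, float("-inf")
    for _ in range(tries):
        words = rng.sample(pool, k=rng.randint(3, 7))
        candidate = " ".join(words)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best

print(generate_line(WORD_POOL))
```

Nothing in this loop "knows" what a moon or a sea is; it only compares surface patterns against a numeric score, which is exactly why such systems cannot plot anything, let alone world domination.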
AI these days does some very impressive things, granted. It drives cars. It builds cars. It also suggests polite replies to your Gmail messages and guesses what you might want to buy on Amazon. And it does these things by being very, very good at recognizing patterns. Sometimes creepily good.
It’s no surprise that humans, confronted with this creepy accuracy, start to worry. We have lots of books and movies about AI rising up against humans, destroying us accidentally or developing their own consciousness and becoming a new, hyper-logical class of being that is, functionally, a god. (Heck, I’ve even written that story – The Outside, available 11th June from Angry Robot Books.)
But any AI researcher will tell you that these stories are still a very long way off, and in many cases they are functionally impossible.
Pattern recognition is very successful right now, but the other traits of “strong AI” that we see in science fiction – consciousness, understanding, thoughts, feelings, opinions – are problems that computers haven’t even begun to crack.
When an ad service can guess what product you’ll want, it’s not because the service is particularly intelligent. Rather, the “creepy” feeling comes from the fact that the service is recognizing straightforward patterns in private information about you – everything your apps and websites have access to, as detailed in the long, complex privacy policies that nobody really reads. It’s information you would likely never divulge to a human.
That doesn’t mean that the service understands what it’s seeing from you, let alone that it could develop a plan to break free and destroy the world. Really, it doesn’t even know what a “world” is.
This is not to say that AI is always safe. But the real dangers posed by AI are way less sexy than a robot uprising. They include things like racism being inadvertently encoded in systems that are supposed to make “unbiased” decisions about humans. Military robots being used to absolve humans of decisions about killing. And a shrinking job market in Western countries thanks to increasing automation.
There is certainly a science fiction aspect to these more realistic problems, but we are culturally and emotionally attached to the other, more apocalyptic ideas. You can blame SF books and cinema for that (myself included, I guess!). But these stories exist for a reason. They are necessary. And, at their heart, they have nothing to do with computers. They are actually much, much older than technology.
They’re the same stories we’ve been telling for thousands of years, as a way of exploring our feelings about what it means to be human. They ask the tough questions to help us examine our own way of living.
For example: The story of an innocent AI who becomes conscious and interacts with her creator is the story of Pygmalion and it asks: What does it mean to create? In what sense are our creations “real”? In what sense do they have a life outside us? The story of a runaway AI that turns the whole world into paper clips is the story of The Sorcerer’s Apprentice. It asks: What happens when we are given power, without the wisdom to understand the consequences?
The questions go on and on. Stories about sentient sex robots discuss the idea of sex work, sex slavery, and the messy ways that human sexuality intersects with power. The story of mistreated robots who turn on their masters is the story of the Slave Revolt, which is both literature and literal history. It asks: Who do you have power over? How are they affected by that power? Do they deserve the power? Do you? (The very word “robot” was first used in SF by Karel and Josef Čapek, and it comes from a Czech word for forced labor.)
And Technological Singularity stories – the ones in which AI systems exponentially improve until their thinking ability has vastly surpassed anything a human can do – are stories about gods. They pose questions we have long asked: What does it mean for a being to be greater than we are? What would such a being look like? What would it think of us? Would it serve us, reward us, or punish us? Would it help us ascend to its level? Would it think of us at all? What do we deserve from such a being? Who are we ultimately beholden to?
The technical plausibility matters less than the questions it raises, and humans have been asking these questions for as long as religion has existed. If we cannot trust our traditional gods, then we simply turn to science tropes instead. We take our concerns about gods and consciousness and justice, and we dress them up in a robot (or alien) costume to make them palatable.
Whether they are benevolently guiding and improving humanity, meddling and manipulating, distant and uncaring, or in the process of judging whether we deserve to exist at all, super-intelligent AI are put into the roles that ancient humans would have assigned to powerful spirits, angels, demons, and gods. And the responses that we have as humans to these roles – from adoring obedience to violent rebellion – are as diverse as the authors who write them.
It doesn’t matter that these stories aren’t scientifically likely. They’re stories we have a need to tell. And they’re stories we will keep telling, with whatever symbols and characters make sense at the time, long after AI in the real world has its day.
Ada Hoffmann’s debut novel, The Outside, was released by Angry Robot Books in June 2019. She is also the author of the collection Monsters in My Mind and of dozens of speculative short stories and poems, as well as the Autistic Book Party #ownvoices review series. Her work has been long-listed for the BSFA Award for Shorter Fiction, the Rhysling Award, and the D Franklin Defying Doomsday Award.
Ada is a computer scientist at a university in southern Ontario, Canada, where she teaches computers to be creative and undergraduates to think computationally about the human mind. She has also worked professionally as a church soprano, free food distributor, and token autistic person. Ada is bisexual, genderfluid, polyamorous, and mentally ill. She lives with her primary partner Dave, her black cat Ninja, and various other animals and people.