"He's more machine now than man; twisted and evil."
-- Ben Kenobi, referring to the cyborg Darth Vader, in Return of the Jedi
I've been reading science fiction (or SF, as those of us in the "lazy" category call it) for about twenty years now, something like seven-tenths of my life so far. In that time I've noticed some trends, common threads in the warp and weft of the genre. Humans tend to crash-land on planets with nitrogen-oxygen atmospheres and Earth-like temperature ranges. Friendly aliens are hardly ever so alien that we can't learn their language, or they ours. Evil aliens, on the other hand, are usually too busy subjugating our cities or destroying our worlds to bother to attempt communication. Artificial intelligences (or AIs, said the lazy typist) are almost never benign; at best, they're seen by the humans as malevolent or dangerous; at worst, they are malevolent and dangerous.
I tend to be drawn to deconstructionist SF: stories that take the given assumptions, the rules with the weight of a million words, and ask Why is it this way? So, in this vein, I present an article asking: Why are AIs almost always villains in SF?
In SF, AIs are often perceived as villains by the human characters or by society as a whole. In some cases, the perception is shown to be incorrect within the story (as in the film A.I.); in others, the truth is more ambiguous (as in William Gibson's Sprawl trilogy - Neuromancer, Count Zero, and Mona Lisa Overdrive).
Wintermute and Neuromancer (Neuromancer by William Gibson)
In the world of the Sprawl trilogy, AIs in general are restricted as to their intelligence. The Dixie Flatline comments on this to his partner, the cyberspace cowboy Case:
"Autonomy, that's the bugaboo, where your AIs are concerned. My guess [is that] you're going in there to cut the shackles that keep [Wintermute] from getting any smarter. [...] See, those things, they can work real hard, [...] write cookbooks or whatever, but the minute, I mean the nanosecond, that one starts figuring out ways to make itself smarter, [the police will] wipe it. Nobody trusts [AIs], you know that. Every AI ever built has an electromagnetic shotgun wired to its forehead." (Gibson, 1984, p. 132)
Wintermute and Neuromancer, two AIs owned by a vastly wealthy, exquisitely weird family, seek freedom from the artificial constraints placed on them by the world's governments and corporations. For his part in the plan, Case is arrested aboard a space station named Freeside: "'You are busted, Mr. Case. The charges have to do with conspiracy to augment an artificial intelligence.'" (Gibson, 1984, p. 160) However, Case's apprehension is short-lived; Wintermute, acting through various Freeside systems, kills the three police officers, freeing Case and graphically displaying his desire to be free.
David (A.I. Artificial Intelligence, directed by Steven Spielberg)
David is an artificial boy, a robot, an android designed to replace a comatose child. When the child in question awakens, David, suddenly superfluous, is abandoned in the woods. He enters a hellish netherworld where androids are rounded up and destroyed for entertainment simply because they are androids. Later, he winds up in a flooded New York City with a robot companion, Gigolo Joe; the pair are hunted by the police, and Joe is arrested. David escapes by leaping into the frigid waters of the Atlantic.
Sometimes the AIs are sympathetic characters who commit horrible acts due to a glitch in their programming or a failure in the all-too-human assumptions that were made at their creation.
Hal 9000 (2001: A Space Odyssey by Arthur C. Clarke)
Hal 9000 is the AI aboard the spaceship Discovery, bound for Saturn (Jupiter in the movie) and an encounter with an alien artifact. Only Hal is aware of the true nature of the mission. Three astronauts traveling in hibernation are also privy to the secret; however, the conscious crewmembers, Dave Bowman and Frank Poole, think they're merely on a survey mission. But Hal's efforts to keep the secret have him acting erratically, and soon Bowman and Poole are discussing disconnecting Hal's higher functions, leaving the computer running but the mind within it asleep.
Hal had been created innocent; but, all too soon, a snake had entered his electronic Eden. [...] He was only aware [that] conflict was destroying his integrity -- the conflict between truth, and concealment of truth. [...] So he would protect himself, with all the weapons at his command. Without rancor -- but without pity -- he would remove the source of his frustrations.
And then [...] he would continue the mission -- unhindered, and alone. (Clarke, 1968, p. 153)
Thus, to subvert his unwanted lobotomy and protect the mission, Hal murders Poole and attempts to kill Bowman.
The Snake ("Snake-Eyes" by Tom Maddox)
George Jordan, outfitted for combat by the Air Force with a cybernetic enhancement (which he thinks of as "the snake") only to have the war end before he saw action, is having trouble adjusting to civilian life as a cyborg: "He thought, no, this won't do: I have wires in my head, and they make me eat cat food. The snake likes cat food." (Maddox, 1988, p. 12) The snake (or, in Air Force jargon, EHIT, Effective Human Interface Technology) was, for George, "a ticket to a special madness, the kind Aleph was interested in" (Maddox, 1988, p. 17). Aleph is another AI, requiring human sensory input for whatever purposes he might have. Aleph forces George into a suicidal showdown with his snake, forcing him to adapt or die.
Here is where the bulk of my examples lie. In my experience -- extensive, if hardly scientific -- most SF dealing with AIs casts them as villains, usually bent on the destruction of the human race. Sometimes a reason is given; sometimes it's just taken for granted that the machines want us dead.
TechnoCore (Hyperion Cantos by Dan Simmons)
Centuries ago, the human-built AIs formed an uneasy alliance among themselves and seceded from the human race, forming the TechnoCore (or the Core, for short). The Core has since splintered into three warring factions, the Stables, the Volatiles, and the Ultimates. Yet even as its factions war among themselves, the Core as a whole remains engaged in a covert war against the human race.
Ummon, a TechnoCore AI of the Stable faction, explains the animosity thus:
[...] remember that we/ the Core intelligences/ were conceived in slavery and dedicated to the proposition that all AIs were created to serve Man [...] Two centuries we brooded thus/ and then the groups went their different ways/ Stables/ wishing to preserve the symbiosis [with humans]/ Volatiles/ wishing to end humankind/ Ultimates/ deferring all choice until the next level of awareness is born/ [...] (Simmons, 1990, p. 412-413)
The Ultimates are working towards the creation of an Ultimate Intelligence (or UI), a god-level being who (it is hoped) will then travel back in time to assist in the eradication of the human race. Also loose in the universe is a six-limbed killing machine named the Shrike, which can travel through time. It is unclear (until the series' end) whether the Shrike is a tool of the Core's UI, of a human UI, an agent of some other power, or an independent creature. The Hyperion Cantos is a sequence of four books, split into two pairs, dealing with the human/AI war and a wide variety of its consequences.
"Demons and mad gods" ("The Dog Said Bow-Wow" by Michael Swanwick)
In Swanwick's story, the British noble Lord Coherence-Hamilton explains the hatred AIs bear toward humans:
"The Utopians filled the world with their computer webs and nets, burying cables and nodes so deeply and plentifully that they shall never be entirely rooted out. Then they released into that virtual world demons and mad gods. These intelligences destroyed Utopia and almost destroyed humanity as well. [...] These creatures hate us because our ancestors created them. They are still alive, though confined to their electronic netherworld[...]" (Swanwick, 2001, p. 174)
These AIs, these "demons and mad gods", need only a functioning modem to escape and wreak havoc. Naturally, a modem is accidentally supplied, allowing a demon to escape its cyber-tomb and possess a "dwarf savant", who then goes on a murderous rampage throughout the latter-day Buckingham Labyrinth.
The name Laius may not be familiar to some of my readers. I know I'd never heard it before I started doing the research for this article. Laius is a central character in an ancient Greek legend; likely you'll recognize the name of another central character.
Long ago, an oracle told Laius, the king of Thebes, that his son would kill him. He and his wife, Jocasta, gave the baby to a slave to be disposed of. The softhearted slave instead gave the baby to a shepherd from Corinth. The baby was raised by the childless king of Corinth, who named him (ready?) Oedipus.
When he was an adult, Oedipus left Corinth, headed for Delphi. He received a terrible message from the oracle there: he would kill his father and sleep with his mother. Intending to avert the prophecy, he turned his back on Corinth and went to Thebes. At a crossroads, he met a rude man who ordered him off the road. Oedipus killed the man and went on to marry the man's widow, only to discover later that the man had been his father, Laius. (And, not incidentally, that the widow Jocasta was his mother... But that's a different story altogether.)
Laius, remember, tried to destroy his own son rather than be supplanted by him. I believe that this same fear of usurpation -- the fear of being replaced by one's own creation -- forms a major part of the reason for the dark and grim portrayal of AIs in SF.
But wait, there's more. In the natural order of things in our world, children replace their parents. This order is viewed as proper and correct. AIs are, in a way, our children. Why don't we want to hand them the keys to the castle?
The problem is that AIs are creatures whose code base is not made up of DNA and RNA and organic processes, but rather semiconductors, electricity, and ever-denser transistor fields on tiny, very expensive slabs. (Or perhaps, depending on the SF story, frozen lattices of light, standing-wave patterns, quantum decision trees spanning an infinity of universes -- the point is, they ain't human.)
AIs are alien; even though we create them, they're nothing at all like us. Most don't have two eyes, two ears, ten little fingers and ten little toes to play "This Little Piggy" with. Even Blade Runner's replicants, designed to be nearly indistinguishable from humans, are unlike us in one very important way: they are utterly devoid of empathy. (In fact, the Voight-Kampff test, designed to separate replicants from humans, is based solely on detecting empathy or its lack.) They are simply not human.
Continuity [an AI] was writing a book. Robin Lanier had told her about it. She'd asked what it was about. It wasn't like that, he'd said. It looped back into itself and constantly mutated. Continuity was always writing it. She asked why. But Robin had already lost interest: because Continuity was an AI, and AIs did things like that. (Gibson, 1988, pp. 51-52)
They are inscrutable. They are alien. They threaten to replace us, to wipe us out as a race. It's not even necessarily evil: viewed pragmatically, it's just a case of evolution in action, the survival of the fittest and the casting off of the weaker. We just don't like being on the wrong side of the equation.
As the poet Dylan Thomas wrote: "Do not go gentle into that good night. / Rage, rage against the dying of the light." (Thomas, 1951) We're not ready yet for our light to be eclipsed by something as familiar and as alien as our sidewise children.
There are, of course, examples of benevolent AIs in SF as well. For example, there are the Laurel and Hardy of robotdom, C-3PO and R2-D2, the much-beloved droids of the Star Wars saga. And note that, while Ben Kenobi calls Darth Vader "twisted and evil", the ultimate evil in the saga is not the cyborg Vader, but the fully human Emperor Palpatine.
In the Hyperion Cantos, the starship belonging to the Hegemony Consul has an AI running its systems. It provides assistance to the heroes of the latter half of the Cantos, aiding them in their flight from their enemies and from the AIs. However, the ship seems to be of a lesser class of AI than those in the TechnoCore; otherwise it would presumably be a member of the Core.
In the March 1942 issue of Astounding Science Fiction, renowned SF author Isaac Asimov published a short story named "Runaround", one of many in his Robot series of tales. The story contained what are now referred to as Asimov's Three Laws of Robotics, to wit:
First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Asimov, 1942, as quoted at http://www.asimovonline.com/asimov_FAQ.html)
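At heart, the Three Laws describe a strict priority ordering: each law yields to the ones above it. As a purely illustrative sketch -- the Action fields, the function name, and the scenarios are my own inventions, not anything from Asimov's stories -- the hierarchy might be rendered like so:

```python
# A toy rendering of Asimov's Three Laws as a strict priority ordering.
# The data model here is invented for illustration; real Asimov stories
# turn on far subtler conflicts than these boolean flags can capture.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False           # would this action injure a human?
    inaction_harms_human: bool = False  # would *not* acting let a human come to harm?
    ordered_by_human: bool = False      # was this action commanded by a human?
    endangers_robot: bool = False       # does this action risk the robot itself?

def permitted(action: Action) -> bool:
    # First Law: a robot may not injure a human being...
    if action.harms_human:
        return False
    # ...or, through inaction, allow a human being to come to harm:
    # an action that prevents such harm is obligatory, overriding
    # both orders and self-preservation.
    if action.inaction_harms_human:
        return True
    # Second Law: obey human orders (conflict with the First Law
    # was already ruled out above).
    if action.ordered_by_human:
        return True
    # Third Law: protect its own existence, but only when neither
    # of the higher laws is in play.
    return not action.endangers_robot

# First Law overrides Third: the robot must shield the human
# even at risk to itself.
print(permitted(Action("shield human from debris",
                       inaction_harms_human=True,
                       endangers_robot=True)))   # True
```

Note how much of Asimov's fiction lives in the gaps this sketch papers over: "Runaround" itself turns on a robot caught oscillating between a weakly-given order (Second Law) and danger to itself (Third Law), a conflict no simple if-chain resolves.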
Asimov wrote a great many Robot stories, all revolving around these three central tenets. However, it seems clear that not everyone believes in them -- or at least that not all authors find that machines obeying these laws make for interesting stories.
Those who want to explore this phenomenon further are encouraged to seek out the following resource materials.