
More Human Than Human: Blade Runner and the Radical Ethics of A.I.

Posted in A.I., artificial intelligence, Blade Runner, Brain Science, Christianity, Consciousness, Descartes, Entertainment, Ethics, Film, Jesus, Morality, Neurology, Phillip K Dick, Philosophical and Religious Reflections, Philosophy of Mind, Pop Cultural Musings, Prometheus, Psychology, Religion, Ridley Scott, Science, Science fiction, Uncategorized on April 27, 2012 by Uroboros

Blade Runner: What makes us human?

Self-consciousness is a secret, or at least its existence is predicated upon one. The privacy of subjective experience has mystified philosophers for centuries and dogged neuroscientists for decades. Science can, in principle, unravel every enigma in the universe, except perhaps for the one happening in your head right now as you see and understand these words. Neurologists can give rich accounts of the visual processing happening in your occipital lobes and locate the cortical regions responsible for parsing the grammar and grasping the concepts. But they can’t objectively identify the ‘you’ part. There’s no neuron for ‘the self,’ no specific neural network that can be shown to cause ‘you’ (with all your unique memories, interpretive quirks, and behavioral habits) to read these words and have the particular experience you are having.

This problem is illustrated in debates about artificial intelligence. The goal is to create non-biological sentience with a subjective point-of-view, personal memories, and the ability to make choices. The Turing Test is a method for determining whether a machine is truly intelligent, as opposed to just blindly following a program and reacting algorithmically to stimuli. Basically, if a computer or a robot can convince enough people in a blind test that it is intelligent, then it is. That’s the test. The question is, what kind of behaviors and signs would a machine have to have in order to convince you that it’s self-aware?
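To make the setup concrete, here is a minimal sketch of a blind, Turing-style imitation game. This is not the formal protocol from Turing’s paper; the judge, human, and machine objects (and their ask, reply, and verdict methods) are hypothetical interfaces assumed purely for illustration. The point is only that the verdict rests entirely on behavior in conversation.

```python
import random

def run_trial(judge, human, machine, num_questions=5):
    """One blind trial: the judge questions a hidden respondent, which is
    randomly either the human or the machine, then guesses which it was."""
    respondent, truth = random.choice([(human, "human"), (machine, "machine")])
    transcript = []
    for _ in range(num_questions):
        question = judge.ask(transcript)             # judge poses a question
        transcript.append((question, respondent.reply(question)))
    return truth, judge.verdict(transcript)          # guess: "human" or "machine"

def machine_passes(judges, human, machine, threshold=0.5):
    """The machine 'passes' if, in trials where it was the hidden respondent,
    the judges mistook it for a human at least `threshold` of the time."""
    fooled = total = 0
    for judge in judges:
        truth, guess = run_trial(judge, human, machine)
        if truth == "machine":
            total += 1
            fooled += guess == "human"
    return total > 0 and fooled / total >= threshold
```

Notice that nothing in this sketch inspects how the respondent works internally; the machine’s “intelligence” is certified entirely by the impression it makes on the judges.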

Voight-Kampff Test

The 1982 film Blade Runner, based on Philip K. Dick’s novel Do Androids Dream of Electric Sheep?, has a version of this called the Voight-Kampff test. The androids in the story, Nexus-6 Replicants, are so close to humans in appearance and behavior that it takes an intense psychological questionnaire, coupled with a scan of retinal and other involuntary responses, to tell the difference. An anomalous emotional reaction is symptomatic of artificial, as opposed to natural, intelligence. Rachel, the Tyrell Corporation’s most advanced Replicant, can’t even tell she’s artificial. “How can it not know what it is?” asks Deckard, the bounty hunter charged with ‘retiring’ rogue Replicants. Tyrell explains that memory implants have given her a sense of self, a personal narrative through which she views the world. The line between real and artificial humans, therefore, is far from clear. Rachel asks Deckard if he’s ever ‘retired’ a human by mistake. He says he hasn’t, but the fact that Rachel has to ask is telling. Would you want to take this test?
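The Voight-Kampff screening could be caricatured the same way: provocative questions paired with readings of involuntary responses, with an anomaly count standing in for the examiner’s judgment. In the toy sketch below, the subject interface, thresholds, and question paraphrases are all invented for illustration; the film never specifies how the scoring actually works.

```python
# Loose paraphrases of the kind of empathy prompts used in the film.
EMPATHY_PROMPTS = [
    "A tortoise lies on its back in the desert. You're not helping. Why?",
    "Describe, in single words, the good things about your mother.",
]

def voight_kampff(subject, baseline=0.7, max_anomalies=1):
    """Flag the subject if too many prompts produce a blunted involuntary
    response (a hypothetical score standing in for pupil dilation, blush
    response, reaction latency, and so on)."""
    anomalies = sum(
        subject.involuntary_response(prompt) < baseline
        for prompt in EMPATHY_PROMPTS
    )
    return "replicant" if anomalies > max_anomalies else "human"
```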

If you think about it, what makes your own inner subjectivity seem provable to others, and theirs provable to you, are the weird quirks and idiosyncrasies that are unique to you and would be exceedingly difficult for a program to imitate convincingly. This is what philosophers call the problem of other minds. Self-consciousness is the kind of thing which, by its very nature, cannot be turned inside out and objectively verified. This is what Descartes meant by ‘I think, therefore I am.’ Your own mental experience is the only thing in the world you can be sure of. You could, in principle, be deluded about the appearance of the outer world. You think you’re looking at this computer screen, but how do you know you’re not dreaming, hallucinating, or plugged into a Matrix-like simulation? According to Descartes’ premise, even the consciousness of others could be faked, but you cannot doubt the fact that you are thinking right now, because to doubt the proposition is to prove it. All we’re left with is our sense of self. We are thinking things.

Fembot Fatale

The Turing Test, however, rips the rug out from under this certainty. If the only proof of intelligence is behavior that implies a mindful agent as its source, are you sure you could prove you’re a mindful, intelligent being to others? Can you really prove it to yourself? Who’s testing whom? Who’s fooling whom?

The uncanny proposition hinted at in Blade Runner is that you, the protagonist of your own inner narrative, may actually be artificial, too. Like Rachel and the not-so-human-after-all Deckard, you may be an android and not know it. Your neural circuitry may not be the product of pure accident alone: the physical substrate supporting your ‘sense of self’ may be a random by-product of natural selection, something that simply blooms from the brain the way an oak grows out of an acorn, but ‘the you part’ has to be programmed in. The circuitry is hijacked by a cultural virus called language, and the hardware is transformed to house a being that may be from this planet but now lives in its own world. Seen this way, the thick walls of the Cartesian self thin out and become permeable, perforated by motivations and powers that are not your own but ‘Society’s.’ In this light, it’s not so hard to view yourself as a kind of robot, programmed to behave in particular ways in order to serve purposes which are systematically hidden from you.

This perspective has interesting moral implications. The typical question prompted by A.I. debates is: if we can make a machine that feels and thinks, does it deserve to be treated with the same dignity as flesh-and-blood human beings? Can a Replicant have rights? I ask my students this question when we read Frankenstein, often called the first science fiction story. Nearly two hundred years ago, Mary Shelley was already pondering the moral dilemma posed by A.I. Victor Frankenstein’s artificially intelligent creation becomes a serial-killing monster precisely because his arrogant and myopic creator (the literary critic Harold Bloom famously called Victor a ‘moral idiot’) refuses to treat him with any dignity or respect. He sees his artificial son as a demon, a fiend, a wretch, never as a human being. That’s the tragedy of Shelley’s novel.

Robot, but doesn’t know it

In Blade Runner, the ‘real’ characters come off as cold and loveless, while the artificial ones turn out to be the most passionate and sympathetic. It’s an interesting inversion, one which suggests that what really makes us human isn’t reducible to neural wiring or genetic coding, and isn’t something that can be measured or tested through retinal scans. Maybe the secret of ‘human nature’ is that it can produce the kind of self-awareness which empowers one to make moral decisions and to treat other creatures, human and non-human, with dignity and respect. The radical uncertainty surrounding selfhood, neurologically speaking, only heightens the ethical imperative. You don’t know the degree of consciousness in others, so why not assume other creatures are as sensitive as you are, and do unto others as you would have them do unto you?

In other words, how would Jesus treat a Replicant?