Archive for the Neurology Category

What is language? What can we do with it, and what does it do to us?

Posted in 1984, 99%, anxiety, barriers to critical thinking, Big Brother, Brain Science, Consciousness, critical thinking, Dystopia, Dystopian, emotion, freedom, George Orwell, humanities, irrational, Jason Reynolds, limbic system, Moraine Valley Community College, Neurology, Newspeak, Nineteen Eighty-four, Orwell, paranoia, Philosophical and Religious Reflections, Philosophy, Philosophy of Mind, politics, Politics and Media, rational animal, Rationalization, rationalizing animal, reason, resistance to critical thinking, theory, theory of mind, thoughtcrime, Two Minutes Hate, Uncategorized, Uroboros, Zombies on September 20, 2013 by Uroboros

In Orwell’s 1984, INGSOC’s totalitarian control of Oceania ultimately depends on Newspeak, the language the Party is working hard to develop and implement. Once in common use, Newspeak will eliminate the possibility of thoughtcrime, i.e. any idea that contradicts or questions absolute love for and devotion to Big Brother. Newspeak systematically scrubs away all those messy, gray areas from the English language, replacing them with a formal, logically-rigid system. For example, instead of having to decide whether to use ‘awesome,’ ‘fabulous,’ or ‘mind-blowingly stupendous’ to describe a situation, you would algorithmically deploy the Newspeak formula, which reduces the plethora of synonyms you could use to ‘good,’ ‘plusgood,’ or ‘doubleplusgood.’ Furthermore, all antonyms are reduced to ‘ungood,’ ‘plusungood,’ or ‘doubleplusungood.’
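
To make that ‘formula’ concrete, here is a toy sketch of how such a scheme might work. The mapping is my own illustration; Orwell supplies only the word forms, not an algorithm.

```python
# Toy illustration of the Newspeak vocabulary scheme described above.
# The mapping is a sketch of my own; Orwell gives only the word forms.

def newspeak(intensity: int) -> str:
    """Collapse a graded judgment (-3..-1 or 1..3) into a Newspeak word."""
    prefixes = {1: "", 2: "plus", 3: "doubleplus"}
    root = "good" if intensity > 0 else "ungood"
    return prefixes[abs(intensity)] + root

# 'awesome', 'fabulous', 'mind-blowingly stupendous' all reduce to three forms:
print(newspeak(1), newspeak(2), newspeak(3))     # good plusgood doubleplusgood
print(newspeak(-1), newspeak(-2), newspeak(-3))  # ungood plusungood doubleplusungood
```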

Syme, a Party linguist, tells Winston, the novel’s rebellious protagonist, that the ultimate goal is to eliminate conscious thought from the speaking process altogether. The Newspeak term for it is ‘duckspeak’—a more mechanical form of communication that doesn’t require higher-level cognitive functions, like having to pick the word that best expresses your feelings or creating a new one. That sense of freedom and creativity will simply cease to exist once Newspeak has finally displaced ‘Oldspeak.’ “The Revolution will be complete,” Syme tells Winston, “when the language is perfect.” The Proles and the Outer Party (95% of Oceania’s population) will become a mass of mindless duckspeakers, the linguistic equivalent of ‘philosophical zombies’.

Newspeak implies that cognition depends on language—that symbolic communication isn’t merely a neutral means for sending and receiving thoughts. Instead, the words and sentences we use actually influence the way we think about and perceive the world. While Orwell was obviously inspired by the propaganda techniques used by the dictators of his day, perhaps he was also familiar with Nietzsche’s “On Truth and Lying in a Non-Moral Sense” or the work of anthropologists like Boas and Sapir, all of whom embraced some form of what is now called linguistic relativism, a theory which argues for the reality of what Orwell proposed in fiction: we experience the world according to how our language lets us experience it.

Linguist Lera Boroditsky


Linguistic relativism is on the rise in the contemporary study of language. The work of researchers like Lera Boroditsky and Daniel Everett provides strong empirical data that supports (at least the weak version of) linguistic relativism, challenging the Chomskian paradigm, which posits a universalist account of how language is acquired, functions, and, by extension, relates to cognition and perception.

In my previous essay on the Uroboric model of mind, I asked about the connection between neuronal processes and symbolic systems: how can an abstract representation impact or determine the outcome of tangible physical processes? How can ionic thresholds in axons and the transmission of hormones across synaptic gaps depend upon the meaning of a symbol? Furthermore, how can we account for this in a naturalistic way that neither ignores the phenomena by defining them out of existence nor distorts the situation by positing physics-defying stuff? In short, how do we give an emergent account of the process?

First, we ask: what is language? Most linguists will say it means symbolic communication: in other words, information exchanges that utilize symbols. But what is a symbol? As you may recall from your grade school days, symbols are things that stand for, refer to, or evoke other things—for example, the red octagonal signs on street corners prompt your foot to press against the brake, and the letters s, t, o, and p each refer to particular sounds, which, when pronounced together, mean ‘put your foot on the brake.’ Simple enough, right? But the facility with which we use language, and with which we reflexively perceive that usage, belies both the complexity of the process and the powerful effects it has on our thinking.
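
Since a symbol is, at bottom, a conventional mapping from a sign to a referent or response, the relation can be sketched as a simple lookup. This is only a toy illustration of the stop-sign example above, not a claim about how brains store such mappings.

```python
# A symbol is a conventional mapping: the sign carries no meaning by itself.
# Toy lookup table for the stop-sign example; purely illustrative.

symbol_table = {
    "red octagon on a street corner": "put your foot on the brake",
    "the letters s-t-o-p": "put your foot on the brake",  # different sign, same referent
}

def interpret(sign: str) -> str:
    # Convention does all the work: nothing in the shape or the letters
    # 'contains' the braking instruction.
    return symbol_table.get(sign, "unrecognized sign")

print(interpret("red octagon on a street corner"))  # put your foot on the brake
```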

Cognitive linguists and brain scientists have shown that much of our verbal processing happens unconsciously. Generally speaking, when we use language, words just seem to ‘come to mind’ or ‘show up’ in consciousness. We neither need to consciously think about the meaning of each and every word we use, nor do we have to analyze every variation of tone and inflection to understand things like sarcasm and irony. These complex appraisals and determinations are made subconsciously because certain sub-cortical and cortical systems have already processed the nonverbal signals, the formal symbols, and decoded their meaning. That’s what learning a language equips a brain to do, and we can even identify parts that make major contributions. Broca’s area, for example, is a region in the left frontal lobe that is integral to both language production and comprehension. If a stroke damages Broca’s area, the sufferer may lose the ability not only to produce speech, but to comprehend it as well.

Left-brain language regions


Dr. Jill Bolte Taylor


One of the most publicized cases of sudden ‘language-less-ness’ is that of Dr. Jill Bolte Taylor, the Harvard brain scientist who, in 1996, happened to have a stroke in her left hemisphere, which impacted both the Broca’s and Wernicke’s areas of her brain. She couldn’t remember who she was. She couldn’t use language. Taylor compares it to dying and being reborn, to being an infant in a grown woman’s body. Her insights into a language-less reality shed light on how words and sentences impact cognition. She says she lost her inner voice, that chatter that goes on ‘in’ the head. She no longer organized her experiences in a categorical, analytic way. Reality no longer showed up to her with the same fine-grained detail: it wasn’t divided and subdivided, classified and prejudged in terms of past associations or future expectations, in terms of self and other, us vs. them, and so on. She no longer had an ‘I’ at the center of her experience. Once the left-brain’s anxious, anal-retentive chatter went offline, right-brain processes took over, and, Taylor claims, the world showed up as waves of energy in an interconnected web of reality. She says that, for her at least, it was actually quite pleasant. The world was present in a way that language had simply dialed down and filtered out. [Any of you who are familiar with monotheistic mysticism and/or mindfulness meditation are probably seeing connections to various religious rituals and the oceanic experiences she describes.]

This has profound implications for the study of consciousness. It illustrates how brain anatomy and neural function—purely physical mechanisms—are necessary to consciousness. Necessary, but not sufficient. While we need brain scientists to continue digging deep, locating and mapping the neuronal correlates of consciousness, we also need to factor in the other necessary part of the ‘mystery of consciousness.’ What linguistic relativism and the Bolte Taylor case suggest is that languages themselves, specific symbolic systems, also determine what consciousness is and how it works. It means we need to identify not only the neuronal correlates of consciousness but the socio-cultural correlates as well. This means embracing an emergent model that can countenance complex systems and self-referential feedback dynamics.

Orwell understood this. He understood that rhetorical manipulation is a highly effective form of mind control and, therefore, reality construction. Orwell also knew that, if authoritarian regimes could use language to oppress people [20th century dictators actually used these tactics], then freedom and creativity also depend on language. That is, they depend on our using it self-consciously and critically, on a language that has freedom and creativity built into it, and on users who stay vigilant in preserving that quality and refuse to become duckspeakers.

The Challenges of Teaching Critical Thinking

Posted in Consciousness, freedom, irrational, Neurology, Philosophical and Religious Reflections, Philosophy, Philosophy of Mind, rational animal, Rationalization, rationalizing animal, reason, Socrates on September 6, 2013 by Uroboros

How much power does reason have?


The other day in my critical thinking class, I asked my students about how much control they think they have over their emotions. It’s a crucial issue in the quest to become a better critical thinker. After all, irrational reactions and unfounded feelings are often the main barriers to logical inquiry and sound reasoning.

My argument was that emotions are primal, subconscious judgments our brains make of the environment. I don’t consciously have to order myself to be afraid of a snake and flinch or run. It’s an automatic response. If we feel fear or anger or sadness or joy, it’s because our subcortex has already evaluated the variables, fired up the glands, secreted the hormones, and signaled our organs and muscles to respond in a particular way. All of this happens in the blink of an eye, in the interval of a heartbeat. We don’t really consciously choose how to feel about anything. We might be capable of controlling the actions that flow from our feelings—of stopping ourselves from reacting this way or that. But the feelings themselves persist, and you can’t wish them away any more than you can wish away the rain. In short, our feelings occur to us.

Emotions happen.

I was surprised by how many students didn’t agree. Several claimed they can consciously modulate their feelings, even talk themselves into or out of feeling angry or sad or afraid or joyful if they desire. Part of me wanted to cry, “B.S.” If emotional management worked like that, there wouldn’t be billions spent each year on therapists and happy pills. But in the spirit of critical thinking, we put the idea on trial. In the end, I think most of the students came around to the notion that we have less conscious control over our feelings than we’d like to think, especially after I showed them a clip about marketing guru Clotaire Rapaille and his theory of the reptilian brain and how, in America, the cheese is always dead (seriously click the link and watch the clip—it’s fascinating).

But the initial reaction still puzzles me. Was it the youthful tendency to overestimate one’s abilities? Were they just being provocative, Socratic contrarians? Or is this indicative of a change? I don’t want to make a hasty generalization, but it prompts the question: is there a new psychological self-concept developing among this generation? Do some Millennials have a different phenomenological perspective when it comes to their emotions? Are the medicalization of mental issues and the proliferation of pharmaceutical remedies leading to a new attitude toward human psychology?

As a philosophical person, I’m curious about the history of how humans perceive their own psyches. Plato compared our primal motivations and emotional intuitions to wild horses that reason, the charioteer, tames and steers. Like Nietzsche, I’ve always thought Plato distorted and overrated our rational capacities. Hume said reason is ultimately the slave of our passions. But I’ve always wondered if that isn’t too fatalistic. I guess I lean more towards Hume’s assessment, but if I didn’t still believe in at least the spirit of Plato’s metaphor, then I wouldn’t be teaching critical thinking, right? I mean, what would be the point?

What do you think?

Orwell’s Two Minutes Hate: Terror Management and the Politics of Fear

Posted in 1984, 2012 Presidential election, Big Brother, Brain Science, Dystopia, Dystopian, Ernest Becker, freedom, Freud, hate, History, Ingsoc, Literature, mortality anxiety, Neurology, Nineteen Eighty-four, O'Bama, Orwell, politics, Politics and Media, Pop Cultural Musings, propaganda, psychoanalysis, Psychology, Romney, Terror Management Theory, thoughtcrime, Two Minutes Hate, Winston Smith on May 17, 2012 by Uroboros

The opening chapter of Orwell’s dystopian nightmare Nineteen Eighty-four centers around the “Two Minutes Hate.” Winston Smith, the novel’s protagonist, describes pulling up a chair in front of the big telescreen, taking a seat among his Ministry of Truth co-workers, and participating in a ritual designed to reinforce party orthodoxy, Oceania’s version of Must-See-TV.

What follows is a wild display of enmity, precisely channeled and orchestrated by Ingsoc, the totalitarian rulers of Oceania. The chorus of hissing, squeaking, and screaming is focused on Goldstein, the ultimate enemy of the state, “the self-satisfied sheeplike face” that automatically “produced anger and fear” in everybody. Why? Goldstein stands for everything Ingsoc reviles. He demands peace and advocates “freedom of speech, freedom of press, freedom of assembly, freedom of thought.”

The Hate celebrates Ingsoc’s slogans—WAR IS PEACE, FREEDOM IS SLAVERY, and IGNORANCE IS STRENGTH—and helps stamp out thoughtcrime, i.e. the right to hold personal, unorthodox beliefs and value privacy, the very thing Winston secretly lives for. He’s actually a big fan of Goldstein. But even this devout intellectual heretic feels powerless against the overwhelming wave of emotion that ripples through the crowd and makes otherwise reserved and terse people start “leaping up and down…and shouting at the tops of their voices.” Take a look at a cinematic interpretation of this.

The most horrific thing, Winston says, isn’t simply that he feels obliged to go along with it. It’s that even a true thoughtcriminal like himself finds it “impossible to avoid joining” the “hideous ecstasy of fear and vindictiveness, a desire to kill, to torture, to smash faces in with a sledgehammer.” Winston helplessly watches as his secret loathing for Big Brother, the face of the Party, becomes, for a brief, but terrifying moment, true adoration. This foreshadows the fate of his desperate revolt. In the end, Winston’s rebellion fails. He is destined to love Big Brother. The Two Minutes Hate gives us a disturbing glimpse into the psychological, and indeed physiological, means by which totalitarian control is possible. Orwell takes the reader right to the intersection of nature and nurture, where political propaganda sets its scalpel and goes to work, ‘healing’ us through the power of ‘proper’ beliefs—the pseudo-salvation of mind and body that comes from loving and hating the ‘right’ faces. Being an accepted member of your tribe, Orwell argues, is invariably linked to being fervently hostile towards the other tribe.

In this way, Orwell’s diagnosis of totalitarian tactics prefigures a recent breakthrough in social psychology called Terror Management Theory (TMT). The idea is rooted in anthropologist Ernest Becker’s seminal work The Denial of Death, which proposed that all human behavior is instinctively shaped and influenced by the fear of death. Whether we realize it or not, our ‘mortality anxiety’—a quality that appears to be unique to our species—is such a potent and potentially debilitating force, we have to repress and distract ourselves from it. But as Freud says, the repressed always returns, slipping into our conscious minds and affecting our behavior in lots of weird ways. This anxiety, according to Becker, feeds back into our psyche and influences everything we think and do. Our social practices and institutions—from politics to religion to art—are systematic attempts to explain away and allay this fear, which is why we can lash out so viciously at those who seem to threaten or undermine our beliefs. We can’t let their existence weaken our psychological armor against the ultimate enemy, Death itself.

Researchers Sheldon Solomon and Jeff Greenberg decided to put Becker’s hypothesis to the test by devising clever psychological experiments to isolate and measure the anxiety factor. Time and time again, they found that when people were made to think about their own death, they reacted more hostilely toward those perceived as ideological others than they did when they were not asked to contemplate it. You can check out these weird but illuminating experiments here.
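
As a rough sketch of the logic of those experiments, the design compares hostility toward an ideological out-group between a mortality-salience condition and a control condition. The numbers below are invented placeholders, not Solomon and Greenberg's data.

```python
# Sketch of a Terror Management-style comparison. Hypothetical ratings only;
# the real studies use controlled prompts and proper statistics.

from statistics import mean

# Hostility ratings (1-9) toward an ideological out-group after each prompt.
mortality_salience = [7, 8, 6, 7, 9, 8, 7]  # asked to contemplate their own death
control = [4, 5, 3, 4, 5, 4, 3]             # given a neutral prompt

gap = mean(mortality_salience) - mean(control)
print(f"mortality-salience mean: {mean(mortality_salience):.2f}")
print(f"control mean:            {mean(control):.2f}")
print(f"difference: {gap:.2f}  (Becker's hypothesis predicts a positive gap)")
```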

Terror Management Theory can explain everything from the bloody sacrificial rites carried out by the Aztecs to the sudden and unquestioning support Pres. Bush received from many liberals after 9/11, people who on September 10th didn’t even think he’d legitimately won the office. The theory not only helps us grasp the irrational, cult-like power of charismatic leaders and the effectiveness of negative political ads; it also points to a neurological basis for our susceptibility to the Love/Hate style of propaganda—how it taps into the way we’re wired and re-routes the circuitry so we become unwitting puppets to elitist agendas that don’t actually serve our interests. We become mouthpieces and pumping fists for the very forces that oppress us. In other words, you are not in control of your own beliefs and behavior. Big Brother has already gotten to your amygdala—the brain’s subcortical fear factory—and told you what to love and what to hate, the faces worth admiring and the faces that need to be smashed with a sledgehammer…or with a prejudicial slur or with a cruise missile.

Orwell may not have grasped the neurology (he predates the fMRI technology that allows us to see the amygdala in action), but he certainly understood the psycho-dynamics of TMT, fifty years before it was empirically verified by Solomon and Greenberg. The hate, Winston explains, flows through the group “like an electric current, turning one even against one’s will into a grimacing, screaming lunatic.” And yet, since it’s a primitive instinct which has been manipulated by social conditioning, this hate is “an abstract undirected emotion which could be switched from one object to another,” like a flashlight. In other words, we love and/or hate by nature, but the particular objects of our adoration and enmity are learned. The question is, have you learned how to consciously control this dynamic? Or has Big Brother already beaten you to the punch?

Tragically, Winston can’t choose who to love and who to hate, and this, Orwell implies, is the ultimate agenda of an effective totalitarian state, one of its defining properties and ultimately its most fundamental power. Nineteen Eighty-four’s dystopian vision—unrelentingly bleak and terrifying—still resonates because the kind of manipulation it describes hasn’t gone away with the fall of the Soviet Union. Its machinations have just grown more subtle and are all the more powerful and hideous for it.

Hate on the Left and Right

Orwell’s novel reminds us to step back from the histrionic media frenzies that pass for political discourse these days, take a rational breath, and ask ourselves: am I really in control of what I believe? Or am I motivated by fears I’m not even aware of? When I step into the booth and cast my ballot, am I making a conscious choice or has Big Brother already pushed the button for me?

Remember, Hitler initially gained power through the democratic process, which he then systematically dismantled. Do we really want to be free and rule ourselves, or is there, as Freud argued in Group Psychology and the Analysis of the Ego (1922), something deep within us that longs to be subjugated and dominated? Do you secretly like it when Big Brother mashes his political finger against your limbic button?

 Take a minute or two and think about it.

More Human Than Human: Blade Runner and the Radical Ethics of A.I.

Posted in A.I., artificial intelligence, Blade Runner, Brain Science, Christianity, Consciousness, Descartes, Entertainment, Ethics, Film, Jesus, Morality, Neurology, Phillip K Dick, Philosophical and Religious Reflections, Philosophy of Mind, Pop Cultural Musings, Prometheus, Psychology, Religion, Ridley Scott, Science, Science fiction, Uncategorized on April 27, 2012 by Uroboros

Blade Runner: What makes us human?

Self-consciousness is a secret, or at least its existence is predicated upon one. The privacy of subjective experience has mystified philosophers for centuries and dogged neuroscientists for decades. Science can, in principle, unravel every enigma in the universe, except perhaps for the one that’s happening in your head right now as you see and understand these words. Neurologists can give rich accounts of the visual processing happening in your occipital lobes and locate the cortical regions responsible for parsing the grammar and grasping the concepts. But they can’t objectively identify the ‘you’ part. There’s no neuron for ‘the self.’ No specific neural network which is essentially causing ‘you’—with all your unique memories, interpretive quirks, and behavioral habits—to read these words and have the particular experience you are having.

This problem is illustrated in debates about artificial intelligence. The goal is to create non-biological sentience with a subjective point-of-view, personal memories, and the ability to make choices. The Turing Test is a method for determining whether a machine is truly intelligent, as opposed to just blindly following a program and reacting algorithmically to stimuli. Basically, if a computer or a robot can convince enough people in a blind test that it is intelligent, then it is. That’s the test. The question is, what kind of behaviors and signs would a machine have to have in order to convince you that it’s self-aware?
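
Stripped down, the test is just a decision rule over judges' verdicts in a blind exchange. Here is a minimal sketch; the 'enough people' threshold is my own placeholder, since there is no canonical number.

```python
# Minimal sketch of the Turing Test decision rule: intelligence is attributed
# solely on the basis of blind judges' verdicts. The threshold is a placeholder.

def passes_turing_test(judge_verdicts: list[bool], threshold: float = 0.5) -> bool:
    """judge_verdicts[i] is True if judge i believed they were conversing with a human."""
    fooled = sum(judge_verdicts) / len(judge_verdicts)
    return fooled >= threshold

print(passes_turing_test([True, True, False, True]))    # True: most judges were convinced
print(passes_turing_test([False, True, False, False]))  # False
```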

Voight-Kampff Test

The 1982 film Blade Runner, based on Philip K. Dick’s novel Do Androids Dream of Electric Sheep?, has a version of this called the Voight-Kampff test. The androids in the story, Nexus-6 Replicants, are so close to humans in appearance and behavior that it takes an intense psychological questionnaire coupled with a scan of retinal and other involuntary responses to determine the difference. An anomalous emotional reaction is symptomatic of artificial, as opposed to natural, intelligence. Rachel, the Tyrell corporation’s most state-of-the-art Replicant, can’t even tell she’s artificial. “How can it not know what it is?” asks Deckard, the bounty hunter charged with ‘retiring’ rogue Replicants. Tyrell says memory implants have given her a sense of self, a personal narrative context through which she views the world. The line between real and artificial humans, therefore, is far from clear. Rachel asks Deckard if he’s ever ‘retired’ a human by mistake. He says he hasn’t, but the fact that Rachel had to ask is telling. Would you want to take this test?

If you think about it, what makes your own inner subjectivity provable to others—and their subjectivity provable to you—are the weird kind of quirks, the idiosyncrasies which are unique to you and would be exceedingly difficult for a program to imitate convincingly. This is what philosophers call the problem of other minds. Self-consciousness is the kind of thing which, by its very nature, cannot be turned inside out and objectively verified. This is what Descartes meant by ‘I think, therefore I am.’ Your own mental experience is the only thing in the world you can be sure of. You could, in principle, be deluded about the appearance of the outer world. You think you’re looking at this computer screen, but how do you know you’re not dreaming or hallucinating or part of a Matrix-like simulation? According to Descartes’ premise, even the consciousness of others could be faked, but you cannot doubt the fact that you are thinking right now, because to doubt this proposition is to actually prove it. All we’re left with is our sense of self. We are thinking things.

Fembot Fatale

The Turing Test, however, rips the rug away from this certainty. If the only proof for intelligence is behavior which implies a mindful agent as its  source, are you sure you could prove you’re a mindful, intelligent being to others? Can you really prove it to yourself? Who’s testing who? Who’s fooling who?

The uncanny proposition hinted at in Blade Runner is that you, the protagonist of your own inner narrative, may actually be artificial, too. Like Rachel and the not-so-human-after-all Deckard, you may be an android and not know it. Your neural circuitry may not have evolved by pure accident. The physical substrate supporting your ‘sense of self’ may be the random by-product of natural selection, something that just blooms from the brain, like an oak grows out of an acorn—but ‘the you part’ has to be programmed in. The circuitry is hijacked by a cultural virus called language, and the hardware is transformed in order to house a being that may be from this planet, but now lives in its own world. Seen this way, the thick walls of the Cartesian self thin out and become permeable—perforated by motivations and powers not your own, but ‘Society’s.’ Seen in this light, it’s not as hard to view yourself as a kind of robot programmed to behave in particular ways in order to serve purposes which are systematically hidden.

This perspective has interesting moral implications. The typical question prompted by A.I. debates is, if we can make a machine that feels and thinks, does it deserve to be treated with the same dignity as flesh and blood human beings? Can a Replicant have rights? I ask my students this question when we read Frankenstein, the first science fiction story. Nearly two hundred years ago, Mary Shelley was already pondering the moral dilemma posed by A.I. Victor Frankenstein’s artificially-intelligent creation becomes a serial-killing monster precisely because his arrogant and myopic creator (the literary critic Harold Bloom famously called Victor a ‘moral idiot’) refuses to treat him with any dignity and respect. He sees his artificial son as a demon, a fiend, a wretch—never as a human being. That’s the tragedy of Shelley’s novel.

Robot, but doesn’t know it

In Blade Runner, the ‘real’ characters come off as cold and loveless, while the artificial ones turn out to be the most passionate and sympathetic. It’s an interesting inversion which suggests that what really makes us human isn’t something that’s reducible to neural wiring or genetic coding—it isn’t something that can be measured or tested through retinal scans. Maybe the secret to ‘human nature’ is that it can produce the kind of self-awareness which empowers one to make moral decisions and treat other creatures, human and non-human, with dignity and respect. The radical uncertainty which surrounds selfhood, neurologically speaking, only heightens the ethical imperative. You don’t know the degree of consciousness in others, so why not assume other creatures are as sensitive as you are, and do unto others as you would have them do to you?

In other words, how would Jesus treat a Replicant?

It’s Okay to Kill Zombies ‘Cause They Don’t Have Any Feelings.

Posted in Brain Science, Christianity, David Chalmers, Descartes, Entertainment, Ethics, Metaphysics, Morality, Neurology, Philosophical and Religious Reflections, Philosophy, Philosophy of Mind, Pop Cultural Musings, Psychology, The Walking Dead, Zombies on March 10, 2012 by Uroboros

You’re sprinting and stumbling through a thick, dark forest. Gun cocked, finger on the trigger. You’re fleeing a zombie horde. You want to survive. They want to eat you. You trip on a rotten limb, tumbling to the ground. Looking up, you’re face-to-face with a zombie. It can’t move, though. A broken leg, severed arm. It’s basically a piece of animated flesh, writhing madly, but not a true threat. You can skirt by it, no problem. What do you do? 

Season Two of The Walking Dead has brought the zombicide issue to the fore. Is it ever wrong to kill zombies? On a practical, survival level, of course, the answer seems morally unambiguous: If a Walker is after you, self-defense necessitates doing what you have to do. 

Self-defense notwithstanding, let’s explore how the characters in TWD view what they’re doing. What’s their ethical stance? As in all zombie fiction, the dominant position is the kill ’em all approach: the living dead aren’t people, which excuses or dismisses any moral qualms one may have about pumping a few shotgun rounds into the side of a Walker’s head. But TWD is too thoughtful a series to let this issue go unexamined.

The existential and moral status of zombies themselves, which has lurked in the background of the series since Season One, moved front and center as we reached the mid-season climax—brought to a head by Herschel, patriarch of the farm. As you’ll recall, Herschel doesn’t share the kill ’em all approach that Rick and company had pretty much taken for granted—and who could blame them? After what happened at their camp and in Atlanta, there’s been little time and reason to contemplate the possible personhood of the herds of Walkers chomping at the bit to kill them.

But, since farm life has slowed things down and afforded the time to think, the issue has slowly but surely lumbered and lunged out into the open. It was just one of the crises interwoven into the drama, but, by Episode Seven, the status of zombies became the key issue, the breaking point in the tension between the main characters and their hosts.

Rick and Herschel's Moral Debate

If you were like me, you couldn’t believe what Herschel was hiding in the barn. At first, I was with the rest of the gang who thought he was either delusional or up to something sinister. It’s easy to react like Shane and dismiss Herschel’s view. A Walker is a Walker, and the only good Walker is a dead Walker. When Rick confronted him, however, the conviction in Herschel’s reasoning and ethical stance was interesting. From his perspective, a zombie is just a sick human being. What if zombiehood could be cured? What if someone comes up with a serum or antidote to the disease or whatever the TWD mythology eventually puts forth as the cause of the zombocalypse? Behind the evil eyes and pale, rotten skin, Herschel sees a human being waiting to be saved. If that’s your philosophy, then killing a zombie when you don’t have to is murder.

‘Personhood’ is a tougher thing to verify than you might think. We all walk around assuming the people around us have a subjective awareness of the world—have feelings and memories and intelligence, the ability both to make decisions and be held responsible for them. This assumption frames one’s experience of reality. You can criticize or condemn your fellow human beings for their improprieties—but you don’t feel the same way towards your car or laptop if it lets you down. You may, for a second or two, get angry at the laptop for freezing up—might even smack it a few times—but that’s just an instinctual projection of your own emotions. If you actually think your laptop is trying to undermine you, then I’ll post a link for the psychiatrist you need to consult.

It’s okay to hit computers because they don’t have any feelings (yet). But how do you know other people have feelings? Sure, they appear to—they have the body language and can speak about intentions and inner states—but that, too, could be just an appearance. After all, that’s just behavior. It could be a simulation of consciousness, a simulacrum of selfhood. You can’t get ‘inside’ somebody’s head and experience the world from their point of view. We don’t have Being John Malkovich portals into the subjectivity of others (yet). Philosophically and scientifically speaking, the only state of consciousness you can be sure of is your own.  

Rene Descartes, the father of modern philosophy, pointed this out in the 17th century, and it’s been a tantalizing issue ever since. When Descartes said cogito ergo sum—I  think, therefore I am—he was trying to establish a rock solid foundation for philosophy and science, but leave it to a Frenchman to lay an intellectual foundation in quicksand and produce the opposite of what he intended. The problem with cogito is that—unless you assume the same things Descartes did about God, language, and math—you can’t really be sure about the existence of other cogitos or even the world outside your own head. What one experiences could be like a dream or a fake reality conjured up by a Matrix-style evil genius. ‘I think, therefore I am’ opens up a Pandora’s jar of radical skepticism and solipsism.

So how do you know that other people are conscious like you and not ‘philosophical zombies,’ i.e. beings which behave like they’re conscious but are in fact only organic machines without actual intelligence and free will? Contemporary philosopher of mind David Chalmers has made a career of pointing out the deep quirk—the so-called ‘hard problem’—embedded in the modern concept of personhood. Scientifically-speaking, we can only observe and measure objective phenomena. So, what is ‘mind’ to a neurologist? It’s the product of brain states—it’s located in the synaptic mesh of neurons and electrochemical flow of hormones which happens inside the skull, a purely physical thing which can be observed with an fMRI machine.

This theory was dramatized in Episode Six of Season One by Dr. Jenner at the CDC facility. When he shows Grimes and the gang an actual transformation from human to Walker using (what looks like) an fMRI, Dr. Jenner claims the brain images represent all that one is—the sum total of your memories and dreams, the hopes and fears which define you as a person—and the death of the brain is the irrevocable end of that identity. What is revived through zombification  is not that person—it’s not even human. In other words, you are your brain. Brain dead equals you dead. The zombie that emerges may resemble you in some way—it may move its eyes and limbs as if  it’s a being with some kind of conscious intentions—but it’s not. At least, that’s Dr. Jenner’s theory, and, up until we meet Herschel, nobody on the show seems to disagree or question it.

Philosopher Thomas Nagel wrote a famous essay on the issue called “What Is It Like to Be a Bat?” which argued we shouldn’t reduce mindfulness to purely physical, objective descriptions because such descriptions, by definition, leave out the very thing we’re trying to understand, namely, what it is like to be that being, what it is like to have that mind. We’re right back in Descartes’ quicksand. The Copenhagen interpretation of quantum physics notwithstanding, we seem to be able to explain everything in nature, at least in principle, in physical, materialist terms, except for the very thing we’re using to explain everything else in nature, i.e. our own minds.

These days the debate has become divisive, even ideological. Which side are you on? Are you a materialist—do you believe the mind is either caused by brain states or so closely correlated to them as to be functionally indistinguishable—or are you still haunted by Descartes’ cogito and believe the mind is not just an illusory ghost in the machine? Do you believe there’s something irreducible to the self, maybe even soulful or spiritual? If you do, you’d be labeled a dualist, which, in contemporary philosophy of mind, is a euphemism for superstitious.                         

I think Herschel’s theory offers another way of approaching the problem, one that sidesteps the Cartesian quicksand. After all, Herschel’s not interested in proving scientifically that he’s right about zombiehood. For him, it’s a given: the creatures corralled in the barn aren’t soulless ghouls who can be exterminated with impunity. They’re family members and neighbors who happen to be sick and might someday be cured. He can’t kill them. What’s intriguing about his approach is how it bypasses the metaphysical problem in favor of the ethical question. If you can’t prove beyond a shadow of a doubt that zombies aren’t conscious—devoid of some sliver of humanity swirling around inside their skulls—then isn’t Herschel’s theory a more appropriate moral response, a more humane approach?

Zombies on leashes?

If a zombie attacks, and you can subdue it without scattering its brains across the grass, then why not leash it and put it in the barn like Herschel did? It’s an ethically-complex question with implications that go beyond the dos and don’ts of zombocalypse survival. It answers the question of consciousness and selfhood not by getting bogged down in the metaphysical quicksand, but by recognizing the ambiguous metaphysics and essentially saying, until you neurologists and philosophers get a better grip on the issue, we’re going to treat the zombie-other as if it’s a conscious being deserving of humane and dignified treatment. The show roots Herschel’s ethics in his religious beliefs, his faith. Agnostic or atheist viewers might find this a facile cop out, more a symptom of intellectual weakness than a sign of moral integrity. But I don’t think Herschel’s ethics should be dismissed as merely the product of old-timey superstitions. In a situation where there isn’t absolute certainty—where empirical observation and rational explanations can give you two valid, but logically irreconcilable descriptions—isn’t some kind of faith necessary? The zombie dilemma on The Walking Dead echoes the actual debate going on in neurology and philosophy of mind and reminds me of the lines from Albee’s Who’s Afraid of Virginia Woolf? about truth and illusion. We don’t know the difference…but we must carry on as though we did. Amen.

Herschel has decided to carry on as though the zombies are persons who deserve to be treated with some degree of dignity. His faith justifies his moral stance; it’s an act of religious compassion. Even if zombies seem like enemies, he must love them. If they terrify and enrage him, he must pull the beam from his own eye, judge not, and learn to care for his zombie brothers and sisters—in a way which doesn’t threaten the lives of his non-zombie kin, of course. Hence the leashes and barn accommodations. It may not be room and board at a cozy bed and breakfast, but it’s certainly more humane than Shane’s handgun or one of Darryl’s arrows.

There is something to a Sermon on the Mount ethical approach to such quandaries. If we can’t know with scientific certainty the objective nature of consciousness, we shouldn’t be so quick to jump to conclusions and endorse policies, especially violent ones, which depend on assumptions about subjectivity, or the lack thereof. The greatest atrocities in history all begin with dehumanizing the other—by drawing a line between ‘us’ and ‘them.’ Religious beliefs always cut both ways—sometimes they reinforce that line—they sharpen the blade—and sometimes they undermine it by redefining and expanding the definition of what counts as a human being—of who deserves to be treated with respect.

I mean, what would Jesus do to a zombie? Wait, didn’t Jesus become a zombie? (Sorry, couldn’t resist;)

What matters is how you treat the other, the stranger. I think it’s no accident that Herschel is a veterinarian and not a ‘human’ doctor, which would’ve served his initial plot function—saving Carl—just as well, if not better. As a vet, Herschel has to care about the pain and suffering of creatures whose states of mind he can’t know or prove. He has to carry on just the same. What matters most is not trying to test and determine the degree to which a creature is conscious and then scaling your moral obligations in proportion to that measurement—after all, such a measurement may be in principle impossible—what matters is how you treat others in the absence of such evidence. In short, it depends on a kind of faith, a default assumption which necessitates hospitality, not hostility. In an uncertain world, it’s the right thing to do—not only what Jesus might do, but a logically-consistent, rationally-valid thing to do.

The implications are profound. The perspective we adopt, the stance we assume, defines how we relate to animals and the planet as a whole—to other human beings and ultimately oneself.

Of course, by Episode Eight, Herschel backs away from his radical ethical stance. In a state of despair, he regrets putting the Walkers in the barn—says it was his way of avoiding the grief over losing his wife. Maybe so. But something tells me that’s just the despair talking. Whether Herschel returns to his old perspective or embraces a kill ’em all approach, I don’t think the issue itself is dead and buried.

My hope is that it will be raised again, and it’ll have something to do with what Dr. Jenner whispered to Rick at the end of Season One. After all, the suicidal doctor told Rick that all the survivors are carrying a latent form of the zombie virus. Maybe they’ll meet another scientist down the road who can cure the plague. If this scenario or something like it plays out, then the show will have to confront the zombies-are-people-too versus kill ’em all question again.

The Science of Myth and the Myth of Science

Posted in anxiety, archetypes, barriers to critical thinking, Brain Science, collective unconscious, Consciousness, Creationism, critical thinking, emotion, God, History, humanities, irrational, Jung, Knowledge, limbic system, Maori, Myth, Mythology, Neurology, paranoia, Philosophical and Religious Reflections, psychoanalysis, Psychology, rational animal, Rationalization, rationalizing animal, reason, Religion, religious, Repression, resistance to critical thinking, Science, social psychology, terror, Terror Management Theory, theory, theory of mind, Uroboros, V.S. Ramachandran, William James on February 3, 2012 by Uroboros

Years ago in a mythology course I taught, a student once came up to me after class with an annoyed look. We’d just covered the Maori creation myth, and something about it had gotten under his skin. According to the myth, father sky, Rangi, and mother earth, Papa, formed out of primordial chaos and tangled in a tight, erotic embrace. Their offspring decided to pry Rangi and Papa apart in order to escape and live on their own. With his ax, Tane, the forest god, finally separated Father Sky and Mother Earth, and in that space, life grew and flourished.

The broad strokes of this creation myth aren’t unique. Ancient Egyptian, Chinese, Greek, and Norse stories (just to name a few) relate life’s origins to the separation of giant primordial parents.

“How could people believe that?” the student asked, shaking his head. It wasn’t his perturbed incredulity that struck me. Often, students initially find stories from ancient cultures to be, well, weird. It was his condescension. For him, ‘myth’ meant not just ‘false,’ but ‘silly.’ In his defense, it’s what it means for most of us. When we want to sneer at strange, fantastical beliefs, we call them ‘myths.’

The term is synonymous with ‘false.’

‘Myth’ originally meant the exact opposite, though. The Ancient Greek root of mythos referred to life’s deepest truths, something discussed and contemplated with a sense of awe and reverence, not incredulity and disdain. Seen in this light, myths are the stories humans tell in order to explain the unknown and make sense of the world. My thesis is that humans are essentially myth-making creatures and will continue to be so—no matter how scientific our stories get.

Scowls form on some students’ faces when they hear a professor say that science is, on a certain level, still mythological. Scientists are still storytellers, though, trying to turn the unknown into the known. Ancient and modern storytellers have different ways of approaching the unknown—different notions about what counts as a valid explanation.

Today, people (tend to) prefer creation stories that fit the scientific paradigm that’s proved so successful in explaining and predicting natural phenomena. But in dismissing past explanations, we overlook some interesting similarities. Ancient and modern stories share what psychologist Carl Jung called archetypal patterns. Jung theorized that humans share underlying patterns of thought because we all inherit the same neurological equipment. The anatomical differences between an ancient human brain and, say, Darwin’s brain are negligible. Setting the obvious differences between the Maori story and Darwin’s theory aside for just a moment, there are archetypal similarities between these accounts.

Darwinism says life began in a kind of primordial soup where, over time, inorganic molecules organized into the first living cell, and then single-celled organisms eventually separated into multicellular organisms, and from that, thanks to genetic mutation and the pressure of natural selection, lifeforms diversified and flourished. The Big Bang has this underlying pattern too: a ‘primordial atom,’ containing all matter, exploded and separated into the cosmic forms we see today.

I think the key difference between ancient and modern creation stories is in the tendency to personify nature, or the lack thereof. The modern scientific method tries to remove the subjective factor from the equation. Once we stopped projecting our emotions upon ‘Mother Nature,’ we started telling different stories about how ‘she’ works.

Now scientists are investigating how myth-making itself works. Neurologists and evolutionary psychologists are exploring the biological basis of our ability to mythologize and the possible adaptive purposes informing our storytelling instinct. Let’s start by getting hypothetical and doing a little ‘state of nature’ thought experiment. Imagine a prehistoric hunter startled by booming thunder. Now we know the meteorological explanation, but he doesn’t. He experiences what thunder feels like to him: anger. But who is angry?

The problem is addressed by the limbic system, the subcortical brain structure that initially processes emotion and memory. Potential dangers must be understood or anxiety will overwhelm the mind, rendering the hunter less able to cope and survive. The amygdala, the brain’s watchdog, primes the body for action—for fight or flight—while the hippocampus tries to associate feelings with memories in order to focus and better define both the stimuli and the appropriate response. This process is entirely unconscious—faster than the speed of consciousness.

The hippocampus recalls an experience of anger, perhaps one involving the hunter’s own father, and then the cerebral cortex, home of our higher cognitive capacities, gets involved. Somewhere in our cortical circuitry, probably in the angular gyrus, where neuroscientist VS Ramachandran says our metaphoric functions reside, storm images are cross-wired with paternal images. A myth is born: sky is father, earth is mother, and the cause-effect logic of storytelling in the brain’s left-hemisphere embellishes until the amygdala eases up, and the anxiety is relatively alleviated. At least the dread becomes more manageable. In neurochemical terms, the adrenaline and cortisol rush are balanced off and contained by dopamine, the calming effect of apparent knowledge, the pleasure of grasping what was once unknown.
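
Purely as a toy walkthrough of the sequence just described: the stages follow the text, but the numbers and the ‘anxiety’ arithmetic are invented for illustration, not a neural model.

```python
# Toy walkthrough of the myth-making sequence described above: appraisal,
# memory association, metaphoric cross-wiring, narrative, and relief.
# All values are invented; this is an illustration, not a simulation of a brain.

def mythologize(stimulus: str, memories: dict[str, str]) -> str:
    anxiety = 0.9                              # amygdala: unknown threat primes fight-or-flight
    feeling = "anger"                          # what the thunder 'feels like' to the hunter
    figure = memories.get(feeling, "someone")  # hippocampus: bind the feeling to a remembered figure
    myth = f"{stimulus} is the {feeling} of the {figure} in the sky"  # angular gyrus cross-wiring + left-brain narrative
    anxiety -= 0.5                             # apparent knowledge eases the alarm (the dopamine payoff)
    return f"{myth} (anxiety 0.9 -> {anxiety:.1f})"

print(mythologize("thunder", {"anger": "father"}))
# thunder is the anger of the father in the sky (anxiety 0.9 -> 0.4)
```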

From then on, thunder and lightning will be a little less terrifying. Now there is a story to help make sense of it. Storms are a sign of Father Sky’s anger. What do we do? We try to appease this force–to make amends. We honor the deity by singing and dancing. We sacrifice. Now we have myths and rituals. In short, we have a religion.

That’s why so many prehistoric people, who had no contact with one another, came to believe in primordial giants, and we are still not that far removed from this impulse. For example, why do we still name hurricanes? Sometimes, it’s just easier for us to handle nature if we make it a little more human. As neurologists point out, we are hardwired to pick up on patterns in the environment and attribute human-like qualities and intentions to them. Philosophers and psychologists call this penchant for projecting anthropomorphic agency a theory of mind. English teachers call it personification, an imaginative, poetic skill.

This is why dismissive, condescending attitudes toward myth-making frustrate me. The metaphoric-mythic instinct has been, and still is, a tremendous boon to our own self-understanding, without which science, as we know it, probably wouldn’t have evolved. I came to this conclusion while pondering a profound historical fact: no culture in human history ever made the intellectual leap to objective theories first. Human beings start to know the unknown by projecting what they’re already familiar with onto it.

It’s an a priori instinct. We can’t help it.

Modern science helps make us more conscious of this tendency. The scientific method gives us a way of testing our imaginative leaps—our deeply held intuitions about how the world works—so we can come up with more reliable and practical explanations. The mythological method, in turn, reminds us to be critical of any theory which claims to have achieved pure, unassailable objectivity—to have removed, once and for all, the tendency to unconsciously impose our own assumptions and biases on the interpretation of facts. The ability to do that is just as much a myth as the ‘myths’ such claims supposedly debunk. I’ll paraphrase William James here: The truth is always more complex and complicated than the theories which aim to capture it. Just study the history of modern science—the evolution of theories and paradigms over the last 350 years especially—to see evidence for the asymmetrical relationship between beliefs, justifications, and the ever-elusive Truth.

Laid-back, self-aware scientists have no problem admitting the limitations built into the empirical method itself: Scientific conclusions are implicitly provisional. A theory is true for now. The beauty and power of science hinges upon this point—the self-correcting mechanism, the openness to other possibilities. Otherwise, it’s no longer the scientific method at work. It’s politicized dogma peddling. It’s blind mythologizing.

The recent research into the neurology and psychology of myth-making is fascinating. It enhances our understanding of what a myth is: a story imbued with such emotional power and resonance that how it actually lines up with reality is often an afterthought. But what’s equally fascinating to me is the mythologizing which still informs our science-making.

It’s dangerous, of course, to believe blindly in myths, to accept stories without testing them against experience and empirical evidence. But I also believe it’s dangerous to regard scientific theories as somehow above and beyond the mythological instinct. Like the interconnected swirl of the yin-yang, science and myth need each other, and that relationship should be as balanced and transparent as possible.

Uroboros. A universal symbol of balance and immortality.
