Archive for the Philosophy of Mind Category

Fatal Curiosity: Nietzsche, Lovecraft, and the Terror of the Known

Posted in Consciousness, Existentialism, Gothic, Horror, irrational, Literature, Lovecraft, Lovecraftian, Metaphor, Metaphysics, Myth, Nietzsche, Philosophical and Religious Reflections, Philosophy, Philosophy of Mind, Pop Cultural Musings, Pop culture, Prometheus, Psychology, rationalizing animal, Religion, religious, Repression, resistance to critical thinking, short story, Speculative fiction, terror, Uncategorized on October 30, 2013 by Uroboros

Once upon a time, in some out of the way corner of that universe which is dispersed into numberless twinkling solar systems, there was a star upon which clever beasts invented knowing. That was the most arrogant and mendacious minute of ‘world history,’ but nevertheless, it was only a minute. After nature had drawn a few breaths, the star cooled and congealed, and the clever beasts had to die.

Friedrich Nietzsche (1844-1900)

If you’re a fan, you might think this an excerpt from an H.P. Lovecraft story, one of his twisted tales about erudite, curious men who learn too much about the nature of reality and are either destroyed or deeply damaged by what they discover. But this is actually the opening of Nietzsche’s essay “On Truth and Lies in an Extra-moral Sense” (1873), a biting critique of the epistemological pretentiousness he finds running rampant through Western philosophy. Nietzsche is an iconoclastic philosopher, hammering away at venerated ideas, slashing through sacred assumptions. He gleefully turns traditional theories on their heads, challenging our beliefs, disturbing our values—an intellectual calling that has much in common with H.P. Lovecraft’s literary mission. Lovecraft’s favorite theme is what he calls cosmic indifferentism. If Lovecraft has a philosophy, it is this: the universe was not created by a divine intelligence who infused it with an inherent purpose compatible with humanity’s most cherished existential desires. The cosmos is utterly indifferent to the human condition, and all of Lovecraft’s horrific monsters are metaphors for this indifference.

Nietzsche and Lovecraft are both preoccupied with the crises this conundrum generates.

H.P. Lovecraft (1890-1937)

“What does man actually know about himself?” Nietzsche asks, “Does nature not conceal most things from him?” With an ironic tone meant to provoke his readers, he waxes prophetic: “And woe to that fatal curiosity which might one day have the power to peer out and down through a crack in the chamber of consciousness.” In Lovecraft’s “From Beyond” (1934) this ‘fatal curiosity’ is personified in the scientist Crawford Tillinghast. “What do we know of the world and the universe about us?” Tillinghast asks his friend, the story’s unnamed narrator. “Our means of receiving impressions are absurdly few, and our notions of surrounding objects infinitely narrow. We see things only as we are constructed to see them, and can gain no idea of their absolute nature.” His Promethean quest is to build a machine that lets humans transcend the inherent limitations of our innate perceptual apparatus, see beyond the veil of appearances, and experience reality in the raw. From a Nietzschean perspective, Tillinghast wants to undo the effect of a primitive but deceptively potent technology: language.

In “On Truth and Lies in an Extra-moral Sense,” Nietzsche says symbolic communication is the means by which we transform vivid, moment-to-moment impressions of reality into “less colorful, cooler concepts” that feel “solid, more universal, better known, and more human than the immediately perceived world.” We believe in universal, objective truths because, once filtered through our linguistic schema, the anomalies, exceptions, and border-cases have been marginalized, ignored, and repressed. What is left are generic conceptual properties through which we perceive and describe our experiences. “Truths are illusions,” Nietzsche argues, “which we have forgotten are illusions.” We use concepts to determine whether or not our perceptions, our beliefs, are true, but all concepts, all words, are “metaphors that have become worn out and have been drained of sensuous force, coins which have lost their embossing and are now considered as metal and no longer as coins.” [For more analysis of this theory of language, read my essay on the subject.]

Furthermore, this process happens unconsciously: the way our nervous system instinctually works guarantees that what we perceive consciously is a filtered picture, not reality in the raw. As a result, we overlook our own creative input and act as if some natural or supernatural authority ‘out there’ puts these words in our heads and compels us to believe in them. Lovecraft has a similar assessment. In “Supernatural Horror in Literature” (1927), his essay on the nature and merits of Gothic and weird storytelling, he says the kind of metaphoric thinking that leads to supernatural beliefs is “virtually permanent so far as the subconscious mind and inner instincts are concerned…there is an actual physiological fixation of the old instincts in our nervous tissue,” hence our innate propensity to perceive superhuman and supernatural causes when confronting the unknown. Nietzsche puts it like this: “All that we actually know about these laws of nature is what we ourselves bring to them…we produce these representations in and from ourselves with the same necessity with which the spider spins.” This, of course, applies to religious dogmas and theological speculations, too.

From Beyond (1986 film adaptation)

In “From Beyond,” Crawford Tillinghast wants to see “things which no breathing creature has yet seen…overleap time, space, and dimensions, and…peer to the bottom of creation.” The terror is in what slips through the rift and runs amok in this dimension. His scientific triumph quickly becomes a horrific nightmare, one that echoes Nietzsche’s caveat about attaining transgressive knowledge: “If but for an instant [humans] could escape from the prison walls” of belief, our “‘self consciousness’ would be immediately destroyed.”

Herein lies the source of our conundrum, the existential absurdity, the Scylla and Charybdis created by our inherent curiosity: we need knowledge to improve our chances of adapting to our ecological conditions and passing our genes along to the next generation, and yet this very drive can bring about our own destruction. It’s not simply that we can unwittingly discover fatal forces. The danger arises when the pursuit of knowledge moves beyond seeking the information needed to survive and gets recast as discovering values and laws that supposedly pertain to the nature of the cosmos itself. Nietzsche and Lovecraft agree this inevitably leads to existential despair, because either we continue to confuse our anthropomorphic projections with the structure of reality itself, wallowing in delusion and ignorance as a result, or we swallow the nihilistic pill and accept that we live in an indifferent cosmos that always manages to wriggle out of even our most clear-headed attempts to grasp and control it. So it’s a question of what’s worse: the terror of the unknown or the terror of the known?

Nietzsche is optimistic about the existential implications of this dilemma. There is a third option worth pursuing: in a godless, meaningless universe, we have poetic license to become superhuman creatures capable of creating the values and meanings we need and want. I don’t know if Lovecraft is confident enough in human potential to endorse Nietzsche’s remedy, though. If the words of Francis Thurston, the protagonist from his most influential story, “The Call of Cthulhu” (1928), are any indication of his beliefs, then Lovecraft doesn’t think our epistemological quest will turn out well:

“[S]ome day the piecing together of dissociated knowledge will open up such terrifying vistas of reality…we shall either go mad from the revelation or flee from the light into the peace and safety of a new dark age.”

“Cthulhu Rising” by Somniturne

Reflections on The Walking Dead

Posted in Apocalypse, Brain Science, Consciousness, Descartes, emotion, Ethics, Existentialism, God, Horror, humanities, Metaphor, Metaphysics, Monster, Monsters, Morality, Philosophical and Religious Reflections, Philosophy, Philosophy of Mind, Pop Cultural Musings, Pop culture, Psychology, Religion, religious, Science, State of nature, terror, The Walking Dead, theory of mind, Zombies on October 19, 2013 by Uroboros

WARNING: SPOILERS. The Walking Dead’s violent, post-apocalyptic setting always makes me wonder: what kind of person would I be under circumstances like that? Given what one has to do in order to survive, could I still look at myself in the mirror and recognize the person gazing back at me? Would I even want to?

Critics sometimes complain about the show’s pacing and quieter, more reflective scenarios, but the writers should be applauded for slowing the story down, developing the characters, and exploring the thematic implications of their struggles. The Walking Dead knows how to alternate between terror—the dreaded threat of the unseen, the lurking menace yet to be revealed—and horror, the moment when the monster lunges from the bushes and takes a bite. Utilizing this key dynamic means including lots of slower, quieter scenes. Setting up psychological conflicts and tweaking character arcs enhances the terror because we are more invested in the outcomes—we care about what is lurking around the corner, and, when the horror is finally unleashed, the gore is all the more terrifying because we know more about the victims. It’s a refreshing change of pace from the hyperactivity you get in shows like American Horror Story, a series that flows like a sugar rush—sleek, Gothic concoctions for the Ritalin Generation.

The slow-burn approach also allows viewers to reflect on the show’s themes, like the existential and moral status of the Walkers themselves. During Season Two, Herschel didn’t share the kill ’em all approach that Rick and company had pretty much taken for granted—and who could blame them? After what happened in Atlanta in Season One, there was little reason to contemplate the possible personhood of the Walkers chomping at the bit to eat them. But, when farm life slowed things down and gave characters more time to reflect on their situation, the issue slowly but surely lumbered out into the open and became the turning point of the season.

Rick and Herschel’s Moral Debate

When Rick confronted Herschel about hiding his zombified relatives in the barn, the conviction in Herschel’s moral reasoning was hard to dismiss. From his perspective, a zombie was just a sick human being: behind the blank eyes and pale, rotting skin, Herschel saw a human being waiting to be saved. After all, what if zombiehood could be cured? If that’s your philosophy, then killing a zombie when you don’t have to would be murder. By the end of Season Two, of course, we learn that everybody is infected and thus destined to be a zombie. We’re all the Walking Dead, so to speak. In Season Three, even the duplicitous, devious Governor struggles with the issue. As much as we grow to hate him as a brutal tyrant, he’s also a loving father who can’t let go of his daughter. She’s not just a zombie to him. In the Season Four opener, the issue resurfaced with Tyreese’s ambivalence about having to kill Walkers all day at the prison fence and then later when Carl rebuked the other kids for naming them. “They’re not people, and they’re not pets,” he tells them. “Don’t name them.” This is after Rick warned him about getting too attached to the pig, which he’d named Violet. To Carl, animals are more like people than Walkers are.

‘Personhood’ is a sticky philosophical issue. We all walk around assuming other people also have a subjective awareness of the world—have feelings and memories and intelligence, can make decisions and be held responsible for them. This assumption, which philosophers call ‘theory of mind,’ frames our experience of reality. But, some philosophers are quick to ask: how do you know others really have feelings and intelligent intentions? Sure, they have the body language and can speak about their inner states, but couldn’t that be mere appearance? After all, that’s just behavior. It could be a simulation of consciousness, a simulacrum of selfhood. You can’t get ‘inside’ somebody’s head and experience the world from their point of view. We don’t have Being John Malkovich portals into the subjectivity of others (yet). Philosophically and scientifically speaking, the only state of consciousness you can be sure of is your own.

That was what René Descartes, the highly influential 17th-century philosopher, meant when he said cogito ergo sum—I think, therefore I am. He was trying to establish a foundation for modern philosophy and science by basing it on the one thing in the world everyone can be absolutely certain of, i.e. one’s own consciousness, which in turn has the rational capacities to understand the clock-like machinations of the physical world. Descartes, therefore, posits a dualistic metaphysics with physical stuff on one side of the ontological divide and mental stuff on the other. Minds can use brains and bodies to get around and know a world made up of mindless stuff. Only humans and God have souls and can ‘know’ what is happening, can understand what is going on.

The problem with Descartes’ cogito is that—unless you assume the same things Descartes did about God and math—you can’t really be sure about the existence of other cogitos or even the world outside your own head. You could be dreaming or in a fake reality conjured up by a Matrix-style evil genius. ‘I think, therefore I am’ opens up a Pandora’s jar of radical skepticism and solipsism. How do you really know that others aren’t ‘philosophical zombies,’ i.e. beings that behave like they’re conscious but are really only organic machines without subjective experiences and free-will? This is what some philosophers call the ‘hard problem:’ how do brain states generated by the synaptic mesh of neurons and the electrochemical flow inside the skull—purely physical processes that can be observed objectively with an fMRI machine—cause or correlate to subjective awareness—to feelings, images, and ideas that can’t be seen in an fMRI?

This theory was dramatized during Season One by Dr. Jenner when he showed an fMRI rendered transformation from human to Walker. He said the brain holds the sum total of the memories and dreams, the hopes and fears that make you who you are—and the death of the brain is the irrevocable end of that identity. What is revived through zombification is not that person—it’s not even human. In other words, you are your brain. The zombie that emerges may resemble you in some uncanny way—but it’s not really you. That’s of course most characters’ default theory until we meet Herschel and get an alternative perspective. He’s not interested in scientifically or philosophically ‘proving’ the personhood of Walkers. They’re family members and neighbors who happen to be sick and might someday be cured. He can’t kill them. What’s intriguing is how his response bypasses the metaphysical problem and goes right to the ethical question. If you can’t prove beyond a shadow of a doubt that zombies aren’t conscious—that there isn’t some sliver of humanity swirling around inside those rotting skulls—then isn’t Herschel’s theory a more appropriate moral response, a more humane approach?

What matters most, from this perspective, is how you treat the other, the stranger. It’s no accident that Herschel is a veterinarian and not a ‘human’ doctor, which would’ve served his initial plot function—saving Carl—just as well, if not better. As a vet, Herschel has to care about the pain and suffering of creatures whose states of mind he can’t know or prove. What matters isn’t testing and determining the degree to which a creature is conscious and then scaling your moral obligations in proportion to that measurement—after all, such a measurement may be in principle impossible—what matters is how you treat others in the absence of such evidence. In short, it depends on a kind of faith, a default assumption that necessitates hospitality, not hostility. The perspective one adopts, the stance one assumes, defines how we relate to animals and the planet as a whole—to other human beings and ultimately oneself.

The Walking Dead

I think this is one of the most relevant and potent themes in The Walking Dead, and I was glad to see it re-emerge in the Season Four opener. In future episodes, it will be interesting to see how they explore it, especially through Carl and Tyreese. I’ll be focused on how they react to the Walkers: how they manage their feelings and control themselves in the crises to come. Walkers are like uncanny mirrors in which characters can glimpse otherwise hidden aspects of their own minds. What do Tyreese and Carl see when they look into the seemingly-soulless eyes of a Walker, and what does that say about the state of their souls? Will they lose themselves? If they do, can they come back?

Sublimity and the Brightside of Being Terrorized

Posted in Consciousness, conspiracy, critical thinking, emotion, Enlightenment, Ethics, Existentialism, fiction, freedom, Freud, God, Gothic, Horror, humanities, Literature, Lovecraft, Lovecraftian, Morality, nihilism, paranoia, Philosophical and Religious Reflections, Philosophy, Philosophy of Mind, psychoanalysis, Psychology, rational animal, reason, Religion, religious, Romanticism, superheroes, terror, Terror Management Theory, The Walking Dead, theory, theory of mind, Uroboros, Zombies on October 6, 2013 by Uroboros

Goya’s The Sleep of Reason Produces Monsters

We live in a terrorized age. At the dawn of the 21st century, the world is not only coping with the constant threat of violent extremism but also facing global warming, potential pandemic diseases, economic uncertainty, Middle Eastern conflicts, the debilitating consequences of partisan politics, and so on. The list grows each time you click on the news. Fear seems to be infecting the collective consciousness like a virus, resulting in a culture of anxiety and a rising tide of helplessness, despair, and anger. In the U.S., symptoms of this chronic unease can be seen in the proliferation of apocalyptic paranoia and conspiracy theories, coupled with record sales of both weapons and tickets for Hollywood’s superhero blockbusters, fables that reflect post-9/11 fears and the desire for a hero to sweep in and save us.

That’s why I want to take the time to analyze some complex but important concepts like the sublime, the Gothic, and the uncanny, ideas which, I believe, can help people get a rational grip on the forces that terrorize the soul. Let’s begin with the sublime.

18th-century philosopher Immanuel Kant

The word is Latin in origin and means rising up to meet a threshold. To Enlightenment thinkers, it referred to those experiences that challenged or transcended the limits of thought, to overwhelming forces that left humans feeling vulnerable and in need of paternal protection. Edmund Burke, one of the great theorists of the sublime, distinguished this feeling from the experience of beauty. The beautiful is tame, pleasant. It comes from the recognition of order, the harmony of symmetrical form, as in the appreciation of a flower or a healthy human body. You can behold them without being unnerved, without feeling subtly terrorized. Beautiful things speak of a universe with intrinsic meaning, tucking the mind into a world that is hospitable to human endeavors. Contrast this with the awe and astonishment one feels when contemplating the dimensions of a starry sky or a rugged, mist-wreathed mountain. From a distance, of course, they can appear ‘beautiful,’ but, as Immanuel Kant points out in Observations on the Feeling of the Beautiful and Sublime, it is a different kind of pleasure because it contains a “certain dread, or melancholy, in some cases merely the quiet wonder; and in still others with a beauty completely pervading a sublime plan.”

This description captures the ambivalence in sublime experiences, moments where we are at once paradoxically terrified and fascinated by the same thing. It is important here to distinguish ‘terror’ from ‘horror.’ Terror is the experience of danger at a safe distance, the potential of a threat, as opposed to horror, which refers to imminent dangers that actually threaten our existence. If I’m standing on the shore, staring out across a vast, breathtaking sea, entranced by the hissing surf, terror is the goose-pimply, weirded-out feeling I get while contemplating the dimensions and unfathomable power before me. Horror would be what I feel if a tsunami reared up and came crashing in. There’s nothing sublime in horror. It’s too intense to allow for the odd mix of pleasure and fear, no gap in the feeling for some kind of deeper revelation to emerge.

Friedrich’s Monk by the Sea

While Burke located the power of the sublime in the external world, in the recognition of an authority ‘out there,’ Kant has a more sophisticated take. Without digging too deeply into the jargon-laden minutiae of his critique, suffice it to say that Kant ‘subjectivizes’ the concept, locating the sublime in the mind itself. I interpret Kant as pointing to a recursive, self-referential quality at the heart of the sublime, an openness that stimulates our imagination in profound ways. When contemplating stormy seas and dark skies, we experience both our nervous system’s anxious reaction to the environment and a weird sense of wonder and awe. Beneath this thrill, however, is a humbling sense of futility and isolation in the face of the Infinite, in the awesome cycles that evaporate seas, crush mountains, and dissolve stars without a care in the cosmos as to any ‘meaning’ they may have for us. Rising up to the threshold of consciousness is the haunting suspicion that the universe is a harsh place devoid of a predetermined purpose that validates its existence. These contradictory feelings give rise to a self-awareness of the ambivalence itself, allowing ‘meta-cognitive’ processes to emerge. This is the mind’s means of understanding the fissure and trying to close the gap in a meaningful way.

Furthermore, by experiencing forms and magnitudes that stagger and disturb the imagination, the mind can actually grasp its own liberation from the deterministic workings of nature, from the blind mechanisms of a clockwork universe. In his Critique of Judgment, Kant says “the irresistibility of [nature’s] power certainly makes us, considered as natural beings, recognize our physical powerlessness, but at the same time it reveals a capacity for judging ourselves as independent of nature and a superiority over nature…whereby the humanity in our person remains undemeaned even though the human being must submit to that dominion.” One is now thinking about their own thinking, after all, reflecting upon the complexity of the subject-object feedback loop, which, I assert, is the very dynamic that makes self-consciousness and freedom possible in the first place. We can’t feel terrorized by life’s machinations if we aren’t somehow psychologically distant from them, and this gap entails our ability to think intelligently and make decisions about how best to react to our feelings.

Van Gogh’s Starry Night

I think this is in line with Kant’s claim that the sublime is symbolic of our moral freedom—an aesthetic validation of our ethical intentions and existential purposes over and above our biological inclinations and physical limitations. We are autonomous creatures who can trust our capacity to understand the cosmos and govern ourselves precisely because we are also capable of being terrorized by a universe that appears indifferent to our hopes and dreams. Seen in this light, the sublime is like a secularized burning bush, an enlightened version of God coming out of the whirlwind and parting seas. It is a more mature way of getting in touch with and listening to the divine, a reasonable basis for faith.

My faith is in the dawn of a post-Terrorized Age. What Kant’s critique of the sublime teaches me is that, paradoxically, we need to be terrorized in order to get there. The concept of the sublime allows us to reflect on our fears in order to resist their potentially debilitating, destructive effects. The antidote is in the poison, so to speak. The sublime elevates these feelings: the more sublime the terror, the freer you are, the more moral you can be. So, may you live in terrifying times.

Friedrich’s Wanderer above the Sea of Fog

What is language? What can we do with it, and what does it do to us?

Posted in 1984, 99%, anxiety, barriers to critical thinking, Big Brother, Brain Science, Consciousness, critical thinking, Dystopia, Dystopian, emotion, freedom, George Orwell, humanities, irrational, Jason Reynolds, limbic system, Moraine Valley Community College, Neurology, Newspeak, Nineteen Eighty-four, Orwell, paranoia, Philosophical and Religious Reflections, Philosophy, Philosophy of Mind, politics, Politics and Media, rational animal, Rationalization, rationalizing animal, reason, resistance to critical thinking, theory, theory of mind, thoughtcrime, Two Minutes Hate, Uncategorized, Uroboros, Zombies on September 20, 2013 by Uroboros

In Orwell’s 1984, INGSOC’s totalitarian control of Oceania ultimately depends on Newspeak, the language the Party is working hard to develop and implement. Once in common use, Newspeak will eliminate the possibility of thoughtcrime, i.e. any idea that contradicts or questions absolute love for and devotion to Big Brother. Newspeak systematically scrubs away all those messy, gray areas from the English language, replacing them with a formal, logically-rigid system. For example, instead of having to decide whether to use ‘awesome,’ ‘fabulous,’ or ‘mind-blowingly stupendous’ to describe a situation, you would algorithmically deploy the Newspeak formula, which reduces the plethora of synonyms you could use to ‘good,’ ‘plusgood,’ or ‘doubleplusgood.’ Furthermore, all antonyms are reduced to ‘ungood,’ ‘plusungood,’ or ‘doubleplusungood.’
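The mechanical quality of this scheme is easy to see if you sketch it as code. The following toy function is my illustration, not anything from Orwell: the `newspeak` helper and its seven-point sentiment scale are hypothetical, invented just to show how every shade of approval collapses onto one root plus fixed prefixes.

```python
def newspeak(sentiment: int) -> str:
    """Map a judgment from -3 (worst) to +3 (best) onto a Newspeak word.

    Hypothetical helper for illustration only: Orwell gives the word
    forms ('good', 'plusgood', 'doubleplusgood' and their 'un-' antonyms),
    not a procedure for choosing among them.
    """
    if sentiment == 0:
        # Newspeak leaves no room for a neutral, undecided judgment.
        raise ValueError("Newspeak admits no neutral judgment")
    # Intensity picks the prefix; only three grades exist.
    prefix = {1: "", 2: "plus", 3: "doubleplus"}[abs(sentiment)]
    # Polarity picks the root; antonyms are just negations of 'good'.
    root = "good" if sentiment > 0 else "ungood"
    return prefix + root

# 'mind-blowingly stupendous' and 'dreadful' both flatten to fixed forms:
print(newspeak(3))   # doubleplusgood
print(newspeak(-1))  # ungood
```

The point of the sketch is that the function is total and deterministic over its tiny range: there is no slot where a speaker's creativity or word-choice could enter, which is exactly the "duckspeak" ideal Syme describes below.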

Syme, a Party linguist, tells Winston, the novel’s rebellious protagonist, that the ultimate goal is to eliminate conscious thought from the speaking process altogether. The Newspeak term for it is ‘duckspeak‘—a more mechanical form of communication that doesn’t require higher-level cognitive functions, like having to pick the word that best expresses your feelings or creating a new one. That sense of freedom and creativity will simply cease to exist once Newspeak has finally displaced ‘Oldspeak.’ “The Revolution will be complete,” Syme tells Winston, “when the language is perfect.” The Proles and the Outer Party (95% of Oceania’s population) will become a mass of mindless duckspeakers, the linguistic equivalent of ‘philosophical zombies’.

Newspeak implies that cognition depends on language—that symbolic communication isn’t merely a neutral means for sending and receiving thoughts. Instead, the words and sentences we use actually influence the way we think about and perceive the world. While Orwell was obviously inspired by the propaganda techniques used by the dictators of his day, perhaps he was also familiar with Nietzsche’s “On Truth and Lies in an Extra-moral Sense” or the work of anthropologists like Boas and Sapir, all of whom embraced some form of what is now called linguistic relativism, a theory which argues for the reality of what Orwell proposed in fiction: we experience the world according to how our language lets us experience it.

Linguist Lera Boroditsky

Linguistic relativism is on the rise in the contemporary study of language. The work of linguists such as Lera Boroditsky and Daniel Everett provides strong empirical data that supports (at least the weak version of) linguistic relativism, challenging the Chomskyan paradigm, which posits a universalist account of how language is acquired, functions, and, by extension, relates to cognition and perception.

In my previous essay on the Uroboric model of mind, I asked about the connection between neuronal processes and symbolic systems: how can an abstract representation impact or determine the outcome of tangible physical processes? How can ionic thresholds in axons and the transmission of hormones across synaptic gaps depend upon the meaning of a symbol? Furthermore, how can we account for this in a naturalistic way that neither ignores the phenomena by defining them out of existence nor distorts the situation by positing physics-defying stuff? In short, how do we give an emergent account of the process?

First, we ask: what is language? Most linguists will say it means symbolic communication: in other words, information exchanges that utilize symbols. But what is a symbol? As you may recall from your grade school days, symbols are things that stand for, refer to, or evoke other things—for example, the red octagonal sign on the street corner prompts your foot to press against the brake, and the letters s, t, o, and p each refer to particular sounds, which, when pronounced together, mean ‘put your foot on the brake.’ Simple enough, right? But the facility with which we use language, and with which we reflexively perceive that usage, belies both the complexity of the process and the powerful effects it has on our thinking.

Cognitive linguists and brain scientists have shown that much of our verbal processing happens unconsciously. Generally speaking, when we use language, words just seem to ‘come to mind’ or ‘show up’ in consciousness. We neither need to consciously think about the meaning of each and every word we use, nor do we have to analyze every variation of tone and inflection to understand things like sarcasm and irony. These complex appraisals and determinations are made subconsciously because certain sub-cortical and cortical systems have already processed the nonverbal signals, the formal symbols, and decoded their meaning. That’s what learning a language equips a brain to do, and we can even identify parts that make major contributions. Broca’s area, for example, is a region in the left frontal lobe that is integral to both language production and comprehension. If a stroke damages Broca’s area, the sufferer may lose the ability not only to produce speech, but to comprehend it as well.

Left-brain language regions

Dr. Jill Bolte Taylor

One of the most publicized cases of sudden ‘language-less-ness’ is that of Dr. Jill Bolte Taylor, the Harvard brain scientist who, in 1996, happened to have a stroke in her left hemisphere, which impacted both the Broca’s and Wernicke’s areas of her brain. She couldn’t remember who she was. She couldn’t use language. Taylor compares it to dying and being reborn, to being an infant in a grown woman’s body. Her insights into a language-less reality shed light on how words and sentences impact cognition. She says she lost her inner voice, that chatter that goes on ‘in’ the head. She no longer organized her experiences in a categorical, analytic way. Reality no longer showed up to her with the same fine-grained detail: it wasn’t divided and subdivided, classified and prejudged in terms of past associations or future expectations, in terms of self and other, us vs. them, and so on. She no longer had an ‘I’ at the center of her experience. Once the left-brain’s anxious, anal-retentive chatter went offline, right-brain processes took over, and, Taylor claims, the world showed up as waves of energy in an interconnected web of reality. She says that, for her at least, it was actually quite pleasant. The world was present in a way that language had simply dialed down and filtered out. [Any of you who are familiar with monotheistic mysticism and/or mindfulness meditation are probably seeing connections to various religious rituals and the oceanic experiences she describes.]

This has profound implications for the study of consciousness. It illustrates how brain anatomy and neural function—purely physical mechanisms—are necessary to consciousness. Necessary, but not sufficient. While we need brain scientists to continue digging deep, locating and mapping the neuronal correlates of consciousness, we also need to factor in the other necessary part of the ‘mystery of consciousness.’ What linguistic relativism and the Bolte Taylor case suggest is that languages themselves, specific symbolic systems, also determine what consciousness is and how it works. It means not only do we need to identify the neuronal correlates of consciousness but the socio-cultural correlates as well. This means embracing an emergent model that can countenance complex systems and self-referential feedback dynamics.

Orwell understood this. He understood that rhetorical manipulation is a highly effective form of mind control and, therefore, of reality construction. Orwell also knew that, if authoritarian regimes could use language to oppress people [20th-century dictators actually used these tactics], then freedom and creativity also depend on language. If, that is, we use it self-consciously and critically, if the language itself has freedom and creativity built into it, and if its users are vigilant in preserving that quality and refuse to become duckspeakers.

The Challenges of Teaching Critical Thinking

Posted in Consciousness, freedom, irrational, Neurology, Philosophical and Religious Reflections, Philosophy, Philosophy of Mind, rational animal, Rationalization, rationalizing animal, reason, Socrates with tags , , , , , , , , on September 6, 2013 by Uroboros
How much power does reason have?


The other day in my critical thinking class, I asked my students about how much control they think they have over their emotions. It’s a crucial issue in the quest to become a better critical thinker. After all, irrational reactions and unfounded feelings are often the main barriers to logical inquiry and sound reasoning.

My argument was that emotions are primal, subconscious judgments our brains make of the environment. I don’t have to consciously order myself to be afraid of a snake and flinch or run. It’s an automatic response. If we feel fear or anger or sadness or joy, it’s because our subcortex has already evaluated the variables, fired up the glands, secreted the hormones, and signaled our organs and muscles to respond in a particular way. All of this happens in the blink of an eye, in the interval of a heartbeat. We don’t really consciously choose how to feel about anything. We might be capable of controlling the actions that flow from our feelings—to stop ourselves from reacting this way or that. But the feelings themselves persist, and you can’t wish them away any more than you can wish away the rain. In short, our feelings occur to us.

Emotions happen.

I was surprised by how many students didn’t agree. Several claimed they can consciously modulate their feelings, even talk themselves into or out of feeling angry or sad or afraid or joyful if they desire. Part of me wanted to cry, “B.S.” If emotional management worked like that, there wouldn’t be billions spent each year on therapists and happy pills. But in the spirit of critical thinking, we put the idea on trial. In the end, I think most of the students came around to the notion that we have less conscious control over our feelings than we’d like to think, especially after I showed them a clip about marketing guru Clotaire Rapaille and his theory of the reptilian brain and how, in America, the cheese is always dead (seriously, click the link and watch the clip—it’s fascinating).

But the initial reaction still puzzles me. Was it the youthful tendency to overestimate one’s abilities? Were they just being provocative, Socratic contrarians? Or is this indicative of a change? I don’t want to make a hasty generalization, but it prompts the question: is there a new psychological self-concept developing among this generation? Do some Millennials have a different phenomenological perspective when it comes to their emotions? Are the medicalization of mental issues and the proliferation of pharmaceutical remedies leading to a new attitude toward human psychology?

As a philosophical person, I’m curious about the history of how humans perceive their own psyches. Plato compared our primal motivations and emotional intuitions to wild horses that reason, the charioteer, tames and steers. Like Nietzsche, I’ve always thought Plato distorted and overrated our rational capacities. Hume said reason is ultimately the slave of our passions. But I’ve always wondered if that isn’t too fatalistic. I guess I lean more towards Hume’s assessment, but if I didn’t still believe in at least the spirit of Plato’s metaphor, then I wouldn’t be teaching critical thinking, right? I mean, what would be the point?

What do you think?

More Human Than Human: Blade Runner and the Radical Ethics of A.I.

Posted in A.I., artificial intelligence, Blade Runner, Brain Science, Christianity, Consciousness, Descartes, Entertainment, Ethics, Film, Jesus, Morality, Neurology, Phillip K Dick, Philosophical and Religious Reflections, Philosophy of Mind, Pop Cultural Musings, Prometheus, Psychology, Religion, Ridley Scott, Science, Science fiction, Uncategorized with tags , , , , on April 27, 2012 by Uroboros

Blade Runner: What makes us human?

Self-consciousness is a secret, or at least its existence is predicated upon one. The privacy of subjective experience has mystified philosophers for centuries and dogged neuroscientists for decades. Science can, in principle, unravel every enigma in the universe, except perhaps for the one that’s happening in your head right now as you see and understand these words. Neurologists can give rich accounts of the visual processing happening in your occipital lobes and locate the cortical regions responsible for parsing the grammar and grasping the concepts. But they can’t objectively identify the ‘you’ part. There’s no neuron for ‘the self.’ No specific neural network which is essentially causing ‘you’—with all your unique memories, interpretive quirks, and behavioral habits—to read these words and have the particular experience you are having.

This problem is illustrated in debates about artificial intelligence. The goal is to create non-biological sentience with a subjective point-of-view, personal memories, and the ability to make choices. The Turing Test is a method for determining whether a machine is truly intelligent, as opposed to just blindly following a program and reacting algorithmically to stimuli. Basically, if a computer or a robot can convince enough people in a blind test that it is intelligent, then it is. That’s the test. The question is, what kind of behaviors and signs would a machine have to have in order to convince you that it’s self-aware?

Voight-Kampff Test

The 1982 film Blade Runner, based on Philip K. Dick’s novel Do Androids Dream of Electric Sheep?, has a version of this called the Voight-Kampff test. The androids in the story, Nexus-6 Replicants, are so close to humans in appearance and behavior that it takes an intense psychological questionnaire coupled with a scan of retinal and other involuntary responses to determine the difference. An anomalous emotional reaction is symptomatic of artificial, as opposed to natural, intelligence. Rachel, the Tyrell Corporation’s most state-of-the-art Replicant, can’t even tell she’s artificial. “How can it not know what it is?” asks Deckard, the bounty hunter charged with ‘retiring’ rogue Replicants. Tyrell says memory implants have given her a sense of self, a personal narrative context through which she views the world. The line between real and artificial humans, therefore, is far from clear. Rachel asks Deckard if he’s ever ‘retired’ a human by mistake. He says he hasn’t, but the fact that Rachel had to ask is telling. Would you want to take this test?

If you think about it, what makes your own inner subjectivity provable to others—and their subjectivity provable to you—is the weird kind of quirks, the idiosyncrasies which are unique to you and would be exceedingly difficult for a program to imitate convincingly. This is what philosophers call the problem of other minds. Self-consciousness is the kind of thing which, by its very nature, cannot be turned inside out and objectively verified. This is what Descartes meant by ‘I think, therefore I am.’ Your own mental experience is the only thing in the world you can be sure of. You could, in principle, be deluded about the appearance of the outer world. You think you’re looking at this computer screen, but how do you know you’re not dreaming or hallucinating or part of a Matrix-like simulation? According to Descartes’ premise, even the consciousness of others could be faked, but you cannot doubt the fact that you are thinking right now, because to doubt this proposition is to actually prove it. All we’re left with is our sense of self. We are thinking things.

Fembot Fatale

The Turing Test, however, rips the rug away from this certainty. If the only proof of intelligence is behavior which implies a mindful agent as its source, are you sure you could prove you’re a mindful, intelligent being to others? Can you really prove it to yourself? Who’s testing whom? Who’s fooling whom?

The uncanny proposition hinted at in Blade Runner is that you, the protagonist of your own inner narrative, may actually be artificial, too. Like Rachel and the not-so-human-after-all Deckard, you may be an android and not know it. Your neural circuitry may not have evolved by pure accident. The physical substrate supporting your ‘sense of self’ may be the random by-product of natural selection, something that just blooms from the brain, like an oak grows out of an acorn—but ‘the you part’ has to be programmed in. The circuitry is hijacked by a cultural virus called language, and the hardware is transformed in order to house a being that may be from this planet, but now lives in its own world. Seen this way, the thick walls of the Cartesian self thin out and become permeable—perforated by motivations and powers not your own, but ‘Society’s.’ Seen in this light, it’s not as hard to view yourself as a kind of robot programmed to behave in particular ways in order to serve purposes which are systematically hidden.

This perspective has interesting moral implications. The typical question prompted by A.I. debates is, if we can make a machine that feels and thinks, does it deserve to be treated with the same dignity as flesh-and-blood human beings? Can a Replicant have rights? I ask my students this question when we read Frankenstein, the first science fiction story. Two hundred years ago, Mary Shelley was already pondering the moral dilemma posed by A.I. Victor Frankenstein’s artificially-intelligent creation becomes a serial-killing monster precisely because his arrogant and myopic creator (the literary critic Harold Bloom famously called Victor a ‘moral idiot’) refuses to treat him with any dignity and respect. He sees his artificial son as a demon, a fiend, a wretch—never as a human being. That’s the tragedy of Shelley’s novel.

Robot, but doesn’t know it

In Blade Runner, the ‘real’ characters come off as cold and loveless, while the artificial ones turn out to be the most passionate and sympathetic. It’s an interesting inversion which suggests that what really makes us human isn’t something that’s reducible to neural wiring or genetic coding—it isn’t something that can be measured or tested through retinal scans. Maybe the secret to ‘human nature’ is that it can produce the kind of self-awareness which empowers one to make moral decisions and treat other creatures, human and non-human, with dignity and respect. The radical uncertainty which surrounds selfhood, neurologically speaking, only heightens the ethical imperative. You don’t know the degree of consciousness in others, so why not assume other creatures are as sensitive as you are, and do unto others as you would have them do unto you.

In other words, how would Jesus treat a Replicant?

It’s Okay to Kill Zombies ‘Cause They Don’t Have Any Feelings.

Posted in Brain Science, Christianity, David Chalmers, Descartes, Entertainment, Ethics, Metaphysics, Morality, Neurology, Philosophical and Religious Reflections, Philosophy, Philosophy of Mind, Pop Cultural Musings, Psychology, The Walking Dead, Zombies with tags , , on March 10, 2012 by Uroboros

You’re sprinting and stumbling through a thick, dark forest. Gun cocked, finger on the trigger. You’re fleeing a zombie horde. You want to survive. They want to eat you. You trip on a rotten limb, tumbling to the ground. Looking up, you’re face-to-face with a zombie. It can’t move, though. A broken leg, severed arm. It’s basically a piece of animated flesh, writhing madly, but not a true threat. You can skirt by it, no problem. What do you do? 

Season Two of The Walking Dead has brought the zombicide issue to the fore. Is it ever wrong to kill zombies? On a practical, survival level, of course, the answer seems morally unambiguous: If a Walker is after you, self-defense necessitates doing what you have to do. 

Self-defense notwithstanding, let’s explore how the characters in TWD view what they’re doing. What’s their ethical stance? As in all zombie fiction, the dominant position is the kill ’em all approach: the living dead aren’t people, which excuses or dismisses any moral qualms one may have about pumping a few shotgun rounds into the side of a Walker’s head. But TWD is too thoughtful a series to let this issue go unexamined.

The existential and moral status of zombies themselves, which has lurked in the background of the series since Season One, moved front and center at the mid-season climax—brought to a head by Herschel, patriarch of the farm. As you’ll recall, Herschel doesn’t share the kill ’em all approach that Rick and company had pretty much taken for granted—and who could blame them? After what happened at their camp and in Atlanta, there’s been little time and reason to contemplate the possible personhood of the herds of Walkers chomping at the bit to kill them.

But, since farm life has slowed things down and afforded the time to think, the issue has slowly but surely lumbered and lunged out into the open. It was just one of the crises interwoven into the drama, but, by Episode Seven, the status of zombies became the key issue, the breaking point in the tension between the main characters and their hosts.

Rick and Herschel's Moral Debate

If you were like me, you couldn’t believe what Herschel was hiding in the barn. At first, I was with the rest of the gang, who thought he was either delusional or up to something sinister. It’s easy to react like Shane and dismiss Herschel’s view: a Walker is a Walker, and the only good Walker is a dead Walker. When Rick confronted him, however, the conviction in Herschel’s reasoning and ethical stance was interesting. From his perspective, a zombie is just a sick human being. What if zombiehood could be cured? What if someone comes up with a serum or antidote to the disease or whatever the TWD mythology eventually puts forth as the cause of the zombocalypse? Behind the evil eyes and pale, rotten skin, Herschel sees a human being waiting to be saved. If that’s your philosophy, then killing a zombie when you don’t have to is murder.

‘Personhood’ is a tougher thing to verify than you might think. We all walk around assuming the people around us have a subjective awareness of the world—have feelings and memories and intelligence, the ability both to make decisions and be held responsible for them. This assumption frames one’s experience of reality. You can criticize or condemn your fellow human beings for their improprieties—but you don’t feel the same way towards your car or laptop if it lets you down. You may, for a second or two, get angry at the laptop for freezing up—might even smack it a few times—but that’s just an instinctual projection of your own emotions. If you actually think your laptop is trying to undermine you, then I’ll post a link for the psychiatrist you need to consult.

It’s okay to hit computers because they don’t have any feelings (yet). But how do you know other people have feelings? Sure, they appear to—they have the body language and can speak about intentions and inner states—but that, too, could be just an appearance. After all, that’s just behavior. It could be a simulation of consciousness, a simulacrum of selfhood. You can’t get ‘inside’ somebody’s head and experience the world from their point of view. We don’t have Being John Malkovich portals into the subjectivity of others (yet). Philosophically and scientifically speaking, the only state of consciousness you can be sure of is your own.  

Rene Descartes, the father of modern philosophy, pointed this out in the 17th century, and it’s been a tantalizing issue ever since. When Descartes said cogito ergo sum—I think, therefore I am—he was trying to establish a rock-solid foundation for philosophy and science, but leave it to a Frenchman to lay an intellectual foundation in quicksand and produce the opposite of what he intended. The problem with the cogito is that—unless you assume the same things Descartes did about God, language, and math—you can’t really be sure about the existence of other cogitos or even the world outside your own head. What one experiences could be like a dream or a fake reality conjured up by a Matrix-style evil genius. ‘I think, therefore I am’ opens up a Pandora’s jar of radical skepticism and solipsism.

So how do you know that other people are conscious like you and not ‘philosophical zombies,’ i.e. beings which behave like they’re conscious but are in fact only organic machines without actual intelligence and free will? Contemporary philosopher of mind David Chalmers has made a career of pointing out the deep quirk—the so-called ‘hard problem’—embedded in the modern concept of personhood. Scientifically speaking, we can only observe and measure objective phenomena. So, what is ‘mind’ to a neurologist? It’s the product of brain states—it’s located in the synaptic mesh of neurons and the electrochemical flow of hormones which happens inside the skull, a purely physical thing which can be observed with an fMRI machine.

This theory was dramatized in Episode Six of Season One by Dr. Jenner at the CDC facility. When he shows Grimes and the gang an actual transformation from human to Walker using (what looks like) an fMRI, Dr. Jenner claims the brain images represent all that one is—the sum total of your memories and dreams, the hopes and fears which define you as a person—and the death of the brain is the irrevocable end of that identity. What is revived through zombification is not that person—it’s not even human. In other words, you are your brain. Brain dead equals you dead. The zombie that emerges may resemble you in some way—it may move its eyes and limbs as if it’s a being with some kind of conscious intentions—but it’s not. At least, that’s Dr. Jenner’s theory, and, up until we meet Herschel, nobody on the show seems to disagree or question it.

Philosopher Thomas Nagel wrote a famous essay on the issue called “What Is It Like to Be a Bat?” which argued we shouldn’t reduce mindfulness to purely physical, objective descriptions because such descriptions, by definition, leave out the very thing we’re trying to understand, namely, what it is like to be that being, what it is like to have that mind. We’re right back in Descartes’ quicksand. The Copenhagen interpretation of quantum physics notwithstanding, we seem to be able to explain everything in nature, at least in principle, in physical, materialist terms, except for the very thing we’re using to explain everything else in nature, i.e. our own minds.

These days the debate has become divisive, even ideological. Which side are you on? Are you a materialist—do you believe the mind is either caused by brain states or so closely correlated to them as to be functionally indistinguishable—or are you still haunted by Descartes’ cogito and believe the mind is not just an illusory ghost in the machine? Do you believe there’s something irreducible to the self, maybe even soulful or spiritual? If you do, you’d be labeled a dualist, which, in contemporary philosophy of mind, is a euphemism for superstitious.                         

I think Herschel’s theory offers another way of approaching the problem, one that sidesteps the Cartesian quicksand. After all, Herschel’s not interested in proving scientifically that he’s right about zombiehood. For him, it’s a given: the creatures corralled in the barn aren’t soulless ghouls who can be exterminated with impunity. They’re family members and neighbors who happen to be sick and might someday be cured. He can’t kill them. What’s intriguing about his approach is how it bypasses the metaphysical problem in favor of the ethical question. If you can’t prove beyond a shadow of a doubt that zombies aren’t conscious—devoid of some sliver of humanity swirling around inside their skulls—then isn’t Herschel’s theory a more appropriate moral response, a more humane approach?

Zombies on leashes?

If a zombie attacks, and you can subdue it without scattering its brains across the grass, then why not leash it and put it in the barn like Herschel did? It’s an ethically-complex question with implications that go beyond the dos and don’ts of zombocalypse survival. It answers the question of consciousness and selfhood not by getting bogged down in the metaphysical quicksand, but by recognizing the ambiguous metaphysics and essentially saying, until you neurologists and philosophers get a better grip on the issue, we’re going to treat the zombie-other as if it’s a conscious being deserving of humane and dignified treatment. The show roots Herschel’s ethics in his religious beliefs, his faith. Agnostic or atheist viewers might find this a facile cop-out, more a symptom of intellectual weakness than a sign of moral integrity. But I don’t think Herschel’s ethics should be dismissed as merely the product of old-timey superstitions. In a situation where there isn’t absolute certainty—where empirical observation and rational explanations can give you two valid, but logically irreconcilable descriptions—isn’t some kind of faith necessary? The zombie dilemma on The Walking Dead echoes the actual debate going on in neurology and philosophy of mind and reminds me of the lines from Albee’s Who’s Afraid of Virginia Woolf? about truth and illusion. We don’t know the difference…but we must carry on as though we did. Amen.

Herschel has decided to carry on as though the zombies are persons who deserve to be treated with some degree of dignity. His faith justifies his moral stance; it’s an act of religious compassion. Even if zombies seem like enemies, he must love them. If they terrify and enrage him, he must pull the beam from his own eye, judge not, and learn to care for his zombie brothers and sisters—in a way which doesn’t threaten the lives of his non-zombie kin, of course. Hence the leashes and barn accommodations. It may not be room and board at a cozy bed and breakfast, but it’s certainly more humane than Shane’s handgun or one of Daryl’s arrows.

There is something to a Sermon on the Mount ethical approach to such quandaries. If we can’t know with scientific certainty the objective nature of consciousness, we shouldn’t be so quick to jump to conclusions and endorse policies, especially violent ones, which depend on assumptions about subjectivity, or the lack thereof. The greatest atrocities in history all begin with dehumanizing the other—by drawing a line between ‘us’ and ‘them.’ Religious beliefs always cut both ways—sometimes they reinforce that line—they sharpen the blade—and sometimes they undermine it by redefining and expanding the definition of what counts as a human being—of who deserves to be treated with respect.

I mean, what would Jesus do to a zombie? Wait, didn’t Jesus become a zombie? (Sorry, couldn’t resist ;)

What matters is how you treat the other, the stranger. I think it’s no accident that Herschel is a veterinarian and not a ‘human’ doctor, which would’ve served his initial plot function—saving Carl—just as well, if not better. As a vet, Herschel has to care about the pain and suffering of creatures whose states of mind he can’t know or prove. He has to carry on just the same. What matters most is not trying to test and determine the degree to which a creature is conscious and then scaling your moral obligations in proportion to that measurement—after all, such a measurement may be impossible in principle—what matters is how you treat others in the absence of such evidence. In short, it depends on a kind of faith, a default assumption which necessitates hospitality, not hostility. In an uncertain world, it’s the right thing to do—not only what Jesus might do, but a logically-consistent, rationally-valid thing to do.

The implications are profound. The perspective we adopt, the stance we assume, defines how we relate to animals and the planet as a whole—to other human beings and ultimately oneself.

Of course, by Episode Eight, Herschel backs away from his radical ethical stance. In a state of despair, he regrets putting them in the barn—says it was his way of avoiding the grief over losing his wife. Maybe so. But something tells me that’s just the despair talking. Whether Herschel returns to his old perspective or embraces a kill ’em all approach, I don’t think the issue itself is dead and buried.

My hope is that it will be raised again, and that it’ll have something to do with what Dr. Jenner whispered to Rick at the end of Season One. After all, the suicidal doctor told Rick that all the survivors are carrying a latent form of the zombie virus. Maybe they’ll meet another scientist down the road who can cure the plague. If this scenario or something like it plays out, then the show will have to confront the zombies-are-people-too versus kill ’em all question again.
