Archive for the Consciousness Category

Fatal Curiosity: Nietzsche, Lovecraft, and the Terror of the Known

Posted in Consciousness, Existentialism, Gothic, Horror, irrational, Literature, Lovecraft, Lovecraftian, Metaphor, Metaphysics, Myth, Nietzsche, Philosophical and Religious Reflections, Philosophy, Philosophy of Mind, Pop Cultural Musings, Pop culture, Prometheus, Psychology, rationalizing animal, Religion, religious, Repression, resistance to critical thinking, short story, Speculative fiction, terror, Uncategorized on October 30, 2013 by Uroboros

Once upon a time, in some out of the way corner of that universe which is dispersed into numberless twinkling solar systems, there was a star upon which clever beasts invented knowing. That was the most arrogant and mendacious minute of ‘world history,’ but nevertheless, it was only a minute. After nature had drawn a few breaths, the star cooled and congealed, and the clever beasts had to die.

Friedrich Nietzsche (1844-1900)

If you’re a fan, you might think this an excerpt from an H.P. Lovecraft story, one of his twisted tales about erudite, curious men who learn too much about the nature of reality and are either destroyed or deeply damaged by what they discover. But this is actually the opening to Nietzsche’s essay “On Truth and Lies in an Extra-moral Sense” (1873), a biting critique of the epistemological pretentiousness he finds running rampant through Western philosophy. Nietzsche is an iconoclastic philosopher, hammering away at venerated ideas, slashing through sacred assumptions. He gleefully turns traditional theories on their heads, challenging our beliefs, disturbing our values—an intellectual calling that has much in common with H.P. Lovecraft’s literary mission. Lovecraft’s favorite theme is what he calls cosmic indifferentism. If Lovecraft has a philosophy, it is this: the universe was not created by a divine intelligence who infused it with an inherent purpose compatible with humanity’s most cherished existential desires. The cosmos is utterly indifferent to the human condition, and all of his horrific monsters are metaphors for this indifference.

Nietzsche and Lovecraft are both preoccupied with the crises this conundrum generates.

H.P. Lovecraft (1890-1937)

“What does man actually know about himself?” Nietzsche asks, “Does nature not conceal most things from him?” With an ironic tone meant to provoke his readers, he waxes prophetic: “And woe to that fatal curiosity which might one day have the power to peer out and down through a crack in the chamber of consciousness.” In Lovecraft’s “From Beyond” (1934) this ‘fatal curiosity’ is personified in the scientist Crawford Tillinghast. “What do we know of the world and the universe about us?” Tillinghast asks his friend, the story’s unnamed narrator. “Our means of receiving impressions are absurdly few, and our notions of surrounding objects infinitely narrow. We see things only as we are constructed to see them, and can gain no idea of their absolute nature.” His Promethean quest is to build a machine that lets humans transcend the inherent limitations of our innate perceptual apparatus, see beyond the veil of appearances, and experience reality in the raw. From a Nietzschean perspective, Tillinghast wants to undo the effect of a primitive but deceptively potent technology: language.

In “On Truth and Lies in an Extra-moral Sense,” Nietzsche says symbolic communication is the means by which we transform vivid, moment-to-moment impressions of reality into “less colorful, cooler concepts” that feel “solid, more universal, better known, and more human than the immediately perceived world.” We believe in universal, objective truths because, once filtered through our linguistic schema, the anomalies, exceptions, and border-cases have been marginalized, ignored, and repressed. What is left are generic conceptual properties through which we perceive and describe our experiences. “Truths are illusions,” Nietzsche argues, “which we have forgotten are illusions.” We use concepts to determine whether or not our perceptions, our beliefs, are true, but all concepts, all words, are “metaphors that have become worn out and have been drained of sensuous force, coins which have lost their embossing and are now considered as metal and no longer as coins.” [For more analysis of this theory of language, read my essay on the subject.]

Furthermore, this process happens unconsciously: the way our nervous system instinctually works guarantees that what we perceive consciously is a filtered picture, not reality in the raw. As a result, we overlook our own creative input and act as if some natural or supernatural authority ‘out there’ puts these words in our heads and compels us to believe in them. Lovecraft has a similar assessment. In “Supernatural Horror in Literature” (1927), his essay on the nature and merits of Gothic and weird storytelling, he says the kind of metaphoric thinking that leads to supernatural beliefs is “virtually permanent so far as the subconscious mind and inner instincts are concerned…there is an actual physiological fixation of the old instincts in our nervous tissue,” hence our innate propensity to perceive superhuman and supernatural causes when confronting the unknown. Nietzsche puts it like this: “All that we actually know about these laws of nature is what we ourselves bring to them…we produce these representations in and from ourselves with the same necessity with which the spider spins.” This, of course, applies to religious dogmas and theological speculations, too.

From Beyond (1986 film adaptation)

In “From Beyond,” Crawford Tillinghast wants to see “things which no breathing creature has yet seen…overleap time, space, and dimensions, and…peer to the bottom of creation.” The terror is in what slips through the rift and runs amok in this dimension. His scientific triumph quickly becomes a horrific nightmare, one that echoes Nietzsche’s caveat about attaining transgressive knowledge: “If but for an instant [humans] could escape from the prison walls” of belief, our “‘self consciousness’ would be immediately destroyed.”

Herein lies the source of our conundrum, the existential absurdity, the Scylla and Charybdis created by our inherent curiosity: we need to attain knowledge to better our chances of adapting to our ecological conditions and passing our genes along to the next generation, and yet this very drive can bring about our own destruction. It’s not simply that we can unwittingly discover fatal forces. The danger comes when the pursuit of knowledge moves beyond seeking the information needed to survive and gets recast in terms of discovering values and laws that supposedly pertain to the nature of the cosmos itself. Nietzsche and Lovecraft agree that this inevitably leads to existential despair: either we continue to confuse our anthropomorphic projections with the structure of reality itself, and keep wallowing in delusion and ignorance as a result, or we swallow the nihilistic pill and accept that we live in an indifferent cosmos that always manages to wriggle out of even our most clear-headed attempts to grasp and control it. So it’s a question of what’s worse: the terror of the unknown or the terror of the known?

Nietzsche is optimistic about the existential implications of this dilemma. There is a third option worth pursuing: in a godless, meaningless universe, we have poetic license to become superhuman creatures capable of creating the values and meanings we need and want. I don’t know if Lovecraft is confident enough in human potential to endorse Nietzsche’s remedy, though. If the words of Francis Thurston, the protagonist from his most influential story, “The Call of Cthulhu” (1928), are any indication of his beliefs, then Lovecraft doesn’t think our epistemological quest will turn out well:

“[S]ome day the piecing together of dissociated knowledge will open up such terrifying vistas of reality…we shall either go mad from the revelation or flee from the light into the peace and safety of a new dark age.”

“Cthulhu Rising” by Somniturne

The Philosophy of Decomposition: Poe and the Perversity of the Gothic Mind

Posted in Ancient Greek, anxiety, Aristotle, barriers to critical thinking, Christianity, Consciousness, ecology, emotion, Enlightenment, Ethics, fiction, French Revolution, Freud, God, Goth, Gothic, Horror, horror fiction, irrational, Jesus, Literature, Morality, Philosophy, psychoanalysis, Psychology, rational animal, Religion, religious, Repression, resistance to critical thinking, Romanticism, Science, Speculative fiction, terror, tragedy, Uroboros, Writing on October 27, 2013 by Uroboros

Whether you think Edgar Allan Poe’s stories are expertly crafted explorations of the dark side of human nature or morbid, overwrought melodramas, there is no doubt his work has had a tremendous impact on Western culture. Probably his most important contribution, apart from establishing the contemporary short story format and inventing the detective genre, is revitalizing the Gothic genre and pushing horror fiction in a more philosophically interesting direction. His stories are so enduring and influential because of the conceptual depth he added to generic tropes, redefining literature in the process. He accomplished this feat by perverting the Gothic.

Edgar Allan Poe (1809-49), Master of Gothic literature

By the time Poe arrived on the scene, Gothic fiction had already fossilized and become fodder for self-parody. What started with the fantastic absurdities of Horace Walpole’s The Castle of Otranto (1764) and culminated in the speculative complexity of Ann Radcliffe’s The Mysteries of Udolpho (1794) eventually led to Northanger Abbey (1817), Jane Austen’s metafictional send-up of what had by then become stale conventions: crumbling castles, tormented heroines, supernatural entities, and family curses. Although the external trappings of Gothic plots may have fallen into ruin, its themes remained relevant. According to Joyce Carol Oates, a master of the genre in her own right, Gothic fiction explores the fragmentation of the alienated mind by inscrutable historical and biological forces that overwhelm one’s ability to rationally understand the world and make intelligent choices. This makes it a critical antidote to naïve utopian visions of the future inspired by the Enlightenment, and of particular interest to American culture, whose intellectual basis is rooted in the rational pursuit of happiness. ‘Gothic’ suggests the fear of something primal and regressive that threatens to undermine mental and social stability. In order to be culturally relevant again, though, Gothic literature needed a writer who could reanimate its tropes. It needed a morbid, hypersensitive, and arrogant genius named Edgar Allan Poe.

Poe’s key twist is turning the tropes inward and starting with the macabre landscape within—“the terror of the soul,” he calls it. By the 1830s, Poe is focused on composing short fiction, crafting tightly-constructed tales, rendered in dense, pompous prose, spewing from the cracked psyches of unreliable narrators. This is the dark heart of many of his best stories: “Ligeia” (1838), “William Wilson” (1839), “The Black Cat” (1843), “The Tell-Tale Heart” (1843), and “The Cask of Amontillado” (1846), just to name a few (of course, his most accomplished story, “The Fall of the House of Usher” (1839), flips this dynamic: an unnamed and relatively reasonable narrator details the psychic disintegration of Roderick Usher). Poe’s disturbed, epistemologically-challenged protagonists aren’t the true innovation. Marlowe and Shakespeare pioneered that literary territory centuries before. The element that Poe adds—the novelty that both revitalizes and Americanizes the Gothic—is what Poe himself calls “the spirit of perverseness.”

The narrator in “The Black Cat” puts forth this concept to explain his violent deeds. He says perversity is “one of the primitive impulses of the human heart—one of the indivisible primary faculties…which give direction to the character of Man.” What is its function? It is the “unfathomable longing of the soul to vex itself,” the narrator says, “a perpetual inclination, in the teeth of our best judgment” to commit a “vile or a silly action” precisely because we believe it to be ‘vile’ or ‘silly.’ In “The Imp of the Perverse” (1845), the narrator claims that perversity is “a radical, primitive, irreducible sentiment,” so deep and pervasive, that it is ultimately immune to the prescriptions of the analytical mind. In other words, Poe identified the disruptive and neurotic effects of ‘the Unconscious’ half a century before Freud burst onto the scene.

While these narrators claim that philosophers have ignored man’s irrational inclinations, we shouldn’t assume Poe, himself a well-read scholar, wasn’t influenced by obvious precursors to ‘the spirit of perverseness,’ namely Aristotle and St. Augustine. In the Nicomachean Ethics, Aristotle posits his theory of akrasia, the vice of incontinence, i.e. the inability to control oneself and do the virtuous thing even when one knows it is the right choice. This is his corrective to the Socratic-Platonic dictum that to know the good is to do the good: no one willingly does evil. To Aristotle, this is a distorted view of the human condition. We can know theoretically what the virtuous choice is—wisdom Aristotle calls sophia—but that doesn’t automatically compel us to have phronesis, or practical wisdom, which is the ability to do the good. In other words, there is a gap between knowledge and action, a notion that surfaces again in Aristotle’s Poetics. In his analysis of drama, Aristotle identifies hamartia as a key characteristic of the tragic hero, referring to the flaws in judgment that lead to a character’s ultimate downfall. An archery metaphor that means “to miss the mark,” hamartia becomes the main word New Testament writers use to translate the Jewish concept of sin into Greek (they weren’t the first to do this: writers of the Septuagint, the 2C BCE Greek translation of Hebrew scripture, had already made this move). By the fifth century CE, St. Augustine, the most influential Christian theologian of late antiquity, formulates his doctrine of original sin, describing humanity’s lack of self-control as innate, embodied depravity. For Augustine, when Adam and Eve disobeyed God, they condemned their progeny to bondage, chaining the human spirit to this corrupt, uncontrollable, and ultimately decaying flesh. Only Christ’s sacrifice and God’s loving grace, Augustine assures us, can liberate the spirit from this prison.

This is part of the philosophical lineage behind perverseness, despite his narrators’ claims to the contrary. There is, however, some truth to the critique if seen from a mid-19C perspective. From Descartes right through to Locke, ‘Reason’ is heralded as humanity’s salvation (of course, Hume and Rousseau poke skeptical holes in 18C Europeans’ over-inflated, self-aggrandizing mythology; Kant manages to salvage some of the optimism, but has to sacrifice key epistemic conceits in the process). But enlightened humanistic confidence looks like hubris to Romantic writers and artists, especially in the wake of the French Revolution and the international traumas it spawned. This is the mindset Poe resonates with: one that is highly skeptical of the ‘Man-is-the-rational-animal’ mythos. Anyone familiar with his biography can see why he gravitates toward a dark worldview. As a critic, he loves savaging fellow writers whose dispositions strike him as too sunny, and as a storyteller, his characters often confront—sometimes ironically, sometimes tragically—the limits of reason, a capacity Poe calls (I think with a tongue-in-cheek ambivalence) ‘ratiocination.’

Dark reflections of a perverse mind

The ‘spirit of perverseness’ implies that neither divine ‘Grace’ nor humanistic ‘Reason’ can save us from a life of terror and suffering, especially when we ignore and repress our essential sinfulness. Whether you view history through a biblical or Darwinian lens, one thing is clear: humans aren’t naturally inclined to seek rational knowledge any more than we are given to loving and respecting each other universally. Modern cognitive science and psychology have shown us that the mind evolved to assist in feeding, procreation, and, of course, to protect the body from danger—not to seek objective truths. It evolved to help us band together in small tribal circles, fearing and even hating those who exist outside that circle. Over time we’ve been able to grasp how much better life would be if only we could rationally control ourselves and universally respect each other—and yet “in the teeth of our best judgment” we still can’t stop ourselves from committing vile and silly actions. Self-sabotage, Poe seems to argue, is our default setting.

Poe shifts Gothic terror from foggy graveyards and dark abbeys to broken brains and twisted minds. The true threats aren’t really lurking ‘out there.’ They’re stirring and bubbling from within, perturbing and overwhelming the soul, often with horrifying results. A Gothic mind lives in a Gothicized world—personifying its surroundings in terms of its own anxious and alienated disposition. ‘Evil’ only appears to be ‘out there.’ As literary and ecological theorist Timothy Morton points out, evil isn’t in the eye of the beholder. Evil is the eye of the beholder who frets over the corruption of the world without considering the perverseness generated by his own perceptual apparatus. It’s an Uroboric feedback loop that, left to its own devices, will spin out of control and crumble to pieces. The most disturbing implication of Poe-etic perversity is the sense of helplessness it evokes. Even when his characters are perceptive enough to diagnose their own disorders, they are incapable of stopping the Gothic effect. This is how I interpret the narrator’s ruminations in “The Fall of the House of Usher”:

 What was it…that so unnerved me in the contemplation of the House of Usher? It was a mystery all insoluble; nor could I grapple with the shadowy fancies that crowded upon me as I pondered. I was forced to fall back upon the unsatisfactory conclusion, that while, beyond doubt, there are combinations of very simple natural objects which have the power of thus affecting us, still the analysis of this power lies among considerations beyond our depth. It was possible, I reflected, that a mere different arrangement of the particulars of the scene, of the details of the picture, would be sufficient to modify, or perhaps to annihilate its capacity for sorrowful impression…There can be no doubt that the consciousness of the rapid increase of my superstition…served mainly to accelerate the increase itself. Such, I have long known, is the paradoxical law of all sentiments having terror as a basis. And it might have been for this reason only, that, when I again uplifted my eyes to the house itself, from its image in the pool, there grew in my mind a strange fancy…so ridiculous, indeed, that I but mention it to show the vivid force of the sensations which oppressed me. I had so worked upon my imagination as really to believe that about the whole mansion and domain there hung an atmosphere peculiar to themselves and their immediate vicinity—an atmosphere which had no affinity with the air of heaven, but which had reeked up from the decayed trees, and the gray wall, and the silent tarn—a pestilent and mystic vapour, dull, sluggish, faintly discernible, and leaden-hued…

Fall of the House of Usher (1839)

Reflections on The Walking Dead

Posted in Apocalypse, Brain Science, Consciousness, Descartes, emotion, Ethics, Existentialism, God, Horror, humanities, Metaphor, Metaphysics, Monster, Monsters, Morality, Philosophical and Religious Reflections, Philosophy, Philosophy of Mind, Pop Cultural Musings, Pop culture, Psychology, Religion, religious, Science, State of nature, terror, The Walking Dead, theory of mind, Zombies on October 19, 2013 by Uroboros

WARNING: SPOILERS. The Walking Dead’s violent, post-apocalyptic setting always makes me wonder: what kind of person would I be under circumstances like that? Given what one has to do in order to survive, could I still look at myself in the mirror and recognize the person gazing back at me? Would I even want to?

Critics sometimes complain about the show’s pacing and quieter, more reflective scenarios, but the writers should be applauded for slowing the story down, developing the characters, and exploring the thematic implications of their struggles. The Walking Dead knows how to alternate between terror—the dreaded threat of the unseen, the lurking menace yet to be revealed—and horror, the moment when the monster lunges from the bushes and takes a bite. Utilizing this key dynamic means including lots of slower, quieter scenes. Setting up psychological conflicts and tweaking character arcs enhances the terror because we are more invested in the outcomes—we care about what is lurking around the corner, and, when the horror is finally unleashed, the gore is all the more terrifying because we know more about the victims. It’s a refreshing change of pace from the hyperactivity you get in shows like American Horror Story, a series that flows like a sugar rush—sleek, Gothic concoctions for the Ritalin Generation.

The slow-burn approach also allows viewers to reflect on the show’s themes, like the existential and moral status of the Walkers themselves. During Season Two, Herschel didn’t share the kill ’em all approach that Rick and company had pretty much taken for granted—and who could blame them? After what happened in Atlanta in Season One, there was little reason to contemplate the possible personhood of the Walkers chomping at the bit to eat them. But, when farm life slowed things down and gave characters more time to reflect on their situation, the issue slowly but surely lumbered out into the open and became the turning point of the season.

Rick and Herschel’s Moral Debate

When Rick confronted Herschel about hiding his zombified relatives in the barn, the conviction in Herschel’s moral reasoning was hard to dismiss. From his perspective, a zombie was just a sick human being: behind the blank eyes and pale, rotting skin, Herschel saw a human being waiting to be saved. After all, what if zombiehood could be cured? If that’s your philosophy, then killing a zombie when you don’t have to would be murder. By the end of Season Two, of course, we learn that everybody is infected and thus destined to be a zombie. We’re all the Walking Dead, so to speak. In Season Three, even the duplicitous, devious Governor struggles with the issue. As much as we grow to hate him as a brutal tyrant, he’s also a loving father who can’t let go of his daughter. She’s not just a zombie to him. In the Season Four opener, the issue resurfaced again with Tyreese’s ambivalence about having to kill Walkers all day at the prison fence and then later when Carl rebuked the other kids for naming them. “They’re not people, and they’re not pets,” he tells them. “Don’t name them.” This is after Rick warned him about getting too attached to the pig, which he’d named Violet. To Carl, animals are more like people than Walkers are.

‘Personhood’ is a sticky philosophical issue. We all walk around assuming other people also have a subjective awareness of the world—have feelings and memories and intelligence, can make decisions and be held responsible for them. This assumption, which philosophers call ‘theory of mind,’ frames our experience of reality. But, some philosophers are quick to ask: how do you know others really have feelings and intelligent intentions? Sure, they have the body language and can speak about their inner states, but couldn’t that be mere appearance? After all, that’s just behavior. It could be a simulation of consciousness, a simulacrum of selfhood. You can’t get ‘inside’ somebody’s head and experience the world from their point of view. We don’t have Being John Malkovich portals into the subjectivity of others (yet). Philosophically and scientifically speaking, the only state of consciousness you can be sure of is your own.

That was what René Descartes, the highly influential 17th-century philosopher, meant when he said cogito ergo sum—I think, therefore I am. He was trying to establish a foundation for modern philosophy and science by basing it on the one thing in the world everyone can be absolutely certain of, i.e. one’s own consciousness, which in turn has the rational capacities to understand the clock-like machinations of the physical world. Descartes, therefore, posits a dualistic metaphysics with physical stuff on one side of the ontological divide and mental stuff on the other. Minds can use brains and bodies to get around and know a world made up of mindless stuff. Only humans and God have souls and can ‘know’ what is happening, can understand what is going on.

The problem with Descartes’ cogito is that—unless you assume the same things Descartes did about God and math—you can’t really be sure about the existence of other cogitos or even the world outside your own head. You could be dreaming or in a fake reality conjured up by a Matrix-style evil genius. ‘I think, therefore I am’ opens up a Pandora’s jar of radical skepticism and solipsism. How do you really know that others aren’t ‘philosophical zombies,’ i.e. beings that behave like they’re conscious but are really only organic machines without subjective experiences and free will? This is what some philosophers call the ‘hard problem’: how do brain states generated by the synaptic mesh of neurons and the electrochemical flow inside the skull—purely physical processes that can be observed objectively with an fMRI machine—cause or correlate to subjective awareness—to feelings, images, and ideas that can’t be seen in an fMRI?

This theory was dramatized during Season One by Dr. Jenner when he showed an fMRI rendered transformation from human to Walker. He said the brain holds the sum total of the memories and dreams, the hopes and fears that make you who you are—and the death of the brain is the irrevocable end of that identity. What is revived through zombification is not that person—it’s not even human. In other words, you are your brain. The zombie that emerges may resemble you in some uncanny way—but it’s not really you. That’s of course most characters’ default theory until we meet Herschel and get an alternative perspective. He’s not interested in scientifically or philosophically ‘proving’ the personhood of Walkers. They’re family members and neighbors who happen to be sick and might someday be cured. He can’t kill them. What’s intriguing is how his response bypasses the metaphysical problem and goes right to the ethical question. If you can’t prove beyond a shadow of a doubt that zombies aren’t conscious—that there isn’t some sliver of humanity swirling around inside those rotting skulls—then isn’t Herschel’s theory a more appropriate moral response, a more humane approach?

What matters most, from this perspective, is how you treat the other, the stranger. It’s no accident that Herschel is a veterinarian and not a ‘human’ doctor, which would’ve served his initial plot function—saving Carl—just as well, if not better. As a vet, Herschel has to care about the pain and suffering of creatures whose states of mind he can’t know or prove. What matters isn’t testing and determining the degree to which a creature is conscious and then scaling your moral obligations in proportion to that measurement—after all, such a measurement may be in principle impossible—what matters is how you treat others in the absence of such evidence. In short, it depends on a kind of faith, a default assumption that necessitates hospitality, not hostility. The perspective one adopts, the stance one assumes, defines how we relate to animals and the planet as a whole—to other human beings and ultimately oneself.

The Walking Dead

I think this is one of the most relevant and potent themes in The Walking Dead, and I was glad to see it re-emerge in the Season Four opener. In future episodes, it will be interesting to see how they explore it, especially through Carl and Tyreese. I’ll be focused on how they react to the Walkers: how they manage their feelings and control themselves in the crises to come. Walkers are like uncanny mirrors in which characters can glimpse otherwise hidden aspects of their own minds. What do Tyreese and Carl see when they look into the seemingly-soulless eyes of a Walker, and what does that say about the state of their souls? Will they lose themselves? If they do, can they come back?

Sublimity and the Bright Side of Being Terrorized

Posted in Consciousness, conspiracy, critical thinking, emotion, Enlightenment, Ethics, Existentialism, fiction, freedom, Freud, God, Gothic, Horror, humanities, Literature, Lovecraft, Lovecraftian, Morality, nihilism, paranoia, Philosophical and Religious Reflections, Philosophy, Philosophy of Mind, psychoanalysis, Psychology, rational animal, reason, Religion, religious, Romanticism, superheroes, terror, Terror Management Theory, The Walking Dead, theory, theory of mind, Uroboros, Zombies on October 6, 2013 by Uroboros

Goya’s The Sleep of Reason Produces Monsters

We live in a terrorized age. At the dawn of the 21st century, the world is not only coping with the constant threat of violent extremism; we also face global warming, potential pandemic diseases, economic uncertainty, Middle Eastern conflicts, the debilitating consequences of partisan politics, and so on. The list grows each time you click on the news. Fear seems to be infecting the collective consciousness like a virus, resulting in a culture of anxiety and a rising tide of helplessness, despair, and anger. In the U.S., symptoms of this chronic unease can be seen in the proliferation of apocalyptic paranoia and conspiracy theories, coupled with the record sales of both weapons and tickets for Hollywood’s superhero blockbusters, fables that reflect post-9/11 fears and the desire for a hero to sweep in and save us.

That’s why I want to take the time to analyze some complex but important concepts like the sublime, the Gothic, and the uncanny, ideas which, I believe, can help people get a rational grip on the forces that terrorize the soul. Let’s begin with the sublime.

18C philosopher Immanuel Kant

The word is Latin in origin and means rising up to meet a threshold. To Enlightenment thinkers, it referred to those experiences that challenged or transcended the limits of thought, to overwhelming forces that left humans feeling vulnerable and in need of paternal protection. Edmund Burke, one of the great theorists of the sublime, distinguished this feeling from the experience of beauty. The beautiful is tame, pleasant. It comes from the recognition of order, the harmony of symmetrical form, as in the appreciation of a flower or a healthy human body. You can behold them without being unnerved, without feeling subtly terrorized. Beautiful things speak of a universe with intrinsic meaning, tucking the mind into a world that is hospitable to human endeavors. Contrast this with the awe and astonishment one feels when contemplating the dimensions of a starry sky or a rugged, mist-wreathed mountain. From a distance, of course, they can appear ‘beautiful,’ but, as Immanuel Kant points out in Observations on the Feeling of the Beautiful and Sublime, it is a different kind of pleasure because it contains a “certain dread, or melancholy, in some cases merely the quiet wonder; and in still others with a beauty completely pervading a sublime plan.”

This description captures the ambivalence in sublime experiences, moments where we are at once paradoxically terrified and fascinated by the same thing. It is important here to distinguish ‘terror’ from ‘horror.’ Terror is the experience of danger at a safe distance, the potential of a threat, as opposed to horror, which refers to imminent dangers that actually threaten our existence. If I’m standing on the shore, staring out across a vast, breathtaking sea, entranced by the hissing surf, terror is the goose-pimply, weirded-out feeling I get while contemplating the dimensions and unfathomable power before me. Horror would be what I feel if a tsunami reared up and came crashing in. There’s nothing sublime in horror. It’s too intense to allow for the odd mix of pleasure and fear, no gap in the feeling for some kind of deeper revelation to emerge.

Friedrich’s Monk by the Sea

While Burke located the power of the sublime in the external world, in the recognition of an authority ‘out there,’ Kant has a more sophisticated take. Without digging too deeply into the jargon-laden minutiae of his critique, suffice it to say that Kant ‘subjectivizes’ the concept, locating the sublime in the mind itself. I interpret Kant as pointing to a recursive, self-referential quality at the heart of the sublime, an openness that stimulates our imagination in profound ways. When contemplating stormy seas and dark skies, we experience both our nervous system’s anxious reaction to the environment and a weird sense of wonder and awe. Beneath this thrill, however, is a humbling sense of futility and isolation in the face of the Infinite, in the awesome cycles that evaporate seas, crush mountains, and dissolve stars without a care in the cosmos as to any ‘meaning’ they may have for us. Rising up to the threshold of consciousness is the haunting suspicion that the universe is a harsh place devoid of a predetermined purpose that validates its existence. These contradictory feelings give rise to a self-awareness of the ambivalence itself, allowing ‘meta-cognitive’ processes to emerge. This is the mind’s means of understanding the fissure and trying to close the gap in a meaningful way.

Furthermore, by experiencing forms and magnitudes that stagger and disturb the imagination, the mind can actually grasp its own liberation from the deterministic workings of nature, from the blind mechanisms of a clockwork universe. In his Critique of Judgment, Kant says “the irresistibility of [nature’s] power certainly makes us, considered as natural beings, recognize our physical powerlessness, but at the same time it reveals a capacity for judging ourselves as independent of nature and a superiority over nature…whereby the humanity in our person remains undemeaned even though the human being must submit to that dominion.” One is now thinking about one’s own thinking, after all, reflecting upon the complexity of the subject-object feedback loop, which, I assert, is the very dynamic that makes self-consciousness and freedom possible in the first place. We can’t feel terrorized by life’s machinations if we aren’t somehow psychologically distant from them, and this gap entails our ability to think intelligently and make decisions about how best to react to our feelings.

Van Gogh’s Starry Night

I think this is in line with Kant’s claim that the sublime is symbolic of our moral freedom—an aesthetic validation of our ethical intentions and existential purposes over and above our biological inclinations and physical limitations. We are autonomous creatures who can trust our capacity to understand the cosmos and govern ourselves precisely because we are also capable of being terrorized by a universe that appears indifferent to our hopes and dreams. Seen in this light, the sublime is like a secularized burning bush, an enlightened version of God coming out of the whirlwind and parting seas. It is a more mature way of getting in touch with and listening to the divine, a reasonable basis for faith.

My faith is in the dawn of a post-Terrorized Age. What Kant’s critique of the sublime teaches me is that, paradoxically, we need to be terrorized in order to get there. The concept of the sublime allows us to reflect on our fears in order to resist their potentially debilitating, destructive effects. The antidote is in the poison, so to speak. The sublime elevates these feelings: the more sublime the terror, the freer you are, the more moral you can be. So, may you live in terrifying times.

Friedrich’s Wanderer above the Sea of Fog

What is language? What can we do with it, and what does it do to us?

Posted in 1984, 99%, anxiety, barriers to critical thinking, Big Brother, Brain Science, Consciousness, critical thinking, Dystopia, Dystopian, emotion, freedom, George Orwell, humanities, irrational, Jason Reynolds, limbic system, Moraine Valley Community College, Neurology, Newspeak, Nineteen Eighty-four, Orwell, paranoia, Philosophical and Religious Reflections, Philosophy, Philosophy of Mind, politics, Politics and Media, rational animal, Rationalization, rationalizing animal, reason, resistance to critical thinking, theory, theory of mind, thoughtcrime, Two Minutes Hate, Uncategorized, Uroboros, Zombies with tags , , , , , , , , , , , , , , , , , , , , , , , , , , on September 20, 2013 by Uroboros

In Orwell’s 1984, INGSOC’s totalitarian control of Oceania ultimately depends on Newspeak, the language the Party is working hard to develop and implement. Once in common use, Newspeak will eliminate the possibility of thoughtcrime, i.e. any idea that contradicts or questions absolute love for and devotion to Big Brother. Newspeak systematically scrubs away all those messy, gray areas of the English language, replacing them with a formal, logically rigid system. For example, instead of having to decide whether to use ‘awesome,’ ‘fabulous,’ or ‘mind-blowingly stupendous’ to describe a situation, you would algorithmically deploy the Newspeak formula, which reduces the plethora of synonyms you could use to ‘good,’ ‘plusgood,’ or ‘doubleplusgood.’ Furthermore, all antonyms are reduced to ‘ungood,’ ‘plusungood,’ or ‘doubleplusungood.’
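The Party’s ‘formula’ is mechanical enough to capture in a few lines of code. Here is a playful sketch of my own: the vocabulary is Orwell’s, but the function name and the numeric ‘judgment’ scale are purely illustrative inventions.

```python
# Toy model of the Newspeak reduction: collapse the rich spectrum of
# English evaluative words onto one rigid, algorithmic scale.
def newspeak(judgment):
    """Map a graded judgment in [-1.0, 1.0] to its Newspeak word."""
    prefix = "un" if judgment < 0 else ""
    strength = abs(judgment)
    if strength > 0.66:
        return f"doubleplus{prefix}good"
    if strength > 0.33:
        return f"plus{prefix}good"
    return f"{prefix}good"

# 'Mind-blowingly stupendous' and 'fabulous' alike flatten out:
print(newspeak(0.9))   # doubleplusgood
print(newspeak(-0.5))  # plusungood
```

The point of the sketch is what it deletes: every nuance between the thresholds simply ceases to be expressible, which is exactly the effect the Party is after.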

Syme, a Party linguist, tells Winston, the novel’s rebellious protagonist, that the ultimate goal is to eliminate conscious thought from the speaking process altogether. The Newspeak term for this is ‘duckspeak’—a more mechanical form of communication that doesn’t require higher-level cognitive functions, like having to pick the word that best expresses your feelings or creating a new one. That sense of freedom and creativity will simply cease to exist once Newspeak has finally displaced ‘Oldspeak.’ “The Revolution will be complete,” Syme tells Winston, “when the language is perfect.” The Proles and the Outer Party (95% of Oceania’s population) will become a mass of mindless duckspeakers, the linguistic equivalent of ‘philosophical zombies.’

Newspeak implies that cognition depends on language—that symbolic communication isn’t merely a neutral means for sending and receiving thoughts. Instead, the words and sentences we use actually influence the way we think about and perceive the world. While Orwell was obviously inspired by the propaganda techniques used by the dictators of his day, perhaps he was also familiar with Nietzsche’s “On Truth and Lying in a Non-Moral Sense” or the work of anthropologists like Boas and Sapir, all of whom embraced some form of what is now called linguistic relativism, a theory which argues for the reality of what Orwell proposed in fiction: we experience the world according to how our language lets us experience it.

Linguist Lera Boroditsky

Linguistic relativism is on the rise in the contemporary study of language. The work of researchers like Lera Boroditsky and Daniel Everett provides strong empirical data that supports (at least the weak version of) linguistic relativism, challenging the Chomskyan paradigm, which posits a universalist account of how language is acquired, functions, and, by extension, relates to cognition and perception.

In my previous essay on the Uroboric model of mind, I asked about the connection between neuronal processes and symbolic systems: how can an abstract representation impact or determine the outcome of tangible physical processes? How can ionic thresholds in axons and the transmission of hormones across synaptic gaps depend upon the meaning of a symbol? Furthermore, how can we account for this in a naturalistic way that neither ignores the phenomena by defining them out of existence nor distorts the situation by positing physics-defying stuff? In short, how do we give an emergent account of the process?

First, we ask: what is language? Most linguists will say it means symbolic communication: in other words, information exchanges that utilize symbols. But what is a symbol? As you may recall from your grade school days, symbols are things that stand for, refer to, or evoke other things—for example, the red octagonal sign on the street corner provokes your foot to press against the brake, while the letters s, t, o, and p each refer to particular sounds, which, when pronounced together, mean ‘put your foot on the brake.’ Simple enough, right? But the facility with which we use language, and with which we reflexively perceive that usage, belies both the complexity of the process and the powerful effects it has on our thinking.
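The ‘stands for’ relation is, at bottom, a conventional lookup: nothing about a red octagon intrinsically means ‘brake.’ A minimal sketch (the table and names here are my own toy illustration, not a claim about how brains actually store meaning):

```python
# A symbol has no intrinsic meaning; a learned convention binds it
# to its referent. Different symbols can share one referent.
symbol_table = {
    "red octagon": "press the brake",
    "stop": "press the brake",
    "green light": "press the accelerator",
}

def interpret(symbol):
    # Without the convention, a symbol is just shape or sound.
    return symbol_table.get(symbol, "uninterpretable mark")

print(interpret("red octagon"))  # press the brake
print(interpret("squiggle"))     # uninterpretable mark
```

The real process is vastly more complicated, of course; the paragraph above is precisely about how much of that complexity our fluency hides from us.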

Cognitive linguists and brain scientists have shown that much of our verbal processing happens unconsciously. Generally speaking, when we use language, words just seem to ‘come to mind’ or ‘show up’ in consciousness. We neither need to consciously think about the meaning of each and every word we use, nor do we have to analyze every variation of tone and inflection to understand things like sarcasm and irony. These complex appraisals and determinations are made subconsciously because certain sub-cortical and cortical systems have already processed the nonverbal signals, the formal symbols, and decoded their meaning. That’s what learning a language equips a brain to do, and we can even identify parts that make major contributions. Broca’s area, for example, is a region in the left frontal lobe that is integral to both language production and comprehension. If a stroke damages Broca’s area, the sufferer may lose the ability not only to produce speech, but to comprehend it as well.

Left-brain language regions

Dr. Jill Bolte Taylor

One of the most publicized cases of sudden ‘language-less-ness’ is that of Dr. Jill Bolte Taylor, the Harvard brain scientist who, in 1996, happened to have a stroke in her left hemisphere, which impacted both the Broca’s and Wernicke’s areas of her brain. She couldn’t remember who she was. She couldn’t use language. Taylor compares it to dying and being reborn, to being an infant in a grown woman’s body. Her insights into a language-less reality shed light on how words and sentences impact cognition. She says she lost her inner voice, that chatter that goes on ‘in’ the head. She no longer organized her experiences in a categorical, analytic way. Reality no longer showed up to her with the same fine-grained detail: it wasn’t divided and subdivided, classified and prejudged in terms of past associations or future expectations, in terms of self and other, us vs. them, and so on. She no longer had an ‘I’ at the center of her experience. Once the left-brain’s anxious, anal-retentive chatter went offline, right-brain processes took over, and, Taylor claims, the world showed up as waves of energy in an interconnected web of reality. She says that, for her at least, it was actually quite pleasant. The world was present in a way that language had simply dialed down and filtered out. [Any of you who are familiar with monotheistic mysticism and/or mindfulness meditation are probably seeing connections to various religious rituals and the oceanic experiences she describes.]

This has profound implications for the study of consciousness. It illustrates how brain anatomy and neural function—purely physical mechanisms—are necessary to consciousness. Necessary, but not sufficient. While we need brain scientists to continue digging deep, locating and mapping the neuronal correlates of consciousness, we also need to factor in the other necessary part of the ‘mystery of consciousness.’ What linguistic relativism and the Bolte Taylor case suggest is that languages themselves, specific symbolic systems, also determine what consciousness is and how it works. It means not only do we need to identify the neuronal correlates of consciousness but the socio-cultural correlates as well. This means embracing an emergent model that can countenance complex systems and self-referential feedback dynamics.

Orwell understood this. He understood that rhetorical manipulation is a highly effective form of mind control and, therefore, of reality construction. Orwell also knew that, if authoritarian regimes could use language to oppress people [20th-century dictators actually used these tactics], then freedom and creativity also depend on language—if, that is, we use it self-consciously and critically, if the language itself has freedom and creativity built into it, and if its users are vigilant in preserving that quality and refuse to become duckspeakers.

The Challenges of Teaching Critical Thinking

Posted in Consciousness, freedom, irrational, Neurology, Philosophical and Religious Reflections, Philosophy, Philosophy of Mind, rational animal, Rationalization, rationalizing animal, reason, Socrates with tags , , , , , , , , on September 6, 2013 by Uroboros
How much power does reason have?

The other day in my critical thinking class, I asked my students about how much control they think they have over their emotions. It’s a crucial issue in the quest to become a better critical thinker. After all, irrational reactions and unfounded feelings are often the main barriers to logical inquiry and sound reasoning.

My argument was that emotions are primal, subconscious judgments our brains make about the environment. I don’t consciously have to order myself to be afraid of a snake and flinch or run. It’s an automatic response. If we feel fear or anger or sadness or joy, it’s because our subcortex has already evaluated the variables, fired up the glands, secreted the hormones, and signaled our organs and muscles to respond in a particular way. All of this happens in the blink of an eye, in the interval of a heartbeat. We don’t really consciously choose how to feel about anything. We might be capable of controlling the actions that flow from our feelings—of stopping ourselves from reacting this way or that. But the feelings themselves persist, and you can’t wish them away any more than you can wish away the rain. In short, our feelings occur to us.

Emotions happen.

I was surprised by how many students didn’t agree. Several claimed they can consciously modulate their feelings, even talk themselves into or out of feeling angry or sad or afraid or joyful if they so desire. Part of me wanted to cry, “B.S.” If emotional management worked like that, there wouldn’t be billions spent each year on therapists and happy pills. But in the spirit of critical thinking, we put the idea on trial. In the end, I think most of the students came around to the notion that we have less conscious control over our feelings than we’d like to think, especially after I showed them a clip about marketing guru Clotaire Rapaille and his theory of the reptilian brain and how, in America, the cheese is always dead (seriously, click the link and watch the clip—it’s fascinating).

But the initial reaction still puzzles me. Was it the youthful tendency to overestimate one’s abilities? Were they just being provocative, Socratic contrarians? Or is this indicative of a change? I don’t want to make a hasty generalization, but it prompts the question: is there a new psychological self-concept developing among this generation? Do some Millennials have a different phenomenological perspective when it comes to their emotions? Are the medicalization of mental issues and the proliferation of pharmaceutical remedies leading to a new attitude toward human psychology?

As a philosophical person, I’m curious about the history of how humans perceive their own psyches. Plato compared our primal motivations and emotional intuitions to wild horses that reason, the charioteer, tames and steers. Like Nietzsche, I’ve always thought Plato distorted and overrated our rational capacities. Hume said reason is ultimately the slave of our passions. But I’ve always wondered if that isn’t too fatalistic. I guess I lean more towards Hume’s assessment, but if I didn’t still believe in at least the spirit of Plato’s metaphor, then I wouldn’t be teaching critical thinking, right? I mean, what would be the point?

What do you think?

More Human Than Human: Blade Runner and the Radical Ethics of A.I.

Posted in A.I., artificial intelligence, Blade Runner, Brain Science, Christianity, Consciousness, Descartes, Entertainment, Ethics, Film, Jesus, Morality, Neurology, Phillip K Dick, Philosophical and Religious Reflections, Philosophy of Mind, Pop Cultural Musings, Prometheus, Psychology, Religion, Ridley Scott, Science, Science fiction, Uncategorized with tags , , , , on April 27, 2012 by Uroboros

Blade Runner: What makes us human?

Self-consciousness is a secret, or at least its existence is predicated upon one. The privacy of subjective experience has mystified philosophers for centuries and dogged neuroscientists for decades. Science can, in principle, unravel every enigma in the universe, except perhaps for the one that’s happening in your head right now as you see and understand these words. Neurologists can give rich accounts of the visual processing happening in your occipital lobes and locate the cortical regions responsible for parsing the grammar and grasping the concepts. But they can’t objectively identify the ‘you’ part. There’s no neuron for ‘the self,’ no specific neural network that is essentially causing ‘you’—with all your unique memories, interpretive quirks, and behavioral habits—to read these words and have the particular experience you are having.

This problem is illustrated in debates about artificial intelligence. The goal is to create non-biological sentience with a subjective point-of-view, personal memories, and the ability to make choices. The Turing Test is a method for determining whether a machine is truly intelligent, as opposed to just blindly following a program and reacting algorithmically to stimuli. Basically, if a computer or a robot can convince enough people in a blind test that it is intelligent, then it is. That’s the test. The question is, what kind of behaviors and signs would a machine have to have in order to convince you that it’s self-aware?
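The operational logic of the test can be sketched in a few lines. This is my own toy framing of the pass/fail criterion, not Turing’s original imitation-game protocol:

```python
# The Turing test defines intelligence operationally: a machine counts
# as intelligent if enough blind judges mistake it for a human.
def passes_turing_test(judge_verdicts, threshold=0.5):
    """judge_verdicts: each judge's guess about the hidden machine,
    either "human" or "machine". Passing means fooling enough judges."""
    fooled = sum(1 for verdict in judge_verdicts if verdict == "human")
    return fooled / len(judge_verdicts) >= threshold

print(passes_turing_test(["human", "human", "machine", "human"]))  # True
print(passes_turing_test(["machine", "machine", "human"]))         # False
```

Notice what is absent: no term in the function refers to inner experience at all. The criterion is behavior and nothing but behavior, which is precisely what makes the test both practical and unsettling.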

Voight-Kampff Test

The 1982 film Blade Runner, based on Philip K. Dick’s novel Do Androids Dream of Electric Sheep?, has a version of this called the Voight-Kampff test. The androids in the story, Nexus-6 Replicants, are so close to humans in appearance and behavior that it takes an intense psychological questionnaire, coupled with a scan of retinal and other involuntary responses, to determine the difference. An anomalous emotional reaction is symptomatic of artificial, as opposed to natural, intelligence. Rachael, the Tyrell Corporation’s most state-of-the-art Replicant, can’t even tell she’s artificial. “How can it not know what it is?” asks Deckard, the bounty hunter charged with ‘retiring’ rogue Replicants. Tyrell says memory implants have given her a sense of self, a personal narrative context through which she views the world. The line between real and artificial humans, therefore, is far from clear. Rachael asks Deckard if he’s ever ‘retired’ a human by mistake. He says he hasn’t, but the fact that Rachael had to ask is telling. Would you want to take this test?

If you think about it, what makes your own inner subjectivity provable to others—and their subjectivity provable to you—are the weird kinds of quirks, the idiosyncrasies which are unique to you and would be exceedingly difficult for a program to imitate convincingly. This is what philosophers call the problem of other minds. Self-consciousness is the kind of thing which, by its very nature, cannot be turned inside out and objectively verified. This is what Descartes meant by ‘I think, therefore I am.’ Your own mental experience is the only thing in the world you can be sure of. You could, in principle, be deluded about the appearance of the outer world. You think you’re looking at this computer screen, but how do you know you’re not dreaming or hallucinating or part of a Matrix-like simulation? According to Descartes’ premise, even the consciousness of others could be faked, but you cannot doubt the fact that you are thinking right now, because to doubt this proposition is to actually prove it. All we’re left with is our sense of self. We are thinking things.

Fembot Fatale

The Turing Test, however, rips the rug away from this certainty. If the only proof of intelligence is behavior which implies a mindful agent as its source, are you sure you could prove you’re a mindful, intelligent being to others? Can you really prove it to yourself? Who’s testing who? Who’s fooling who?

The uncanny proposition hinted at in Blade Runner is that you, the protagonist of your own inner narrative, may actually be artificial, too. Like Rachael and the not-so-human-after-all Deckard, you may be an android and not know it. Your neural circuitry may have evolved by blind chance—the physical substrate supporting your ‘sense of self’ may be the random by-product of natural selection, something that just blooms from the brain, like an oak grows out of an acorn—but ‘the you part’ has to be programmed in. The circuitry is hijacked by a cultural virus called language, and the hardware is transformed in order to house a being that may be from this planet but now lives in its own world. Seen this way, the thick walls of the Cartesian self thin out and become permeable—perforated by motivations and powers not your own, but ‘Society’s.’ Seen in this light, it’s not so hard to view yourself as a kind of robot programmed to behave in particular ways in order to serve purposes which are systematically hidden from you.

This perspective has interesting moral implications. The typical question prompted by A.I. debates is: if we can make a machine that feels and thinks, does it deserve to be treated with the same dignity as flesh-and-blood human beings? Can a Replicant have rights? I ask my students this question when we read Frankenstein, the first science fiction story. Two hundred years ago, Mary Shelley was already pondering the moral dilemma posed by A.I. Victor Frankenstein’s artificially intelligent creation becomes a serial-killing monster precisely because his arrogant and myopic creator (the literary critic Harold Bloom famously called Victor a ‘moral idiot’) refuses to treat him with any dignity and respect. He sees his artificial son as a demon, a fiend, a wretch—never as a human being. That’s the tragedy of Shelley’s novel.

Robot, but doesn’t know it

In Blade Runner, the ‘real’ characters come off as cold and loveless, while the artificial ones turn out to be the most passionate and sympathetic. It’s an interesting inversion, one which suggests that what really makes us human isn’t reducible to neural wiring or genetic coding—it isn’t something that can be measured or tested through retinal scans. Maybe the secret of ‘human nature’ is that it can produce the kind of self-awareness which empowers one to make moral decisions and treat other creatures, human and non-human, with dignity and respect. The radical uncertainty which surrounds selfhood, neurologically speaking, only heightens the ethical imperative: you don’t know the degree of consciousness in others, so why not assume other creatures are as sensitive as you are, and do unto others as you would have them do unto you?

In other words, how would Jesus treat a Replicant?
