Archive for the rationalizing animal Category

Fatal Curiosity: Nietzsche, Lovecraft, and the Terror of the Known

Posted in Consciousness, Existentialism, Gothic, Horror, irrational, Literature, Lovecraft, Lovecraftian, Metaphor, Metaphysics, Myth, Nietzsche, Philosophical and Religious Reflections, Philosophy, Philosophy of Mind, Pop Cultural Musings, Pop culture, Prometheus, Psychology, rationalizing animal, Religion, religious, Repression, resistance to critical thinking, short story, Speculative fiction, terror, Uncategorized on October 30, 2013 by Uroboros

Once upon a time, in some out of the way corner of that universe which is dispersed into numberless twinkling solar systems, there was a star upon which clever beasts invented knowing. That was the most arrogant and mendacious minute of ‘world history,’ but nevertheless, it was only a minute. After nature had drawn a few breaths, the star cooled and congealed, and the clever beasts had to die.

Friedrich Nietzsche (1844-1900)

If you’re a fan, you might think this an excerpt from an H.P. Lovecraft story, one of his twisted tales about erudite, curious men who learn too much about the nature of reality and are either destroyed or deeply damaged by what they discover. But this is actually the opening to Nietzsche’s essay “On Truth and Lies in an Extra-moral Sense” (1873), a biting critique of the epistemological pretentiousness he finds running rampant through Western philosophy. Nietzsche is an iconoclastic philosopher, hammering away at venerated ideas, slashing through sacred assumptions. He gleefully turns traditional theories on their heads, challenging our beliefs, disturbing our values—an intellectual calling that has much in common with H.P. Lovecraft’s literary mission.

Lovecraft’s favorite theme is what he calls cosmic indifferentism. If Lovecraft has a philosophy, it is this: the universe was not created by a divine intelligence who infused it with an inherent purpose that is compatible with humanity’s most cherished existential desires. The cosmos is utterly indifferent to the human condition, and all of his horrific monsters are metaphors for this indifference.

Nietzsche and Lovecraft are both preoccupied with the crises this conundrum generates.

H.P. Lovecraft (1890-1937)

“What does man actually know about himself?” Nietzsche asks, “Does nature not conceal most things from him?” With an ironic tone meant to provoke his readers, he waxes prophetic: “And woe to that fatal curiosity which might one day have the power to peer out and down through a crack in the chamber of consciousness.” In Lovecraft’s “From Beyond” (1934) this ‘fatal curiosity’ is personified in the scientist Crawford Tillinghast. “What do we know of the world and the universe about us?” Tillinghast asks his friend, the story’s unnamed narrator. “Our means of receiving impressions are absurdly few, and our notions of surrounding objects infinitely narrow. We see things only as we are constructed to see them, and can gain no idea of their absolute nature.” His Promethean quest is to build a machine that lets humans transcend the inherent limitations of our innate perceptual apparatus, see beyond the veil of appearances, and experience reality in the raw. From a Nietzschean perspective, Tillinghast wants to undo the effect of a primitive but deceptively potent technology: language.

In “On Truth and Lies in an Extra-moral Sense,” Nietzsche says symbolic communication is the means by which we transform vivid, moment-to-moment impressions of reality into “less colorful, cooler concepts” that feel “solid, more universal, better known, and more human than the immediately perceived world.” We believe in universal, objective truths because, once filtered through our linguistic schema, the anomalies, exceptions, and border-cases have been marginalized, ignored, and repressed. What is left are generic conceptual properties through which we perceive and describe our experiences. “Truths are illusions,” Nietzsche argues, “which we have forgotten are illusions.” We use concepts to determine whether or not our perceptions, our beliefs, are true, but all concepts, all words, are “metaphors that have become worn out and have been drained of sensuous force, coins which have lost their embossing and are now considered as metal and no longer as coins.” [For more analysis of this theory of language, read my essay on the subject.]

Furthermore, this process happens unconsciously: the way our nervous system instinctually works guarantees that what we perceive consciously is a filtered picture, not reality in the raw. As a result, we overlook our own creative input and act as if some natural or supernatural authority ‘out there’ puts these words in our heads and compels us to believe in them. Lovecraft has a similar assessment. In “Supernatural Horror in Literature” (1927), his essay on the nature and merits of Gothic and weird storytelling, he says the kind of metaphoric thinking that leads to supernatural beliefs is “virtually permanent so far as the subconscious mind and inner instincts are concerned…there is an actual physiological fixation of the old instincts in our nervous tissue,” hence our innate propensity to perceive superhuman and supernatural causes when confronting the unknown. Nietzsche puts it like this: “All that we actually know about these laws of nature is what we ourselves bring to them…we produce these representations in and from ourselves with the same necessity with which the spider spins.” This, of course, applies to religious dogmas and theological speculations, too.

From Beyond (1986 film adaptation)

In “From Beyond,” Crawford Tillinghast wants to see “things which no breathing creature has yet seen…overleap time, space, and dimensions, and…peer to the bottom of creation.” The terror is in what slips through the rift and runs amok in this dimension. His scientific triumph quickly becomes a horrific nightmare, one that echoes Nietzsche’s caveat about attaining transgressive knowledge: “If but for an instant [humans] could escape from the prison walls” of belief, our “‘self consciousness’ would be immediately destroyed.”

Herein lies the source of our conundrum, the existential absurdity, the Scylla and Charybdis created by our inherent curiosity: we need to attain knowledge to better ensure our chances of fitting into our ecological conditions and passing our genes along to the next generation, and yet this very drive can bring about our own destruction. It’s not simply that we can unwittingly discover fatal forces. The real trouble comes when the pursuit of knowledge moves beyond seeking the information needed to survive and gets recast in terms of discovering values and laws that supposedly pertain to the nature of the cosmos itself. Nietzsche and Lovecraft agree this inevitably leads to existential despair: either we continue to confuse our anthropomorphic projections with the structure of reality itself, and keep wallowing in delusion and ignorance as a result, or we swallow the nihilistic pill and accept that we live in an indifferent cosmos that always manages to wriggle out of even our most clear-headed attempts to grasp and control it. So it’s a question of what’s worse: the terror of the unknown or the terror of the known?

Nietzsche is optimistic about the existential implications of this dilemma. There is a third option worth pursuing: in a godless, meaningless universe, we have poetic license to become superhuman creatures capable of creating the values and meanings we need and want. I don’t know if Lovecraft is confident enough in human potential to endorse Nietzsche’s remedy, though. If the words of Francis Thurston, the protagonist from his most influential story, “The Call of Cthulhu” (1928), are any indication of his beliefs, then Lovecraft doesn’t think our epistemological quest will turn out well:

“[S]ome day the piecing together of dissociated knowledge will open up such terrifying vistas of reality…we shall either go mad from the revelation or flee from the light into the peace and safety of a new dark age.”

“Cthulhu Rising” by Somniturne

What is language? What can we do with it, and what does it do to us?

Posted in 1984, 99%, anxiety, barriers to critical thinking, Big Brother, Brain Science, Consciousness, critical thinking, Dystopia, Dystopian, emotion, freedom, George Orwell, humanities, irrational, Jason Reynolds, limbic system, Moraine Valley Community College, Neurology, Newspeak, Nineteen Eighty-four, Orwell, paranoia, Philosophical and Religious Reflections, Philosophy, Philosophy of Mind, politics, Politics and Media, rational animal, Rationalization, rationalizing animal, reason, resistance to critical thinking, theory, theory of mind, thoughtcrime, Two Minutes Hate, Uncategorized, Uroboros, Zombies on September 20, 2013 by Uroboros

In Orwell’s 1984, INGSOC’s totalitarian control of Oceania ultimately depends on Newspeak, the language the Party is working hard to develop and implement. Once in common use, Newspeak will eliminate the possibility of thoughtcrime, i.e. any idea that contradicts or questions absolute love for and devotion to Big Brother. Newspeak systematically scrubs away all those messy, gray areas from the English language, replacing them with a formal, logically-rigid system. For example, instead of having to decide whether to use ‘awesome,’ ‘fabulous,’ or ‘mind-blowingly stupendous’ to describe a situation, you would algorithmically deploy the Newspeak formula, which reduces the plethora of synonyms you could use to ‘good,’ ‘plusgood,’ or ‘doubleplusgood.’ Furthermore, all antonyms are reduced to ‘ungood,’ ‘plusungood,’ or ‘doubleplusungood.’
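To make that ‘formula’ concrete, here is a minimal sketch in Python of the kind of reduction described above; the function and its interface are my own illustration, not anything specified in Orwell’s novel.

# A hedged sketch of the Newspeak reduction described above; the function and
# its interface are hypothetical, not drawn from Nineteen Eighty-Four itself.
def newspeak_word(approves: bool, intensity: int) -> str:
    """Collapse any evaluative adjective into its Party-approved form.

    approves: True for praise, False for condemnation.
    intensity: 0 (plain), 1 ('plus-'), or 2 ('doubleplus-').
    """
    prefixes = ("", "plus", "doubleplus")  # the only three degrees the formula allows
    root = "good" if approves else "ungood"
    return prefixes[intensity] + root

# 'fabulous' and 'mind-blowingly stupendous' both flatten to the same token:
print(newspeak_word(True, 2))   # doubleplusgood
print(newspeak_word(False, 1))  # plusungood

The point of the sketch is only to show how little expressive range survives the reduction: every shade of evaluation collapses into one of six possible tokens.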

Syme, a Party linguist, tells Winston, the novel’s rebellious protagonist, that the ultimate goal is to eliminate conscious thought from the speaking process altogether. The Newspeak term for it is ‘duckspeak’—a more mechanical form of communication that doesn’t require higher-level cognitive functions, like having to pick the word that best expresses your feelings or creating a new one. That sense of freedom and creativity will simply cease to exist once Newspeak has finally displaced ‘Oldspeak.’ “The Revolution will be complete,” Syme tells Winston, “when the language is perfect.” The Proles and the Outer Party (95% of Oceania’s population) will become a mass of mindless duckspeakers, the linguistic equivalent of ‘philosophical zombies’.

Newspeak implies that cognition depends on language—that symbolic communication isn’t merely a neutral means for sending and receiving thoughts. Instead, the words and sentences we use actually influence the way we think about and perceive the world. While Orwell was obviously inspired by the propaganda techniques used by the dictators of his day, perhaps he was also familiar with Nietzsche’s “On Truth and Lies in an Extra-moral Sense” or the work of anthropologists like Boas and Sapir, all of whom embraced some form of what is now called linguistic relativism, a theory which argues for the reality of what Orwell proposed in fiction: we experience the world according to how our language lets us experience it.

Linguist Lera Boroditsky

Linguistic relativism is on the rise in the contemporary study of language. The work of, for example, Lera Boroditsky and Daniel Everett provides strong empirical data that supports (at least the weak version of) linguistic relativism, challenging the Chomskian paradigm, which posits a universalist account of how language is acquired, functions, and, by extension, relates to cognition and perception.

In my previous essay on the Uroboric model of mind, I asked about the connection between neuronal processes and symbolic systems: how can an abstract representation impact or determine the outcome of tangible physical processes? How can ionic thresholds in axons and the transmission of hormones across synaptic gaps depend upon the meaning of a symbol? Furthermore, how can we account for this in a naturalistic way that neither ignores the phenomena by defining them out of existence nor distorts the situation by positing physics-defying stuff? In short, how do we give an emergent account of the process?

First, we ask: what is language? Most linguists will say it means symbolic communication: in other words, information exchanges that utilize symbols. But what is a symbol? As you may recall from your grade school days, symbols are things that stand for, refer to, or evoke other things—for example, the red octagonal sign on a street corner prompts your foot to press against the brake, or the letters s, t, o, and p each refer to particular sounds, which, when pronounced together, mean ‘put your foot on the brake.’ Simple enough, right? But the facility with which we use language, and with which we reflexively perceive that usage, belies both the complexity of the process and the powerful effects it has on our thinking.

Cognitive linguists and brain scientists have shown that much of our verbal processing happens unconsciously. Generally speaking, when we use language, words just seem to ‘come to mind’ or ‘show up’ in consciousness. We neither need to consciously think about the meaning of each and every word we use, nor do we have to analyze every variation of tone and inflection to understand things like sarcasm and irony. These complex appraisals and determinations are made subconsciously because certain sub-cortical and cortical systems have already processed the nonverbal signals, the formal symbols, and decoded their meaning. That’s what learning a language equips a brain to do, and we can even identify parts that make major contributions. Broca’s area, for example, is a region in the left frontal lobe that is integral to both language production and comprehension. If a stroke damages Broca’s area, the sufferer may lose the ability not only to produce speech, but to comprehend it as well.

Left-brain language regions

Dr. Jill Bolte Taylor

One of the most publicized cases of sudden ‘language-less-ness’ is that of Dr. Jill Bolte Taylor, the Harvard brain scientist who, in 1996, happened to have a stroke in her left hemisphere, which impacted both the Broca’s and Wernicke’s areas of her brain. She couldn’t remember who she was. She couldn’t use language. Taylor compares it to dying and being reborn, to being an infant in a grown woman’s body. Her insights into a language-less reality shed light on how words and sentences impact cognition. She says she lost her inner voice, that chatter that goes on ‘in’ the head. She no longer organized her experiences in a categorical, analytic way. Reality no longer showed up to her with the same fine-grained detail: it wasn’t divided and subdivided, classified and prejudged in terms of past associations or future expectations, in terms of self and other, us vs. them, and so on. She no longer had an ‘I’ at the center of her experience. Once the left-brain’s anxious, anal-retentive chatter went offline, right-brain processes took over, and, Taylor claims, the world showed up as waves of energy in an interconnected web of reality. She says that, for her at least, it was actually quite pleasant. The world was present in a way that language had simply dialed down and filtered out. [Any of you who are familiar with monotheistic mysticism and/or mindfulness meditation are probably seeing connections to various religious rituals and the oceanic experiences she describes.]

This has profound implications for the study of consciousness. It illustrates how brain anatomy and neural function—purely physical mechanisms—are necessary to consciousness. Necessary, but not sufficient. While we need brain scientists to continue digging deep, locating and mapping the neuronal correlates of consciousness, we also need to factor in the other necessary part of the ‘mystery of consciousness.’ What linguistic relativism and the Bolte Taylor case suggest is that languages themselves, specific symbolic systems, also determine what consciousness is and how it works. It means we need to identify not only the neuronal correlates of consciousness but the socio-cultural correlates as well. This means embracing an emergent model that can countenance complex systems and self-referential feedback dynamics.

Orwell understood this. He understood that rhetorical manipulation is a highly effective form of mind control and, therefore, of reality construction. Orwell also knew that, if authoritarian regimes could use language to oppress people [20th century dictators actually used these tactics], then freedom and creativity also depend on language. That holds, though, only if we use it self-consciously and critically, if the language itself has freedom and creativity built into it, and if its users remain vigilant in preserving that quality and refuse to become duckspeakers.

The Challenges of Teaching Critical Thinking

Posted in Consciousness, freedom, irrational, Neurology, Philosophical and Religious Reflections, Philosophy, Philosophy of Mind, rational animal, Rationalization, rationalizing animal, reason, Socrates on September 6, 2013 by Uroboros
How much power does reason have?

The other day in my critical thinking class, I asked my students about how much control they think they have over their emotions. It’s a crucial issue in the quest to become a better critical thinker. After all, irrational reactions and unfounded feelings are often the main barriers to logical inquiry and sound reasoning.

My argument was that emotions are primal, subconscious judgments our brains make of the environment. I don’t consciously have to order myself to be afraid of a snake and flinch or run. It’s an automatic response. If we feel fear or anger or sadness or joy, it’s because our subcortex has already evaluated the variables, fired up the glands, secreted the hormones, and signaled our organs and muscles to respond in a particular way. All of this happens in the blink of an eye, in the interval of a heartbeat. We don’t really consciously choose how to feel about anything. We might be capable of controlling the actions that flow from our feelings—of stopping ourselves from reacting this way or that. But the feelings themselves persist, and you can’t wish them away any more than you can wish away the rain. In short, our feelings occur to us.

Emotions happen.

I was surprised by how many students didn’t agree. Several claimed they can consciously modulate their feelings, even talk themselves into or out of feeling angry or sad or afraid or joyful if they desire. Part of me wanted to cry, “B.S.” If emotional management worked like that, there wouldn’t be billions spent each year on therapists and happy pills. But in the spirit of critical thinking, we put the idea on trial. In the end, I think most of the students came around to the notion that we have less conscious control over our feelings than we’d like to think, especially after I showed them a clip about marketing guru Clotaire Rapaille and his theory of the reptilian brain and how, in America, the cheese is always dead (seriously click the link and watch the clip—it’s fascinating).

But the initial reaction still puzzles me. Was it the youthful tendency to overestimate one’s abilities? Were they just being provocative, Socratic contrarians? Or is this indicative of a change? I don’t want to make a hasty generalization, but it prompts the question: is there a new psychological self-concept developing among this generation? Do some Millennials have a different phenomenological perspective when it comes to their emotions? Are the medicalization of mental issues and the proliferation of pharmaceutical remedies leading to a new attitude toward human psychology?

As a philosophical person, I’m curious about the history of how humans perceive their own psyches. Plato compared our primal motivations and emotional intuitions to wild horses that reason, the charioteer, tames and steers. Like Nietzsche, I’ve always thought Plato distorted and overrated our rational capacities. Hume said reason is ultimately the slave of our passions. But I’ve always wondered if that isn’t too fatalistic. I guess I lean more towards Hume’s assessment, but if I didn’t still believe in at least the spirit of Plato’s metaphor, then I wouldn’t be teaching critical thinking, right? I mean, what would be the point?

What do you think?

The Rational Animal? Really?

Posted in Aristotle, barriers to critical thinking, Carol Tavris, cognitive dissonance, critical thinking, Elliot Aronson, emotion, gadfly, irrational, Leon Festinger, Philosophical and Religious Reflections, Psychology, rational animal, rationalizing animal, reason, resistance to critical thinking, social psychology, Socrates, UFO Cults, Uncategorized, When prophecy fails on August 29, 2013 by Uroboros
Socrates: the Gadfly, Godfather of Critical Thinking

Last week I began teaching a philosophy course at Moraine Valley Community College on the Southside of Chicago. The course is PHIL 111: Critical Thinking, a topic that never ceases to amaze and, at times, perplex and challenge my assumptions about what it means to be human.

As a philosophy student, I was always struck by Aristotle’s description of human nature: we are the rational animal. But the more we analyze ourselves, the more we explore our capacities and limitations as a species, the more we discover that rational thinking is the exception, not the norm. Far from it, apparently. Neurologists, anthropologists, and evolutionary psychologists have all made convincing arguments that the human brain, and the thought patterns it tends to produce, evolved not to pursue logical explanations and objective evidence, but to concoct self-serving, self-justifying theories about how the world works. Why let objective facts and explanations ruin a good, wish-fulfilling story, right?

This is because the deluded, biased explanations serve a more fundamental purpose: survival. There’s an adaptive advantage to spinning yarns and fabricating facts that, though divorced from the realm of reason and objectivity, nevertheless reduce stress and anxiety, making the world a seemingly more sensible, human-friendly place and building our confidence.

When Prophecy Fails: classic study of cognitive dissonance

For over fifty years, social psychologists have been exploring this capacity for mendacity. The theory of Cognitive Dissonance accounts for our need to justify core beliefs and behaviors, to justify them at almost any cost. Cognitive Dissonance is the uneasy, sometimes terrified, sometimes enraged feeling you get when an event or a person challenges or threatens your beliefs. It was first proposed by Leon Festinger in the 1950s, the product of research into a doomsday UFO cult chronicled in the book When Prophecy Fails. Festinger and his colleagues wondered what cult members would do when the mothership didn’t come and whisk them away to another planet. How would these so-called rational animals behave? Did the cult members adjust their beliefs? Did they discard their delusions and wake up to reality? What Festinger found was that cult members actually doubled down on their beliefs and tended to become more invested in the cult’s bizarre mythology. Instead of critically analyzing why they were so deluded, they used their powers of reason to recalculate the arrival of the mothership. They kept on believing. They were just too committed to back out and admit the truth.

Now you might say: that sounds about right. After all, they were UFO cult members. Shouldn’t we expect them to act that way? But Festinger and his student Elliot Aronson found cognitive dissonance to be a universal human trait. When confronted with contradictory and disturbing information, ideas and facts that threaten our core beliefs, we all put our creative mental powers to work defending those beliefs. And what is the most essential idea we are dying to protect? The most fragile one of all: the idea we have of ourselves as intelligent, well-meaning, competent people. Whether the vice is smoking, illegally downloading music, or cheating, it’s easy to come up with a story that makes your lapse, or your moment of incompetence, sound like the most reasonable, ethical thing in the world. In other words, self-justification and esteem needs trump the desire for transcendent truth.

That’s why I love teaching critical thinking. That’s why we have to teach it. It doesn’t come naturally. The first step is to overcome cognitive dissonance, put your own ego in check, and really engage in that Socratic call to examine life. Critical thinkers listen to the buzzing inner-gadfly of skepticism and curiosity. Only then can a human being emerge from the muck of deluded, self-serving thoughts, shake off the slime, and become the clear-headed, rational animal Aristotle challenged us to be. Otherwise, we are, as Aronson pointed out, merely rationalizing animals. That’s what comes naturally. Thanks to evolution, I get to make a living teaching others, and myself, how to immunize the mind from the virus of irrationality.

If you’re interested in the topic, I suggest checking out the work of Aronson and his colleague Carol Tavris on cognitive dissonance. What are your thoughts on rationality? Do you agree or disagree with cognitive dissonance theory? Why? What would Socrates ask?

Mistakes Were Made (But Not By Me) by Aronson and Tavris

The Science of Myth and the Myth of Science

Posted in anxiety, archetypes, barriers to critical thinking, Brain Science, collective unconscious, Consciousness, Creationism, critical thinking, emotion, God, History, humanities, irrational, Jung, Knowledge, limbic system, Maori, Myth, Mythology, Neurology, paranoia, Philosophical and Religious Reflections, psychoanalysis, Psychology, rational animal, Rationalization, rationalizing animal, reason, Religion, religious, Repression, resistance to critical thinking, Science, social psychology, terror, Terror Management Theory, theory, theory of mind, Uroboros, V.S. Ramachandran, William James on February 3, 2012 by Uroboros

Years ago in a mythology course I taught, a student once came up to me after class with an annoyed look. We’d just covered the Maori creation myth, and something about it had gotten under his skin. According to the myth, father sky, Rangi, and mother earth, Papa, formed out of primordial chaos and tangled in a tight, erotic embrace. Their offspring decided to pry Rangi and Papa apart in order to escape and live on their own. With his ax, Tane, the forest god, finally separated Father Sky and Mother Earth, and in that space, life grew and flourished.

The broad strokes of this creation myth aren’t unique. Ancient Egyptian, Chinese, Greek, and Norse stories (just to name a few) relate life’s origins to the separation of giant primordial parents.

“How could people believe that?” the student asked, shaking his head. It wasn’t his perturbed incredulity that struck me. Often, students initially find stories from ancient cultures to be, well, weird. It was his condescension. For him, ‘myth’ meant not just ‘false,’ but ‘silly.’ In his defense, it’s what it means for most of us. When we want to sneer at strange, fantastical beliefs, we call them ‘myths.’

The term is synonymous with ‘false.’

‘Myth’ originally meant the exact opposite, though. The Ancient Greek root of mythos referred to life’s deepest truths, something discussed and contemplated with a sense of awe and reverence, not incredulity and disdain. Seen in this light, myths are the stories humans tell in order to explain the unknown and make sense of the world. My thesis is that humans are essentially myth-making creatures and will continue to be so—no matter how scientific our stories get.

Scowls form on some students’ faces when they hear a professor say that science is, on a certain level, still mythological. Scientists are still storytellers, though, trying to turn the unknown into the known. Ancient and modern storytellers have different ways of approaching the unknown—different notions about what counts as a valid explanation.

Today, people (tend to) prefer creation stories that fit the scientific paradigm that’s proved so successful in explaining and predicting natural phenomena. But in dismissing past explanations, we overlook some interesting similarities. Ancient and modern stories share what psychologist Carl Jung called archetypal patterns. Jung theorized that humans share underlying patterns of thought because we all inherit the same neurological equipment. The anatomical differences between an ancient human brain and, say, Darwin’s brain are negligible. Setting the obvious differences between the Maori story and Darwin’s theory aside for just a moment, there are archetypal similarities between these accounts.

Darwinism says life began in a kind of primordial soup where, over time, inorganic molecules organized into the first living cell, and then single-celled organisms eventually separated into multicellular organisms, and from that, thanks to genetic mutation and the pressure of natural selection, lifeforms diversified and flourished. The Big Bang has this underlying pattern too: a ‘primordial atom,’ containing all matter, exploded and separated into the cosmic forms we see today.

I think the key difference between ancient and modern creation stories is in the tendency to personify nature, or the lack thereof. The modern scientific method tries to remove the subjective factor from the equation. Once we stopped projecting our emotions upon ‘Mother Nature,’ we started telling different stories about how ‘she’ works.

Now scientists are investigating how myth-making itself works. Neurologists and evolutionary psychologists are exploring the biological basis of our ability to mythologize and the possible adaptive purposes informing our storytelling instinct. Let’s start by getting hypothetical and doing a little ‘state of nature’ thought experiment. Imagine a prehistoric hunter startled by booming thunder. Now we know the meteorological explanation, but he doesn’t. He experiences what thunder feels like to him: anger. But who is angry?

The problem is first addressed by the limbic system, the set of subcortical structures that initially processes emotion and memory. Potential dangers must be understood or anxiety will overwhelm the mind, rendering the hunter less able to cope and survive. The amygdala, the brain’s watchdog, primes the body for action—for fight or flight—while the hippocampus tries to associate feelings with memories in order to focus and better define both the stimuli and the appropriate response. This process is entirely unconscious—faster than the speed of consciousness.

The hippocampus recalls an experience of anger, perhaps one involving the hunter’s own father, and then the cerebral cortex, home of our higher cognitive capacities, gets involved. Somewhere in our cortical circuitry, probably in the angular gyrus, where neuroscientist V.S. Ramachandran says our metaphoric functions reside, storm images are cross-wired with paternal images. A myth is born: sky is father, earth is mother, and the cause-effect logic of storytelling in the brain’s left hemisphere embellishes until the amygdala eases up and the anxiety is relatively alleviated. At least the dread becomes more manageable. In neurochemical terms, the adrenaline and cortisol rush is offset and contained by dopamine, the calming effect of apparent knowledge, the pleasure of grasping what was once unknown.

From then on, thunder and lightning will be a little less terrifying. Now there is a story to help make sense of it. Storms are a sign of Father Sky’s anger. What do we do? We try to appease this force, to make amends. We honor the deity by singing and dancing. We sacrifice. Now we have myths and rituals. In short, we have a religion.

That’s why so many prehistoric people, who had no contact with one another, came to believe in primordial giants, and we are still not that far removed from this impulse. For example, why do we still name hurricanes? Sometimes, it’s just easier for us to handle nature if we make it a little more human. As neurologists point out, we are hardwired to pick up on patterns in the environment and attribute human-like qualities and intentions to them. Philosophers and psychologists call this penchant for projecting anthropomorphic agency a theory of mind. English teachers call it personification, an imaginative, poetic skill.

This is why dismissive, condescending attitudes toward myth-making frustrate me. The metaphoric-mythic instinct has been, and still is, a tremendous boon to our own self-understanding, without which science, as we know it, probably wouldn’t have evolved. I came to this conclusion while pondering a profound historical fact: no culture in human history ever made the intellectual leap to objective theories first. Human beings start to know the unknown by projecting what they’re already familiar with onto it.

It’s an a priori instinct. We can’t help it.

Modern science helps make us more conscious of this tendency. The scientific method gives us a way of testing our imaginative leaps—our deeply held intuitions about how the world works—so we can come up with more reliable and practical explanations. The mythological method, in turn, reminds us to be critical of any theory which claims to have achieved pure, unassailable objectivity—to have removed, once and for all, the tendency to unconsciously impose our own assumptions and biases on the interpretation of facts. The ability to do that is just as much a myth as the ‘myths’ such claims supposedly debunk. I’ll paraphrase William James here: The truth is always more complex and complicated than the theories which aim to capture it. Just study the history of modern science—the evolution of theories and paradigms over the last 350 years especially—to see evidence for the asymmetrical relationship between beliefs, justifications, and the ever-elusive Truth.

Laid-back, self-aware scientists have no problem admitting the limitations built into the empirical method itself: scientific conclusions are implicitly provisional. A theory is true for now. The beauty and power of science hinge upon this point—the self-correcting mechanism, the openness to other possibilities. Otherwise, it’s no longer the scientific method at work. It’s politicized dogma peddling. It’s blind mythologizing.

The recent research into the neurology and psychology of myth-making is fascinating. It enhances our understanding of what a myth is: a story imbued with such emotional power and resonance that how it actually lines up with reality is often an afterthought. But what’s equally fascinating to me is the mythologizing which still informs our science-making.

I think it’s dangerous, of course, to believe blindly in myths, to accept stories without testing them against experience and empirical evidence. But I also believe it’s dangerous to regard scientific theories as somehow above and beyond the mythological instinct. Like the interconnected swirl of the yin-yang, science and myth need each other, and that relationship should be as balanced and transparent as possible.

Uroboros. A universal symbol of balance and immortality.