Reason and Intuition: Is There Really Any Difference?

My sister just sent me the link to this discussion by Razib Khan on reason and intuition–timely, because it refers to Denialism: How Irrational Thinking Hinders Scientific Progress, Harms the Planet, and Threatens Our Lives by Michael Specter, whom I just saw (and spoke to briefly) when he spoke at University of Washington last week on his book tour. (I found him engaging, interesting, and well-armed with fascinating facts and anecdotes, but he didn’t deliver any aha! moments for me–I’m definitely the choir, and he’s definitely preaching.)

My sister thinks Khan “gets it just about right,” and I certainly hesitate to question her sagacity. (She is older hence certainly wiser, after all.) But I came away from the post distinctly less satisfied. My basic problem: Khan–like almost everybody else that I hear discussing this subject–doesn’t seem to have any idea what he actually means by “intuition.” It’s tossed around generally in contradistinction to logical, syllogistic thinking (which is often referred to–in another example of poorly or un-considered thinking–as “linear” thinking).

To put the parenthetical aside first: logical thinking about any reasonably complex subject or problem is rarely linear. While portions of the thinking will certainly include “A leads to B which leads to C,” it also almost always includes some type of recursion and (I haven’t studied this so can’t give a good list) a variety of other decidedly nonlinear cognitive tools.

Returning to intuition, though: a definition by negation (“it’s non-logical, non-linear thought”) is useless even if “logical” and “linear” were accurate and clearly understood, because intuition can then encompass anything that isn’t that one thing. Khan exhibits this in spades. He suggests that intuition might be:

  • Innate knowledge: “These stances encapsulate the wisdom of evolution (e.g., aversion to sibling-sibling incest) and/or society (again, aversion to sibling-sibling incest).”
  • Surmise based on casual observation: “folk physics”.
  • Easily understandable explanations that are in accord with everyday experience: “Science allows us to stand on the shoulders of giants, no matter how bizarre or counterintuitive their theories are.”
  • Knowledge so ingrained through practice as to feel automatic: “Specialists in technical fields often develop domain-specific intuitions through long experience.”

There are almost endless other possibilities. (Ones you hear often involve “humanness,” “empathy,” “holism,” etc., all of which seem to grope toward something that does seem right for the definition.) Some may stand up as good definitions of intuition. But if the writer/thinker hasn’t done the thinking to even know what he’s talking about when he uses the word–or has only employed wooly, half-hearted noodling without nailing down what he means–the writing doesn’t yield much light.

I’ve always thought of intuition–in an almost equally useless definition–as “back-brain processing.” It’s the stuff we’re all familiar with, embodied in the phrase “lemme sleep on it.” We know that the mind (and the brain, though in a different way) is made up of a whole lot of semi-autonomous modules that are good at particular things. Consciousness–which is sometimes (inaptly) called “the executive function”–is only one of those modules. The operations of those modules are most often completely opaque or downright invisible to other modules–including the “conscious thought” module. For instance, we can’t perceive or much control the modules that handle various types of visual or spatial processing, even though (see Pinker’s How The Mind Works and The Stuff of Thought) those very modules and a whole lot of others are being put to work for us even when we think we’re thinking consciously and “logically.” A lot of thinking, of any type (including logical thought), is farmed out to the back brain.

And those back-brain modules are no slouches. They use the same kind of complex, recursive, and yes, linear cognitive techniques that the “logical” mind does. (They may only use one or two techniques each–some use this a lot, some use that…there’s a big toolbox to draw from.) All that processing may emerge in the consciousness module with the appearance (to us) of logic and linearity, or it may appear as a flash of “intuition.” But it’s the same mass of modules that’s doing the grunt work down deep.

Now maybe “intuitive thinking” refers to a method that employs particular mind modules more than, or in different ways from, those that are used for “logical thinking.” Fair enough. But I don’t know what those methods and modules might be, and I’m quite certain that 99.999% of people who talk about “intuition” don’t know either. Also fair enough–you shouldn’t be required to know intimately how the mind works to think about the issue–especially as we (even cognitive scientists) still have very little understanding of how the mind works.

But scientists are, finally, learning a hell of a lot these days–actual knowledge, tested by controlled experiments that can overcome our innate predilection for self-delusion, as opposed to the armchair surmise that has constituted the stuff of psychological “knowledge” since Freud started the courageous effort to plumb our deepest mystery. (As Steven Pinker has pointed out, the brain/mind is actually going from the status of “mystery” to the status of “problem that can potentially be solved, at least in parts.”)

It was some of that experimental knowledge–delivered in a talk by Jonah Lehrer that I attended, and in his book How We Decide–that started to crystallize the fuzzy notion I’ve always had of intuition as back-brain processing. He explains that we learn through a dopamine-feedback system. We make predictions, and when they turn out right, we get a little dopamine hit. Feels good. When we get them wrong, though, the pusher holds our fix, and it feels really bad. (This goes a long way toward explaining why humans will go to such extraordinary lengths–destroying relationships with loved ones, etc.–to avoid [the self-perception of?] being wrong.) Our decisions are always based–at some fundamental level–on how something feels. (This is not an endorsement of “just trust your feelings”; see “fundamental level” in the last sentence.)
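For the computationally inclined: the learning loop Lehrer describes is, at least in spirit, close to what modelers call reward-prediction-error learning. Here’s a minimal sketch (my own toy illustration, not anything from Lehrer’s book–the numbers and names are made up): a prediction gets nudged toward reality by the gap between what was expected and what actually happened, and that gap plays the role of the dopamine signal.

```python
# Toy sketch of reward-prediction-error learning.
# The "dopamine signal" is the error between predicted and actual reward;
# the prediction is nudged toward reality by a fraction of that error.

def update_prediction(prediction, reward, learning_rate=0.1):
    """Return (new_prediction, prediction_error)."""
    error = reward - prediction  # positive ~ pleasant surprise, negative ~ the withheld fix
    return prediction + learning_rate * error, error

prediction = 0.0
for _ in range(50):  # repeated trials with a constant reward of 1.0
    prediction, error = update_prediction(prediction, 1.0)

# After many confirmed trials the prediction converges on the reward and the
# error shrinks toward zero: fully expected outcomes stop producing the "hit."
```

Note the punchline in the comments: once an outcome is fully predicted, the error signal vanishes–which is one computational gloss on why being right about the expected feels like nothing much, while being surprised feels like something.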

Lehrer also points to a set of experiments that seem to show that when we think we’re making a decision in our frontal cortex, in fact the decision has already been made, milliseconds before, elsewhere in the brain. (Though Daniel Dennett does a fairly convincing job of discrediting these experiments in Freedom Evolves.) The message that goes to our so-called “executive function” is not in the form of a thought, it’s a feeling: feels good, or feels bad. That’s how the back brain communicates the already-made decisions to the front brain–through pleasure and pain. (How else could we possibly know what we “want”? How could a nematode know that it wants food, or learn how to get it, aside from a pain/pleasure feedback system?)

In short, what feels like conscious decision-making really isn’t. We feed the problem back into our back brain, and ask it, essentially, “how do ‘I’ feel about this?” It responds with a “Tarzan feel bad” or “Jane feel good” signal strong enough to emerge into, be felt by, our consciousness module.

This does not negate the value of logical thought, but it casts our control over that process in a very different framework. We don’t (just?) control our “impulses” on their way out of the back-brain/id/reptilian brain/whatever you want to call it, as in the Freudian and other psychological models. Quite the opposite really: Our real power is in what we feed back into that massively multimodular brain. We can deliver it the information, experience, and experimental results (including feelings that emerge from that back brain itself, further processed consciously) that it needs to make smart decisions, or we can feed it pablum and pap, well-processed or not. It’s a true garbage-in-garbage-out situation. We can, to some extent, manage the process that sometimes looks (to us) like logic and sometimes looks like intuition, but that is mostly made up of the very same back-brain stuff.

In other words: intuition is reasoning. And logical reasoning–except of the simplest sort–inevitably relies on intuition.

At this point I think I should throw the baton to a writer who has done his thinking damned carefully (though without the benefits of experimental knowledge that we have). In a long-favorite passage of mine, Milton in Paradise Lost has the Angel Raphael (speaking to the as-yet-unfallen couple of their blissful state) comment on two types of reason:

Discursive, or Intuitive; discourse
Is oftest yours, the latter most is ours

Raphael knows: humans aren’t very good at intuitive reasoning. It’s the stuff of angels. (Partly because we’re forever polluting it with various combinations of the stuff that Khan vaguely thinks of as “intuition.”) We should stick to the lowly, earthly discursive stuff that our measly frames are fit for.

But when we do achieve it, that divine intuitive reasoning, the results are downright angelic. Even heavenly.

This is all still feeling like pretty loosey-goosey thinking to me. But I hope it at least shows an effort towards actually knowing what I mean when I talk about “intuition.” I think I’m gonna have to sleep on it.






