Archive for the ‘Evolutionary Psychology’ Category

More On Being Wrong

April 24th, 2011 No comments

Barry Ritholtz links to Kathryn Schulz’s TED talk on Being Wrong (I wrote about her book here), and comments,

I don’t know about anyone else, but I am wrong all the time.

expect to be wrong.

Which led me to clear up some of the thinking I’ve been doing since my last post on the subject. Here’s the comment I left on Barry’s blog, with some editing:

I of course expect to be wrong about particular things. I think we all do. But that’s future tense. “Some of my current and future predictions will turn out to be incorrect.” Well, yeah. Not really so interesting.

What is interesting is the human propensity for present-tense denial of even obvious reality, and the extraordinary lengths and contortions to which we’ll go to avoid admitting that we’re wrong.

The big aha insight for me in Schulz’s book was this, which she skips past and doesn’t really put across in her talk:

There is no such thing as the real-time experience of being wrong. Present tense. As soon as you realize you’re wrong, you’re not anymore.

She just hints at this glancingly in the talk, when she says that being wrong feels like … being right. Nice line, that.

The rest of her book left me dissatisfied, though (and even more so her talk), because it didn’t answer the fundamental question: why does it feel so bad to discover that we’re wrong? Why did we evolve to be like that? Wouldn’t it be more evolutionarily fit to embrace and enjoy the discovery of wrongness, for purposes of self-correction and accurate perception of reality? Wouldn’t people with that propensity have more grandchildren?

I’m kind of astounded that after five years of thinking about these questions, she never seems to have asked, much less answered, that one. She says we don’t like discovering that we’re wrong because it feels bad. But she never discusses why it feels bad.

The best (possible) answer I’ve come across is via Jonah Lehrer’s How We Decide.

Short story, it’s how the learning mechanism works. We’ve evolved so that if we are right in a prediction, we get a dopamine hit of pleasure. If we’re wrong, we don’t get our fix, and that feels really bad. (This helps explain why humans’ loss-aversion exceeds our gain-seeking.) It’s pretty straightforward behaviorism, embedded in a fascinatingly complex set of constructs.
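Lehrer’s account maps onto a very simple learning rule. Here’s a minimal sketch (my gloss, not Lehrer’s or anyone’s actual code) of prediction-error learning in the Rescorla–Wagner mold: the “dopamine hit” is the prediction error, positive when reality beats expectation and negative (it feels bad) when a learned prediction fails.

```python
# A minimal sketch of dopamine-style prediction-error learning
# (illustrative only; the learning rate is an arbitrary choice).
def update(expectation, reward, learning_rate=0.3):
    """Return the prediction error and the revised expectation."""
    error = reward - expectation          # > 0 feels good, < 0 feels bad
    return error, expectation + learning_rate * error

expectation = 0.0
for actual_reward in [1.0, 1.0, 1.0, 0.0]:   # three wins, then a miss
    error, expectation = update(expectation, actual_reward)
    print(f"prediction error: {error:+.2f}, new expectation: {expectation:.2f}")
```

After a few rewarded predictions the expectation climbs, so the final miss lands as a large negative error: the “no fix” moment that feels so bad.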

So the learning mechanism, ironically, makes us not want to discover that we’re wrong, because it feels bad.

I can only figure that the fitness benefits of the learning mechanism outweigh the unfitness of reality denial, and that evolution couldn’t “figure out” any other, less “expensive” way to do learning.

Just Cause I Thought This Was Hilarious — For Multiple Reasons

February 20th, 2011 1 comment

From Stephen Jay Gould’s The Flamingo’s Smile:

A hungry female black widow spider is also a formidable eating machine, and courting males must exercise great circumspection. On entering a female’s web, the male taps and tweaks some of her silk lines. If the female charges, the male either beats a hasty retreat or sails quickly away on his own gossamer. If the female does not respond, the male approaches slowly and cautiously, cutting the female’s web at several strategic points, thereby reducing her routes of escape or attack. The male often throws several lines of silk about the female, called, inevitably I suppose, the “bridal veil.” They are not strong, and the larger female could surely break them, but she generally does not. And copulation, as they like to say in the technical literature, “then ensues.” The male, blessed with paired organs for transferring sperm, inserts one palp, then, if not yet attacked by the female, the other. Hungry females may then gobble up their mates, completing the double-entendre of a consummation devoutly to be wished.

Is This Person Liberal or Conservative? In One Question.

February 20th, 2011 1 comment

The OK Trends blog on the OK Cupid dating site is pretty amazing. They pull from their hundreds of millions of pieces of data and suss out amazing facts about how people are, and how they interact. Here’s a beaut re: politics and ideology (Jonathan Haidt, take note):

The Best Questions For A First Date « OkTrends.

Jim Manzi Makes the Case for Doing Whatever We’re Doing Right Now — Or Nothing

November 20th, 2010 2 comments

I have to start this post by saying how much I like Jim’s recently-bruited notion (and coinage): “causal density.” I’ve been sharing it with my friends. In my words:

An event in physics — a ball being hit by a bat and landing in center field — has very few causes, so it’s pretty easy to deduce and explain those causes. Chemistry is significantly more complex, but still largely tractable.

Get into biology, and you’re very quickly into a lot of causes (what causes diabetes?), and those interacting with each other in extremely complex ways. It’s much harder to deduce causation, or reduce it to a simple or even straightforward formula or algorithm — or to make predictions based on particular interventions.

Jim suggests that the social sciences have even higher causal density. My example: What caused the decades-long decline in crime rates? Better policing techniques? Shifts in law-enforcement and judicial budgets? Harsher sentencing laws? Demographics? Roe v. Wade? Update: unleaded gasoline?

Despite yet more decades of regression analyses trying to “control” for all those factors and many more, controlling for those factors’ interactions makes the quest for cause deucedly difficult, perhaps even quixotic. This is especially so when you consider all the factors that might not have been considered or controlled for: cf. Roe v. Wade, until Levitt came along, or the effect of genes on character development, before the explanations of evolutionary psychology — especially pre-Trivers.

Jim brings this thinking home to the world of economics here and here. His basic assertion is that we have no idea what (long-term) effects different economic policies will have.

I think he overstates his case some (economics does have some quite good predictive abilities in particular areas — way better than we had a century ago), but still I agree that it’s a difficult issue. (Though I give a tentative and perhaps only somewhat useful response here.) But what concerns me most is what I discern to be the unstated implication of his posts:

Since we can’t predict the results, we shouldn’t do anything.

This thinking is rooted in a fundamental (and not totally crazy) belief among conservatives: that we’ve spent decades, centuries, millennia working out our social, economic, and cultural systems, so they should get extra credence in deciding what to do next. (I’m pretty sure I’ve heard Jim make this argument, but if not we’ve all certainly heard it from other conservatives.) The best bet, probabilistically, is to maintain the status quo. (This is closely related to a basic dictum of efficient-market theory: absent any new information, the most likely price for something tomorrow is … today’s price.)

Here are a few ways that I have a problem with that:

1. According to Jim’s own thinking, where we happen to be at any given moment is the result of an infinitely complex mix of interrelated causes and effects — some of which we’ve had control over, but many of which (especially the interactions) were and are completely beyond our ken. So our current position is to a great extent the result of luck, chance, fortune: an accident of history. Saying otherwise is to invoke a rather Panglossian and teleological belief system.

2. Which moment in history is that Panglossian one? Today? This is directly at odds with conservatives’ stated beliefs: that things were much better before the New Deal. Who gets to choose the conservative moment? Who decides that the status quo of the current moment (or some other) is the best yet, and that we should cling to it?

3. Favoring the status quo is directly at odds with America’s (humanity’s?) greatest virtue and capability: trying new ideas and approaches in an effort to progress and make things better.

4. The current moment can be very far from Panglossian — think Soviet Russia, or the radical, free-market- and government-capture-driven wealth and income inequality in America today. Since we don’t know with certainty what policies will fix that broken moment, should we do nothing about it? (But then again, I think we — with the exception of Republicans in their self-delusion — do know with reasonable certainty many of the answers for American inequality and its drag on growth. We know them largely because of research in economics — particularly econometrics.)

5. (Added Nov. 22) Doing nothing is doing something: it’s actively choosing whatever — to some extent by historical accident — happens to be in place at the moment. Who is to say that the unintended consequences of that choice will be superior to the unintended consequences of some other choice? We have to make a choice; the only question is what we should use as the grounds for that choice. Should systematic, rigorous economic analysis — despite all its failings — be considered in making that choice?

We’ve faced a decidedly non-Panglossian series of moments over the past decade: health-care costs going through the roof, and projected to continue, threatening to shatter the American way of life. And in the meantime the Republican party had its blank piece of paper — an almost total carte blanche — for six years. The “unintended consequences” and “history is our best guide” reasoning led to exactly what Jim’s posts suggest and imply: doing nothing. I don’t think he would argue that it was the optimal policy approach.

Now Jim might say that he’s not implying that we should cling to the status quo or do nothing. But he rather leaves us hanging. If the implication that I deduce is not what he intended to suggest, what was his intention? Simply to dismiss economic analysis as an input to our judgments, in favor of — for instance — casual lifelong observations?

It seems likely. He says that he came to his belief in the superiority of free-market solutions “through some combination of historical reasoning, introspection, practical experience and so forth.”

I’m completely agog. He doesn’t even mention systematic (or statistical) analysis — by himself or others. I find it hard to believe that Jim has never read an econometric analysis, or that he has been completely uninfluenced by those he has read. He’s not that kind of guy.

Understand: Jim is co-founder, chairman, and managing director of Applied Predictive Technologies. To quote Wikipedia (wonder who wrote this…), “APT’s software takes a statistically rigorous test and learn approach…”

Now of course it’s true: we can’t design and implement controlled tests of economies, which makes his company’s work of a decidedly different order. But does that mean that econometric analysis has no place — at all — in informing our judgments? That we should simply ignore it? That’s what Jim seems to be saying. But again: I really can’t believe that he does that.

If we review hundreds of systematic, rigorous econometric studies looking at many different data sets using many different analysis methods — and even have the benefit of systematic, rigorous reviews of all those studies — and we find that X seems to have no correlation with Y, should we draw the conclusion that changing X probably isn’t going to change Y in the future? Or should we ignore that finding, and rely instead purely on “historical reasoning, introspection, practical experience, and so forth”? In other words, casual, non-systematic observation and surmise? Should we do so even if that surmise is contradicted by all the systematically rigorous analysis?

Manzi admirers want to know.

Why Would We Rather Be Wrong than Perceive Ourselves as Being Wrong?

November 8th, 2010 3 comments

Why would we rather perceive ourselves as right than be right? Why does believing ourselves to be right feel so good?

People hate being wrong. From an evolutionary perspective, this makes sense. If we’re wrong about the world out there, we’re less likely to survive and produce grandchildren. You’d expect being wrong to feel bad, because it discourages being wrong.

But here’s what’s weird: what people really hate is perceiving themselves as being wrong. They hate it so much, they’d often rather be wrong — even with all the evolutionary downsides that being wrong delivers.

Example: believing that some animal is godlike and hence untouchable for food. Result: less available food, so (in aggregate, over generations) fewer grandchildren.

But try to tell someone holding that belief that he or she is wrong, that it’s a false belief. You’ll encounter massive, impenetrable resistance.

What in the heck is going on there? It seems completely contrary to evolutionary logic.

Why did humans evolve so that perceiving ourselves as being right is more pleasurable — feels better — than actually being right? (Remember: in general, natural selection results in “fit” behaviors — like having sex — giving pleasure; that pleasure reward is what encourages the behavior. This is using “fit” in the technical evolutionary sense: “likely to result in more grandchildren.”)

Not surprisingly, the remarkable Robert Trivers addressed this question:

The Elements of a Scientific Theory of Self‐Deception Update: Ungated version here.

And many have built on his work since. (The paper has 133 citations in Google Scholar, which is remarkably high by Google-citation standards.)

Trivers offers a few possible explanations, and suggests several avenues for further research. I’ll just share his first suggestion, and leave the more abstruse ones for those as is interested.

Explanation #1: if we deceive ourselves, it’s easier to deceive others.

Suppose you know that your proposal is bad for the person you’re proposing it to. People are good mind-readers (or more accurately, readers of facial expressions, tone of voice, body language, etc.), so if you’re consciously aware that you’re lying, they can often tell.

So what’s the best strategy? Hide that knowledge in your unconscious, where even you can’t see it. So your conscious mind believes a lie, even though your unconscious mind knows it’s a lie. Since the truth is hidden from you, it’s also hidden from others.

This gives self-deception real evolutionary advantages — it lets you convince people of things that are bad for them and good for you — so natural selection would naturally result in a mechanism that allows for that. It would also make it pleasurable to use that mechanism — to fool yourself — so you’ll use that mechanism. Hence the widely-demonstrated joys and benefits of self-delusion.

There are (at least) two problems here, though:

• Self-delusion must, on average, be beneficial to individuals, or it (and the pleasure reward for doing it) wouldn’t have evolved. But once the pleasure reward exists, it could encourage self-delusions that are not self-beneficial.

• Even if the self-delusion instinct is (overall) good for individuals’ “fitness,” it’s quite possibly bad for everyone in aggregate — depending on how you define “bad.”

This subject goes far deeper and gets almost infinitely complex, but I’ll stop here and leave others to ruminate further.

What’s Wrong with Free Markets: “The ‘Wisdom’ of the Crowds”

October 6th, 2010 2 comments

This may seem obvious to many, but it’s been very clarifying for me.

People often argue against the free-market system — which is based on the idea of rational actors — by saying “people are obviously not rational actors!”

But that’s a stupid argument. It misses the point. Nobody thinks that everyone, always, makes rational decisions. That would be dumb. Rather, the Wisdom of the Crowds idea is that the market operates as if everybody makes rational decisions.

Here are the assumptions underlying that thinking:

The free market results in the best allocation of resources.

Because: People make decisions about what they want to buy, so resources flow to producers of those things.

Even though: Individual purchase decisions are often irrational — not delivering maximum utility to the purchaser (much less to society as a whole).

But: All those irrational decisions cancel each other out, so the rational decisions dominate, effectively allocating resources.

Because: The irrational decisions are random — non-systematic.

The final assumption — which most free-market advocates don’t know they’re making — is the fatal flaw underlying the belief system. (Or at least one of the fatal flaws.)
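To see why that final assumption carries all the weight, here’s a toy simulation (mine, purely illustrative; the numbers are arbitrary): if buyers’ errors are random with mean zero, the crowd’s average judgment converges on the true value. If the errors share a bias, no amount of averaging fixes it.

```python
# Random errors cancel out; systematic errors don't.
# All values here are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
true_value = 100.0
crowd = 100_000

# Case 1: irrational, but randomly so (mean-zero noise).
random_errors = rng.normal(loc=0.0, scale=20.0, size=crowd)

# Case 2: irrational AND biased -- everyone overestimates by the
# same amount (think: almost everyone rating themselves an
# above-average driver).
shared_bias = 15.0
systematic_errors = rng.normal(loc=shared_bias, scale=20.0, size=crowd)

random_avg = np.mean(true_value + random_errors)
systematic_avg = np.mean(true_value + systematic_errors)

print(f"true value:                {true_value:.1f}")
print(f"crowd average, random:     {random_avg:.1f}")
print(f"crowd average, systematic: {systematic_avg:.1f}")
```

The random crowd lands within pennies of the truth; the biased crowd misses by the full shared bias, no matter how large the crowd gets.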

Bryan Caplan addressed this issue beautifully in his Myth of the Rational Voter. (Some comments on it here.) He points out (and demonstrates) that people’s voting choices are irrational. But more importantly, he shows that they’re systematically irrational. So rational choices don’t float to the top of the crowd; they’re dominated by systematically irrational decisions by that crowd.

Example (mine, not his; he has lots of his own): People A) think foreign aid is a big part of the U.S. budget (it’s well under 1%), B) are naturally driven by jingoism and ethnocentrism, C) underestimate the personal and national benefits deriving from foreign aid (just ask Mullen and Gates), and D) don’t like taxes. So they vote for people who promise to cut the budget (hence taxes) by cutting foreign aid.

But Bryan is a free-market believer. So he doesn’t apply the same thinking to purchase decisions that he does to voting decisions. He doesn’t consider (or acknowledge) that in fact, the crowd’s purchase decisions are also systematically irrational.

Example: People A) vastly overestimate their own driving skills (almost everybody believes they’re above average, or even in the top 10%), B) underestimate the dangers of traffic accidents (#1 cause of death in children) while overestimating other dangers (child abduction or terrorist attack: vanishingly small odds), and C) greatly overestimate the value of maneuverability and visibility (sitting up high and looking down on others) in avoiding accidents (braking distance is what counts). So they systematically underspend on what matters for auto safety (braking distance, air bags, etc.), favoring power (“I need it to get out of dangerous situations”; yeah, right), style, size (which generally increases braking distance), and “handling” instead.

So in this case and myriad others, because of systematic human irrationality, the free market does not deliver the best allocation of resources — either for individuals or for society as a whole.

I won’t get into what we as a society do and should do given these facts. (If you want to, you could start here.) Just to say, it’s important to know the facts.

Delight and Abject Dismay on Richard Dawkins’ Birthday

March 26th, 2010 15 comments

Another of those convergences: I just joined the Richard Dawkins group on Facebook, and discovered that today is his birthday. (Happy birthday sir!) It’s a convergence because over the last week I’ve been horribly dismayed. After decades of near hero-worship on my part, I’ve discovered that he is not acting as the man I’ve always believed him to be.

The issue is his position on group selection. (Don’t go away: it matters.) The way he has defended that position seems contrary to everything I have always so admired about him.

And I have so admired him, for so long. I have to watch myself constantly to avoid the kind of wild-eyed evangelism that serves only to give aid and comfort to the creationist enemy. The Selfish Gene and The Extended Phenotype provided (some of) the fundamental underpinnings for my understanding of (human) existence, and the belief and value system that’s built on that understanding.

I didn’t really need to read The God Delusion — preaching to the choir — but I did so and greatly enjoyed it purely for the joy of his arguments — the lucidity, the cogency, the logical and rhetorical coherence.

I can’t count the number of times I’ve recounted his anecdote about an aging professor who changes his mind. (“My dear fellow, I wish to thank you. I have been wrong these fifteen years.” . . .  “We clapped our hands red.”) It still brings tears to my eyes when I read it, and epitomizes how science, for all its real-world failings, is fundamentally different from faith. (Here. Start with “It does happen.”)

So, again, I’m nearly teary-eyed at the stance he has taken, and the rhetoric he’s deployed, in response to a body of thinking that has grown over decades and came to something of a culmination in 2007. (I’m late to the party on this one.) That body of evidence and theory contradicts one of his longest- and strongest-held beliefs: that group selection is hooey, that it could not have had any role in the evolution of human altruism.

Remember the stated goal of Dawkins’ seminal book: “My purpose is to examine the biology of selfishness and altruism.”

His basic theory: genes are the units of selection, and organisms are the vehicles of that selection. If a gene causes organisms to have more grandchildren, the gene’s frequency expands in the population.

Based on this, he rightly pooh-poohed warm, mushy, poorly-reasoned notions about genes contributing to “social cohesion” and the like. No altruistic gene could survive in a group if it didn’t provide net benefit for the individual containing that gene — either by helping the individual, helping kin who have the same gene, or through reciprocal payback from other individuals.

But what about the success of groups? Could groups with more altruistic genes have more grandchildren than groups with more purely self-serving genes? Could that group selection effect predominate over individual selection within the group?

It seems plausible, and from the first time I encountered the conundrum, it has always seemed to me to be a purely statistical question.

And that’s how (a damned impressive set of) mid-20th-century evolutionists went at it. They built models, ran the numbers, and determined that no: group selection could not overwhelm the forces of individual selection. If a gene isn’t good for an individual (and/or his kin), it will die out.

That belief achieved an orthodoxy in the political ecology of scientific academe that largely prevented later scientists from even raising the question, and successfully crushed most of the few efforts to re-examine it. It’s agonizingly similar to the despicable response that sociobiology and evolutionary psychology themselves encountered over those same decades, from the likes of Lewontin, Gould, and the “Theory” humanists.

As a result, both professionals and amateurs — including reasonably diligent amateurs like me — have been unthinkingly chanting along with that orthodoxy for years, decades. I don’t know how many times I’ve discredited thinking that seemed rooted in group-selectionist thinking.

And I was wrong. At least, I was too categorical. So I was sometimes/often wrong.

Here’s what makes me so sad: Richard Dawkins has been perhaps the most powerful voice for that orthodoxy, and he seems to be clinging to that idol even when its feet — his feet — are looking resoundingly clay-like.

Cutting to the meat, simplified:

In 2007, David Sloan Wilson and E. O. Wilson (the founder of sociobiology and one of the most brilliant, diligent, and sober evolutionary biologists to ever live, as Dawkins certainly agrees) published a paper (PDF) laying out the cogent, lucid, and compelling case that group selection can indeed predominate over individual selection in the evolution of altruistic genes — that the group can be a vehicle of selection, just as the individual can. (They talk about “multilevel selection.”)

In other words, genes that benefit the group can proliferate in the larger population, even if those genes are disadvantaged within the group. Again, it’s all a matter of models and statistics, and the Wilsons (no relation) deployed and cited damned convincing models and statistics showing that the earlier evolutionists probably got it wrong.
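The statistical core of that argument fits in a toy calculation (mine, not the Wilsons’ actual models): within every group altruists lose ground, and yet the altruistic gene spreads in the population as a whole, because altruist-heavy groups out-grow selfish-heavy ones. It’s Simpson’s paradox at work.

```python
# Toy multilevel-selection arithmetic (illustrative numbers only).
# Altruists pay cost c to confer benefit b (scaled by the altruist
# fraction) on everyone in their group; selfish members free-ride.
b, c = 5.0, 1.0

def next_generation(altruists, selfish):
    """Offspring counts after one round of reproduction in one group."""
    p = altruists / (altruists + selfish)   # altruist fraction
    w_altruist = 1 + b * p - c              # pays the cost
    w_selfish = 1 + b * p                   # free-rides
    return altruists * w_altruist, selfish * w_selfish

groups = [(80, 20), (20, 80)]               # altruist-heavy, selfish-heavy
before = sum(a for a, s in groups) / sum(a + s for a, s in groups)

offspring = [next_generation(a, s) for a, s in groups]
for (a0, s0), (a1, s1) in zip(groups, offspring):
    print(f"within-group altruist share: {a0/(a0+s0):.3f} -> {a1/(a1+s1):.3f}")

after = sum(a for a, s in offspring) / sum(a + s for a, s in offspring)
print(f"global altruist share:       {before:.3f} -> {after:.3f}")
```

In both groups the altruist share falls (0.800 → 0.762 and 0.200 → 0.111), yet globally it rises (0.500 → 0.567): the group can be a vehicle of selection even while individual selection runs against the gene inside every group.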

Now if Dawkins had cogent takedowns of those models and statistics, there is nobody I would rather hear them from. But his counterarguments have all been from principles, even when those principles are not thrown into question by Wilson and Wilson — their arguments are based on those principles.

What’s more dismaying is that Dawkins’ few dozen paragraphs in reply (remember, it’s been three years since then) bear all the hallmarks of a religionist who has not a leg to stand on, lashing out in frantic, desperate defense with red herrings, tangents, inapplicable arguments, dodges, weaves, and personal invective. (I’m not a professional in the field, but I know good and bad arguments when I hear them.)

This post is already too long, so I won’t detail everything here. You can see one of Dawkins’ replies here (PDF), and you can read the whole story from D. S. Wilson — including much of Dawkins’ response — here. Wilson’s 19-post blog thread is here in one PDF.

I’ll just quote one passage from Dawkins to give the flavor of those replies:

…as far as I am concerned, the statement is false: not a semantic confusion; not an exaggeration of a half-truth; not a distortion of a quarter truth; but a total, unmitigated, barefaced lie.

This is not the Richard Dawkins I’ve known and (intellectually) loved for lo these many decades. It is, in fact, the exact opposite of that Richard Dawkins.

I can only quote D.S. Wilson’s words, which precisely echo my most heartfelt feelings:

In my dreams, I imagine him reading my modified haystack model and saying “Well done, David! I have been wrong all these years.”

Richard Dawkins, won’t you please come home?

Can John Gottman Predict Divorce? (Probably Not.)

March 24th, 2010 No comments

Update: Instead of saying “Probably Not” in the title, I probably should have said “We have no idea.”

Being a Seattle parent with kids in private schools, I’ve been assailed for years by pronouncements and lectures by and about the Seattle-based Gottman Institute (tagline: “Researching and Restoring Relationships”). Their most widely known claim is their ability to predict, after watching a married couple for fifteen minutes, whether they’ll get divorced.

The basic Gottman theory — that facial expressions of contempt during couples’ interactions are predictive of divorce — seems very plausible, intuitively. But there are many intuitively plausible surmises that are just wrong.

And no matter how intuitively plausible it is, the claim always seemed fishy to me. But I never did the research to find out if their predictions were really accurate. Happily, somebody has finally done it for me. Here’s Andrew Gelman, god of all things statistical, blogging about a Slate article excerpted from Laurie Abraham’s Husbands and Wives Club.

Short story, their “predictions” are built on quicksand. Here’s how they do it.

Gather data on a bunch of couples — say, six variables for each couple. Determine which of those couples get divorced. Then run a program that finds an equation correlating the (presumably) predictive variables to the results. (These quite remarkable programs — the realm of ultimate-wonk physicists only a decade or two ago — are now available for free download, or as $49 Excel plug-ins. To quote my buddy Olav, “Isn’t it great living in the future?”)

Here’s what’s wrong: these programs will always find an equation that correlates the variables to the results. (With a greater or lesser “fit” to the data.) Does that mean the equation is predictive? Only if it makes an accurate prediction when applied to a different set of data.

That is what Gottman has not done, at least in his published papers. Every one of them has a new equation that — surprise — “predicts” the divorces in the group with surprising accuracy — the same group that was used to generate the equation.

Now this is true: if the program finds a good data-fitting equation (which Gottman seems to have done — multiple times), there’s a greater chance that the equation will actually be predictive. But there’s only one way to know: use it to predict. If the prediction fails, the predictive ability of the equation is falsified.
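The trap is easy to reproduce. In this sketch (mine; every variable and every “divorce” below is pure random noise, with no relationship to anything), an equation fit to a small group “predicts” that same group’s divorces perfectly, then performs at coin-flip level on a fresh group:

```python
# Overfitting in miniature: with as many parameters as couples, a
# fitted equation always "predicts" its own training data -- even
# when the data is pure noise.
import numpy as np

rng = np.random.default_rng(0)

n_train, n_vars = 7, 6          # 6 variables + intercept = 7 parameters
X_train = np.hstack([np.ones((n_train, 1)),
                     rng.normal(size=(n_train, n_vars))])
y_train = rng.integers(0, 2, size=n_train)   # random "divorced?" labels

# Find an equation correlating the variables to the outcomes.
coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def accuracy(X, y):
    return np.mean((X @ coef > 0.5) == y)

in_sample = accuracy(X_train, y_train)       # perfect "prediction"

# Now apply the same equation to a new, independent group of couples.
n_test = 1000
X_test = np.hstack([np.ones((n_test, 1)),
                    rng.normal(size=(n_test, n_vars))])
y_test = rng.integers(0, 2, size=n_test)
out_of_sample = accuracy(X_test, y_test)     # roughly coin-flip

print(f"in-sample accuracy:     {in_sample:.2f}")
print(f"out-of-sample accuracy: {out_of_sample:.2f}")
```

An equation fit to the same group it “predicts” tells you nothing; only the out-of-sample number is a real prediction, and that’s the number Gottman’s papers don’t report.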

Gottman has not (to my knowledge) attempted any falsifiable predictions, so we have no idea if his predictions are true or false.

The Gottman Institute presumably has all the data to hand, and could test past predictions against future results. I’m wondering: will they now do so?

Google doesn’t turn up any hits for “Abraham” on the site, so I’m thinking they haven’t responded. One can rather understand why. Abraham says in a comment to the Slate post (and, she says, in a footnote in the book — it’s not search-insideable on Amazon) that she repeatedly requested an interview in May 2009 but Gottman wouldn’t see her until October — too late for her book. I do wish she’d tried again before the excerpt was published, giving him ample benefit of the doubt.

But absent that, I’m quite curiously waiting to see what we hear from The Gottman Institute.

Is Altruism Inevitable?

March 24th, 2010 No comments

In one of those wonderful confluences, two items just came together for me. I read The Social Atom by Mark Buchanan, and my friend Steve posted a link to an Economist piece on evolution, fairness, markets, and religion.

It all circulates around a central conundrum that evolutionists have been worrying at since Darwin himself: why do humans, in all cultures, perform selfless acts? You’d think that natural selection would weed out those fools — that cheaters who take advantage of the selfless would have more grandchildren, making the do-gooders vanish from the population.

The answer seems to be group selection: groups with more selfless types survive better than groups of cheaters. (I’ll be posting soon on the controversy over group selection, particularly Richard Dawkins’ intransigence on the issue; suffice it here to say that it makes sense to me. Can’t wait? Wikipedia.)

Which brings me to The Social Atom. Buchanan looks at systems made up of fairly simple “atoms,” and shows how those atoms’ simple properties can result in very sophisticated and often predictable system behavior. Think (my example) of flocks of birds following very simple algorithms — “if there’s a bird on one side of you and it moves away, move closer” — and you get the idea.

Likewise the atoms in a magnet: you can ignore all the insane complexities of sub-atomic particles and just think of those atoms as arrows whose direction affects adjacent atoms’ arrow directions. That single property explains the whole system.

This plays out in human systems too. If you had stock markets where everyone throws darts, the ups and downs of the market would map to a bell curve with a normal distribution — thin tails, with very few days of large ups and downs.

But build a system where individuals switch between strategies in response to market movements and other individuals’ strategies, and you get the distribution we actually have: a much flatter bell curve with fatter tails — many more days with big up-and-down swings. (This is what brought down Long Term Capital Management. A one-in-five-hundred-year event on a normal bell curve was a one-in-five-year event in the actual distribution of market movements.)
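The thin-tail/fat-tail gap is easy to see numerically. A sketch (my example, not Buchanan’s; a Student-t distribution with 3 degrees of freedom stands in for the fat-tailed market, rescaled so both distributions have the same variance):

```python
# How often does a "4-sigma" daily move occur under thin tails
# versus fat tails? The Student-t here is just an illustrative
# stand-in for a fat-tailed return distribution.
import numpy as np

rng = np.random.default_rng(42)
days = 1_000_000

normal_moves = rng.standard_normal(days)
fat_moves = rng.standard_t(df=3, size=days) / np.sqrt(3.0)  # unit variance

normal_crashes = np.sum(np.abs(normal_moves) > 4)
fat_crashes = np.sum(np.abs(fat_moves) > 4)

print(f"thin tails: {normal_crashes} big-move days per million")
print(f"fat tails:  {fat_crashes} big-move days per million")
```

Same average volatility, but the fat-tailed world delivers “impossible” swings orders of magnitude more often: the LTCM trap in miniature.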

That higher-volatility pattern affects the individuals’ decisions — their strategies — but the pattern remains. Because — this is what’s fascinating — it doesn’t matter what the individual strategies are. The simple fact of individuals adaptively selecting strategies is all it takes for the high-volatility pattern to emerge.

How does this all bear on non-kin altruism? In my words: Once you have a vehicle — language — by which cultural values and mores can be transmitted to others and across generations, cooperation and individual selflessness are naturally emergent properties of that system. They’re inevitable, because groups that don’t transmit and enforce those values don’t survive in competition with ones that do.

And that leads me to the study that Steve and The Economist linked to. It looks at fifteen contemporary, small-scale societies (hunters, fishermen, foragers), and asks how the selflessness that makes small groups prosper could have extended to the kind of global altruism that we see today.

They suggest that “such societies may have required norms and institutions that sustain fairness in ephemeral [one-time] exchanges.” Their findings support that: larger communities “punish” cheaters more, and groups with more market interactions and participation in world religions have more “fair” behavior by individuals.

In my thinking: these social patterns and institutions are naturally emergent properties of a species that has language. (A necessary caveat that I won’t expand on here: periodic genocide is also a naturally emergent property of such a species, for the same reasons of group selection.)

Now I don’t know about you, but my spidey sense detects a strong whiff of axe-grinding in Henrich’s conclusions: a demonstration that those favored children of the Right — markets and religion — are what account for fairness. (I think this may be why Steve tweeted it?)

But the study makes me wonder (and wonder why the researchers didn’t wonder): do government-like institutions in those societies also enforce fairness and encourage selflessness? Do equivalents of our three branches — strong leaders, councils of elders, and systems of group adjudication — correlate with more fairness and selflessness in a society?

This dichotomy — between market/religion-based institutions and government institutions — also makes me think again about the Jonathan Haidt research I’ve been blogging recently, showing that Republicans give much more moral weight than liberals to group loyalty. Might it be — since liberals believe more in government as the fairness enforcer — that the two groups just define “group” differently (or that liberals’ support for government is based on reasoned belief instead of “sanctity” or “moral intuition”)?

Steven Pinker has suggested that cooperation with non-kin is one of the three main attributes (along with language and tool/technology use) that distinguish humans from other animals. That cooperation put us at the top of the food chain.

Which leads me to reiterate a thought I expressed recently in the comments, in response to those who champion competition as a great good: competition is a second-order effect; its only merit is that it makes cooperation more efficient. The woolly-headed liberal’s mantra, “if we’d all just cooperate” (finger twirling in cheek), is both obvious and stupid: obviously we’d all be better off if we all cooperated than if we all competed. But given a population of selfish social atoms, competition is what forces people to cooperate in groups.

But competition (a.k.a. “the market”) is not the only thing that improves cooperation. Henrich’s work shows that religion does as well. And it’s not at all difficult to find other examples — government, for instance — in which cooperation is improved by . . . cooperation.

Do Moral Intuitions Change in Different Situations?

March 17th, 2010 No comments

In response to Jonathan Haidt’s comment on Bryan’s post:

One of my biggest questions about Haidt’s work: are people’s moral intuitions consistent across different situations?

We know that behavior often doesn’t generalize across situations: interventions in children’s homes/families, for example, have little or no effect on their behavior at school.

I wonder if survey choices distinguishing between the private and public realms would yield very different weightings in different groups.

For instance: if we looked at honesty/truthfulness (a realm I very much wish Haidt would explore, not just “authenticity” or integrity), would we find that Conservatives value it highly (more than Liberals?) in private, especially face-to-face, dealings, but downgrade it significantly in public dealings where groups interact (notably in public debate), where group loyalty would overwhelm it?

This raises, more generally, the thorny issue of interactions between the realms, an issue that promises to do for Haidt’s work what genetic interactions and epigenetics have done for genetics: make it extraordinarily complex (and interesting).