Archive for the ‘Evolutionary Psychology’ Category

Nassim Taleb: Two Myths About Rivalry, Scarcity, Competition, and Cooperation

June 28th, 2014 2 comments

I’m delighted to find that someone with the necessary statistical chops has answered a question I’ve been asking for a while: Have any of the 130+ evolution scientists who’ve savaged Wilson and Nowak’s Eusociality paper (and Wilson’s Social Conquest of Earth) gone deep into the maths of their model (laid out in their technical appendix)? I check periodically, but don’t follow the field carefully.

According to this Taleb Facebook post, the answer’s still no, almost four years after the paper was published.

Emphasis mine, links in the post:

There are two myths that prevail in academic circles (hence the general zeitgeist) because of mental contagion and confirmatory effects (simply from the way researchers look at data and the way it is disseminated): 

1) That people are overly concerned by hierarchy (and pecking order), and that hierarchy plays a real role in life, a belief generalized from the fact that *some* people care about hierarchy *most of the time* (most people may care about hierarchy *some of the time* but it does not mean hierarchy is a driver). The problem is hierarchy plays a large role in zero-sum environments like academia and corrupt economic regimes (meaning someone wins at the expense of others), so academics find it natural and tend to see it in real life and in environments where it may not be prevalent. Many many people don’t care, and there is no need to pathologize them as “not motivated” –academics who publish tend to be “competitive” and “competitive” in a zero-sum environment is deadly. I haven’t seen any study looking at things the other way.

2) That “competition” plays a large role compared to *cooperation* in evolutionary settings –of course if you want ruthless competition you will find examples and can model it with bad math. The latter point is extremely controversial, Wilson and Nowak have been savagely attacked for their papers (with >130 signatures contesting it) and, what is curious NOBODY was able to debunk the math (very very very rigorous backup material). If Nowak/Wilson were wrong someone would have shown where, and in spite of the outpour of words nobody did.

I’d condense my thinking on the subject as follows:

1) People mistake rivalry for scarcity. If one tribe excludes all the others from a water source, forcing them to do its will to get water, there’s obviously scarcity, right? Wrong.

Don’t get me started on the sacralization of (largely inherited) “property rights,” ownership — the right to exclude others.

2) They don’t understand that competition’s only virtue is increasing and improving cooperation. Cooperation — non-kin altruism, eusociality, etc. — is the thing that got us to the top of the food chain. Cooperation is what wins the battle against scarcity.

Competition fetishists think that competition is always good because it sometimes improves cooperation, even though it frequently does the exact opposite.

Think: trade wars. Or just…wars.

Cross-posted at Angry Bear.

The Five Best Nonfiction Books

June 2nd, 2014 11 comments

Okay fine, not the best. (Click bait!) But for me, the most important — the five books that, more than any others, taught me how to think about the world.

A friend in my “classics” book group asked me for nonfiction book recommendations. Here’s what I wrote:

The NF books that wow me, get me all excited, have me thinking for years or decades, are ones that are comprehensible to mortals but that transform their fields, become the essential touchstones and springboards for whole disciplines and realms of thought. Writing for two such disparate audiences is insanely hard, and the fact that these books succeed is a big part of what makes them brilliant.

Also books that cut to the core of what we (humans) are, how we know. (So, there’s a strong science tilt here, but far bigger than arid “science.”)

“I don’t know how I thought about the world before I read this.”


“Yes! That’s exactly what I’ve been kinda sorta thinking, in a vague and muddled way. THANK YOU for figuring out what I think.”

These books let you sit in on, even “participate” in, discussions at the cutting edge of human understanding. They make you (or me, at least) feel incredibly smart.

And they’re fun to read — at least for those with a certain…bent…

Probably have to start with Dawkins’ The Selfish Gene. When it came out in ’76 it crystallized how everybody thought about evolution, hence life and humanity. The amazing Dawkins, amazingly to me, has become kind of hidebound and reactionary in response to new developments since then (group/multilevel selection, inheritance of acquired characteristics), but the new information and new thinking that make parts of this book wrong couldn’t exist without the thinking so beautifully condensed in this book. Might not need to read the whole thing, but it’s pretty short and you might not be able to resist. Very engaging writer, full of fascinating facts about different species and humans. Also the place where the word “meme” was coined.

Steven Pinker. The Blank Slate: The Modern Denial of Human Nature. The most important book I’ve read in decades. Philosophy meets science meets sociology, anthropology, psychology, politics, law… Pinker’s core expertise is in language acquisition, how two-year-olds accomplish the spectacularly complex task of learning language (see The Language Instinct: How the Mind Creates Language). He has a love affair with verbs, in particular. Just loves those fuzzy little things. But his knowledge is encyclopedic and his mind is vast. And he’s laugh-out-loud funny on every other page. Also incredibly warm and human. I have such a bro-crush on this guy. (Also: everything else he’s ever written, including at least some chunks of his latest, The Better Angels of Our Nature: Why Violence Has Declined.)

Daniel Kahneman, Thinking, Fast and Slow. Kahneman and his longtime collaborator Amos Tversky (sadly deceased) are the psychologists behind “Prospect Theory” (1979), work that won Kahneman the 2002 Nobel Prize — in Economics! (Fucking economists have been largely ignoring their work ever since, but that’s another subject…) About “Type 1” and “Type 2” thinking: the first is instantaneous, evolved heuristics that let us, e.g., read a person’s expression in a microsecond from a block away. The second is what we think of as “thinking” — slow, tiring, and…crucial to what makes us human. Interestingly, in interviews Kahneman says that he almost didn’t write this book, thought it would fail, for the very reason that it’s so great: it addresses both mortals and the field’s cutting-edge practitioners, brilliantly. The book’s discussions of his lifelong friendship and collaboration with Tversky are incredibly touching.

E. O. Wilson, The Social Conquest of Earth. Q: How did we end up at the top of — utterly dominating — the world food chain? A: “Eusociality”: roughly, non-kin altruism. Wilson knows more about the other hugely successful social species — insects and especially ants — than any other human. He basically founded the field of evolutionary psychology with his ’75 book, Sociobiology. As with the others, this is deep, profound, wide-ranging, and incredibly warm and human in its insights into what humanity is, what humans are. Those things that are wrong in The Selfish Gene? Here’s where you’ll find them.

Michael Sandel, Justice: What’s the Right Thing to Do? Philosophy. It draws on some scientific findings, but mainly this is very careful step-by-step thinking through a subject, a construct, that is not uniquely human, but close. (Elephants, apes, etc. do seem to care about justice, sort of.) I find it especially engaging and important because it addresses and untangles the central political arguments of recent times — is it “just” to make everyone better off by taking from the rich and giving to the poor? Should individual “liberty” trump individual rights? What rights? Etc. This book did much to help me comb out my muddled thinking on this stuff.

Morton Davis, Game Theory: A Nontechnical Introduction. Stands out on this list cause it’s not one of those “big” books. Available in a shitty little $10 Dover edition. But it’s an incredibly engaging walk through the subject, full of surprising anecdotes and insights. And he does all the algebra for you! The stuff in here makes all the other books above better, cause they’re all using some aspects of this thinking. Here’s an Aha! example I wrote up: Humans are Pathologically Nuts: Proof Positive.

Okay, you noticed there are six books here. Did I mention click bait?

Cross-posted at Angry Bear.

More On Being Wrong

April 24th, 2011 Comments off

Barry Ritholtz links to Kathryn Schulz’s TED talk on Being Wrong (I wrote about her book here), and comments,

I dont know about anyone else, but I am wrong all the time.

I expect to be wrong.

Which led me to clear up some of the thinking that I’ve been doing since my last post on the subject. Here’s the comment I left on Barry’s blog, with some editing:

I of course expect to be wrong about particular things. I think we all do. But that’s future tense. “Some of my current and future predictions will turn out to be incorrect.” Well, yeah. Not really so interesting.

What is interesting is the human propensity for present-tense denial of even obvious reality, and the extraordinary lengths and contortions to which we’ll go to avoid admitting that we’re wrong.

The big aha insight for me in Schulz’s book was this, which she skips past and doesn’t really put across in her talk:

There is no such thing as the real-time experience of being wrong. Present tense. As soon as you realize you’re wrong, you’re not anymore.

She just hints at this glancingly in the talk, when she says that being wrong feels like … being right. Nice line, that.

The rest of her book left me dissatisfied, though (and even more so her talk), because it didn’t answer the fundamental question: why does it feel so bad to discover that we’re wrong? Why did we evolve to be like that? Wouldn’t it be more evolutionarily fit to embrace and enjoy the discovery of wrongness, for purposes of self-correction and accurate perception of reality? Wouldn’t people with that propensity have more grandchildren?

I’m kind of astounded that after five years of thinking about these questions, she never seems to have asked, much less answered, that one. She says we don’t like discovering that we’re wrong because it feels bad. But she never discusses why it feels bad.

The best (possible) answer I’ve come across is via Jonah Lehrer’s How We Decide.

Short story, it’s how the learning mechanism works. We’ve evolved so that if we are right in a prediction, we get a dopamine hit of pleasure. If we’re wrong, we don’t get our fix, and that feels really bad. (This helps explain why humans’ loss-aversion exceeds our gain-seeking.) It’s pretty straightforward behaviorism, embedded in a fascinatingly complex set of constructs.
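Lehrer’s account maps closely onto what reinforcement-learning researchers call reward-prediction error: the “dopamine hit” corresponds to the gap between expected and received reward. A minimal, illustrative sketch (my framing and numbers, not Lehrer’s):

```python
# Reward-prediction-error learning (Rescorla-Wagner style).
# The "dopamine signal" is delta = reward - prediction:
# positive delta feels good, negative delta (a missed fix) feels bad.
def update(prediction, reward, learning_rate=0.1):
    delta = reward - prediction                    # the surprise signal
    return prediction + learning_rate * delta, delta

# Repeatedly receiving the predicted reward: surprise shrinks to zero,
# and the prediction converges on reality.
p = 0.0
for _ in range(100):
    p, delta = update(p, reward=1.0)
print(round(p, 2))    # prediction has converged to 1.0

# A confident prediction that turns out wrong yields a large
# negative delta -- the withheld "fix" that makes wrongness hurt.
_, delta = update(1.0, reward=0.0)
print(delta)          # -1.0
```

The asymmetry the post mentions (loss aversion exceeding gain-seeking) shows up here as the sting of a large negative delta after a confident prediction.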

So the learning mechanism, ironically, makes us not want to discover that we’re wrong, because it feels bad.

I can only figure that the fitness benefits of the learning mechanism outweigh the unfitness of reality denial, and that evolution couldn’t “figure out” any other, less “expensive” way to do learning.

Just Cause I Thought This Was Hilarious — For Multiple Reasons

February 20th, 2011 1 comment

From Stephen Jay Gould’s The Flamingo’s Smile:

A hungry female black widow spider is also a formidable eating machine, and courting males must exercise great circumspection. On entering a female’s web, the male taps and tweaks some of her silk lines. If the female charges, the male either beats a hasty retreat or sails quickly away on his own gossamer. If the female does not respond, the male approaches slowly and cautiously, cutting the female’s web at several strategic points, thereby reducing her routes of escape or attack. The male often throws several lines of silk about the female, called, inevitably I suppose, the “bridal veil.” They are not strong, and the larger female could surely break them, but she generally does not. And copulation, as they like to say in the technical literature, “then ensues.” The male, blessed with paired organs for transferring sperm, inserts one palp, then, if not yet attacked by the female, the other. Hungry females may then gobble up their mates, completing the double-entendre of a consummation devoutly to be wished.

Is This Person Liberal or Conservative? In One Question.

February 20th, 2011 1 comment

The OK Trends blog on the OK Cupid dating site is pretty amazing. They mine their hundreds of millions of pieces of data and suss out surprising facts about how people are, and how they interact. Here’s a beaut re: politics and ideology (Jonathan Haidt, take note):

The Best Questions For A First Date « OkTrends.

Jim Manzi Makes the Case for Doing Whatever We’re Doing Right Now — Or Nothing

November 20th, 2010 2 comments

I have to start this post by saying how much I like Jim’s recently-bruited notion (and coinage): “causal density.” I’ve been sharing it with my friends. In my words:

An event in physics — a ball being hit by a bat and landing in center field — has very few causes, so it’s pretty easy to deduce and explain those causes. Chemistry is significantly more complex, but still largely likewise.

Get into biology, and you’re very quickly into a lot of causes (what causes diabetes?), and those interacting with each other in extremely complex ways. It’s much harder to deduce causation, or reduce it to a simple or even straightforward formula or algorithm — or to make predictions based on particular interventions.

Jim suggests that the social sciences have even higher causal density. My example: What caused the decades-long decline in crime rates? Better policing techniques? Shifts in law-enforcement and judicial budgets? Harsher sentencing laws? Demographics? Roe v. Wade? Update: unleaded gasoline?

Despite yet more decades of regression analyses trying to “control” for all those factors and many more, controlling for those factors’ interactions makes the quest for cause deucedly difficult, perhaps even quixotic. This is especially true when you consider all the factors that might not have been considered or controlled for: cf. Roe v. Wade, until Levitt came along, or the effect of genes on character development, before the explanations of evolutionary psychology — especially pre-Trivers.
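The uncontrolled-factor problem can be made concrete with a toy omitted-variable example (entirely hypothetical numbers): a hidden cause Z drives both the “policy” X and the outcome Y, and a naive regression of Y on X attributes Z’s effect to X.

```python
import random

random.seed(42)

# Hidden cause Z drives both X and Y; X itself has ZERO causal
# effect on Y. A naive one-variable regression still finds a
# strong, spurious "effect" of X.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]        # X correlates with Z
y = [2 * zi + random.gauss(0, 0.5) for zi in z]    # Y caused only by Z

# Ordinary least-squares slope of Y on X, computed by hand:
mean_x = sum(x) / n
mean_y = sum(y) / n
cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / n
var_x = sum((xi - mean_x) ** 2 for xi in x) / n

slope = cov_xy / var_x
print(round(slope, 1))   # roughly 1.6, despite a true causal effect of 0
```

In a high-causal-density setting there are many such Zs, most of them unknown, which is exactly why the crime-rate question above has resisted decades of regressions.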

Jim brings this thinking home to the world of economics here and here. His basic assertion is that we have no idea what (long-term) effects different economic policies will have.

I think he overstates his case some (economics does have some quite good predictive abilities in particular areas — way better than we had a century ago), but still I agree that it’s a difficult issue. (Though I give a tentative and perhaps only somewhat useful response here.) But what concerns me most is what I discern to be the unstated implication of his posts:

Since we can’t predict the results, we shouldn’t do anything.

This thinking is rooted in a fundamental (and not totally crazy) belief among conservatives: that we’ve spent decades, centuries, millennia working out our social, economic, and cultural systems, so they should get extra credence in deciding what to do next. (I’m pretty sure I’ve heard Jim make this argument, but if not we’ve all certainly heard it from other conservatives.) The best bet, probabilistically, is to maintain the status quo. (This is closely related to a basic dictum of efficient-market theory: absent any new information, the most likely tomorrow price for something is … today’s price.)

Here are a few ways that I have a problem with that:

1. According to Jim’s own thinking, where we happen to be at any given moment is the result of an infinitely complex mix of interrelated causes and effects — some of which we’ve had control over, but many of which (especially the interactions) were and are completely beyond our ken. So our current position is to a great extent the result of luck, chance, fortune: an accident of history. Saying otherwise is to invoke a rather Panglossian and teleological belief system.

2. Which moment in history embodies that Panglossian ideal? Today? This is directly at odds with the conservatives’ stated beliefs: that things were much better before the New Deal. Who gets to choose the conservative moment? Who decides that the status quo of the current moment (or some other) is the best yet, and that we should cling to it?

3. Favoring the status quo is directly at odds with America’s (humanity’s?) greatest virtue and capability: trying new ideas and approaches in an effort to progress and make things better.

4. The current moment can be very far from Panglossian — think Soviet Russia, or the radical, free-market- and government-capture-driven wealth and income inequality in America today. Since we don’t know with certainty what policies will fix that broken moment, should we do nothing about it? (But then again, I think we — with the exception of Republicans in their self-delusion — do know with reasonable certainty many of the answers for American inequality and its drag on growth. We know them largely because of research in economics — particularly econometrics.)

5. (Added Nov. 22) Doing nothing is doing something: it’s actively choosing whatever — to some extent by historical accident — happens to be in place at the moment. Who is to say that the unintended consequences of that choice will be superior to the unintended consequences of some other choice? We have to make a choice; the only question is what we should use as the grounds for that choice. Should systematic, rigorous economic analysis — despite all its failings — be considered in making that choice?

We’ve faced a decidedly non-Panglossian series of moments over the past decade: health-care costs going through the roof, and projected to continue, threatening to shatter the American way of life. And in the meantime the Republican party had its blank piece of paper — an almost total carte blanche — for six years. The “unintended consequences” and “history is our best guide” reasoning led to exactly what Jim’s posts suggest and imply: doing nothing. I don’t think he would argue that it was the optimal policy approach.

Now Jim might say that he’s not implying that we should cling to the status quo or do nothing. But he rather leaves us hanging. If the implication that I deduce is not what he intended to suggest, what was his intention? Simply to devalue economic analysis as an input to our judgments, in favor of — for instance — casual lifelong observations?

It seems likely. He says that he came to his belief in the superiority of free-market solutions “through some combination of historical reasoning, introspection, practical experience and so forth.”

I’m completely agog. He doesn’t even mention systematic (or statistical) analysis — by himself or others. I find it hard to believe that Jim has never read an econometric analysis, or that he has been completely uninfluenced by those he has read. He’s not that kind of guy.

Understand: Jim is co-founder, chairman, and managing director of Applied Predictive Technologies. To quote Wikipedia (wonder who wrote this…), “APT’s software takes a statistically rigorous test and learn approach…”

Now of course it’s true:  we can’t design and implement controlled tests of economies, which makes his company’s work of a decidedly different order. But does that mean that econometric analysis has no place — at all — in informing our judgments? That we should simply ignore it? That’s what Jim seems to be saying. But again: I really can’t believe that he does that.

If we review hundreds of systematic, rigorous econometric studies looking at many different data sets using many different analysis methods — and even have the benefit of systematic, rigorous reviews of all those studies — and we find that X seems to have no correlation with Y, should we draw the conclusion that changing X probably isn’t going to change Y in the future? Or should we ignore that finding, and rely instead purely on “historical reasoning, introspection, practical experience, and so forth”? In other words, casual, non-systematic observation and surmise? Should we do so even if that surmise is contradicted by all the systematically rigorous analysis?

Manzi admirers want to know.

Why Would We Rather Be Wrong than Perceive Ourselves as Being Wrong?

November 8th, 2010 3 comments

Why would we rather perceive ourselves as right than be right? Why does believing ourselves to be right feel so good?

People hate being wrong. From an evolutionary perspective, this makes sense. If we’re wrong about the world out there, we’re less likely to survive and produce grandchildren. You’d expect being wrong to feel bad, because it discourages being wrong.

But here’s what’s weird: what people really hate is perceiving themselves as being wrong. They hate it so much, they’d often rather be wrong — even with all the evolutionary downsides that being wrong delivers.

Example: believing that some animal is godlike and hence untouchable for food. Result: less available food, so (in aggregate, over generations) fewer grandchildren.

But try to tell someone holding that belief that he or she is wrong, that it’s a false belief. You’ll encounter massive, impenetrable resistance.

What in the heck is going on there? It seems completely contrary to evolutionary logic.

Why did humans evolve so that perceiving ourselves as being right is more pleasurable — feels better — than actually being right? (Remember: in general, natural selection results in “fit” behaviors — like having sex — giving pleasure; that pleasure reward is what encourages the behavior. This is using “fit” in the technical evolutionary sense: “likely to result in more grandchildren.”)

Not surprisingly, the remarkable Robert Trivers addressed this question:

The Elements of a Scientific Theory of Self‐Deception Update: Ungated version here.

And many have built on his work since. (The paper has 133 citations in Google Scholar, which is remarkably high by Google-citation standards.)

Trivers offers a few possible explanations, and suggests several avenues for further research. I’ll just share his first suggestion, and leave the more abstruse ones for those as is interested.

Explanation #1: if we deceive ourselves, it’s easier to deceive others.

Suppose you know that your proposal is bad for the person you’re proposing it to. People are good mind-readers (or more accurately, readers of facial expressions, tone of voice, body language, etc.), so if you’re consciously aware that you’re lying, they can often tell.

So what’s the best strategy? Hide that knowledge in your unconscious, where even you can’t see it. So your conscious mind believes a lie, even though your unconscious mind knows it’s a lie. Since the truth is hidden from you, it’s also hidden from others.

This gives self-deception real evolutionary advantages — it lets you convince people of things that are bad for them and good for you — so natural selection would tend to produce a mechanism that allows for that. It would also make it pleasurable to use that mechanism — to fool yourself — so you’ll use that mechanism. Hence the widely demonstrated joys and benefits of self-delusion.

There are (at least) two problems here, though:

• Self-delusion must, on average, be beneficial to individuals, or it (and the pleasure reward for doing it) wouldn’t have evolved. But once the pleasure reward exists, it could encourage self-delusions that are not self-beneficial.

• Even if the self-delusion instinct is (overall) good for individuals’ “fitness,” it’s quite possibly bad for everyone in aggregate — depending on how you define “bad.”

This subject goes far deeper and gets almost infinitely complex, but I’ll stop here and leave others to ruminate further.

What’s Wrong with Free Markets: “The ‘Wisdom’ of the Crowds”

October 6th, 2010 2 comments

This may seem obvious to many, but it’s been very clarifying for me.

People often argue against the free-market system — which is based on the idea of rational actors — by saying “people are obviously not rational actors!”

But that’s a stupid argument. It misses the point. Nobody thinks that everyone, always, makes rational decisions. That would be dumb. Rather, the Wisdom of the Crowds idea is that the market operates as if everybody makes rational decisions.

Here are the assumptions underlying that thinking:

The free market results in the best allocation of resources.

Because: People make decisions about what they want to buy, so resources flow to producers of those things.

Even though: Individual purchase decisions are often irrational — not delivering maximum utility to the purchaser (much less to society as a whole).

But: All those irrational decisions cancel each other out, so the rational decisions dominate, effectively allocating resources.

Because: The irrational decisions are random — non-systematic.

The final assumption — which most free-market advocates don’t know they’re making — is the fatal flaw underlying the belief system. (Or at least one of the fatal flaws.)

Bryan Caplan addressed this issue beautifully in his Myth of the Rational Voter. (Some comments on it here.) He points out (and demonstrates) that people’s voting choices are irrational. But more importantly, he shows that they’re systematically irrational. So rational choices don’t float to the top of the crowd; they’re dominated by systematically irrational decisions by that crowd.

Example (mine, not his; he has lots of his own): People A) think foreign aid is a big part of the U.S. budget (it’s well under 1%), B) are naturally driven by jingoism and ethnocentrism, C) underestimate the personal and national benefits deriving from foreign aid (just ask Mullen and Gates), and D) don’t like taxes. So they vote for people who promise to cut the budget (hence taxes) by cutting foreign aid.

But Bryan is a free-market believer. So he doesn’t apply the same thinking to purchase decisions that he does to voting decisions. He doesn’t consider (or acknowledge) that in fact, the crowd’s purchase decisions are also systematically irrational.

Example: People A) vastly overestimate their own driving skills (almost everybody believes they’re above average, or even in the top 10%), B) underestimate the dangers of traffic accidents (#1 cause of death in children) while overestimating other dangers (child abduction or terrorist attack: vanishingly small odds), and C) greatly overestimate the value of maneuverability and visibility (sitting up high and looking down on others) in avoiding accidents (braking distance is what counts). So they systematically underspend on what matters for auto safety (braking distance, air bags, etc.), favoring power (“I need it to get out of dangerous situations”; yeah, right), style, size (which generally increases braking distance), and “handling” instead.

So in this case and myriad others, because of systematic human irrationality, the free market does not deliver the best allocation of resources — either for individuals or for society as a whole.
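The random-versus-systematic distinction is easy to demonstrate with a toy simulation (hypothetical numbers): noisy individual guesses around a true value average out, but a bias everyone shares survives the averaging.

```python
import random
from statistics import mean

random.seed(1)

TRUE_VALUE = 100.0
N = 10_000

# Random (non-systematic) errors: individual guesses are very noisy,
# but they cancel, so the crowd average lands near the truth.
random_crowd = [TRUE_VALUE + random.gauss(0, 10) for _ in range(N)]

# Systematic errors: everyone shares the same bias (say, everyone
# overestimates by 15), so no amount of averaging removes it.
biased_crowd = [TRUE_VALUE + 15 + random.gauss(0, 10) for _ in range(N)]

print(round(mean(random_crowd)))   # close to 100: the "wisdom" works
print(round(mean(biased_crowd)))   # close to 115: the bias survives intact
```

This is the whole argument in miniature: the market-as-aggregator only washes out errors that are uncorrelated across people, and Caplan-style biases are precisely the correlated kind.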

I won’t get into what we as a society do and should do given these facts. (If you want to, you could start here.) Just to say, it’s important to know the facts.

Delight and Abject Dismay on Richard Dawkins’ Birthday

March 26th, 2010 15 comments

Another of those convergences: I just joined the Richard Dawkins group on Facebook, and discovered that today is his birthday. (Happy birthday sir!) It’s a convergence because over the last week I’ve been horribly dismayed. After decades of near hero-worship on my part, I’ve discovered that he is not acting as the man I’ve always believed him to be.

The issue is his position on group selection. (Don’t go away: it matters.) The way he has defended that position seems contrary to everything I have always so admired about him.

And I have so admired him, for so long. I have to watch myself constantly to avoid the kind of wild-eyed evangelism that serves only to give aid and comfort to the creationist enemy. The Selfish Gene and The Extended Phenotype provided (some of) the fundamental underpinnings for my understanding of (human) existence, and the belief and value system that’s built on that understanding.

I didn’t really need to read The God Delusion — preaching to the choir — but I did so and greatly enjoyed it purely for the joy of his arguments — the lucidity, the cogency, the logical and rhetorical coherence.

I can’t count the number of times I’ve recounted his anecdote about an aging professor who changes his mind. (“My dear fellow, I wish to thank you. I have been wrong these fifteen years.” . . .  “We clapped our hands red.”) It still brings tears to my eyes when I read it, and epitomizes how science, for all its real-world failings, is fundamentally different from faith. (Here. Start with “It does happen.”)

So, again, I’m nearly teary-eyed at the stance he has taken, and the rhetoric he’s deployed, in response to a body of thinking that has grown over decades and came to something of a culmination in 2007. (I’m late to the party on this one.) That body of evidence and theory contradicts one of his longest- and strongest-held beliefs: that group selection is hooey, that it could not have had any role in the evolution of human altruism.

Remember the stated goal of Dawkins’ seminal book: “My purpose is to examine the biology of selfishness and altruism.”

His basic theory: genes are the units of selection, and organisms are the vehicles of that selection. If a gene causes organisms to have more grandchildren, the gene’s frequency expands in the population.

Based on this, he rightly pooh-poohed warm, mushy, poorly-reasoned notions about genes contributing to “social cohesion” and the like. No altruistic gene could survive in a group if it didn’t provide net benefit for the individual containing that gene — either by helping the individual, helping kin who have the same gene, or through reciprocal payback from other individuals.
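The kin-helping condition invoked here is usually written as Hamilton’s rule, rB > C: a gene for altruism can spread when relatedness times benefit to the recipient exceeds the cost to the altruist. A minimal check, using standard textbook relatedness values rather than anything from Dawkins’ text:

```python
def altruism_spreads(r, benefit, cost):
    """Hamilton's rule: an altruism gene is favored when r * B > C."""
    return r * benefit > cost

# Helping a full sibling (r = 0.5) at moderate personal cost pays off:
print(altruism_spreads(r=0.5, benefit=3.0, cost=1.0))     # True

# The same act directed at a first cousin (r = 0.125) does not:
print(altruism_spreads(r=0.125, benefit=3.0, cost=1.0))   # False
```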

But what about the success of groups? Could groups with more altruistic genes have more grandchildren than groups with more purely self-serving genes? Could that group selection effect predominate over individual selection within the group?

It seems plausible, and from the first time I encountered the conundrum, it has always seemed to me to be a purely statistical question.

And that’s how (a damned impressive set of) mid-20th-century evolutionists went at it. They built models, ran the numbers, and determined that no: group selection could not overwhelm the forces of individual selection. If a gene isn’t good for an individual (and/or his kin), it will die out.

That belief achieved an orthodoxy in the political ecology of scientific academe that largely prevented later scientists from even raising the question, and successfully crushed most of the few efforts to re-examine it. It’s agonizingly similar to the despicable response that sociobiology and evolutionary psychology themselves encountered over those same decades, from the likes of Lewontin, Gould, and the “Theory” humanists.

As a result, both professionals and amateurs — including reasonably diligent amateurs like me — have been unthinkingly chanting along with that orthodoxy for years, decades. I don’t know how many times I’ve dismissed arguments that seemed rooted in group-selectionist thinking.

And I was wrong. At least, I was too categorical. So I was sometimes/often wrong.

Here’s what makes me so sad: Richard Dawkins has been perhaps the most powerful voice for that orthodoxy, and he seems to be clinging to that idol even when its feet — his feet — are looking resoundingly clay-like.

Cutting to the meat, simplified:

In 2007, David Sloan Wilson and E. O. Wilson (the founder of sociobiology and one of the most brilliant, diligent, and sober evolutionary biologists to ever live, as Dawkins certainly agrees) published a paper (PDF) laying out the cogent, lucid, and compelling case that group selection can indeed predominate over individual selection in the evolution of altruistic genes — that the group can be a vehicle of selection, just as the individual can. (They talk about “multilevel selection.”)

In other words, genes that benefit the group can proliferate in the larger population, even if those genes are disadvantaged within the group. Again, it’s all a matter of models and statistics, and the Wilsons (no relation) deployed and cited damned convincing models and statistics showing that the earlier evolutionists probably got it wrong.
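The core statistical point — a trait can lose ground within every group yet gain ground in the population as a whole — is just Simpson's paradox. A toy calculation makes it concrete (the numbers below are hypothetical, invented for illustration; this is not the Wilsons' model or data):

```python
# Simpson's-paradox sketch of multilevel selection, with made-up numbers.
# Altruists reproduce LESS than free-riders within each group, yet the
# altruist-rich group out-reproduces the altruist-poor group by enough
# that altruists gain ground in the population overall.

groups_before = [  # (altruists, group size)
    (90, 100),     # altruist-rich group
    (10, 100),     # altruist-poor group
]
groups_after = [
    (170, 200),    # rich group doubled; altruist share fell 90% -> 85%
    (8, 100),      # poor group stagnated; altruist share fell 10% -> 8%
]

def altruist_fraction(groups):
    """Altruist frequency in the whole population, pooled across groups."""
    altruists = sum(a for a, _ in groups)
    total = sum(n for _, n in groups)
    return altruists / total

# Within EVERY group, the altruist fraction declines...
for (a0, n0), (a1, n1) in zip(groups_before, groups_after):
    assert a1 / n1 < a0 / n0

print(altruist_fraction(groups_before))  # 0.5
print(altruist_fraction(groups_after))   # ~0.593 -- a global increase
```

Whether between-group productivity differences can actually outweigh within-group costs in nature is exactly what the Wilsons' models and statistics argue; the sketch only shows that the arithmetic is coherent.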

Now if Dawkins had cogent takedowns of those models and statistics, there is nobody I would rather hear them from. But his counterarguments have all been arguments from principle — and from principles that Wilson and Wilson don't even call into question; their case is built on those very principles.

What’s more dismaying is that Dawkins’ few dozen paragraphs in reply (remember, it’s been three years since then) bear all the hallmarks of a religionist who has not a leg to stand on, lashing out in frantic, desperate defense with red herrings, tangents, inapplicable arguments, dodges, weaves, and personal invective. (I’m not a professional in the field, but I know good and bad arguments when I hear them.)

This post is already too long, so I won't detail everything here. You can see one of Dawkins' replies here (PDF), and you can read the whole story from D. S. Wilson — including much of Dawkins' response — here. Wilson's 19-post blog thread is here in one PDF.

I’ll just quote one passage from Dawkins to give the flavor of those replies:

…as far as I am concerned, the statement is false: not a semantic confusion; not an exaggeration of a half-truth; not a distortion of a quarter truth; but a total, unmitigated, barefaced lie.

This is not the Richard Dawkins I’ve known and (intellectually) loved for lo these many decades. It is, in fact, the exact opposite of that Richard Dawkins.

I can only quote D.S. Wilson’s words, which precisely echo my most heartfelt feelings:

In my dreams, I imagine him reading my modified haystack model and saying “Well done, David! I have been wrong all these years.”

Richard Dawkins, won't you please come home?

Can John Gottman Predict Divorce? (Probably Not.)

March 24th, 2010 Comments off

Update: Instead of saying “Probably Not” in the title, I probably should have said “We have no idea.”

Being a Seattle parent with kids in private schools, I’ve been assailed for years by pronouncements and lectures by and about the Seattle-based Gottman Institute (tagline: “Researching and Restoring Relationships”). Their most widely known claim is their ability to predict, after watching a married couple for fifteen minutes, whether they’ll get divorced.

The basic Gottman theory — that facial expressions of contempt during couples’ interactions are predictive of divorce — seems very plausible, intuitively. But there are many intuitively plausible surmises that are just wrong.

And no matter how intuitively plausible it is, the claim always seemed fishy to me. But I never did the research to find out if their predictions were really accurate. Happily, somebody has finally done it for me. Here’s Andrew Gelman, god of all things statistical, blogging about a Slate article excerpted from Laurie Abraham’s Husbands and Wives Club.

Short story: their "predictions" are built on quicksand. Here's how they do it.

Gather data on a bunch of couples — say, six variables for each couple. Determine which of those couples get divorced. Then run a program that finds an equation correlating the (presumably) predictive variables to the results. (These quite remarkable programs — the realm of ultimate-wonk physicists only a decade or two ago — are now available for free download, or as $49 Excel plug-ins. To quote my buddy Olav, “Isn’t it great living in the future?”)

Here's what's wrong: these programs will always find an equation that correlates the variables to the results, with a greater or lesser "fit" to the data. Does that mean the equation is predictive? Only if it makes an accurate prediction when applied to a different set of data.

That is what Gottman has not done, at least in his published papers. Every one of them has a new equation that — surprise — "predicts" with remarkable accuracy the divorces in the very group that was used to generate the equation.

Now this is true: if the program finds a good data-fitting equation (which Gottman seems to have done — multiple times), there’s a greater chance that the equation will actually be predictive. But there’s only one way to know: use it to predict. If the prediction fails, the predictive ability of the equation is falsified.

Gottman has not (to my knowledge) attempted any falsifiable predictions, so we have no idea if his predictions are true or false.

The Gottman Institute presumably has all the data to hand, and could test past predictions against future results. I’m wondering: will they now do so?

Google doesn’t turn up any hits for “Abraham” on the site, so I’m thinking they haven’t responded. One can rather understand why. Abraham says in a comment to the Slate post (and, she says, in a footnote in the book — it’s not search-insideable on Amazon) that she repeatedly requested an interview in May 2009 but Gottman wouldn’t see her until October — too late for her book. I do wish she’d tried again before the excerpt was published, giving him ample benefit of the doubt.

But absent that, I’m quite curiously waiting to see what we hear from The Gottman Institute.