My friend Steve likes to proclaim the value of casual intuition — based on one’s day-to-day observations over the course of life — and downplay the value of expertise, analysis, and data in making good judgments. Among other things, he defends Sarah Palin and other less-thinkerly politicians on these grounds.
He also points to Robert McNamara, the king of data analysis, as having failed utterly in his judgments on Vietnam. This sets aside two facts: 1. Steve’s casual intuition would have led him to exactly the same policies (if not worse), and 2. McNamara’s data was not the driving force behind the big decisions and judgments on Vietnam. The data was at best an excuse, a self-justification, a rationalization, or simple thumb-twiddling. McNamara actually manufactured a system that delivered systematically false data.
Also: systematic, in-depth knowledge — rooted in research, analysis, and frequently, data — is obviously not sufficient to guarantee good judgment. But it is arguably necessary. Or at least, it (greatly?) improves the odds of making good judgments. If the Bush administration, for instance, had had some basic knowledge of the difference between a Shiite and a Sunni…
One of the key books in this field is Philip Tetlock’s Expert Political Judgment. He argues, based on an analysis of 82,000 predictions by 284 experts, that political experts perform only slightly better than random dart throws. It’s a pretty damning indictment of experts.
But as Bryan Caplan has pointed out, there are two fatal flaws in Tetlock’s argument:
1. He only examines questions that are highly controversial among experts. (If experts split 50/50 on a binary question, half of them must be wrong, no matter how expert they are.) Tetlock explicitly ignores the “dumb” questions that seem to the experts to have obvious answers, but which everyday folks might consider controversial.
2. He doesn’t compare the experts to the average person on the street. The only such comparison in the book is between experts and Berkeley undergrads, who are darned high on the elite/expert spectrum in absolute terms. And even in that comparison, the experts win in a landslide; the undergrads aren’t even as good as chimps or dartboards.
This suggests that if you looked at those “obvious” questions — which are often not at all obvious to non-experts — and compared casual to expert opinion, you’d see experts being right far more of the time. As they say in the biz, “more research needed.”
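Caplan’s first point is a pure selection effect, and a toy simulation makes it concrete. Everything here is made up for illustration: the accuracy figures, the share of easy questions, and the 30–70% split used to define “controversial” are assumptions, not Tetlock’s data. The point is just that even if experts are right 95% of the time on easy questions, scoring them only on the questions they split on drags their measured accuracy toward a coin flip:

```python
import random

random.seed(0)

N_QUESTIONS = 10_000
N_EXPERTS = 10
P_EASY = 0.7                     # hypothetical share of "obvious" questions
ACC_EASY, ACC_HARD = 0.95, 0.5   # assumed expert accuracy by difficulty

all_scores, contested_scores = [], []
for _ in range(N_QUESTIONS):
    acc = ACC_EASY if random.random() < P_EASY else ACC_HARD
    # each expert independently gets this binary question right or wrong
    correct = sum(random.random() < acc for _ in range(N_EXPERTS))
    frac = correct / N_EXPERTS
    all_scores.append(frac)
    if 0.3 <= frac <= 0.7:       # experts split: the "controversial" subset
        contested_scores.append(frac)

print(f"expert accuracy, all questions:    {sum(all_scores) / len(all_scores):.2f}")
print(f"expert accuracy, contested subset: {sum(contested_scores) / len(contested_scores):.2f}")
```

On these assumptions the experts are right about 80% of the time overall, but barely above 50% on the contested subset, because the easy questions almost never produce a split and so never make it into the sample.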
Tetlock does reveal another fact, however, that serves to seriously undermine one’s confidence in the intuitionally inspired beliefs of Sarah and similar: among the experts, “foxes” (those who, in Nicholas Kristof’s words, “are more cautious, more centrist, more likely to adjust their views, more pragmatic, more prone to self-doubt, more inclined to see complexity and nuance”) resoundingly beat out the “hedgehogs” (those who “have a focused worldview, an ideological leaning, strong convictions”).
Is this also true of everyday folks? Based on my many years of decidedly non-systematic observation, I would suggest that it is.
Update: Chris’ comment,
This worked well enough back when virtually all information of note was controlled by experts. Now they’re forced to compete with everyone, which has the nasty side effect of forcing people to become steadily more extreme and loud just to be heard.
Reminds me of another takeaway from Tetlock’s research. Again quoting Kristof, because he summarizes it well:
the only consistent predictor [of accuracy] was fame — and it was an inverse relationship. The more famous experts did worse than unknown ones. That had to do with a fault in the media. Talent bookers for television shows and reporters tended to call up experts who provided strong, coherent points of view, who saw things in blacks and whites.
In other words, the loudest, most simplistic, and most dogmatic “experts” — the extreme hedgehogs — 1. are the least accurate, and 2. get the biggest megaphone.