Archive for the ‘Science’ Category

The Luddite Fallacy Fallacy

August 21st, 2012 11 comments

I’ve spent a lot of time considering (here, here, here, and here) the notions of technological unemployment and the Luddite Fallacy: the idea that technologically driven productivity — machines — will replace, are replacing, human labor. I’d like to revisit that here.

My basic conclusion: the Luddites were obviously wrong at the time. But they’re right now — at least in the U.S. Even a stopped clock is right eventually.

I think the Luddite Fallacy argument ignores two things:

1. The limits to human capabilities. By definition, 50% of people have an IQ below 100. I don’t think anyone who’s reading (or writing) these words can begin to imagine how hard it would be to make a go of it in modern America with an IQ of 90 — to build a prosperous and secure life, raise a stable, happy family, or ensure that you can be self-sufficient in your waning years. Even getting through high school would be really hard.

The original Luddites weren’t hitting that cognitive limit — not even close. Today, tens of millions of people are slamming right into it (over time, hundreds of millions). Increasingly, only those at the right end of the bell curve are able to claim a decent (or any) share of the American pie. As the American economy is constituted (in its global context), diligence and hard work are not sufficient to give you that claim.
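The bell-curve arithmetic behind that claim is easy to check. A minimal sketch, assuming the standard IQ scaling (normally distributed, mean 100, standard deviation 15):

```python
# Share of the population below a given IQ, assuming the standard
# IQ scaling: normal distribution, mean 100, standard deviation 15.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

below_100 = iq.cdf(100)  # 0.5 by construction
below_90 = iq.cdf(90)    # roughly a quarter of the population

print(f"below 100: {below_100:.0%}, below 90: {below_90:.0%}")
```

By this scaling, roughly one person in four falls below IQ 90 — in the U.S., tens of millions of people.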

2. The declining marginal utility of innovation and consumption. As I pointed out in a post a while back:

Pretty much every important invention of the modern world – trains, planes, automobiles, air conditioning, antibiotics, painkillers, telephones, radio/television, computers – had already been invented and was in at-least-fairly widespread use when I was growing up in the sixties. The only thing since then has been the internet.

Post-’70 it’s just been distribution, improvements (e.g. cell phones over land lines), and price reductions — important stuff, no doubt, but compared to the germ theory of disease or the electric motor? (Arguably even the internet is just a distribution thing.)

The innovations that the Luddites were facing all delivered massive increases in human utility (via increasingly inexpensive and higher-quality goods and services). So while the losses to particular groups — and their required readjustments — were painful (sometimes horribly so), in the big picture they were overwhelmed by the overall increase in utility.

You just can’t say the same thing about Twitter, or inexpensive heated car seats. The human essentials that early innovations delivered (food, clothing, shelter, medicine, transportation, communication) were massively more valuable than the improvements we’ve seen in my lifetime.

Yes, the utility pie is still getting larger (far more slowly than it was in the past), but the slice that machines can’t provide — especially at the margin — is getting smaller, faster.

Combine these two realities and you get a world in which:

1. A great (and increasing) proportion of human utility is, can be, delivered by machines.

2. Humans who do not (don’t have the wherewithal to) control those machines can only compete among each other to deliver an ever-decreasing slice of lower-utility goods and services. And they are compensated — given a slice of the pie — based on the steadily smaller amount of utility they can deliver. Left to itself, the market will provide many with a sub-subsistence level of compensation.

In the great log-rolling exercise that is our economy, an increasing number of people over the decades are falling off the log, and finding it hard or impossible to climb back on. Many — millions — are drowning.

And that magical log — which miraculously grows as more people climb on and have the sustenance to run faster — is not growing as fast.

Have I mentioned the Earned Income Tax Credit lately?

Cross-posted at Angry Bear.


The Top Two Criteria for Expert Judgment: Curiosity and . . . Curiosity

May 15th, 2012 No comments

First a recap:

Philip Tetlock’s Expert Political Judgment was a groundbreaking look at whether political experts really are expert, as judged by their success at making predictions. His overall conclusion: they aren’t. But (lifted from a previous post):

…among the experts, “foxes” — those who in Nicholas Kristof’s words “are more cautious, more centrist, more likely to adjust their views, more pragmatic, more prone to self-doubt, more inclined to see complexity and nuance” — resoundingly beat out the “hedgehogs” — those who “have a focused worldview, an ideological leaning, strong convictions.”

This even while hedgehogs end up getting the biggest megaphones for their incorrect predictions.

But as Bryan Caplan pointed out quite cogently, there were two key flaws in Tetlock’s methodology (my words here, again from my previous post):

1. He only examines questions that are highly controversial among experts. (If 50% believe each way, 50% will inevitably be wrong.) Tetlock explicitly ignores the “dumb” questions that seem to the experts to have obvious answers, but which everyday folks might consider controversial.

2. He doesn’t compare the experts to the average person on the street. The only such comparison in the book is between experts and Berkeley undergrads — who are darned high on the elite/expert spectrum, in absolute terms. And even in that comparison, the experts win in a landslide. The undergrads aren’t even as good as chimps or dartboards.

Back to the present: Tetlock’s latest initiative, the Good Judgment Project, looks to address those shortcomings. The first-round results are in — reported in an email to Tyler Cowen — and they’re eye-opening. Their predictors:

collectively blew the lid off the performance expectations that IARPA had for the first year. Their original hope was that in Year 1 the best forecasting submissions might be able to outperform the unweighted average forecasts of the control group by 20%. When we created weighted-averaging algorithms that gave more weight to our most insightful and engaged forecasters, these algorithms beat that baseline by roughly 60% (exceeding IARPA’s expectations for Year 4).
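The aggregation idea in that passage can be sketched in a few lines. The forecaster names, weights, and probabilities below are invented for illustration; the project’s actual algorithms aren’t spelled out in that email:

```python
# Weighted averaging of probability forecasts: give more weight to the
# most insightful and engaged forecasters. All numbers here are invented.
def aggregate(forecasts, weights):
    """Weighted average of probability forecasts."""
    total = sum(weights.values())
    return sum(forecasts[f] * w for f, w in weights.items()) / total

forecasts = {"alice": 0.8, "bob": 0.6, "carol": 0.3}

# Plain (unweighted) average of the control group
unweighted = sum(forecasts.values()) / len(forecasts)

# Alice has a strong track record and engages often, so she gets 3x weight
weights = {"alice": 3.0, "bob": 1.0, "carol": 1.0}
weighted = aggregate(forecasts, weights)  # pulled toward alice's forecast
```

The weighted estimate moves toward the best forecasters’ view; scored over many questions, that shift is what produced the margin over the unweighted baseline.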

And what, you may ask, are the characteristics of their successful expert predictors?

(1) an intense curiosity about the workings of the political-economic world; (2) an intense curiosity about the workings of the human mind; (3) cognitive crunching power (“fluid intelligence” and a capacity for “timely self correction”).

Again: the foxes kick the hedgehogs’ butts.

I always agreed with the commentary about George W and his ilk — that they have no real curiosity about how the world works, they just seek confirmation of their existing (and often simplistic) beliefs — but I never considered it much of a knock-down argument. These results — once we see them explained (the email is pretty thin stuff) — may change my beliefs about that.

Caveat: these “curiosity” criteria are uncannily good at describing Philip Tetlock, Bryan Caplan, Tyler Cowen, and me. I tend to look askance at findings that are self-congratulatory.

Justin Fox reports and ruminates on his experience as a fairly mediocre forecaster in the project (emphasis mine):

So what distinguishes a bad forecaster? In my case, two things: (1) a discomfort with expressing my level of confidence with the size of my bets — this is a real flaw, perhaps traceable to the fact that I had never played a game of poker until two weeks ago; and (2) an almost complete lack of interest in the events being forecast. I think I’m pretty curious about the workings of the political-economic world. I just wasn’t interested in whether the IMF would officially announce before 1 April 2012 that an agreement had been reached to lend Hungary an additional 15+ Billion Euros.

… Hedgehogs who are obsessively focused on a particular theory of how the world works aren’t very good at forecasting. But foxes who don’t care aren’t very good at it either. The best forecasters would appear to be foxes who really really want to win the game of forecasting. To quote Saffo again, the key is to “hold strong opinions weakly.” Don’t be stuck in your views; be willing to revise them quickly when new information comes in. But have bold views, or don’t bother trying to make forecasts.

Cross-posted at Angry Bear.


Government Gets the Lead Out, Crime Plummets

May 29th, 2011 3 comments

No, this is not about lead-footed Starsky and Hutch-style car chases by law enforcement.

Rather, it’s about damned convincing evidence that unleaded gasoline (introduced in the U.S. in the 70s) is largely responsible for the huge decline in crime rates since the early 90s. (Update: it continues.) Even more convincing than (but not precluding) Levitt’s Roe v Wade hypothesis.

Short story: we spent fifty years quite literally poisoning the minds of our children — especially inner-city and low-income children. The damage is permanent.

Researcher Rick Nevin is getting some well-deserved press (Washington Post, Wall Street Journal) for his cross-country analyses of lead exposure and crime. (Worth noting: he first published this research in 1999.)

Here’s a picture; you can eyeball the correlations yourself. The researchers, naturally, analyze the correlations more systematically.

Here’s more: consumption of lead in gasoline in the USA, in thousands of metric tons (click for source):

See also the work of Jessica Wolpaw Reyes, who claims that half the decline in crime resulted from less lead in the environment (hence in little kids’ heads).

Robert Waldmann (hat tip!) at Angry Bear is looking at the latest data from the UK, where they went unleaded thirteen years later, so the effects should be showing up now. They seem to be:

the total number of violent crimes was basically identical in 2004/5, 2005/6 and 2006/7, then declined about 17% by 2009/10. The predicted peak of 2007 corresponds about as precisely to the data as is conceivable.

From the WaPo article:

Chicago’s Robert Taylor Homes, for example, were built over the Dan Ryan Expressway, with 150,000 cars going by each day. Eighteen years after the project opened in 1962, one study found that its residents were 22 times more likely to be murderers than people living elsewhere in Chicago.

Nevin’s finding implies a double tragedy for America’s inner cities: Thousands of children in these neighborhoods were poisoned by lead in the first three quarters of the last century. Large numbers of them then became the targets, in the last quarter, of Giuliani-style law enforcement policies.

We’re seeing it in spades: the history of tetraethyl lead (read it and weep) is a tragic textbook case of market/profit interests eviscerating the commons and making us all (including the rich) far worse off, in the name of “the invisible hand” making us all better off.

That ebil gubmint man with his heavy-handed regulations impinging on honest businesspeople (who are just trying to make a buck, for everyone’s benefit) sure did have a pernicious effect, huh?



More On Being Wrong

April 24th, 2011 No comments

Barry Ritholtz links to Kathryn Schulz’s TED talk on Being Wrong (I wrote about her book here), and comments,

I dont know about anyone else, but I am wrong all the time.

expect to be wrong.

Which led me to clear up some of the thinking that I’ve been doing since my last post on the subject. Here’s the comment I left on Barry’s blog, with some editing:

I of course expect to be wrong about particular things. I think we all do. But that’s future tense. “Some of my current and future predictions will turn out to be incorrect.” Well, yeah. Not really so interesting.

What is interesting is the human propensity for present-tense denial of even obvious reality, and the extraordinary lengths and contortions to which we’ll go to avoid admitting that we’re wrong.

The big aha insight for me in Schulz’s book was this, which she skips by, doesn’t really put across, in her talk:

There is no such thing as the real-time experience of being wrong. Present tense. As soon as you realize you’re wrong, you’re not anymore.

She just hints at this glancingly in the talk, when she says that being wrong feels like … being right. Nice line, that.

The rest of her book left me dissatisfied, though (and even more so her talk), because it didn’t answer the fundamental question: why does it feel so bad to discover that we’re wrong? Why did we evolve to be like that? Wouldn’t it be more evolutionarily fit to embrace and enjoy the discovery of wrongness, for purposes of self-correction and accurate perception of reality? Wouldn’t people with that propensity have more grandchildren?

I’m kind of astounded that after five years of thinking about these questions, she never seems to have asked, much less answered, that one. She says we don’t like discovering that we’re wrong because it feels bad. But she never discusses why it feels bad.

The best (possible) answer I’ve come across is via Jonah Lehrer’s How We Decide.

Short story: it’s how the learning mechanism works. We’ve evolved so that if we are right in a prediction, we get a dopamine hit of pleasure. If we’re wrong, we don’t get our fix, and that feels really bad. (This helps explain why humans’ loss-aversion exceeds our gain-seeking.) It’s pretty straightforward behaviorism, embedded in a fascinatingly complex set of constructs.
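That prediction-error account can be written down as a Rescorla–Wagner-style update. This is my illustration of the mechanism, not Lehrer’s code, and the numbers are arbitrary:

```python
# Rescorla-Wagner-style prediction-error learning: the expectation is
# nudged toward reality by the size of the surprise.
def update(expected, reward, rate=0.5):
    error = reward - expected       # > 0: better than expected ("dopamine hit")
    return expected + rate * error  # < 0: worse than expected, and it feels bad

v = 0.0                              # initial expectation of reward
for r in [1.0, 1.0, 1.0]:            # three correct (rewarded) predictions
    v = update(v, r)                 # expectation climbs: 0.5, 0.75, 0.875
```

The sting of being wrong is the negative error term: the very signal that drives learning is the one that makes reality denial so tempting.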

So the learning mechanism, ironically, makes us not want to discover that we’re wrong, because it feels bad.

I can only figure that the fitness benefits of the learning mechanism outweigh the unfitness of reality denial, and that evolution couldn’t “figure out” any other, less “expensive” way to do learning.

Name one Really Big Invention since 1970 (besides the internet)

January 25th, 2011 15 comments

Prompted by:

1) My curiosity about what might have changed in the ’70s

2) My sister’s suggestion that this Andrew Sullivan post might be a clue (we invent iPods now, not particle accelerators)

3) Tyler Cowen’s new e-book(let), The Great Stagnation (talking about America’s slow growth of the last 30 years), and

4) A realization I had a couple of years ago.

Here it is: pretty much every important invention of the modern world — trains, planes, automobiles, air conditioning, antibiotics, painkillers, telephones, radio/television, computers — had already been invented and was in at-least-fairly widespread use when I was growing up in the sixties. The only thing since then has been the internet.

Post-’70 it’s just been distribution, improvements (e.g. cell phones), and price reductions — important stuff, no doubt, but compared to the germ theory of disease or the electric motor? (Arguably even the internet is just a distribution thing.)

Can you think of an exception?

I don’t know quite what to do with this fact, but I would like to know if others think it is in fact a fact, and what you would do with it.

Asymptotic Freedom

November 21st, 2010 1 comment

I really love that term, though I just barely, maybe understand what it means.

Eric Drexler (he of Engines of Creation, the breakout 1987 book on nanotech) thinks the result is very cool indeed. Here from November 7:

…the most exciting paper I’ve seen on quantum field theory and gravitation in a long time. It offers no speculations about strings, extra dimensions, new symmetries, or the like, and no loop quantum gravity or causal dynamical triangulations, just a carefully cross-checked mathematical analysis that reveals how general relativity transforms quantum electrodynamics at the very edge of the breakdown of GR [general relativity] itself.

The question is the strength of electric interactions — the effective magnitude of a single charge — and how it changes (as it does) at high energies and small distances.

In brief, QED predicts that the strength approaches infinity;
QED + GR predicts that the strength approaches zero.

Many of the (meager) news reports to date describe Toms’ paper as if it merely smoothed out some difficulties with calculations in QED — those pesky infinities! — but that misses the point: This result extracts new physics from old physics, sharply revising our understanding of current physical theory as it approaches the Planck scale, the very edge of the unknown.

Here it is:

Quantum gravitational contributions to quantum electrodynamics
Quantum electrodynamics describes the interactions of electrons and photons. Electric charge (the gauge coupling constant) is energy dependent, and there is a previous claim that charge is affected by gravity (described by general relativity) with the implication that the charge is reduced at high energies. However, that claim has been very controversial and the matter has not been settled. Here I report an analysis (free from the earlier controversies) demonstrating that quantum gravity corrections to quantum electrodynamics have a quadratic energy dependence that results in the electric charge vanishing at high energies, a result known as asymptotic freedom.
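Schematically, the contrast Drexler describes is a sign flip in the beta function governing how the charge $e$ runs with energy $E$. The gravitational coefficient $a$ below is a placeholder standing in for Toms’s computed value, not the exact number:

```latex
% Pure QED: the beta function is positive, so the coupling grows with energy
\beta_{\mathrm{QED}}(e) = \frac{e^3}{12\pi^2} > 0
  \quad\Rightarrow\quad e(E) \text{ grows without bound (the Landau pole)}

% With quantum-gravity corrections (schematic): a negative term,
% quadratic in energy, eventually dominates
\beta(e, E) = \frac{e^3}{12\pi^2} - a\,\kappa^2 E^2\, e,
  \qquad a > 0,\ \ \kappa^2 = 32\pi G
  \quad\Rightarrow\quad e(E) \to 0 \text{ (asymptotic freedom)}
```

At everyday energies the gravitational term is utterly negligible; only near the Planck scale does it take over and drive the charge toward zero.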