Archive for December, 2012

Guns and Gun Deaths, State by State

December 24th, 2012 16 comments

The other day I looked at the number of guns versus the number of gun deaths in countries like ours that have pretty good rule of law. The correlation is pretty clear: more guns, more gun deaths.

But I was also wondering about correlation within the U.S., by state. I’m pleased to find that Sam Wang’s got it:

Scientific American’s gun error

I’d say that great minds think alike, but really: Sam — a professor at Princeton — has got it all over me when it comes to drawing valid conclusions from statistical data.* For instance: like Nate Silver, he called every state correctly in the recent presidential election. But his stated confidence level was way above Nate’s — between a 99.2 and 100% chance that Obama would be re-elected. That’s putting your reputation where your predictions are.

Read Sam’s post and the one preceding it. He makes all sorts of sense. He also gives us the sources, so I thought I’d show the data a couple of other ways. How about…red states vs blue states? Here you go:

[Chart: gun deaths by state, red states vs. blue states]

Does anyone see a pattern here? Here it is on a map:

[Map: gun deaths by state]
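If you want to run the red/blue comparison yourself from Sam’s sources, here’s a minimal sketch in Python; the file name and column names are placeholders I’ve made up, not the actual data files.

# Minimal sketch of the red-state/blue-state comparison. The CSV and column
# names below are assumptions, not the actual source tables.
import pandas as pd

states = pd.read_csv("state_gun_deaths.csv")  # hypothetical: one row per state
# assumed columns: 'state', 'gun_deaths_per_100k', 'voted_2012' ('red' or 'blue')
print(states.groupby("voted_2012")["gun_deaths_per_100k"].mean())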

Though I find that Richard Florida has beaten me to it on this one:

You can quibble all you want about correlation and causation, but the simple fact is: if you live in a red state the odds of your children dying of gun violence is 75% higher than if you live in a blue state.

That may help explain why, when Americans vote with their feet and choose where to live, only 38% vote for red states.

It may also help explain why people in red states want guns so much: it’s dangerous to live in a red state. They’ve got all those guns.

I’m still asking: would you rather “feel” safe, or would you rather be safe?

* “Prof. Sam Wang’s academic specialties are biophysics and neuroscience. In these fields he uses probability and statistics to analyze complex experimental data, and has published many papers using these approaches. He is also the author of Welcome To Your Brain, a popular book about his field.”

Cross-posted at Angry Bear.

Guns, Murders, and the Rule of Law: Running the Numbers

December 22nd, 2012 15 comments

When I was eighteen years old, I went down to the government office in Olympia, Washington with my friend Steve (no, not that Steve) and signed as the character witness on his application for a concealed carry permit for his handgun. I was probably stoned at the time; I often was back then. (FYI, I grew up with guns in my house — stored in the gun locker up in the attic, but we took them out and shot targets now and then, cleaned them, took care of them. I went hunting several times as a kid.)

Steve and I are still buds, and he still carries. He even stays at my place sometimes when he’s working up here in Seattle, but he leaves the heat in his car. I guess he doesn’t feel the need to scare off the dangerous girl gangs that are forever threatening to invade my houseboat and have their way with us. (Yeah: wildest dreams.)

You won’t be surprised to hear that Steve and I have been going at it on Facebook since the Newtown horror (cordially, if you can believe that). I point to the numbers — fewer guns, fewer murders — and he points to countries like Mexico, which have gun control laws but still have high levels of gun homicide. I point out: those countries don’t have strong rule of law; corruption and criminal intimidation are rampant in the police, the judiciary, and the legislatures.

Those countries’ problem is not that they don’t (try to) control guns. It’s that they can’t control guns.

Curious as always, I wondered: in countries like ours that do have a strong, well-institutionalized rule of law — countries that can control guns if they choose to — do fewer guns mean fewer gun killings?

Short answer, Yes:

[Chart: gun prevalence vs. gun deaths, by country, for countries with a rule-of-law index above .55]

CL is Chile. MK is Macedonia. FI is Finland. You know what US stands for.

The WJP index looked like a good measure among those I found on the web; you can choose others if you wish. I used .55 as the cutoff because it’s the line above which all European countries are included, and the resulting list seemed to consist of countries that (at least in aggregate) are fairly comparable to ours. Here’s the list:

Australia, Austria, Belgium, Canada, Chile, Croatia, Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Greece, Hungary, Italy, Japan, Jordan, Macedonia, Malaysia, Netherlands, New Zealand, Norway, Poland, Portugal, Singapore, Slovenia, Spain, Sweden, Turkey, United Kingdom, United States, Uruguay.

The firearms and homicide data is from the United Nations Office on Drugs and Crime, but it’s laid out in a conveniently sortable table with a linked Google spreadsheet at this Guardian page.
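For anyone who wants to replicate the chart from those sources, here’s a minimal sketch of the filtering step; the file name and column names are placeholders of my own, not the actual tables.

# Sketch of the country-level comparison. The CSV and column names are assumptions;
# the real data sits in the UNODC/Guardian table and the WJP rule-of-law index.
import pandas as pd

df = pd.read_csv("guns_vs_gun_deaths.csv")  # hypothetical merged export
# keep only countries above the .55 rule-of-law cutoff used for the chart
strong = df[df["wjp_index"] > 0.55]
corr = strong["guns_per_100_people"].corr(strong["gun_homicides_per_100k"])
print(f"{len(strong)} countries, guns/gun-deaths correlation = {corr:.2f}")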

Steve also pointed me toward a blog that includes, among other things, a time-series analysis of gun violence and gun restrictions in the UK. I’ll simply say: if you live in the U.S. instead of the U.K., you are 43 times as likely to die of a gunshot.

So are your children.

Cross-posted at Angry Bear.

 

Wealth and Redistribution Revisited: Does Enriching the Rich Actually Make Us All Richer?

December 16th, 2012 3 comments

Update: There is a revised and corrected version of the model and spreadsheet here, with discussion.

In a recent post I built a model with one rich person and ten poorer people to ask: does redistribution from rich to poor make us all more wealthy? The conclusion was Yes. Jump back there to see a quick rundown of the model’s assumptions.

Michael Sankowski at Monetary Realism put the model through its paces and provided feedback by email. He pointed out one very interesting thing: total wealth accumulation in the model increases (faster) with redistribution in both directions — from rich to poor and from poor to rich.

(Note that redistribution could take infinite forms — traditional welfare, education and health-care spending, tax preferences for rich people’s investment income, corporate subsidies, etc. This is systemic redistribution we’re talking about here. Like this model, the system just does it.)

Here’s what that looks like, with starting wealth of $2 million, divided 50/50 between the rich person and the ten poorer people:

[Chart: ending wealth (rich one, poorer ten, total) across a range of annual redistribution rates, poor-to-rich transfers to the left, rich-to-poor to the right]

(Since income in this model is based on spending from wealth, the income curves look similar to these wealth curves.)

It sure looks like giving more money to rich people does make the pie bigger — just not as fast or as much as giving more money to poorer people.

The model is based on the idea that poorer people spend a larger share of their wealth each year, so giving them more money increases money velocity, hence GDP, and eventually wealth. Given that, this result seems weird. But here’s the explanation: there’s a zero lower bound on poor people’s spending. It can only fall so far, and so fast. Rich people’s spending can keep increasing with no constraints.

So as you transfer more wealth from poor to rich, the poorer people’s spending doesn’t decline as fast as the rich person’s spending increases — even though the poorer people spend a far larger proportion of their wealth than the rich (here, 80% vs. 30%). The zero-bound effect overwhelms the velocity effect.
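To make those mechanics concrete, here’s a stripped-down sketch of this kind of spending-driven model. It’s my guess at the rules, not the linked spreadsheet, so its numbers won’t necessarily match the curves above or the table below; it’s only meant to show the moving parts: spend a share of wealth, receive income in proportion to spending, transfer a share of wealth each year.

# A stripped-down reconstruction (my assumptions, not the author's spreadsheet):
# each year everyone spends a fixed share of wealth (80% for the poorer ten, 30% for
# the rich one), total spending yields a 5% surplus, and spending plus surplus comes
# back as income in proportion to each person's spending; then a fixed share of the
# giver's wealth is transferred (positive rate = rich to poor, negative = poor to rich).

def simulate(transfer_rate, years=20, rich=1.0, poor=1.0,
             rich_spend=0.30, poor_spend=0.80, surplus=0.05):
    """Wealth in $ millions; 'poor' is the combined wealth of the ten poorer people."""
    for _ in range(years):
        rich_spending = rich * rich_spend
        poor_spending = max(poor * poor_spend, 0.0)  # zero lower bound on spending
        total_spending = rich_spending + poor_spending
        output = total_spending * (1 + surplus)      # spending drives production plus surplus
        if total_spending > 0:
            rich += output * rich_spending / total_spending - rich_spending
            poor += output * poor_spending / total_spending - poor_spending
        transfer = transfer_rate * (rich if transfer_rate > 0 else poor)
        rich -= transfer
        poor += transfer
    return rich, poor

# Sweep the annual redistribution rate (negative = poor to rich, positive = rich to poor)
for rate in (-0.04, -0.015, 0.0, 0.015, 0.04):
    r, p = simulate(rate)
    print(f"{rate:+.1%}/yr: rich ${r:.2f}M, poorer ten ${p:.2f}M, total ${r + p:.2f}M")

Again, the turning points here depend entirely on rules and parameters I’ve guessed at; the point is just how few moving parts a model like this has.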

Except. There’s a flaw in the model: as I’ve said, it assumes no behavioral responses — only that production (and the surplus from production) are driven by spending, with that whole process relegated to a black box. More spending yields more production and more surplus, with those received as income proportionally to each person’s spending. And spending is based on wealth.

But at the left side of the curve, where the rich person starts getting all the income, doing all the spending, and holding all the wealth, the poorer people have no incentive to work. (Their time is much better spent storming the castle with torches and pitchforks.)

If the poorer people have no incentive to work, there are no goods for the rich person to buy. Spending, income, and wealth would all go negative. So you would inevitably expect the left end of the income and wealth curves (like the peasants) to bend over and di(v)e eventually.

But for quite a ways along the left side of the graph, we’re not at that point. People have plenty of incentive (increasing incentive, in fact) right up until they’re actually starving: they need to keep their families fed, get health care for their kids, keep a roof over their heads. So the bulk of the graph does yield lessons.

Here’s the most interesting one, to me: the smallest wealth accumulation in this model occurs at the point that maintains the status-quo wealth distribution. (Where there’s a 1.4% annual wealth transfer from poor to rich.) Everybody stays the same relative to each other, and the pie does get bigger (there’s a 5% surplus every year!). But it ends up being a smaller pie than with any other redistribution scheme.

Another lesson here: trickle-down actually works! Compared to zero redistribution, transferring 1.4-4% of poorer people’s wealth to the rich person results in everyone getting wealthier, including the poorer people. Move to the left of that, though, and the poorer people get poorer while the rich person gets richer.

And everyone, poor and rich alike, gets wealthier faster if you redistribute in the other direction — from rich to poor. Compare:

Ending wealth in millions after 20 years if you transfer 1.5% of wealth annually (starting wealth $2 million divided 50/50):
                 Rich One   Poorer Ten   Total
Rich to poor:    $1.0       $3.1         $4.1
Poor to rich:    $1.8       $1.7         $3.5

Remember that this model assumes no inflation, so in the top line the rich person’s real wealth is unchanged.

Note that these redistribution percentages are just illustrative. They’re nothing like policy proposals. The turning points and break points all depend on what parameters you plug into the model. But the shape of the curve, and the concepts that emerge, remain unchanged.

Finally, note this: If we redistribute enough (rich to poor) to actually reduce rich people’s dollar wealth, the pie gets much bigger, much faster.

And if you add the notion of declining marginal utility of spending and consumption, the aggregate utility pie gets even bigger, even faster.

Cross-posted at Angry Bear.

 

“Starve the Beast” Theory in One Sentence

December 14th, 2012 1 comment

Krugman’s leeches analogy today spurs me to comment:

If the police department in your town is doing a bad job and the crime rate is high, the obvious solution is to cut funding for the police department!

Sounds a lot like bloodletting theory, dontcha think? We’ll just drain off the bad blood! It’s so simple and obvious!

Cross-posted at Angry Bear.

Creating the Commons: A Tragedy in No Acts

December 10th, 2012 1 comment

Two articles in The New York Times today got me thinking about the tragedy of the commons. This is not new thinking, but it’s not widespread enough, in my opinion. And, I hope this expresses it in a somewhat new way.

One of the articles talks about the ongoing failure of pharmaceutical companies to develop new antibiotics to replace the increasingly ineffective drugs in our collective arsenal. The second describes a newly developed T-cell treatment for leukemia that really has to be described as a miracle treatment (at least in some cases).

Both made me realize what’s wrong with the standard Tragedy of the Commons thought-experiment, at least as it is commonly deployed by less-thoughtful libertarians.

In its standard form: many sheep farmers share a common pasture. Each has every incentive to add more sheep, so inevitably, the commons is consumed and destroyed.

The standard (at least libertarian) solution is to privatize the commons — split it up among the farmers and give them each property rights on their share. They’ll each have incentive to use and maintain the land judiciously.

But like most (all?) economic thought experiments, this one rests on an assumption: that the amount of land — the quantity of commons — is fixed.

And that ignores a crucial reality of the larger economic world: we — individually and collectively — are constantly creating the commons. If there are no roads on which the farmers can get their wool to market, they have no incentive to produce that wool. (And no: they “didn’t build” those roads.) Some complement of common goods is necessary for the private incentives to emerge and be maximized.

This points to the other tragedy: none of those individual farmers has the incentive to build roads — to create the commons — unless they’re guaranteed to profit (at least semi-)exclusively from those roads. So the commons doesn’t get built.

I think everyone would agree that we’d all be better off (more well-being, more “prosperity”) if anyone could produce and sell the (nonexistent) new antibiotics, and offer the T-cell therapy — if those things were part of the “commons” the way Newton’s Laws, the germ theory of disease, and word-processing are. But: The antibiotics article explains in simple supply/demand/profit terms why the institutions called “drug companies” don’t have an incentive to create new, tightly targeted antibiotics (so the people who work for them don’t either). The T-cell article explains why their current business model doesn’t work when developing cures that can’t be produced in mass quantities in factories — so, no incentive.

So not only are these things not part of the commons, they aren’t even created (or not as prolifically as we would like). The market doesn’t provide the proper incentives.

Libertarians seem to be woefully blind to this tragedy.

My friend Steve is constantly saying “it’s about incentives, not motives.”* But this ignores the fact that we can agree collectively on virtuous motives and goals that make us all better off. (And we do so — really have no choice but to do so — and have for millennia.) We can collectively create, empower, and fund institutions in which people are individually incentivized to adopt those motives, and seek those goals. (And we do so, and we have for millennia.) Few people will spend decades of their lives creating new life-saving therapies absent institutions that pay them to do so. (Financial incentives matter!) And we’ve already seen that the private market doesn’t “naturally” or necessarily result in the creation of such institutions. If it did, we’d have new antibiotics coming online in spades. We don’t. There is, however, a wide variety of Gangnam Style t-shirts available.

The two NYT articles point to several ways (which I will leave to your delectation) in which we overcome this inherent, inevitable market failure and collectively fund things like new antibiotics and T-cell therapies. Some of the impetus comes from private charities. Some from government funding. Both are driven by “motives” — to develop life-saving cures — not individual financial incentives. (Though the institutions do provide financial incentives to individuals to adopt the goals that those motives imply.) Both often seek to harness and realign market incentives to implement their motives and achieve their goals.

Yes, in many cases government “picks [potential] winners.” Think: NIH funding grants. (And one hopes that entities such as the NIH do some “central planning” before throwing grants around.)

Returning to the thrust of this post: you might say that those new discoveries end up being offered by private entities, for a price — and individuals are free to choose among the available offerings. They’re not part of “the commons.” But that ignores another reality: the set of available choices is itself part of the commons. Absent the original government funding, many valuable choices would not be available. (Like, for instance, the option of shipping your goods to market via the interstate highway system.)

This reality is revealed if you read the literature on utility, choices, and preferences. While much of that literature concentrates on preferences among available choices and the ordinal utility ranking of those preferences,** those who have thought carefully on the subject (mainstream economists, by the way) have pointed out that different sets of choices themselves have different amounts of utility. We decide collectively what choices will be available, even if our decision is not to decide — to leave the decision to “the market.” But that’s all a subject for another post.

Steven Pinker and E. O. Wilson — two of the most profound and scientifically-based thinkers we’ve got on the subject of human nature and the human condition — bring this point home. Pinker has said on numerous occasions that three primary things set humans apart from other animals: tool use, language, and non-kin altruism. Those are the things that got us not only to the top of the food chain, but to a position of complete speci-al hegemony.

Wilson takes it further (see: The Social Conquest of Earth). Group selection based on collective altruism, or “eusociality” — our singular ability to agree on future collective plans, and implement those plans even in opposition to apparent short-term individual incentives — is the very thing that’s gotten us where we are.

Together — by acting on motives instead of just individual (financial) incentives — over the millennia we’ve created a very, very big commons that we all “profit” from in multitudinous ways. If we’re smart, we can continue to build it, rather than watching it waste away through our short-sighted individual (in)actions.

* By this he generally seems to mean financial incentives (though he’s not consistent on this), which — when that is his meaning — completely ignores a huge panoply of incredibly powerful human incentives, many of them coalescing around various valences of “pride.”

** The bulk of this discussion, in my opinion, constitutes a centuries-long self-serving circle-jerk by economists (yes, the intellectually masturbatory and show-off motives are significant), with a primary but largely unconscious goal of defending, preserving, and increasing the wealth of incumbent wealth holders like themselves.

Cross-posted at Angry Bear.

Coase on Mainstream Economics

December 5th, 2012 Comments off

‘It is suicidal for the field to slide into a hard science of choice,’ Coase writes in HBR, ‘ignoring the influences of society, history, culture, and politics on the working of the economy.’ (By ‘choice,’ he means ever more complex versions of price and demand curves.)

U Chicago, my daughter’s college, sends me interesting emails now and then, undoubtedly in hopes that I’ll give them money. The latest had this interesting link to an interview with one of their favored sons:

Bloomberg Businessweek (November 30, 2012)
Urging economists to step away from the blackboard
At 101, Nobel laureate Ronald Coase attempts to launch a journal that focuses on the economic study of firms and people over abstractions and statistics.

Cross-posted at Angry Bear.


Explaining the Fed Credibility Argument

December 4th, 2012 2 comments

Following up on my last post, I actually think that there are two possible explanations for the “Fed Credibility” argument’s wide deployment, both hinted at in Simon’s response to my comment:

I think the credibility argument is really about the underlying motives of the policymakers, rather than their abilities. However I also think that argument is overdone – it takes a few generations to forget the lessons of the past, and policymakers are still obsessed with the 1970s.

In my words, two possibilities:

1. It’s a smokescreen. Actual reason: Creditors hate (unexpected) inflation. One extra percentage point transfers hundreds of billions of dollars of buying power from creditors to debtors, annually. ‘Nuf to get a fellow’s attention. The Fed is run by creditors.

2. (70s) They actually are worried — that the higher inflation target won’t work in goosing the economy or employment, so they’ll run into a stagflation situation where stomping on (spiraling?) inflation is…problematic. So they won’t be able to fulfill the second half of their promise without causing a job recession a la Volcker. Rock and a hard place.

 Cross-posted at Angry Bear.

The Fed Credibility Argument

December 4th, 2012 1 comment

For once a short post, inspired by Simon Wren-Lewis’s suggestion that the Fed should allow (encourage) a temporary rise in wage inflation. (The merits of such a policy are obvious to many of us, given the current virtues of some extra inflation and past decades’ wage trends.)

I’ve never understood the credibility danger of the Fed announcing and executing a temporary increase in the inflation target.

If the Fed said “we’re going to let (wage) inflation float a bit higher until the economy’s moving strongly back up toward capacity, then we’ll bring it back down,” and then did exactly that, wouldn’t that greatly enhance its reputation for being able to control inflation?

Cross-posted at Angry Bear.

Modeling the Price Mechanism: Simulation and The Problem of Time

December 1st, 2012 4 comments

Today’s New York Times article on rapid online repricing by holiday retailers depicts a retail world starting to approach the “flash-trading” status of financial markets:

Amazon dropped its price on the game, Dance Central 3, to $24.99 on Thanksgiving Day, matching Best Buy’s “doorbuster” special, and went to $15 once Walmart stores offered the game at that lower price. Amazon then brought the price up, down, down again, up and up again — in all, seven price changes in seven days.

And:

The parrying could be seen with a Nintendo game, Mario Kart DS.

A week before Thanksgiving, the retailers’ prices varied, with Amazon selling it at $29.17, Walmart at $40.88, and Target at $33.99, according to Dynamite Data. Through Thanksgiving, as Target kept the price stable, Walmart changed prices six times, and Amazon five. On Thanksgiving itself, Walmart marked down the price to its advertised $29.96, which Amazon matched.

This made me think about a Greg Hannsgen post from last May on the Levy Economics Institute’s Multiplier Effect blog, a post I’ve been meaning to write about. It looks at the pricing mechanism based on how fast prices change/adjust.

The especially interesting thing about this post: It uses a dynamic simulation model to display the effects of slower and faster price adjustments, and lets you run the model yourself, right on the page, by moving a slider to change the speed of price adjustment and watch the results. (You need to install a browser plug-in from Wolfram, which only worked for me in Firefox under Mac OS X 10.6.8; it failed in Chrome.)

The gist:

You wonder what will happen when markets finally start working. How about, for example, a market that changes prices and wages quickly in response to fluctuations in demand? …

The pathway shown in the figure just below is followed by public production, capacity utilization, and the markup.

As you move the lever to the right, you are increasing a parameter that controls the speed at which the markup changes in response to high or low levels of customer demand.

What happens as the speed parameter is increased is that the economy’s pathway gradually changes until there is a relatively sudden vertical jump in the middle and much higher markup levels at the end—which means a bigger total rise in capital’s share.

Then:

The next pathway is the one followed by a second group of three variables during the same simulation. This second group includes: money, the government deficit (surpluses are negative numbers in this figure), and the employment rate (total work hours divided by total hours supplied).

when the lever is all the way to the right, the pathway begins with an outward spiral, leading to a new inward spiral, and finally an employment “crash” of sorts that occurs as the center of the second spiral is reached. This occurs after the markup has reached very high levels, as seen in the earlier diagram.

That’s a pretty amazing set of conclusions that I don’t think could ever have emerged from mainstream economic modeling techniques. You’ll notice that equilibrium, in particular, is a decidedly problematic concept here. It only seems to emerge when things have gone off the rails and hit the edge of the known world — not in those comfortable middle grounds so fondly envisioned by mainstream models.

I am making no claims of validity for this model’s assumptions, techniques, or conclusions (though I find the conclusions fascinating and plausible) — that’s beyond me. What I want to talk about is the validity of this class or type of dynamic simulation model compared to those commonly employed in “textbook” economic analysis.

In particular it makes me think about some recent posts and comments by Nick Rowe that are (bless him) teaching posts at least partially in response to my constantly demonstrated inability to properly grasp those standard approaches. (No: I’m not being ironic.)

Nick laid out the textbook understanding of the pricing mechanism based on marginal cost of production with wonderful clarity and broad insight here. I want to suggest that the explanation’s key feature is its use of “short-term” and “long-term.” It’s all about time.

But looking at the figures above — which model time fluidly and continuously (realistically?) as opposed to depicting it in two vaguely defined “chunks” — I want to ask whether Nick’s explanation, and the models employed in that explanation, are sufficient (or even proper) to grasp the processes playing out in the marketplace. Could they predict the effects that we see above? Is that type of depiction and prediction even within their inherent realm of capability?

Yes, Hannsgen’s model employs some textbook constructs, but it deploys them in a model that is structurally, qualitatively different from “comparative static” textbook models. Maybe the modeling technique is the message. Or at least, the proper modeling technique is a necessary condition for imparting an accurate and useful message.
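To make concrete what a dynamic simulation is, as opposed to a comparative-static diagram, here’s a generic toy in Python. It is emphatically not Hannsgen’s model (his equations are in the linked post); it just iterates a single price forward in time and sweeps the adjustment-speed parameter.

# A generic toy dynamic simulation, NOT Hannsgen's model: one price adjusts toward
# market clearing at a chosen speed, and we sweep that speed to see how the whole
# trajectory changes. The demand and supply curves are arbitrary linear examples.

def simulate(speed, periods=40, price=1.0):
    """Discrete-time price adjustment: excess demand pushes the price up, excess supply down."""
    path = [price]
    for _ in range(periods):
        demand = 10.0 - price               # toy linear demand curve
        supply = 2.0 + price                # toy linear supply curve
        price += speed * (demand - supply)  # the adjustment-speed parameter at work
        path.append(price)
    return path

for speed in (0.2, 0.8, 1.2):
    last = simulate(speed)[-3:]
    print(f"speed={speed}: last three prices {[round(p, 2) for p in last]}")
# speed=0.2 converges smoothly to the clearing price (4.0); speed=0.8 converges through
# damped oscillations; speed=1.2 oscillates explosively. Same "textbook" curves, wildly
# different pathways, depending on one adjustment-speed parameter.

The point isn’t this particular toy; it’s that the qualitative outcome lives in the dynamics themselves, which is exactly what a short-run/long-run, two-chunks-of-time story can’t display.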

Reading Nick’s posts and those of other smart econobloggers and -commenters, I am constantly astounded at his (their) apparent ability to intuitively comprehend and mentally manipulate multidimensional (and multiconceptual) interplays that leave me flummoxed. But I still wonder: is that ability sufficient for Nick to representatively model, in his head — to intuitively understand — the complex interplay of factors at work? His frequent comments about holding one factor constant, and the importance and difficulty of simultaneity in our thinking, suggest that the answer might be no.

When you add a minimum wage to a free-economy model, is he able to simulate, in his head, all the possibly resultant pathways through the multidimensional space of prices, labor inputs, capital inputs, and output quantities (not to mention redistribution feedback effects, or utility-related factors), all over time — a space where no factors are held constant?

Yes, I’m questioning Nick’s quantitative ability to do this kind of simulation in his head, given the models he’s employing. (Suggesting: it’s not just me! Though there is certainly a matter of degree.) But I’m also, and as a result, questioning the qualitative ability of those models, as employed, to enable such understanding in our limited minds (notably, mine).

As I said recently, science is about really understanding how things work — telling a convincing causative story — not (just) predicting what will (might) happen. (At the extreme, you could say that prediction is only useful for scientists as a test of understanding.) The textbook models seem to provide some predictive power, but you gotta wonder how much of that is false positives. And given that question, you have to ask how well they really “explain” how economies work — how much true understanding they provide.

In other words — back to my apparently congenital inability to really internalize and understand the textbook models — it’s not my fault! (Yes: now I am being ironic.)

All this raises one big question for me: why aren’t mainstream economists all over these kinds of dynamic simulation models? Why do we only see them at the fringes, in work by “heterodox” outsiders like Hannsgen and Keen, and in the ever-about-to-emerge work promised by the guys at the Santa Fe Institute? Why aren’t big and (within limits) out-of-the-box thinkers like Nick, Scott Sumner, Tyler Cowen, etc. — people who show every indication of really wanting to understand — fascinated by the possibilities of this type of modeling? In the weather and climate business, textbook economic modeling techniques would be laughed out of the room. Aren’t economies complex, dynamic, emergent systems of the same type as weather and climate — arguably even more complex, because economies include the game-theory grist of conscious intentions and expectations (about other people’s intentions and expectations)?

I recently corresponded with an econ PhD candidate who really wanted to work with and build such models for his thesis. He reported that everything about the institutional and intellectual structure of academic economics militated against his doing so. “Just grab a data set, build a model and pull some regressions, call it good and head for the tenure track.”

You don’t have to read Kuhn or Marx to wonder whether this isn’t the result of 1. the irresistible intellectual gravitational pull of institutionally sanctioned models however obviously flawed (miasma, phlogiston, epicycles, equilibrium), and 2. the undeniable (inherent?) effectiveness of existing textbook economic models and modeling techniques in perpetuating and amplifying the established power structures and (increasingly) unequal distribution of income and wealth — wealth that actively seeks to perpetuate and expand itself via institutional structures like universities. (No names, just initials: Mercatus Center.)

I’m not imputing moral corruption here (except perhaps institutional). I both prefer and tend to believe that most of us try to do right, “as God gives us to see the right.” Rather, I tend toward the quite plausible institutional explanation laid out so nicely by Chomsky in Manufacturing Consent. In my words: institutions that are dependent on, are part and parcel of, those larger structures of power and wealth, only hire and promote people who already — perhaps by their very nature — see “right” right. (Or right “right.”)

And who knows? They might be right. Maybe my personal incentive is just to show (myself?) how smart I am relative to the mainstream institutions. But based on the Aha! moments of intuitive understanding that I experience when I see and explore models like Hannsgen’s, I tend to doubt whether that’s the only thing at play.

Cross-posted at Angry Bear.