## Jared Bernstein Gives Us The Best Graph on the Employment Effects of Minimum Wage Increases

They say sample size matters. A handful of sample points in a study doesn’t tell you much, because they could just be showing random variation. The same is true when you’re looking at studies themselves: you need to look at lots of research that uses different methodologies and data sets to get a confident feel for the facts on the ground.

Jared Bernstein points us to exactly such an effort, looking at 64 studies on the employment effects of minimum-wage increases, with a wonderfully informative display:

Note: “se” refers to standard error; 1/se is a measure of statistical significance. The dots up high are generally more believable.

“Employment elasticity” is a measure of the impact of minimum-wage increases. A measure of -.1 (to the left of the zero line) suggests that a 10% minimum-wage increase reduces employment by 1%.
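That elasticity-to-employment conversion can be sketched in a few lines of Python (a purely illustrative helper; the function name is mine, not from the studies):

```python
def employment_change(elasticity: float, wage_increase_pct: float) -> float:
    """Percent change in employment implied by an employment elasticity
    and a given percent increase in the minimum wage."""
    return elasticity * wage_increase_pct

# An elasticity of -0.1 with a 10% minimum-wage increase:
print(employment_change(-0.1, 10.0))  # -1.0, i.e. a 1% employment reduction
```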

All the high-statistical-significance studies put elasticity at zero: no employment effect.

Down at the bottom there’s some clustering to the left of the line versus the right, suggesting a small negative employment effect, but none of those studies has high statistical significance.

And this doesn’t consider “file-drawer/publication bias”: studies that find no effect don’t get published, because researchers don’t submit them or journals don’t accept them for publication. The CBO explains this in its new report on minimum-wage effects (PDF). Emphasis mine.

…

an unexpectedly large number of studies report a negative effect on employment with a degree of precision just above conventional thresholds for publication. That would suggest that journals’ failure to publish studies finding weak effects of minimum-wage changes on employment may have led to a published literature skewed toward stronger effects.

And *that* doesn’t consider (*pas possible*!) negative-effect researchers finding ways to get to that publishable statistical-significance level. (It’s curious that those finding a positive effect don’t display this anomaly…)

So at the least, you can mentally add a whole lot more unpublished dots to that tall vertical line. At the most, you can shift a bunch of those published dots on the left farther to the right, and down.

*Cross-posted at Angry Bear.*

This picture in no way contradicts the CBO estimates.

The graph is a good example of manipulating the display of data to minimize an effect. The horizontal axis is stretched to include four or five outlier studies where the elasticity is less than -10 (and where 1/se shows little statistical significance). By graphing these few insignificant studies and scaling the axis accordingly, the rest of the results are compressed into a blob around the y-axis, almost hiding the fact that a large majority of them show negative elasticity.

An elasticity of -.1, a very small amount on this scale, corresponds to an employment reduction of 1%. For 16M people at or below the minimum wage, that works out to 160,000 jobs per 1%, so an employment decrease of 500,000 should show as an elasticity of about -.3.
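That back-of-the-envelope conversion can be checked numerically (a rough sketch; the 16-million-worker figure is the assumption used above, and the elasticity is taken relative to a 10% wage increase):

```python
# Assumed figure from the comment: 16M workers at or below the minimum wage.
workers = 16_000_000
jobs_per_pct = workers // 100    # 160,000 jobs per 1% change in employment

jobs_lost = 500_000
employment_drop_pct = jobs_lost / jobs_per_pct   # 3.125% of covered workers

# Implied elasticity, relative to a 10% minimum-wage increase:
elasticity = -employment_drop_pct / 10
print(elasticity)  # -0.3125, roughly the -.3 cited above
```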

Even with the extreme elongation of the elasticity scale, it’s pretty easy to see that the average of these data sets is much closer to -.3 to -.6, or 500,000 to 1,000,000 jobs lost, than to 0. A table would be far more useful for drawing conclusions, and less subject to manipulation. Then we could tell whether there are really 64 data points in the graph (there look to be a lot more). We could also use statistical ranges instead of point results.

This is an image intended for polemics, not for close inspection.

@Gerry I find it too convenient to simply dismiss that graph because you can’t zoom in — especially if you factor in the (undisplayed) publication bias.

You can find more about it in this ungated paper from which the graph was derived:

https://www.deakin.edu.au/buslaw/aef/workingpapers/papers/2008_14eco.pdf

Which concludes: