High crime rates explained by gasoline lead. Really?

Crime rates in big cities (where exposure to lead from gasoline was high) peaked about 20 years after lead was banned in gasoline, according to an econometric study by Rick Nevin. The 20-year time lag is the time elapsed between lead exposure at birth and turning into a 20-year-old criminal.

At least, that's the argument proposed by some well-known econometricians, based on crime rate analysis over time in large cities. In my opinion, this is another example of a study done with the wrong kind of design of experiments, where statistical science is being abused or misused by people who claim to be experts.

You can read the article here

So how would you fix this study?

Here's my solution:

  • Get a well-balanced sample of 10,000 people over 30 years across all cities, split the sample into two subsets (criminals vs. non-criminals), and check (using an odds ratio) whether criminals are more likely to have been exposed to lead at birth than non-criminals; see the odds-ratio sketch after this list. In short, do the opposite of what Rick Nevin did: look at individuals rather than cities, that is, at the micro rather than the macro level, and perform a classic test of hypothesis using standard sampling and proper design of experiments (DOE) procedures.
  • Alternatively, if you really want to work on the original macro-level time series (assuming you have monthly granularity), then perform a Granger causality test: it takes into account all cross-correlation residuals after transforming the original time series into white noise (similar to spectral analysis of time series, or correlogram analysis). However, if you have thousands of metrics (and thus thousands of time series, and thus tens of millions of correlations), you WILL eventually find a very high correlation that is purely accidental; see the simulation sketch after this list. This is known as the curse of big data, and I will publish a note on it (with results based on simulations).
  • Correlation is not causation. Don't claim causation unless you can prove it. Many times, multiple inter-dependent factors contribute to a problem. Maybe the peak in crime happened when baby boomers (a less law-abiding generation) reached 20 years old. This is a more credible cause, in my opinion.
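
A minimal sketch of the odds-ratio check in the first bullet, in Python. The 2x2 exposure counts below are hypothetical placeholders meant only to show the computation; they are not results from any actual sample.

    # Are criminals more likely than non-criminals to have been exposed to lead
    # at birth? All counts here are hypothetical.
    from scipy.stats import fisher_exact

    #                exposed  not exposed
    criminals     = [   320,      180]     # hypothetical counts
    non_criminals = [  2100,     7400]     # hypothetical counts

    table = [criminals, non_criminals]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")

    print(f"odds ratio = {odds_ratio:.2f}, one-sided p-value = {p_value:.3g}")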
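
And a quick simulation of the accidental-correlation point in the second bullet (synthetic data only; this is a sketch of the idea, not the promised note on the curse of big data):

    # Among many unrelated random-walk series, some pair will look strongly
    # correlated purely by accident.
    import numpy as np

    rng = np.random.default_rng(42)
    n_series, n_months = 1000, 360                    # 1,000 metrics, 30 years of monthly data
    series = rng.normal(size=(n_series, n_months)).cumsum(axis=1)   # independent random walks

    corr = np.corrcoef(series)                        # 1000 x 1000 correlation matrix
    np.fill_diagonal(corr, 0.0)                       # ignore self-correlations

    i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
    n_pairs = n_series * (n_series - 1) // 2
    print(f"highest |correlation| among {n_pairs:,} pairs: {abs(corr[i, j]):.3f}")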


Comment by James Lucas on March 30, 2017 at 12:32pm

I have been looking for the name of the Granger causality test for months. I came across the test in my time series class and couldn't remember it to save my life. Thank you for your post.

Comment by Lance Norskog on January 19, 2017 at 11:08pm

And the prison rate is disconnected from the violent crime rate; violent crime is what's predicted by lead levels in children.

While we're on the topic of science & policy, the violent crime rate is dominated by males aged 17-24. It seems to me we need two age-segregated levels of adult prison. If you're 24 and there for a violent crime, chances are you'll grow out of it soon.

Comment by Lance Norskog on January 19, 2017 at 11:05pm

Heh! The crime rate is a magnet for unfounded opinions. Many are backed by politics.

The theory is not that banning lead in gasoline alone did it; both leaded gasoline and lead house paint were finally banned at the same time, even though it had been known since the 1920s that they were bad ideas. The ban on leaded gas caused a geographically uniform drop in environmental lead levels. The ban on lead paint caused a drop with a much longer tail, concentrated on the wrong side of town.

A friend who grew up in a slum said that the paint dries, shrinks, and peels off forming strips. Why would kids eat these charming decorations? They taste like peppermint!

There is a body of statistical work, not just by this dude, showing that correlations between crime rate and old housing stock replacement rate are significant. Some claim that they outgun every other mooted reason for the crime rate drop.

Predictions based on this? The intentional poisoning of Detroit for the last 3 years is going to leave a nasty legacy.

Comment by Sune Karlsson on January 5, 2013 at 12:58pm

Vincent: Good thing I don't live in California then ;-) Seriously, criminality is rare enough that it matters, since this is a situation where you are not drowning in gigabytes of data - you have to go out and collect it yourself. Suppose 5% of the population are criminals and you take a sample of 1,000. The expected number of criminals in your data is then 50, and everything related to the criminals will be relatively imprecisely estimated. OK, so take a sample of 10,000 and, with 500 criminals in the data, you can get good precision, but that leaves you with 9,500 observations on non-criminals, which is probably much more than you need. It is much more (cost) efficient to take a sample of 500 from the population of known criminals and a sample of 500 from the population of non-criminals (or the population of not-yet-convicted, if you are pessimistic about human nature). This changes the way you do the data analysis, since things are now conditional on the criminal/non-criminal status of the individual.
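
A back-of-the-envelope sketch of the precision argument above, in Python. The exposure rates (40% among criminals, 25% among non-criminals) and the resulting expected cell counts are assumptions chosen only to illustrate the comparison at equal total cost.

    import math

    def se_log_odds_ratio(a, b, c, d):
        # Standard error of log(OR) for the 2x2 table [[a, b], [c, d]]
        return math.sqrt(1/a + 1/b + 1/c + 1/d)

    p_exp_criminal, p_exp_noncriminal = 0.40, 0.25        # hypothetical exposure rates

    # Simple random sample of 1,000 people with 5% criminals (expected cell counts)
    crim, noncrim = 50, 950
    srs = (crim * p_exp_criminal, crim * (1 - p_exp_criminal),
           noncrim * p_exp_noncriminal, noncrim * (1 - p_exp_noncriminal))

    # Retrospective (case-control) sample: 500 criminals and 500 non-criminals
    cc = (500 * p_exp_criminal, 500 * (1 - p_exp_criminal),
          500 * p_exp_noncriminal, 500 * (1 - p_exp_noncriminal))

    print(f"SE of log OR, random sample of 1,000: {se_log_odds_ratio(*srs):.3f}")
    print(f"SE of log OR, 500/500 case-control  : {se_log_odds_ratio(*cc):.3f}")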

You mean a uniform (and presumably high) cross-spectrum? Yes, that would indicate a strong relationship but not the kind of relationship I would be looking for. Except for locking everybody up and filling the streets, homes and workplaces with police I would expect most determinants of criminality to have long term effects. The short term or high frequency part of the spectrum is mostly noise and I would be looking for relationships in the lower frequency part of the spectra.
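
One way to make the frequency-domain point above concrete: estimate the coherence between two monthly series and compare the low- and high-frequency bands. The two series below are simulated stand-ins sharing a slow-moving component, not real lead or crime data.

    import numpy as np
    from scipy.signal import coherence

    rng = np.random.default_rng(0)
    n_months = 600
    slow_trend = np.cumsum(rng.normal(size=n_months))      # shared slow-moving component

    lead_proxy = slow_trend + rng.normal(scale=5, size=n_months)
    crime_rate = 0.8 * slow_trend + rng.normal(scale=5, size=n_months)

    f, Cxy = coherence(lead_proxy, crime_rate, fs=12, nperseg=120)   # fs = 12 samples/year

    low = (f > 0) & (f <= 0.5)     # slower than one cycle every two years
    high = f > 0.5
    print(f"mean coherence at low frequencies : {Cxy[low].mean():.2f}")
    print(f"mean coherence at high frequencies: {Cxy[high].mean():.2f}")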

Comment by Vincent Granville on January 5, 2013 at 10:50am

@Sune: Criminality is not "rare"; it is certainly above 1%: in California, I believe 2% of the population is in prison at any given time - making the US the country with the highest incarceration rate in the world. Because it is not rare, it does not need to be processed (from a data point of view) like fraudulent credit card transactions, which are indeed rare (0.04% vs. > 1.00% for crime).

Finally, I mentioned the Granger test in the more general framework of spectral analysis of time series: if all cross-correlations are very similar on the two normalized time series (you can test this with a Kolmogorov-Smirnov test), then clearly this is a much stronger indicator of a relationship than a single high correlation at lag = 20 years.

Comment by Sune Karlsson on January 5, 2013 at 3:38am

Instead of just proclaiming that the studies are bad you might want to take the time to explain how and why they go wrong.

I have skimmed a few of them and while they are not perfect (few things are) there is nothing wrong with the general approach. A question like this can certainly be studied using macro (aggregated) data.

It can certainly, as you propose, be studied using data on individuals. But there are better ways than just taking a random sample of the population. Criminality is a rare "disease" and it is much more cost-effective to do it as a retrospective study. That is, you sample criminals and non-criminals separately to ensure that you have enough criminals in your data for reasonable power.

Is it better? Maybe - maybe not. It depends on how large a sample you can afford to take, how many additional variables (confounders, controls, whatever you want to call them) you can collect, what the response rate would be if you sent out a questionnaire, and whether there is non-response bias. Many of the same questions apply to the "macro" approach.

Your suggestion to use a Granger causality test suggests to me that you do not quite understand the test. First, it is not a test of causality (correlation is not causation); it is a test of lack of predictive power. This in turn is taken to imply something about causality: if X is not useful for predicting Y, then X cannot be a cause of Y. So how do you test this? You build a model for Y with X (or lags of X) as explanatory variables and test whether the coefficient(s) on X are different from zero.

That is essentially what the studies you refer to have done. The outcome of the test very much depends on the model for Y that is used, which additional variables are included, and so on. The results are, in this sense, always debatable, but this does not mean that the general approach is flawed.
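
A minimal sketch of the regression-based test described above, using statsmodels on simulated monthly series; the two series and the 3-month lag are made up purely to show the mechanics, not to mimic the crime data.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(1)
    n = 480
    x = rng.normal(size=n)                                       # stand-in predictor series
    y = 0.8 * np.r_[np.zeros(3), x[:-3]] + rng.normal(size=n)    # y depends on x lagged 3 months

    data = pd.DataFrame({"y": y, "x": x})

    # For each lag order, statsmodels regresses y on its own lags plus lags of x
    # and reports F-tests of the hypothesis that the coefficients on x are all zero.
    results = grangercausalitytests(data[["y", "x"]], maxlag=6, verbose=False)
    for lag, (tests, _) in results.items():
        f_stat, p_value, _, _ = tests["ssr_ftest"]
        print(f"lag {lag}: F = {f_stat:7.2f}, p = {p_value:.3g}")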
