Prominent Claims that Policing is Not Racially Biased Rest on Flawed Science

July 16, 2020 | 1:36 pm
Photo: mtiger88/flickr
Dean Knox & Jonathan Mummolo
Originally appeared on Medium.com.

This article is co-signed by more than 800 academics and researchers in business, communications, computer science, criminal justice, criminology, economics, engineering, gender, geography, history, law, mathematics, medicine, natural sciences, political science, psychology, public health, public policy, race, sociology, and statistics.

Following George Floyd’s killing and ensuing social unrest, policing has ascended to the top of the nation’s political agenda. Yet many politicians and pundits dismiss concerns over systemic racial bias in policing, with some calling it a “myth.” These claims rest on deeply flawed science that has nonetheless been circulated widely and uncritically by major news outlets that span the political spectrum.

Misleading statistics have been used to justify racial injustice in the past. They should not be used to do so now. This faulty research should not be relied upon in the current debate over policing reform.

Those rejecting the existence of discrimination prominently cite a study reporting no evidence of racial bias in police-involved shootings. But this Proceedings of the National Academy of Sciences (PNAS) article has faced widespread criticism by statisticians and policing experts, relies on a fundamental mathematical error, and is uninformative on the question of discriminatory policing. This broad condemnation is apolitical: experts agree the study’s approach violates a central axiom of data analysis. Even the study’s authors have recently called for the article to be retracted, stating that their analysis cannot “support the position that the probability of being shot by police did not differ between Black and White Americans.”

To begin to measure racial bias in police violence, careful researchers must ask how often officers use force against minority civilians out of all police encounters with minorities, compare that rate to the corresponding rate for white civilians, and then adjust for relevant differences between minority and white encounters. Comprehensive records of lethal force — the numerator in the force-per-encounter ratio — only recently became available. (Subsequent research shows black males face a roughly 1/1,000 lifetime chance of being killed by police, 2.5 times more than white males.) The denominator — how often racial groups encounter police — is largely unknown.
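To make the target quantity concrete, it can be written as a per-encounter rate for each group (the notation below is ours, offered only as an illustration, not a formula taken from any of the studies discussed):

```latex
% Illustrative notation only: the rate of force per police encounter, by group g.
\[
  \Pr(\text{force} \mid \text{encounter},\ \text{group}=g)
  \;\approx\;
  \frac{\#\{\text{uses of force against civilians in group } g\}}
       {\#\{\text{police encounters with civilians in group } g\}}
\]
% Assessing bias means comparing this rate across groups, after adjusting for
% relevant differences in the circumstances of the encounters.
```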

The PNAS study estimated a different — and altogether irrelevant — quantity. Rather than analyzing shootings as a fraction of all encounters, it analyzed only shootings. This elementary error — only examining cases where events of interest occur — is called “selection on the dependent variable,” and is one of the first mistakes social scientists are warned about during academic training. As the study’s authors note in their retraction request, “the mistake we made was drawing inferences about the broader population of civilians who interact with police rather than restricting our conclusions to the population of civilians who were fatally shot by the police.”

To see why this is gravely misleading, consider a hypothetical: officers encounter 100 civilians — 80 white, 20 black — in identical circumstances, respectively shooting 20 and 10 of them. Here, police exhibit anti-black bias, shooting in 25% of white encounters, versus 50% of black encounters. However, using the study’s fallacious approach, because more white civilians were shot, we would falsely infer anti-white bias.
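A minimal numerical sketch of this hypothetical (our own illustration in Python, using the made-up counts above rather than any real data or code from the studies discussed) shows how the two approaches reach opposite conclusions from the same numbers:

```python
# Hypothetical counts from the example above (illustrative only, not real data).
encounters = {"white": 80, "black": 20}   # all police encounters, by group
shootings = {"white": 20, "black": 10}    # shootings, by group

# Sound approach: rate of shootings per encounter, for each group.
for group in encounters:
    rate = shootings[group] / encounters[group]
    print(f"{group}: shot in {rate:.0%} of encounters")
# white: shot in 25% of encounters
# black: shot in 50% of encounters  -> an anti-black disparity

# Flawed approach ("selection on the dependent variable"):
# examine only the shootings and ask which group accounts for more of them.
total_shot = sum(shootings.values())
for group, count in shootings.items():
    print(f"{group}: {count / total_shot:.0%} of those shot")
# white: 67% of those shot
# black: 33% of those shot  -> falsely suggests an anti-white disparity
```

The only difference between the two calculations is the denominator: ignoring how often each group encounters police in the first place is what flips the conclusion.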

This hypothetical aligns closely with the study’s results. The paper showed white shooting victims outnumbered black and Hispanic victims in various circumstances — unsurprising, given their majority status — and reported no “evidence for anti-Black or anti-Hispanic disparity in police use of force… and, if anything, found anti-White disparities.” The paper then compares victim and officer race, and because it finds no strong correlation among the minuscule fraction of encounters involving shootings, dismisses diversity reforms. It concludes, “White officers are not more likely to shoot minority civilians than non-White officers.”

But as the authors have since admitted, the analysis cannot speak to rates of shooting, because every encounter examined involves a fatal shooting. PNAS editors noted, “the authors poorly framed the article, the data examined were poorly matched, and… unfortunately, address a question with much less public policy relevance than originally claimed.” The study attempts to account for local crime victimization rates by race and other county attributes, but this cannot remedy the fundamental issue: the study estimates the wrong statistic.

This is not the only misleading study cited as evidence against racially biased policing. Another prominent paper examines recorded detainments — arrests and stops — comparing rates of force across racial groups among stopped civilians, adjusting for circumstances. It reported some racial bias in sub-lethal force, but no bias in lethal force. This study, too, suffers from an important limitation, albeit a more subtle one than that in the PNAS paper. By analyzing police detainments alone, this study commits what statisticians term “post-treatment selection.” Put differently, it fails to account for racial bias in detainment, potentially severely understating discrimination in the use of force, since force is often used in detainments that would never have occurred had civilians been white. A new study addressing this source of error shows this approach can mask substantial amounts of discriminatory police violence, potentially leading to underestimates even when analysts seek only to quantify discrimination occurring after the detainment decision.
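A stylized simulation helps illustrate the mechanism (this is our own toy model with invented parameters, not a reproduction of any study's data or methods). If officers detain Black civilians at lower thresholds of suspicion than white civilians, the white civilians who do get detained come disproportionately from the most severe encounters, and a comparison of force rates restricted to detainees understates the true disparity:

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Toy model with invented parameters. Each encounter has a latent
    'severity' drawn uniformly from [0, 1]. Officers detain white civilians
    mostly in severe encounters but detain Black civilians almost regardless
    of severity (bias at the detainment stage). Among detainees, force is
    used with probability rising in severity, plus an extra discriminatory
    bump for Black civilians (bias at the force stage)."""
    results = {"white": {"detained": 0, "forced": 0},
               "black": {"detained": 0, "forced": 0}}
    for race in ("white", "black"):
        for _ in range(n):
            severity = random.random()
            detain_prob = severity if race == "white" else 0.9
            if random.random() < detain_prob:
                results[race]["detained"] += 1
                force_prob = 0.2 * severity + (0.05 if race == "black" else 0.0)
                if random.random() < force_prob:
                    results[race]["forced"] += 1
    return results, n

results, n = simulate()
for race, r in results.items():
    per_detainment = r["forced"] / r["detained"]  # what a detainment-only analysis measures
    per_encounter = r["forced"] / n               # the quantity of real interest
    print(f"{race}: force in {per_detainment:.1%} of detainments, "
          f"{per_encounter:.1%} of encounters")
# Roughly: the per-detainment rates differ by only a couple of percentage
# points, while the per-encounter rates differ by about a factor of two,
# because detained white civilians are drawn disproportionately from the
# most severe encounters.
```

In this toy example there is discrimination at both the detainment and force stages, yet the comparison restricted to detainees captures only a small part of it.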

In today’s polarized climate, it can be difficult to separate genuine scientific disputes from opinion. These concerns are based on facts: the inferences are mathematically indefensible. Yet this flawed work continues to be cited uncritically as evidence that the uproar over policing practices is overwrought.

Slipshod inferences have no place in this debate. As America considers policing reforms, we must appeal to rigorous research. When we lack data, we must acknowledge uncertainty. Leaders should mandate nationwide collection and sharing of standardized policing data to facilitate progress. But gaps in knowledge must not be filled with faulty science.

We call on readers to disregard this dangerously misleading work.

Dean Knox is an Assistant Professor of Operations, Information and Decisions at the Wharton School, University of Pennsylvania. He develops statistical models and methods for policing and other complex social science applications. He conducts research on causal inference and machine learning with the goal of improving research on governance. Find Dean on Twitter at @dean_c_knox.

Jonathan Mummolo is an Assistant Professor of Politics and Public Affairs at Princeton University. He studies law enforcement agencies and police-civilian interactions. His work explores how controversial tactics are deployed, how rules and procedures affect police-civilian interactions, the role of race in police behavior, and how police tactics affect perceptions of law enforcement and crime. He also conducts methodological research on causal inference, statistical modeling, and experimental design. Follow Jonathan on Twitter at @jonmummolo.

The UCS Science Network is an inclusive community of more than 25,000 scientists, engineers, economists, and other experts, focused on changing the world for the better. The views expressed in Science Network posts are those of the authors alone.