The Most Dangerous Crime Rankings

In the early 1990s, when crime rates were at a historic peak in the United States, the small Kansas publishing company Morgan Quitno began ranking U.S. cities and metropolitan areas by their crime rates, based on the FBI’s Uniform Crime Reports (UCR). Morgan Quitno promoted the results in press releases and an annual volume listing cities and metro areas from the “safest” to the most “dangerous.”

Both media attention and public opposition to the rankings grew over time. When the 14th annual crime rankings were published in November 2007 by new owner CQ Press, the publishing arm of Congressional Quarterly, they met a hailstorm of criticism.

Not surprisingly, officials from the cities anointed most “dangerous” protested that the rankings hurt their development efforts and ignored the progress they had made in combating crime.

It’s tempting to dismiss their complaints as special pleading, except that attacks on the crime rankings have broadened, and now they come from national organizations with an interest in crime as well as the affected cities. Within the past year the FBI posted a “caution against ranking” message on its UCR website, the U.S. Conference of Mayors condemned the rankings as “distorted and damaging to cities’ reputations,” and the Executive Board of the American Society of Criminology passed a resolution characterizing the rankings as “invalid, damaging, and irresponsible.”

But more important than any criticism or media attention, the information contained in what is less a book than a compendium of statistical tables provides a valuable opportunity. It allows us to discuss the appropriate uses and limitations of the FBI’s UCR data and the role responsible social scientists and journalists can play in starting a meaningful conversation about this nation’s crime policy.

There’s nothing objectionable, perhaps needless to say, about organizing and disseminating crime data in tabular form. The problem comes in using those data to designate the residents of some cities as “safe” and the residents of others as in “danger” based on an undisclosed, proprietary scoring methodology applied to aggregate UCR crime statistics.

Contrary to a fundamental rule of science, the interested reader has no way of reproducing or verifying the CQ crime rankings. We do know that a city’s or metro area’s rank is based on the rates of six offenses (homicide, rape, robbery, aggravated assault, burglary, and motor vehicle theft) and that each offense is weighted equally, so that an auto theft counts just as much as a homicide. According to CQ, the crime rates “were plugged into a formula” that, somehow, scores them in relation to an unspecified “national average” for each crime type. The separate scores were then summed to produce a “final score” for each city and metropolitan area.
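
Since the formula itself is undisclosed, any reconstruction is guesswork. Still, the pieces CQ does describe, equal weights, deviation from a national average, and scores summed across the six offenses, can be sketched in a few lines. The sketch below is our assumption about what such a formula might look like, not CQ’s actual method; the deviation measure and all rates are invented for illustration.

```python
# A speculative reconstruction of a CQ-style score; the actual formula is
# undisclosed. Here each offense rate is scored as its percentage deviation
# from a national average, and the six scores are summed with equal weight.

OFFENSES = ["homicide", "rape", "robbery",
            "aggravated_assault", "burglary", "motor_vehicle_theft"]

def cq_style_score(city_rates, national_rates):
    """Sum each offense's percentage deviation from the national rate.

    Equal weighting means a city 50 percent above average in auto theft
    moves the score exactly as much as one 50 percent above in homicide.
    """
    score = 0.0
    for offense in OFFENSES:
        gap = city_rates[offense] - national_rates[offense]
        score += 100 * gap / national_rates[offense]
    return score

# Invented rates per 100,000 residents, purely for illustration.
national = {"homicide": 5.7, "rape": 30.9, "robbery": 149.4,
            "aggravated_assault": 287.5, "burglary": 729.4,
            "motor_vehicle_theft": 398.4}
quiet_town = {k: 0.3 * v for k, v in national.items()}

print(f"{cq_style_score(quiet_town, national):.2f}")  # prints -420.00
```

Nothing guarantees a sketch like this lands on CQ’s published scale, which is exactly the problem with an unverifiable formula.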

We are reasonably skilled quantitative social scientists, but try as we might, we couldn’t figure out how the scores were produced or what they mean. For example, according to CQ, the “safest” metro area in 2006 was Logan, Utah, because it scored lowest at “-72.45.” The second safest was Eau Claire, Wisconsin, with a score of “-71.06.”

How much violent or property crime do those numbers represent? How much lower is one’s risk for crime in Logan than in Eau Claire? The only way to answer these basic questions is to examine the original UCR crime rates.

Doing so, we were able to approximate—but not reproduce exactly—the CQ rankings just by adding together the six crime rates and arraying the cities and metropolitan areas from high to low on these summary rates. Even if the ranking formula were revealed, though, it would be no more valid than the data source, the FBI’s UCR data. Relying solely on the UCR data for city and metro area rankings assumes an unwarranted degree of data accuracy.
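
Here is a minimal sketch of that approximation: add the six offense rates and sort the places from high to low. The cities and rates below are invented placeholders, not actual UCR figures.

```python
# Approximate the rankings by summing the six UCR offense rates
# (per 100,000 residents) and ordering places from high to low.
# All figures are invented placeholders.

city_rates = {
    "City A": {"homicide": 10.2, "rape": 45.1, "robbery": 420.0,
               "aggravated_assault": 610.3, "burglary": 1450.7,
               "motor_vehicle_theft": 980.4},
    "City B": {"homicide": 2.1, "rape": 20.5, "robbery": 80.2,
               "aggravated_assault": 150.9, "burglary": 500.3,
               "motor_vehicle_theft": 210.8},
}

summary = {city: sum(rates.values()) for city, rates in city_rates.items()}

# Highest summed rate first, i.e., most "dangerous" under this crude scheme.
for rank, (city, total) in enumerate(
        sorted(summary.items(), key=lambda kv: kv[1], reverse=True), 1):
    print(f"{rank}. {city}: {total:.1f} offenses per 100,000")
```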

The UCR data include only those incidents the police were made aware of, recorded as crimes, and forwarded to the FBI. The most recent estimates of crimes reported to the police range from 41 percent of rapes and sexual assaults to 81 percent of motor vehicle thefts, according to the Bureau of Justice Statistics. Because crime reporting rates vary from community to community, an unknown but possibly large part of the difference between any two cities’ crime ranks is a function of measurement error.
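
A hypothetical illustration makes the point: give two cities identical underlying victimization but different reporting rates, and the UCR will record different crime rates for them. The rates and reporting shares below are invented.

```python
# Two cities with the same true victimization rate per 100,000 residents
# but different (invented) shares of crimes reported to police.
true_rate = 1000.0
reporting_share = {"City A": 0.55, "City B": 0.70}

for city, share in reporting_share.items():
    print(f"{city}: {true_rate * share:.0f} offenses recorded per 100,000")

# City B looks about 27 percent more "dangerous" than City A, although
# residents of the two cities face identical risk.
```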

A city’s crime rate equals the number of police-recorded crimes (the numerator) divided by the city’s residential population (the denominator), and measurement error affects both figures. Pure geographical happenstance—the location of the boundary line separating “city” and “suburb”—can artificially inflate a city’s crime rate. Some cities are geographically small and constitute a correspondingly small fraction of the population of the metropolitan area in which they’re situated.

For example, St. Louis, where we live, covers less than 62 square miles in a metro area of 3,322 square miles and contains only 12 percent of the area’s population. In contrast, well over half the residents of the Memphis metro area live in the central city, which covers about 280 square miles. When suburban residents are victims of crimes in the central city, those crimes are added to the numerator, but the victims are never counted in the denominator. And with the exception of burglary, crime counts in central cities with large numbers of commuting workers or tourists are especially affected. This circumstance inflates the crime rate in cities dwarfed by their suburban areas.
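
A stylized example, with invented numbers, shows how the arithmetic plays out for a small central city ringed by large suburbs.

```python
# Crimes against commuters and tourists enter the numerator, but the
# victims never enter the resident-population denominator. All numbers
# are invented for illustration.
residents = 300_000              # central-city population: the denominator
crimes_vs_residents = 3_000      # crimes against city residents
crimes_vs_commuters = 600        # crimes against commuters and tourists

rate_residents_only = 100_000 * crimes_vs_residents / residents
rate_as_published = 100_000 * (crimes_vs_residents + crimes_vs_commuters) / residents

print(f"residents' own risk:  {rate_residents_only:.0f} per 100,000")
print(f"published UCR rate:   {rate_as_published:.0f} per 100,000")
# The published rate is 20 percent higher, though residents' risk is unchanged.
```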

If UCR crime rates are compared at all, the comparisons should be limited to metropolitan areas and not their central cities. Doing so can change the picture dramatically. St. Louis, 2nd in crime among central cities according to the new city rankings, places 120th in crime among metro areas. In contrast, Memphis, 8th among cities and 2nd among metro areas, is less affected by the city-metro area boundary distinction.

The crime rankings aren’t only methodologically questionable; they also do real damage to the affected cities. Businesses think twice about relocating to “dangerous” places, organizations fail to sign or cancel convention contracts, families reconsider visiting or moving, and suburban and rural residents needlessly fear the city. Cities with large African-American populations are hit particularly hard. Fully half the residents of CQ’s “Most Dangerous 25” cities are black, compared with just 4 percent of the residents of the “Safest 25.”

The racially disparate damage done by the crime rankings might be unavoidable if the rankings were a meaningful indicator of risk. But knowing the city in which people live reveals next to nothing about their victimization risk or “danger,” especially when compared with known risk factors such as age and lifestyle. Neighborhood also matters. In all cities serious crime is disproportionately concentrated in a handful of high-risk neighborhoods. Variation in crime risk is far greater within than between cities.

CQ argues that the crime rankings help the average reader “better understand what is happening in their communities” by making comparisons across different cities and metro areas, and that rankings “enable local leaders and concerned citizens to track their own progress in addressing crime problems from year to year.”

We fail to see how additional insight about an individual’s risk for crime or a city’s progress in reducing crime can be derived from the ranking of cities according to a hidden methodology. Because the index scores have no directly translatable metric, greater insight about a city’s crime problem or progress against crime can be gained from the original crime rates and the changes in those rates.

That information is already available at no cost. Just go to the FBI’s UCR website (http://www.fbi.gov/ucr/ucr.htm), download the statistics in Excel spreadsheets, and if you aren’t deterred by the caution against ranking, push “Sort.”
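
For readers who would rather script it than click it, the same exercise takes a few lines of pandas. The file name and column labels below are stand-ins; the spreadsheets on the FBI site vary in layout, so adjust them to match the table you download.

```python
import pandas as pd

# Hypothetical file name and column labels; edit to match the downloaded table.
OFFENSE_COLUMNS = ["Murder", "Forcible rape", "Robbery",
                   "Aggravated assault", "Burglary", "Motor vehicle theft"]

df = pd.read_excel("ucr_metro_rates.xlsx")
df["summary_rate"] = df[OFFENSE_COLUMNS].sum(axis=1)

# Highest summed rate first: the same crude ordering discussed above.
ranked = df.sort_values("summary_rate", ascending=False)
print(ranked[["Metropolitan area", "summary_rate"]].head(25))
```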
