Rating the Rankings
In August it’s the colleges, in April the graduate schools.
The annual rankings of universities and their programs result in copies of U.S. News & World Report flying off the shelves, and great fanfare follows.
Local news outlets report the lists and the Internet goes all abuzz with discussions about the year’s gains and losses in standing.
Reactions within schools are also significant, ranging from champagne and bonuses to emergency meetings and fears of pink slips.
Although most educators criticize the methods used to create these measures and deeply resent their influence, they can’t afford to ignore the impact of changes in rank on application numbers, alumni perceptions, and employers’ interest in their graduates.
“For your own survival, you have to respond to the rankings,” one administrator said.
In publishing clear evaluations for prospective students, rankings have transformed the landscape of higher education in the United States and, increasingly, around the world. They’ve created an authoritative, public definition of school status and produced tremendous pressure for schools to conform to it.
And in the process, these measures have caused a wide variety of unintended consequences that most view as detrimental to the quality of the education these institutions strive to deliver.
The relative nature of rankings creates intense competition for each ordinal position as one school’s rise necessarily leads to another’s fall. This dynamic encourages schools to devote substantial resources to improving their numbers regardless of the educational merit of their actions.
Our research on law schools examines the unintended consequences of rankings to gain a better understanding of the effects—both obvious and subtle—that these public evaluations have had on higher education. This research is supplemented by studies of business schools and undergraduate education as well as a growing body of sociology research on a whole variety of rankings. Not only does our work help identify the processes by which rankings have come to exert so much influence on higher education, it also explains how these measures, designed only to reflect educational quality, actually create and reinforce distinctions among schools, shaping the whole landscape of higher education in the process.
A better understanding of these effects will help us respond more productively to rankings and draw attention to the often-overlooked potential hazards of quantitative assessment.
History and Debate
While a variety of organizations and individuals produced rankings of U.S. universities sporadically throughout the 20th century, they typically designed them for academic insiders. Only in the 1980s did popular media regularly begin producing rankings of colleges and graduate programs intended for consumers.
U.S. News & World Report, the most significant force in this arena, helped pioneer media involvement when, in 1983, it published its first college rankings. These surveys and those that followed after 1985 were relatively simple measures focusing on reputations. Then, in 1988, the magazine started publishing annual rankings that incorporated statistics submitted by colleges and other public sources. In 1990 it followed up with an annual issue dedicated to rankings of graduate and professional schools.
These rankings have proven popular and powerful, and although they’ve spawned many imitators both within the United States (Forbes, The Princeton Review, and Washington Monthly) and internationally (Times Higher Education and Shanghai Jiao Tong University’s rankings), U.S. News & World Report still dominates the rankings market in most fields.
The appeal of rankings seems straightforward—they provide useful information about complicated organizations to busy people. But their effects aren’t simple and their appeal changes as different groups find uses for them. As those who produce the rankings are quick to point out, they offer valuable, and otherwise unavailable, comparative information about colleges and universities.
Choosing where to attend school is an expensive decision, and most prospective students and their families lack first-hand knowledge about their choices. They face the difficult task of choosing among options that may look alike or deciding whether an expensive school is actually better than one with lower tuition. These families are bombarded with messages from teachers, counselors, the media, and college marketing materials that school selectivity matters, schools really are different, and the perfect “fit” between child and college is hugely significant.
One professor we interviewed who routinely asks his students about how they use rankings told us, “they approach them like they were consumers … just like they were going to buy a car. [They] look at education as an investment and they are going to see what you get in return.”
Most students believe the reputation of the school is an important determinant of career trajectories. “The prestige of your law school really does give you some capital later in your career. At every stage of your career, where you went to law school might help you in some way,” a second-year law student explained. Asked how he defined “prestigious,” the student quickly replied: “U.S. News & World Report. It’s the only way to go.”
U.S. News & World Report and its supporters say rankings make (relative) school quality more clear to outside audiences. Although school quality is notoriously difficult to measure, the magazine’s very public reports on how schools fare on particular indicators create a type of accountability for higher education. External audiences, such as alumni, employers who hire graduates, trustees, and state legislators, are now able to see how a particular school measures up on a wide variety of criteria (for instance, the quality of incoming students, faculty resources, and graduates’ employment successes) compared to its competitors. According to this view, comparative information should also help schools identify their own relative strengths and weaknesses, thus motivating them to address areas in need of improvement.
Critics of rankings, though, question the methods used to evaluate schools, charging that bad information isn’t necessarily better than less information. Some argue the rankings place too much emphasis on standardized tests, while others point to important qualities absent from rankings, like evaluations of teaching, scholarship, or students’ first-hand experiences at schools. Journalist Peter Sacks has described the dangers of using standardized or even universal metrics to evaluate schools doing fundamentally different jobs of offering specialized forms of education. Judging schools according to a single set of criteria, he writes, ignores the fact that schools have different aspirations and punishes those with distinctive or non-elite missions.
While these methodological issues are important, a less readily apparent set of problems surfaces because of these rankings—the unintended consequences that precise quantitative evaluations produce.
Rankings are designed to be reflections of existing school characteristics and quality, to report—in a disinterested and objective fashion—how schools compare to each other on selected criteria. However, we’ve found that rankings actually shape the hierarchy of the institutions they’re trying to assess. Over time, schools change their activities and policies to optimize their standings on the criteria laid out by U.S. News & World Report and other rankers.
Reactions to Rankings
As we know, people react to being measured. Those who run colleges and universities react with concerted efforts to improve on the criteria that determine their relative position. Consequently, rankings stop being neutral measures of school quality and start transforming the characteristics of the schools they evaluate.
“Almost everything we do now is prefaced by, ‘How will this affect our ranking?’” one law school dean told us. Many administrators characterize rankings as an omnipresent concern, saying they feel compelled to change how they manage in order to maintain or improve their rank.
This pressure to scrutinize and improve one’s rank has produced significant effects on higher education. Rankings influence who is admitted to which schools, how scholarship money is allocated, which programs are well-funded and which aren’t, as well as other serious forms of redistribution of both resources and opportunities. Rankings are also used to fire and reward administrators, allocate budgets across universities, and may even challenge the mission of schools whose goals aren’t captured in rankings factors.
Our research on law schools provides clear examples of how rankings can change educational practice. Law schools, for instance, have dramatically increased spending on advertising themselves to those who may fill out reputational surveys for U.S. News & World Report. This means many schools spend hundreds of thousands of dollars on glossy brochures and publications that are mass-mailed only to administrators and faculty at other schools.
In interviews, administrators bemoaned the fact that this money would be better spent on the development of new programs, faculty salaries, student scholarships, or tuition reductions, but most felt they couldn’t risk the drop in rankings that might result from less marketing. Paradoxically, most also acknowledge that they rarely read the materials others send to them.
“Every time I go to my mailbox I get another mailing from a law school telling me how great they are. I don’t even open them. I just throw them right in the recycling pile,” one dean said.
Many law schools have also increased spending on scholarships for students with high test scores while decreasing spending on need-based scholarships. The driving force behind this change is the increased emphasis on the average LSAT score of their incoming classes, a prominent ranking criterion. This criterion in particular has analogous effects at colleges and other professional schools.
The work performed in schools has also changed in relation to rankings. Those in admissions, career services, and other administrative offices report they must now focus on the “bottom line” numbers more so than in the past. This changes job requirements, reduces professional autonomy, and often shifts the content of job routines.
For example, according to career services personnel, they now spend much more time and energy tracking down the job status of every last graduate so as to optimize their job placement numbers. This work comes at the expense of career counseling, contacting employers, or other forms of mentoring that were once central to their work.
These administrators also report occasional conflicts of interest when deciding whether to advise students to take the first job offered to them, rather than wait for a better one, in order to ensure a student counts as “employed” when the program’s statistics are due. This shift in the focus of their work is also stressful because those who fail to improve placement figures risk losing their jobs.
The most controversial tactic adopted by schools is to “game” rankings. Gaming strategies, the topic of gossip and occasional exposés, manipulate the numbers used to construct rankings in ways that serve little or no educational purpose.
New graduates of one college, for example, were offered $5 sandwich vouchers in exchange for $1 donations to their school as a means of boosting their average alumni giving rate. Some schools encourage, or even require, faculty members to take spring leaves to optimize student-faculty ratios, which are calculated in the fall. Still others temporarily move admitted students with lower test scores to part-time or night programs to improve selectivity scores.
Rankings make extremely precise distinctions among the schools they judge. Just one-tenth of a point difference in a school’s score on one criterion can generate changes in overall rank or determine in which “tier” a school falls. Schools understand this phenomenon is an artifact of measurement, but they also know these apparent differences are real in their consequences because important constituents like students or legislators will make decisions based on these outcomes.
So it’s not surprising schools feel strong pressure to maximize their rankings. Their fears that rankings will become self-fulfilling prophecies are hardly paranoid; thus, attempting to boost rankings may not be as unprincipled or self-serving as critics charge.
A Rankings Evolution?
Educators are taking steps to rein in the power of the rankings. In 2007, 12 colleges boycotted U.S. News & World Report by refusing to complete the magazine’s reputational survey and 19 elite liberal arts schools pledged not to use the magazine’s rankings in promotional materials.
While these actions focus attention on the fact that these rankings are limited in what they measure and may encourage improvements in methodology, administrators see little chance U.S. News & World Report or other media will stop producing what’s clearly a popular and lucrative enterprise. “I think they are a reality,” one dean said. “I can’t imagine life without them now.”
This leaves the question of what can be done to limit the harmful effects of rankings while still providing useful information about schools to broad audiences.
Many educators lobby for improved methods, but such a strategy faces political challenges that will be hard to meet. There are many viable, if competing, definitions of educational quality. Moreover, any changes in methods will be controversial because—given the zero-sum nature of rankings—they will always hurt some schools as they benefit others. Thus, broad agreement about changes will be hard to come by.
More importantly, methodological changes won’t address the unintended consequences that result from such a public and relative evaluation. Effective changes will require more than methodological tinkering.
Creating alternative rankings might be a place to start. Business schools are ranked by a half-dozen or so prominent media and enjoy greater autonomy than colleges and law schools, over which U.S. News & World Report retains almost monopolistic power. Multiple rankings create more ambiguity about standing, make random oscillations in a single ranking less meaningful, and allow business schools to craft their reputations around the ranking source they feel best suits their school’s philosophy. While it may seem worrisome to advocate more quantification as a remedy for the problems rankings have created, these outcomes suggest law schools and undergraduate institutions would benefit from a wider array of rankings.
One challenge in implementing this approach is that U.S. News & World Report enjoys huge advantages from having captured the rankings market, so it would be difficult for accrediting organizations or schools themselves to create consensual rankings with broad legitimacy. However, professional organizations can encourage other magazines or news sources to create alternative rankings, they can fund research and initiatives directed at developing new models for evaluating schools, and they can consider developing new systems of classifying and accrediting schools with different missions and interests.
Another useful response would be to develop a cheap and accessible source by which prospective students or employers could manipulate the criteria and weights of ranking components to allow individualized assessments of schools. As many critics of the rankings have pointed out, the weights assigned to the criteria play a significant role in determining overall rank and are assigned arbitrarily: there’s no good reason, for instance, to make reputation scores twice as influential as school selectivity. However, Jeffrey Stake, a law professor at Indiana University, has developed The Law School Ranking Game, which allows users to assign weights to criteria according to their own preferences, resulting in a list of schools that will best suit them as individuals.
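A tool along these lines can be sketched in a few lines of code: normalize each criterion, compute a weighted sum, and re-sort whenever the user changes the weights. The sketch below is only an illustration of the idea, not Stake’s actual tool; all school names, figures, and weights are hypothetical, and every criterion is treated as higher-is-better for simplicity.

```python
# A minimal sketch of a user-weighted ranking tool. All data and
# weights below are hypothetical illustrations, not actual figures.

def rank_schools(schools, weights):
    """Return schools sorted by a weighted sum of normalized criteria."""
    # Normalize each criterion to [0, 1] so weights compare like with like.
    lo_hi = {
        c: (min(s[c] for s in schools), max(s[c] for s in schools))
        for c in weights
    }

    def score(s):
        total = 0.0
        for c, w in weights.items():
            lo, hi = lo_hi[c]
            normalized = (s[c] - lo) / (hi - lo) if hi > lo else 0.0
            total += w * normalized
        return total

    return sorted(schools, key=score, reverse=True)

# Hypothetical data: three schools, three higher-is-better criteria.
schools = [
    {"name": "Alpha", "reputation": 4.1, "selectivity": 0.20, "placement": 0.93},
    {"name": "Beta",  "reputation": 3.4, "selectivity": 0.35, "placement": 0.97},
    {"name": "Gamma", "reputation": 3.9, "selectivity": 0.28, "placement": 0.90},
]

# One user may weight reputation heavily...
by_reputation = rank_schools(
    schools, {"reputation": 0.6, "selectivity": 0.2, "placement": 0.2})
# ...another may care most about job placement.
by_placement = rank_schools(
    schools, {"reputation": 0.2, "selectivity": 0.2, "placement": 0.6})

print([s["name"] for s in by_reputation])  # ['Alpha', 'Gamma', 'Beta']
print([s["name"] for s in by_placement])   # ['Beta', 'Alpha', 'Gamma']
```

The same three schools come out in different orders under different weights, which is exactly the point such a tool makes vivid: overall rank is an artifact of the weighting scheme, not a fixed fact about the schools.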
Decision guides like this would be even more effective if sponsored by accrediting bodies, foundations, or other professional organizations. This institutional backing would remove any doubts about the objectivity of the guide while also helping it reach a wider audience. Encouraging students to provide their own weights would personalize the information to fit their interests and might help them see how vulnerable rankings are to small changes in criteria. Moreover, such a tool could be an advertising boon—one that provides prospective students with an algorithm that best approximates that school’s own particular strengths and missions. This would allow schools an opportunity to define themselves and their missions while still providing students with comparative information.
A final strategy for mitigating the negative effects of rankings would simply involve doing more to educate consumers about the rankings’ limitations. One way to get prospective students to take small differences less seriously is to use a public format to explain more clearly just what these differences mean.
Understanding the broad impact of different modes of evaluation is a pressing problem. Pressures for accountability, transparency, and productivity have increased dramatically in many institutional fields around the world. However, the transparency that quantification promises is only apparent. Numbers powerfully direct attention in ways that obscure as well as illuminate. The biases and assumptions embedded in measurement regimes are hard to disclose and we often take their authority at face value.
Recommended Resources
Wendy Nelson Espeland and Michael Sauder. “Rankings and Reactivity: How Public Measures Recreate Social Worlds,” American Journal of Sociology (2007) 113 (1): 1–40. Discusses the process by which rankings alter the behavior of schools and their administrators.
Michèle Lamont. How Professors Think: Inside the Curious World of Academic Judgment (Harvard, 2009). An in-depth study of how experts in the social sciences and humanities define excellence in their evaluations of fellowships and grants.
Theodore M. Porter. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (Princeton, 1995). A compelling history of the development of quantification and objectivity during the 19th and 20th centuries.
Mitchell L. Stevens. Creating a Class: College Admissions and the Education of Elites (Harvard, 2007). A description and analysis of the admissions process at an elite college, including a discussion of how the U.S. News & World Report rankings influence the decisions of both administrators and students.
Marilyn Strathern. Audit Cultures: Anthropological Studies in Accountability, Ethics and the Academy (Routledge, 2000). Twelve contributions address the causes and consequences of the rise of accountability measures in higher education.