Ranking the College Rankings: A User’s (and Critic’s) Guide



What is the 97th “best” college in America? Is it really better than the 108th? Or worse than the 73rd?

On their face, these are preposterous questions. Yet dozens of respectable media outlets and websites devote considerable effort to answering them. That, after all, is the raison d’être of so-called “best-college” rankings: to present an ordinal, hierarchical list of literally hundreds of colleges and universities, graded from best to . . . well, presumably, worst.

It seems to me that turnabout is fair play. If colleges can be ranked, surely college rankings can be ranked too. I have chosen five popular annual rankings, all of which purport to inform their readers which are the overall “best” or “top” colleges in America: namely, those published by Forbes, Niche, U.S. News & World Report (USN), the Wall Street Journal (WSJ), and Washington Monthly (WM). There are many others, but these five seem to dominate the field of comprehensive college rankings.

To rank these five publications, I assign letter grades based on five criteria, compute each publication’s grade-point average, and compare the resulting GPAs. I draw my five criteria from the “Berlin Principles on Ranking of Higher Education Institutions,” adopted in 2006 by a group of 47 experts on higher education drawn from 17 countries.

1. Purpose: According to the Principles, those who rank schools should “be clear about their purpose and their target group.” I take this to mean that a ranking should articulate a particular vision of higher education that can inform the ranker’s conception — and the reader’s understanding — of what makes one school “better” than another.

None of the rankings surveyed here provide anything approaching a crystalline answer to that question. Forbes and WM (both receiving a B grade from me) offer the clearest guidance. Forbes seeks to reward schools that “graduate high-earners and propel students to become successful entrepreneurs and influential leaders in their fields.” WM describes its goal as evaluating colleges “based on what they do for the country.” The WSJ provides a somewhat vaguer statement, characterizing its undertaking as one that “puts student success and learning at its heart.” It receives a C. By contrast, USN (grade D) says virtually nothing about the overall role of higher education in society, other than a passing reference to “academic quality.” And Niche (grade D) tosses over 100 variables into an omnibus formula designed to “capture the full experience” of going to college, including “academics, value, diversity, and, yes, even party scene.”

2. Institutional diversity: In the words of the Berlin Principles, rankings should “recognize the diversity of institutions and take the different missions and goals of institutions into account.”

The five publications all make a nod in this direction by either offering some specialized listings by institutional type or permitting readers to construct their own lists. But their attention-grabbing “best-college” rankings all violate the diversity principle by lumping hundreds of wonderfully diverse institutions into broad, one-size-fits-all categories. At least USN and WM (grade B) separate national universities, liberal arts colleges, and regional schools. By contrast, Niche, WSJ, and Forbes (grade C) simply toss over 600 institutions into a single ordinally ranked grab-bag.

3. Focus on outcomes: The Berlin consortium counsels that rankings should measure outcomes (such as graduation rates) — especially “value-added” outcomes (such as graduation rate “performance”) — in preference to inputs (such as spending or “student selectivity”).

The ranking publications surveyed here have all made an effort in recent years to focus less on inputs and more on outcomes. All of the measures used by Forbes (grade B) are outcome-related, though true value-added measures receive only about a quarter of the overall weight in its formula. Roughly two-thirds of the variables used by WSJ and WM (grade C) focus on outcomes; about a quarter of the weightings go to value-added measures. The Niche and USN formulas (grade D) give only about 45% weight to outcome measures and only about 15% weight to value-added measures.

4. Transparency of methodology: The Berlin Principles urge rankers to be “transparent regarding the methodology used for creating the rankings.” To their credit, each of the rankings surveyed here provides its readers with a methodology statement that identifies the variables used in their formulas, the sources of their data, and the weightings assigned to those variables.

But there are problems. Most rankings fail to disclose the algorithms they use to construct certain “performance” and “index” metrics, making it difficult for scholars to evaluate the plausibility of their resulting figures. And none of the publications offer anything better than vapid explanations for their choice of variables and the numerical weights assigned to those variables. (All receive C grades.)

5. Reliability of data: Rankings, said the Berlin conferees, should “use audited and verifiable data whenever possible” and “include data that are collected with proper procedures for scientific data collection.” All five rankings draw from two primary sources of information: statistical data reported by the colleges and the results of surveys administered to groups such as students, graduates, or educators.

Both are troublesome. Aside from financial data, most self-reported academic data are not subject to any independent audit or other means of verification. And rankers typically provide very sparse documentation of the steps taken to assure that the sample of survey respondents is large enough and representative of the relevant population.

WM (grade B) uses only data reported by colleges to the U.S. Education Department — albeit mostly unaudited — and data generated by reputable third parties. Forbes, Niche, and WSJ (grade C) rely somewhat on questionably representative surveys of students and alumni, as well as unverified, self-reported data. USN (grade D) computes most of its ranking scores from answers to a proprietary statistical questionnaire that has produced repeated accusations of misreporting throughout its history. It also relies heavily on a much-criticized survey that asks academic administrators to rate hundreds of their competitors on overall “academic quality.”

Summarizing my grades and computing the GPAs (based on — admittedly arbitrary — equal weightings) yields the following results:

Criterion                     Forbes   Niche   USN    WSJ    WM
1. Purpose                    B        D       D      C      B
2. Institutional Diversity    C        C       B      C      B
3. Outcomes Focus             B        D       D      C      C
4. Transparency               C        C       C      C      C
5. Data Reliability           C        C       D      C      B
GPA                           2.40     1.60    1.60   2.00   2.60
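
For readers who want to check the arithmetic, here is a minimal sketch of the GPA calculation in Python, assuming the standard 4.0 scale (A = 4, B = 3, C = 2, D = 1, F = 0) and the equal weightings noted above; the letter grades are copied straight from the table.

```python
# Minimal sketch of the grading arithmetic, assuming a standard 4.0 scale
# and equal weight for all five criteria.
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

# Letter grades per publication, in criterion order:
# purpose, institutional diversity, outcomes focus, transparency, data reliability
grades = {
    "Washington Monthly":  ["B", "B", "C", "C", "B"],
    "Forbes":              ["B", "C", "B", "C", "C"],
    "Wall Street Journal": ["C", "C", "C", "C", "C"],
    "Niche":               ["D", "C", "D", "C", "C"],
    "U.S. News":           ["D", "B", "D", "C", "D"],
}

def gpa(letters):
    """Unweighted mean of the grade points (i.e., equal weighting of criteria)."""
    return sum(GRADE_POINTS[g] for g in letters) / len(letters)

# Rank the publications from highest GPA to lowest.
for name, letters in sorted(grades.items(), key=lambda kv: gpa(kv[1]), reverse=True):
    print(f"{name}: {gpa(letters):.2f}")
```

Run as written, this reproduces the GPA row of the table: Washington Monthly 2.60, Forbes 2.40, WSJ 2.00, and Niche and U.S. News tied at 1.60.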

Ranked by GPA, then, my “best-rankings” list is: 1. Washington Monthly; 2. Forbes; 3. Wall Street Journal; 4. Niche and U.S. News (tie).

But, before you decide which of these rankings to use, it’s only fair to ask: Would you want to choose a college by relying on any ranking with these GPAs?




