World University Rankings: The Haves Have It

October 2, 2014

How sweet it is to be No. 1: The Millikan Library at the California Institute of Technology, AKA Caltech.

From the “best beaches” to the “best slice of pizza” to the best hospital for cardiac surgery, we are inundated with a seemingly never-ending series of reports ranking everything that can be ranked, and even things that probably shouldn’t be.

Over the last few decades, schools and universities in many parts of the world have become targets of this ranking mania. Some of these are official, state-sponsored rankings, such as those used in various performance-based funding formulas or, in the US, President Barack Obama’s as-yet-undefined plan to rank American universities. Others are unofficial, such as the World University Rankings just released by the Times Higher Education (THE) magazine, or the Shanghai Rankings and the US News & World Report ranking of American universities, both also released recently.


This article by Steven C. Ward originally appeared at The Conversation, a Social Science Space partner site, under the title “What do world university rankings actually mean?”

A real measure of quality?

But do all these rankings actually tell us anything substantive about the relative quality of universities? Should US or UK universities be concerned with their slippage in the latest THE rankings? There are 74 US universities on the new list – the most from any country – but this was down from 77 last year, and 60 percent of American institutions lost ground in the rankings. The UK, in second place, now has 29 universities in the top 200, two fewer than last year.

Do such rankings separate the chaff from the wheat, or the pure from the dangerous, or are they distractions and distortions to achieving a real equality of quality? Are they incentivizing universities to perform better or actually inviting cynical “gaming” of the system?

Natural pecking order

Despite the various methodological caveats contained in the small print, in their more positivist moments, the agencies and magazines who produce university rankings contend that rankings do in fact unearth and quantify a naturally occurring pecking order. There is never much difference at the very top. In this year’s 2014-15 ranking, California Institute of Technology retained its top spot, followed by Harvard University in second and Oxford University in third.

In this view, university rankings are merely rational social science in action and, when done properly following sound methodological procedures, they constitute a valid and value-free means of quantifying amorphous things called “excellence” or “world class.”

Some proponents extend this further by arguing that rankings also serve moral and political purposes, particularly in these neoliberal times of austerity and tight governmental budgets. They shame the slackers and arouse the complacent into “upping their game.”

In the end, ranking universities creates a “consumer sovereignty” where finicky students and in-demand professors use rankings to carefully determine which university they will attend or work at. All this creates quasi-market conditions for universities, fueling competition and creative destruction. At the same time, and in terms that would have made British sociologist Herbert Spencer and the social Darwinists proud, it sheds the weaklings and hones the fittest.

Politics behind the metrics

But the metrics used in these rankings tend to hide all the “value work” that takes place underneath them and the political purposes they often serve. They also tend to greatly overvalue the “haves” at the expense of the “have nots.”

Rankings often conceal a purposeful politics beneath a veneer of objectivity. Just as GDP uses selective measures to gauge the underlying health of an economy, and acquires the status of fact through repeated use, university rankings use items selected from a sea of possibilities to establish a higher education reality. The THE rankings use 13 indicators, including research income, citation impact and the percentage of international students. Deliberately or not, they steer institutions toward particular political outcomes.
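To make the point concrete, a composite ranking score is typically just a weighted sum of normalized indicator values, so which indicators are selected and how they are weighted largely determines who comes out on top. The sketch below is a minimal illustration using hypothetical indicator names, weights and data (not THE’s actual 13-indicator methodology): the same three institutions change order when only the weights shift.

```python
# Minimal sketch of how a composite ranking score works: a weighted sum
# of normalized indicator values. Indicator names, weights and data are
# hypothetical and do not reproduce THE's actual methodology.

universities = {
    "University A": {"citation_impact": 95, "research_income": 60, "intl_students": 30},
    "University B": {"citation_impact": 80, "research_income": 90, "intl_students": 55},
    "University C": {"citation_impact": 70, "research_income": 75, "intl_students": 90},
}

def composite_scores(data, weights):
    """Return each institution's weighted composite score on a 0-100 scale."""
    return {
        name: sum(weights[k] * indicators[k] for k in weights)
        for name, indicators in data.items()
    }

# Two plausible weighting schemes; each set of weights sums to 1.
research_heavy = {"citation_impact": 0.6, "research_income": 0.3, "intl_students": 0.1}
outlook_heavy = {"citation_impact": 0.3, "research_income": 0.3, "intl_students": 0.4}

for label, weights in (("research-heavy", research_heavy), ("outlook-heavy", outlook_heavy)):
    ranked = sorted(composite_scores(universities, weights).items(),
                    key=lambda item: item[1], reverse=True)
    print(label, "->", [name for name, _ in ranked])
```

Under the research-heavy weighting the hypothetical University B leads; shift weight toward international outlook and University C takes the top spot, with the underlying data untouched.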


Rankings do have real effects on what universities value and how they operate. Their increased popularity also signals a profound historical change in our attitude toward equality and quality in public institutions. No longer can public institutions be “good enough” or “roughly equal.” Under the ethos of competition, some now must necessarily be better than others. There must always be winners who go up the rankings and losers who go down in order to stoke the fires of competition.

Pursuit of excellence

Certainly the case can be made that all our public institutions should be of the highest quality, and, dare I say, “excellent.” So the question must be asked: does the generation of competition in public enterprises actually make them better?

Rather than encouraging excellence, they may actually extinguish it, as universities turn from doing their job to “gaming” the system and “juicing” the results. An ongoing testing scandal in Atlanta is a good example of how this can happen lower down the educational chain. In higher education, reports and research have examined the lengths colleges will go to in order to boost their rankings.

So as administrators around the world now anxiously pore over the THE rankings to see if their university went up or down, it is perhaps important to keep in mind that rankings may not tell us much about actual quality, except in blunt and often obvious ways: the haves will always have. But they do say a great deal about the political direction of current global higher education reform.


Steven C. Ward is professor of sociology at Western Connecticut State University. He is the author of three books: Reconfiguring Truth: Postmodernism, Science Studies and the Search for a New Model of Knowledge; Modernizing the Mind: Psychological Knowledge and the Remaking of Society; and Neoliberalism and the Global Restructuring of Knowledge and Education. His current research examines the link between knowledge society policies and global university and school reform efforts.
