One Size Does Not Fit All

February 4, 2016

Unfortunately, successfully hacking through the Gordian knot of today’s wicked problems is just as likely to be mythological as it was in Alexander’s time. (Image: Detail from a fresco at Bologna’s Palazzo Pepoli Campogrande)
In his Thinking, Fast and Slow, Daniel Kahneman made the classic observation that “when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.”

In a world teeming both with wicked problems – the concentrated form of the difficult question – and increasing reams of data, it’s not unnatural, then, that we default to easily derived answers, conveniently clad in quantitative rigor, to address those wicked problems. Arguably, the most pernicious concerns arise in policy and academe, where the McNamara fallacy, the idea that what can’t be counted doesn’t count, finds a congenial home. It reflects, as the learned wags at The Duck of Minerva blog once noted in a different context, “quantitative and experimental researchers’ claims to epistemological (or at least disciplinary) hegemony.”

The ongoing review of Britain’s Research Excellence Framework by Lord Nicholas Stern, for example, suggests it expects a heavier dose of the quantitative. Its ‘terms of reference’ call for “focusing on a simpler, lighter-touch method of research assessment, that more effectively uses data and metrics while retaining the benefits of peer review.” (In response, James Wilsdon has written that “metrics provide few easy wins.”)

Sometimes, though, as Ziyad Marar has noted, we don’t even get to the right questions, much less solve for the best answers. Marar, the global publishing director for SAGE, in an essay for The Sociological Review last fall, cited the price to be paid if evaluation prizes the McNamara standard: “… parts of social science (and the humanities) will continue to be under-valued, with the consequence that important, if messy, problems could be neglected in favour of the more tractable.”

The repercussions will be felt from policy to poetry. Rowan Williams, the former Archbishop of Canterbury, in a more spirited than spiritual defense of “actual human beings” in the academic machine, railed against the “new barbarity” in language that this numbers-first approach engenders.

This month’s print publication of The Metric Tide – appearing in hard copy six months after its online debut – offers a good vantage point from which to review that report’s key takeaway: assessing success, especially in a system as complex as higher education, requires more than mere measurement. Human judgment – think of, but don’t be bound by, peer review – has a seat at the table alongside the mute output of scientometricians.

The Independent Review of the Role of Metrics in Research Assessment and Management, the specially formed panel chaired by the University of Sheffield’s Wilsdon and tasked with producing The Metric Tide, focused on higher education as a whole. It was, and is, inextricably bound up in the REF, which is as much about funding decisions as it is about research or excellence.

What about research? And research methodologies, and in particular social and behavioral research? How do we assess the results of social science research – and how is social science itself assessed with, and without, numbers attached? The Metric Tide, for instance, found the correlation between what could be mapped out with h-indices and what could be judged valuable by human beings was, as Marar put it, “particularly poor” in the humanities and social sciences.
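For readers unfamiliar with the metric: a scholar has an h-index of h when h of their papers have each been cited at least h times. A minimal sketch in Python, with citation counts invented purely for illustration, shows both how the number is computed and how much of a research record it compresses away:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Two invented publication records with very different shapes:
print(h_index([40, 18, 12, 9, 3, 2]))  # 4 -- four papers cited at least 4 times
print(h_index([10, 9, 8, 7, 6, 5]))    # 5 -- a flatter record, a similar number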

“The explosion of technological innovation, data sources, open movements and new modes of collaboration has changed the scholarly landscape in many ways, intended and unintended,” Marar wrote. “In many, many cases new research modes are thriving (especially those with a quantitative bent). The question is whether fields with a more interpretive mode are losing out in prestige and funding as a result of this shift. This technology led approach to research evaluation and impact has potentially created new norms which are being read across too uncritically into fields that require interpretation and nuance, possibly rewarding narrower and more tractable research agendas.”

And so, he adds, “If the digital republic of letters has squeezed the space for more imaginative, unbounded modes of inquiry, we need a way to re-balance the argument.”

Various data points suggest this is less an exercise in overcoming inertia and more a case of sustaining momentum:

  • HEFCE, for example, has commissioned a qualitative study on interdisciplinary research in the UK, with a specific focus on what’s both new and innovative (not always the same thing).
  • The crew behind The Metric Tide helped found ResponsibleMetrics.org, dedicated “to ensur[ing] that indicators and underlying data infrastructure develop in ways that support the diverse qualities and impacts of research.” The site’s first Bad Metrics Award is scheduled for April.
  • In 2014, even before The Metric Tide arrived, the British Academy and the New York-based Social Science Research Council, along with publishers, librarians and academics, met to address exactly this question, discussing the implications of the ‘digital culture’ for research.

As the Higher Education Funding Council for England, which oversaw the panel that produced The Metric Tide, noted when the report first appeared, “quantitative evaluation should support – but not supplant – qualitative, expert assessment.”

This is an echo of the hoary quantitative vs. qualitative conflicts so dear to the last generation of social scientists. As Faulkner wrote, the past is never dead; it’s not even past. But assuming we can banish the spectre of that often take-no-prisoners dispute, and agree that there is a home for both responsible metrics and condign judgement in social science research, the trick lies in finding the right balance. Or, more accurately, balances.

“One size,” reads a portion of The Metric Tide, “is unlikely to fit all: a mature research system needs a variable geometry of expert judgement, quantitative and qualitative indicators. Research assessment needs to be undertaken with due regard for context and disciplinary diversity. Academic quality is highly context-specific, and it is sensible to think in terms of research qualities, rather than striving for a single definition or measure of quality.”

It is a process that in itself requires a human touch.

“We look forward to seeing how the recommendations made in The Metric Tide play out in the coming years, both here in the UK and in academia worldwide, and to working together towards a future where metrics are used intelligently as part of a much wider scholarly agenda,” wrote Stacy Konkiel, who had a key role in birthing the altmetrics-oriented nonprofit Impactstory, shortly after The Metric Tide appeared. (She dubbed it “an excellent read,” by the way.)

This is the time, the “critical juncture,” as Marar said, to start making those calls. “It’s an intoxicating time to be involved in scholarly communication,” Jason Priem and Heather Piwowar, the co-founders of Impactstory, wrote in the foreword to last year’s Meaningful Metrics. “We’ve begun to see the profound effect of the Web here, but we’re just at the beginning. Scholarship is on the brink of a Cambrian explosion, a breakneck flourishing of new scholarly products, norms, and audiences. In this new world, research metrics can be adaptive, subtle, multidimensional, responsible. We can leave the fatuous, ignorant use of Impact Factors and other misapplied metrics behind us.”

In short, where some see the limits of measurement, they see the frontiers.

This doesn’t mean no numbers, but the right numbers, analyzed by the right methods. “The social sciences are undergoing a dramatic transformation from studying problems to solving them,” Gary King, director of Harvard’s Institute for Quantitative Social Science, wrote in an article describing his organization’s own engagement with this debate. Again, the accent is on balance, with new and innovative methods creating not veneers but alloys. Addressing the quantitative/qualitative divide in that article, titled “Restructuring the Social Sciences,” King writes:

Fortunately, social scientists from both traditions are working together more often than ever before, because many of the new data sources meaningfully represent the focus and interests of both groups. The information collected by qualitative researchers, in the form of large quantities of field notes, video, audio, unstructured text, and many other sources, is now being recognized as valuable and actionable data sources for which new quantitative approaches are being developed and can be applied. At the same time, quantitative researchers are realizing that their approaches can be viewed or adapted to assist, rather than replace, the deep knowledge of qualitative researchers, and they are taking up the challenge of adding value to these additional richer data types.
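King’s point about qualitative material becoming actionable data can be made concrete with a toy sketch. The snippet below is purely illustrative, not anything drawn from King’s article: the field notes are invented, and a bare term-frequency count is a deliberately crude stand-in for the far richer techniques he describes.

```python
from collections import Counter
import re

# Hypothetical interview excerpts -- stand-ins for qualitative field notes.
field_notes = [
    "Participants described funding pressure as constant and demoralizing.",
    "Several participants linked funding metrics to narrower research agendas.",
    "One participant called peer review slow but fair.",
]

def term_frequencies(notes):
    """A minimal quantitative pass over unstructured text:
    lowercase, tokenize, drop very short tokens, count."""
    tokens = re.findall(r"[a-z']+", " ".join(notes).lower())
    return Counter(t for t in tokens if len(t) > 3)

print(term_frequencies(field_notes).most_common(5))
```

Even this crude pass turns prose into something countable and comparable; the interpretive work of deciding what the counts mean, the part King insists on preserving, remains with the researcher.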

There is value for money. But it must be earned.

In that vein, academic publishers are among those grappling with – and solving – these wicked challenges. SAGE Publishing, for example, supports the academic community both through innovative products centered on social research and through direct involvement in the wider debates that ultimately consume all stakeholders in the scholarly enterprise. In addition to traditional printed materials like its groundbreaking ‘little green books’ and ‘little blue books,’ which taught generations of social and behavioral scientists how to actually do quantitative and then qualitative research, the company’s current SAGE Research Methods suite uses real-world case studies and video to show a new generation how to apply new methods to unstick those thorny questions.

Beyond that, the company backs efforts to make peer review more transparent and understandable through investments in projects like Publons and Kudos; it supports organizations, such as the Campaign for Social Science and Sense About Science, that make social and behavioral science’s case to the public and policymakers (and publishes their output, like The Metric Tide and The Business of People); and the company and its founder endow efforts such as the Social Science Research Council’s SAGE Fund for Research Methods to, as the SSRC’s president Ira Katznelson said, “improve and deepen existing methods, help embark on new possibilities, especially those made possible by the digital revolution, and invent new tools to advance the disciplined curiosity of the social sciences.”

(SAGE Publishing is the parent of Social Science Space.)

