Even an Imperfect Metrics Regime Has Value

August 12, 2015


Rather than giving up on measuring impact, let’s go a bit slow until we’ve worked out some of the bugs. (Photo: Patty O’Hearn Kickham/Flickr/CC BY 2.0)


This article by Jane Tinkler originally appeared on the LSE Impact of Social Sciences blog as “Rather than narrow our definition of impact, we should use metrics to explore richness and diversity of outcomes” and is reposted under the Creative Commons license (CC BY 3.0).
For the full report, supplementary materials, and further reading, visit the LSE’s HEFCE metrics section.

If you consider that nearly 7,000 impact case studies were recently submitted to the REF and (I’m guessing) every single one of them contained some kind of indicator to evidence their impact claims, you might expect the academic community to be more enthusiastic about the use of impact metrics than about other types of quantitative indicators. But during our consultation for the HEFCE Metric Tide report, we found just as many concerns about the use of impact metrics as about those other types.

One of the most common concerns that colleagues discussed with us is that impact metrics focus on what is measurable at the expense of what is important. But, as the report highlights in relation to excellence, it’s more than this. When you design a metric for impact, you are explicitly constructing a definition of what impact is, and when you go on to use that metric, you are locking in that definition. What we know is that impact is multi-dimensional, the routes by which impact occurs differ across disciplines and sectors, and impact changes over time. We would need a really broad range of metrics to usefully show this variety.

Source: Wilsdon, J., et al. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. DOI: 10.13140/RG.2.1.4929.1363


And it seems that impact case study authors did use a wide variety of impact indicators. Evidence from the Digital Science and King’s College London evaluation of REF impact case studies found that, across the nearly 7,000 impact case studies, authors used almost as many different metrics to evidence the impact of their research. They did so because they were able to choose the ones that seemed most appropriate to evidence their research and the impact it had created. (A few examples are listed in Figure 1.) And that is another key concern with the use of impact metrics in the next REF. If HEFCE were to pick even a ‘basket’ of metrics for the next REF, would there be enough overlap between disciplines, sectors and so on for the chosen metrics to describe the impact outcomes of research effectively and allow them to be compared? The Digital Science and King’s College London report says not:

The quantitative evidence supporting claims for impact was diverse and inconsistent, suggesting that the development of robust impact metrics is unlikely … impact indicators are not sufficiently developed and tested to be used to make funding decisions.

So for the impact component of the REF, the Metric Tide report recommended against using quantitative indicators in place of narrative impact case studies, because this is not currently feasible. There would be a danger that, by doing this, the concept of impact might narrow and become defined too specifically by the easy availability of indicators for some types of impact and not for others. For an exercise like the REF, where HEIs are competing for funds, defining impact through quantitative indicators is likely to mean universities ‘play safe’ about which impact stories have the greatest currency and therefore should be submitted. This would mean showing less of the diversity and richness of the impacts that we create from our research.

Another reason not to encourage any funder to specify a set of impact metrics at a particular point in time is the growth in the number of tools that can provide some indication of impact. Individual academics are collecting more information about impact-relevant activities and their effects, and universities are making better use of the information they and others already hold to do the same. The recommendations in the Metric Tide report around improving research infrastructure and making greater use of identifiers such as ORCID were made in the hope that this will become easier. It would be a shame if we were not able to make best use of a new tool just because it was not on some specified list.

But that is not to say impact metrics are not useful or needed. We are in the fairly early days of understanding the ways in which impact happens, and both qualitative and quantitative indicators can be a source of learning about how impact works in each of our disciplines, locations or sectors. So we should use the dataset of impact case studies as a learning tool about the ways in which successful impact was created, using what methods and with what effects. Although many impact metrics as yet give only partial information, for me some information is always better than none.


Jane Tinkler is a research fellow at the London School of Economics.

