With the REF, We Can Evaluate the Impact of Impact
For the first time, the “impact” of academic research on the wider world has been included in a large-scale assessment of the quality of university research, which has just been published. One-fifth of the overall score awarded to each university research department that submitted academics for assessment in the Research Excellence Framework (REF) was based on the impact of their research. It is thought that this weighting will increase to at least a quarter for the next round of assessment in 2020.
Impact could cover the socio-economic or cultural effects of research, as well as effects on quality of life. To count, it had to be felt beyond the world of academia, to have occurred between 2008 and 2013, and to be linked to “internationally recognised” research.
The results showed that, across all the impact submitted for assessment, 44 percent was rated as “outstanding”, or four-star, and another 40 percent was judged “very considerable”, or three-star. These judgements were made by panels of academics and “end users” of research from across business, the public sector and charities.
Anecdotally, the impact aspect of the REF exercise seems to have run fairly smoothly within universities and across the assessment panels. Yet there are issues going forward: whether assessing impact in this way is sustainable, whether its unintended consequences are desirable, and whether it is ultimately beneficial.
Some subjects easier
Some of the subjects or “units of assessment” in the REF were more easily geared up for “impactfulness” than others. For example, one would expect education and clinical medicine to do well with little effort, while music and philosophy, say, might need a broader definition in order to score as highly.
This is not to say that areas like music and philosophy do not have impact – they clearly do – but the definition of impact has had to be broadened so that the funding councils can use the same assessment criteria across all disciplines.
Yet this may have unintended negative consequences for the more obviously impactful subjects: as they come to be seen as “practical” subjects, they may be pushed to the margins of theoretical academia. The greater the practical expectations placed on a subject, the fewer incentives academics have to develop theory.
Because impact is defined to exclude effects within the academic world itself, disciplines such as pure mathematics (now included within “mathematical sciences”) are in danger of being amalgamated and merged in order to make them more assessable. This is a classic problem in educational assessment generally: we are in danger of valuing most what can most easily be measured.
Dangers of short-term thinking
On a practical level, there is an issue with limiting impact to a given period and the encouragement this gives to short-termism. Confining assessment to a fixed window is relatively easy and sensible for publications and research-income metrics, but it is not so simple for impact.
For REF 2014, research impact could only be claimed if it occurred during the period 2008-13, but the research that gave rise to it could go back more than a decade before that. Quite properly, that underpinning research must itself have been of high quality, but it remains unclear how subsequent REFs, starting in 2020, will deal with overlap and with institutions claiming slightly different or extended impacts for the same underpinning research.
Game-playing
Impact within REF 2014 was assessed through case studies. Broadly, one case study was required for every ten staff submitted, plus one: a department submitting 34 staff needed four case studies, for example, while a department submitting 76 staff needed nine.
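For readers who want the arithmetic spelled out, below is a minimal sketch of that requirement in Python. The banding is inferred from the rule of thumb and the two examples above (two case studies for the smallest submissions, then one more for each further ten staff); the exact thresholds are my assumption, not a quotation of the official REF guidance.

```python
import math

def case_studies_required(staff_fte: float) -> int:
    """Approximate number of impact case studies a REF 2014 submission
    needed for a given number of staff (full-time equivalent).

    Assumed banding (inferred, not official): two case studies for the
    smallest submissions, plus one further case study for each
    additional ten staff beyond that.
    """
    if staff_fte < 15:
        return 2
    return 2 + math.floor((staff_fte - 5) / 10)

# The two examples from the text:
print(case_studies_required(34))  # 4
print(case_studies_required(76))  # 9
```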
Needless to say, this resulted in a certain amount of game-playing: departments could, in principle, reduce the number of staff submitted in order to reduce the number of case studies required, or increase it to accommodate an extra one. My experience suggests that very few departments did the latter. Any manipulation was in the downward direction, trimming the staff returned to avoid having to produce another case study.
My own experience of serving on the REF panel for education was that impact, as with the other aspects of the exercise, was taken very seriously and reviewed conscientiously. Moderation within and between the sub-panels assessing each subject area was frequent, thorough and necessary. This was time-consuming and costly, but if the Higher Education Funding Council for England, which runs the REF, decides to increase the impact component beyond its current 20 percent in 2020 – and all the signs are that it will – it will mean more extensive cross-checking to make sure it is fair across the board.
***
Anthony Kelly was a member of the Research Excellence Framework 2014 sub-panel for Education. He receives funding from the Engineering and Physical Sciences Research Council, the Jersey government, the Audit Commission and the Department of Education.