Assessing Australia’s Poor Record of Impact Assessment
In March, the results of Australia's first-ever round of the Engagement and Impact Assessment (EIA) were released. These included a National Report with university scorecards, as well as the publication of impact studies that achieved ratings of "high." You'd be forgiven for missing the event. Apart from the fact that the EIA was an unpopular exercise, dogged by doubts about its ability to generate meaningful findings, its results were overshadowed by the announcement of Excellence in Research for Australia (ERA) outcomes only days earlier. The well-established ERA is a far larger exercise, one that comprehensively evaluates the quality of research undertaken in Australian universities.
Over the years, Australia has had a confused relationship with the impact agenda, much of it grounded in the vagaries of government. When the idea of a national exercise to evaluate research was first touted, in the form of the Research Quality Framework, the focus was to be on both the quality and the impact of research. That framework was abandoned in 2007, following a change of government, in favor of ERA.
But the matter of impact did not disappear. Concerns about Australia's poor performance in industry engagement, as measured by OECD statistics, and an interest in understanding returns on investment (e.g., Boosting the Commercial Returns from Research, 2014) kept leading back to the same question: what value does university-based research bring to society, and what are universities doing to make sure their work reaches end-users?
After 2007 the agenda shifted from impact itself to the conditions that enable impact. This culminated in the 2015 National Innovation and Science Agenda, which proposed a nationwide assessment exercise that – according to its authors – would magically lead to improvements in how universities engage with industry. This was despite the fact that, like ERA, the exercise would not be used to inform university funding, and it was unclear how its outcomes would inform federal policy.
With a strong interest not just in impact but in the mechanisms that enable it, the Australian Research Council (ARC) designed an exercise that would conduct evaluations in three separate areas: Engagement, Impact and Approach to Impact. The Unit of Assessment was the very broad two-digit Field of Research (FoR) code, and the ratings were low, medium and high. The Engagement component was to be largely data-focused (with a commendable commitment to reusing research income data collected through ERA); however, it morphed into an intensive narrative exercise when it was found that the available data offered only a limited picture. For Impact, universities – regardless of the size and scale of their activity – were required to present one impact study that acted as a proxy for the entire FoR. The third narrative covered Approaches to Impact (i.e. a description of how institutions aim to achieve impact). Being able to tell a good story was always going to be the skill that mattered in achieving high ratings.
Over the same period as the EIA's development, the interest in impact began to materialize in other forms, notably grant applications: applicants for ARC funding were required to prepare Benefit and Impact Statements outlining the contribution their research would make to the economy, society, environment or culture. However, things backfired when it was revealed that in 2017 the then-minister for education had interfered in funding decisions, knocking back a number of projects that he deemed unworthy on the basis of national interest. As a consequence, in 2018 the ARC announced a new National Interest Test (NIT). While seemingly innocuous, in that grant applicants were still required to address matters of benefit, the test demonstrated a largely parochial view of how impact and benefit were to be approached. Announcing the NIT, the new education minister declared, "Introducing a national interest test will give the minister of the day the confidence to look the Australian voter in the eye and say, 'your money is being spent wisely.'" Yet again, public confidence was presented as the rationale.
The EIA report raises many questions, both in and of itself and in light of the parochial NIT. According to the panels that evaluated submissions, the sector is doing okay on Engagement, Impact and Approaches to Impact, with 85 percent, 88 percent and 76 percent of submissions respectively rated 'medium' or 'high.' What are we to make of this information? For Impact, nothing. The notion that one narrative alone can accurately represent an entire two-digit FoR is nonsense. Consider FoR 16, Studies in Human Society, which covers anthropology, demography, criminology, human geography, policy and administration, political science, social work and sociology.
Another mystery is how impact intersects, if at all, with the 'national interest' agenda. The impact studies released by the ARC make for thought-provoking reading. One of the most interesting, "The History of the Tentmakers of Cairo" in FoR 19 (Studies in Creative Arts and Writing), leaned on the work of a sole researcher. Would the worthy project behind the study have passed the minister's judgement if it had been submitted as a grant application? We will never know, but given its international scope it is questionable. Regardless, as a result of the EIA's "proxy" approach to assessing impact, the university that presented the study will, for now, be known as an institution that produces highly impactful research in FoR 19 (an area in which, incidentally, the same university was rated "below world average" in ERA).
And what are we to make of the ratings for Approach to Impact and Engagement? Should we be concerned if a university that scores well for impact bombs in the other two components? Surely, if the Australian government is so fixated on public confidence, the concern should be the benefit that research has produced for society, not the means by which this has occurred. But how is that to be reconciled with the government's longstanding interest in seeing enhanced university-industry engagement?
What happens next hinges upon what Australia's policymakers plan to do with the intelligence they have gathered. But with a poor track record in using assessment outcomes to inform policy, it is difficult to be optimistic. It is ironic that an exercise as comprehensive as the EIA is likely to end up so limited in its utility.