Assessing Australia’s Poor Record of Impact Assessment

January 14, 2020

In March, the results of Australia’s first-ever round of the Engagement and Impact Assessment (EIA) were released. This included a National Report with university scorecards, as well as the publication of impact studies that achieved ratings of “high.” You’d be forgiven for missing this event. Apart from the fact that it was an unpopular exercise, dogged by doubts about its ability to generate meaningful findings, the results were overshadowed by the announcement of Excellence in Research for Australia (ERA) outcomes only days earlier. The well-established ERA is a far larger exercise, which comprehensively evaluates the quality of research undertaken in Australian universities.

Over the years, Australia has had a confused relationship with the impact agenda, with much of this grounded in the vagaries of government. When the idea of a national exercise to evaluate research was first touted in the form of the Research Quality Framework, the focus was to be on both the quality and the impact of research. This was abandoned in 2007 in favor of ERA, following a change in government.

But the matter of impact did not disappear. Concerns about Australia’s poor performance in industry engagement, as measured by OECD statistics, and an interest in understanding returns on investment (e.g. Boosting the Commercial Returns from Research, 2014), kept leading back to the same question: what value is university-based research bringing to society, and what are universities doing to make sure their work reaches end-users?

This article by Ksenia Sawczak originally appeared on the LSE Impact of Social Sciences blog as Assessing Impact Assessment – What can be learnt from Australia’s Engagement and Impact Assessment? and is reposted under the Creative Commons license (CC BY 3.0).

After 2007 the agenda shifted from impact itself to the conditions that enable impact. This culminated in the release of the 2015 National Innovation and Science Agenda, which proposed a nation-wide assessment exercise that – according to its authors – would magically lead to improvements in how universities engage with industry. This was despite the fact that, like ERA, the exercise would not be used to inform funding to universities, and it was unclear how the outcomes would inform federal policy.

With a strong interest in not just impact but the mechanisms that enable it, the Australian Research Council (ARC) designed an exercise that would conduct evaluations in three separate areas: Engagement, Impact and Approach to Impact. The unit of assessment is the very broad two-digit Field of Research (FoR) code, and the ratings are low, medium and high. The Engagement component was to be largely data-focused (with a commendable commitment to reusing research income data collected through ERA); however, it morphed into an intensive narrative exercise when it was found that the available data offered only a limited picture. For Impact, universities – regardless of size and scale of activity – were required to present one impact study to act as a proxy for the entire FoR. The third narrative covered Approach to Impact (i.e. a description of how institutions aim to achieve impact). Being able to tell a good story was always going to be the skill that mattered in achieving high ratings.

Over the same period as the development of the EIA, interest in impact started to materialize in other forms, notably grant applications, with applicants for ARC funding being required to prepare Benefit and Impact Statements outlining the contribution their research will make to the economy, society, environment or culture. However, things backfired when it was revealed that in 2017 the then-minister of education had interfered in funding decisions by knocking back a number of projects that he deemed unworthy of funding on the basis of national interest. As a consequence, in 2018 the ARC announced a new National Interest Test (NIT). While seemingly innocuous in that grant applicants were still required to address matters of benefit, it demonstrated a largely parochial view of how impact and benefit were to be approached. Announcing the NIT, the new education minister declared, “Introducing a national interest test will give the minister of the day the confidence to look the Australian voter in the eye and say, ‘your money is being spent wisely.’” Yet again, public confidence was presented as the rationale.


The EIA report raises many questions, both in and of itself and in light of the parochial NIT. According to the panels that evaluated submissions, the sector is doing okay for Engagement, Impact and Approach to Impact, with 85 percent, 88 percent and 76 percent of submissions respectively rated ‘medium’ or ‘high’ across the sector. What are we to make of this information? For Impact, nothing. The idea that one narrative alone can accurately represent an entire two-digit FoR is a nonsense. Consider FoR 16, Studies in Human Society, which covers anthropology, demography, criminology, human geography, policy and administration, political science, social work and sociology.

Another mystery is how impact intersects, if at all, with the ‘national interest’ agenda. The impact studies released by the ARC make for thought-provoking reading. One of the most interesting, on “The History of the Tentmakers of Cairo” in FoR 19 (Studies in Creative Arts and Writing), leaned on the work of a sole researcher. Would the worthy project behind the study have passed the minister’s judgement if it had been submitted as a grant application? We will never know, but given its international scope it is questionable. Regardless, as a result of the EIA’s “proxy” approach to assessing impact, the university that presented the study will for now be known as an institution that produces highly impactful research in FoR 19 (an area in which, incidentally, the same university was rated “below world average” in ERA).

And what are we to make of the ratings for Approach to Impact and Engagement? Should we be concerned if a university that scores well for Impact bombs in the other two components? Surely, if the Australian government is so fixated on matters of public care, the concern should be the benefit that research has produced for society, not the means by which it has been achieved. But how is this to be reconciled with the government’s longstanding interest in seeing enhanced university-industry engagement?

What happens next really hinges upon what the policy makers in Australia are planning to do with the intelligence they have gathered. But with a poor track record of using assessment outcomes to inform policy, it is difficult to be optimistic. It is ironic that an exercise as comprehensive as the EIA is likely to end up so limited in its utility.

Ksenia Sawczak is Director of Research Services at the University of Canberra, where she oversees the preparation of data submissions to government agencies. She is also responsible for policy development and keeps a close watch on developments in the Australian government research policy landscape.
