
In Research, Engagement Is Not the Same As Impact

April 12, 2016

In Australia’s Innovation Statement late last year, the federal government indicated a strong belief that more collaboration should occur between industry and university researchers.

At the same time, government, education and industry groupings have made numerous recommendations for the “impact” of university research to be assessed alongside or in addition to the existing assessment of the quality of research.

How should we measure research?

But what should we measure and, more importantly, why should we measure it?


This article by Stephen Taylor originally appeared at The Conversation, a Social Science Space partner site, under the title “When measuring research, we must remember that ‘engagement’ and ‘impact’ are not the same thing.”

In accounting, we stress that the measurement basis of something inevitably reflects the purpose for which that measure is to be used.

So what is the purpose of measuring engagement, impact or, for that matter, quality?

The primary reason for measuring quality seems fairly self-evident: as a major stakeholder in funding (especially dedicated research-only funding), the government wants an assessment of just how good, by academic standards, such research really is.

Looking ahead, there is speculation that measures of quality such as the Excellence in Research for Australia (ERA) rankings may influence future funding via prestigious competitive schemes (such as those run by the Australian Research Council), block funding for infrastructure, and the availability of government support for doctoral students via Australian Postgraduate Awards.

So the demand for a measure of research quality and the potential uses of such a measure are pretty clear.

But what valid reasons are there for investing significant resources in the measurement of research impact or engagement?

If high-quality research addresses important practical problems (large or small), surely we would expect impact to follow?

In this sense, the extent of impact is really a joint product of the quality (or robustness) of research and the choice of topic (i.e., practical versus more esoteric).

Research impact needs time

But over what period should impact be measured?

Recent exercises such as that conducted by the Australian Technology Network and Group of Eight have a relatively short-term focus, as would any “impact assessment” tied to the corresponding period covered by the existing ERA time frame (say the last six years).

I and many others maintain that impact can only be assessed over much longer periods, and that in many cases short-term impact is potentially misleading.

How often have supposedly impactful results subsequently been rejected or overturned?

Such examples inevitably turn out to reflect low quality (and in some cases outright fraudulent) research.

Ranking impact

Finally, how can impact be ranked? Is there a viable measure that can distinguish between high and low impact? Existing case-study approaches are unlikely to yield any form of quantifiable measurement of research impact.

Equally puzzling is the call to measure research engagement. What is the purpose of such an exercise? Surely in a financially constrained research environment, universities readily recognize the importance of such engagement and pursue it constantly.

We don’t need a national assessment of engagement to encourage universities to engage.

Motive aside, one approach canvassed is the quantum of non-government investment in research (i.e., non-government research income).

This is arguably one rather limited way to measure engagement, and is focused on input rather than output. If the purpose of any measurement is to capture outcomes, does it make sense to focus exclusively on inputs? The logic of this escapes me.

Engagement and impact are not the same thing

Even more worryingly, some use the terms engagement and impact interchangeably.

They would have us believe that a simple (but useful) measure of impact is the extent to which university researchers receive industry funding. Surely this is, at best, a measure of engagement, not impact.

Although the two are likely correlated, the strength of that correlation will vary greatly across discipline areas.

Further, in business disciplines, much of the “knowledge transfer” that occurs via education (including areas such as executive programs) reflects the impact of the constant process of researching better business practices across areas such as accounting, finance, economics, marketing and so on.

Discretionary expenditure on such programs by business is surely an indication of the extent to which business schools and industry are engaged, yet this would be ignored if we focused on research income alone.

We must not lose sight of the fact that quality (i.e., rigor and innovativeness) is a necessary but not sufficient condition for broader research impact.

Engagement is not impact, and simple measures such as non-government research income tell us very little about genuine external engagement between universities and industry.

As accountants know, performance measurement reflects its purpose. What we need before any further national assessment of attributes such as impact or engagement is a clear understanding of the purpose of such an exercise.

Only when the purpose is clearly specified can we have a sensible debate about measurement principles.

