Think of Impact Statements As Maps, Not Short Stories

December 9, 2020

Impact statements, or pathways to impact, are often required in grant applications. Many researchers, however, describe the writing of impact statements as works of fiction, simply because it is not usually possible to predict what might happen after a research project ends, or what impacts it might achieve. Chubb and Watermeyer have reported that grant applicants often perceive impact statements as 'lies, stories, disguise, hoodwink, game-playing, distorting fear, distrust, over-engineering, flower-up, bull-dust, disconnected, narrowing'.

At the stage of writing a grant application, impact is a prediction. Hence, ex ante impact evaluations (those that take place before a research project) are judgements based on guesses, not evidence, of impact. Random judgments have been reported to be common in the review process for the United States National Science Foundation's Broader Impact Initiatives. Earlier this year, UKRI announced the suspension of the impact requirement in its grant proposals while it deliberates the steps needed to increase the efficiency and effectiveness of the application and evaluation processes. Indeed, problems such as attribution and time lag remain unresolved in ex post impact assessment across many REF-related studies. How, then, are we to evaluate ex ante impact without any evidence or proof?

This article by Lai Ma originally appeared on the LSE Impact of Social Sciences blog as “Works of fiction? Impact statements should focus on pathways to impact over short-term outcomes” and is reposted under the Creative Commons license (CC BY 3.0).

In a recent study I undertook with Junwen Luo, Thomas Feliciani, and Kalpana Shankar, we analyzed reviewers' comments on impact statements for the 2016 Science Foundation Ireland Investigators Programme. In the call document, impact is defined broadly, from fostering relationships with businesses and industry to enhancing quality of life, health, and creative output. The definition, however, does not differentiate the stages of impact in a typical linear (or logical) model: inputs (funding), outputs (scientific publications), outcomes, and impact. Nevertheless, we found that peer reviewers seemed to favor short-term, tangible impacts, commenting on process-oriented (formative) impact in a more concrete and elaborate manner than on outcome-oriented (summative) impact. As some reviewers explained, it is impossible for them to evaluate long-term impact because it depends so much on the results and findings of the research projects themselves.

Based on the findings of our study, the evaluation of ex ante impact seems most useful for funding programs with shorter-term, specific goals, for example in academic-industry collaboration or the manufacturing of particular end-products; in other words, for research whose likely impacts are largely pre-determined. The research proposal and pathways to impact then resemble a business or feasibility plan, with work packages that aim to produce patents, licenses, and spin-off companies. These impacts can be foreseen and commented upon more concretely because they are stated as the primary goals and objectives of the research projects. However, most research projects, especially in basic science, do not have 'end-products' in mind, nor can they predict what their impact will be in five, 10, or 25 years. One characteristic of research, after all, is to embrace and challenge the unknown. For much research and scholarship, predicting impact makes no sense whatsoever.

Hence, we must rethink how to evaluate ex ante impact if it is to remain an important criterion in funding decisions. Here are our suggestions:

  • The criteria of ex ante impact evaluation should be designed and developed in accordance with the objectives of the funding program. For programs that aim to foster academic-industry collaboration and to produce foreseeable outcomes such as patents, licenses, and end-products, the impact evaluation should focus on feasibility, timeframe, and perhaps the commercial value of the products.
  • For programs that support more exploratory research, the impact evaluation should be based not on the impact that *might* be achieved, but on the processes by which impact can be achieved; in other words, on the pathways that will lead to impact outcomes. Applicants can state, for example, that they plan public engagement activities, school visits, podcasts, and seminars with industry R&D departments. While the broader, long-term impact is unknown, these activities can be assessed based on applicants' concrete plans and their existing partnerships with non-governmental organizations, charities, industry partners, policymakers, and so on, as well as the infrastructure and support their research institutions can provide. Such criteria are more concrete and hence more appropriate for ex ante impact evaluation.
  • Funding agencies should not use a wish list of impacts in their call documents and review processes. Although a definition and description of impact can be useful, the term 'impact' can mean outputs, outcomes, or broader impacts, and each of these 'stages' can and should use different evaluative criteria. It should also be remembered that impact is often difficult to trace and track ex post, let alone before a research project has begun.

Funding allocation has significant implications for knowledge production and for solving important problems such as poverty, the climate crisis, and pandemics. Shifting the focus to formative, process-oriented impacts can reduce uncertainty and randomness and increase fairness and transparency in ex ante impact evaluation, while the writing of impact statements prompts plans and activities to generate and achieve impact, rather than predictions of impact.

Lai Ma is an assistant professor at the School of Information and Communication Studies at University College Dublin, Ireland. Her research concerns the interrelationship between epistemology and information infrastructure (primarily bibliographic and citation databases), and its cultural and social affordances and implications. Her ORCID iD is 0000-0002-0997-3605.
