
Open Scholarship Means We Must Rethink How to Measure Impact

January 31, 2019


New digital research infrastructures and the advent of online distribution channels are changing the realities of scientific knowledge creation and dissemination. Yet the measurement of scientific impact that funders, policy makers, and research organizations perpetuate fails to sufficiently recognize these developments. This leaves many researchers stranded, as evaluation criteria are often at odds with the reality of knowledge creation and good scientific practice. We argue that a debate is urgently needed to redefine what constitutes scientific impact in light of open scholarship, that is, scholarship that makes best use of digital technology to render research more efficient, reproducible, and accessible. To this end, we present three ongoing systemic shifts in scholarly practice that common impact measures fail to recognize.

Shift one: Impact originates from collaboration

The increasing number of coauthors in almost every scientific field and the rising incidence of hyperauthorship (articles with several hundred authors) (Cronin 2001) suggest that meaningful insights can often only be generated by a complex combination of expertise. This is supported by the finding that interdisciplinary collaborations are associated with higher impact (Chen et al. 2015; Yegros-Yegros et al. 2015). Research is increasingly becoming a collaborative enterprise.

This article by Sascha Friesike, Benedikt Fecher and Gert G. Wagner originally appeared on the LSE Impact of Social Sciences blog as “Now is the time to update our understanding of scientific impact in light of open scholarship” and is reposted under the Creative Commons license (CC BY 3.0).

The authorship of scientific articles, which is the conceptual basis for most common impact metrics, fails to convey the qualitative contribution of a researcher to complex research insights. A long list of authors tells a reader little about what an individual researcher contributed to a project. This becomes apparent even in small groups: in 2017, the journal Forum of Mathematics, Pi published a formal proof of the 400-year-old Kepler conjecture (Hales et al., 2017). The paper lists 22 authors. While we can attribute the original problem to Kepler, it is impossible to tell what each of the 22 authors actually contributed. In the experimental and empirical sciences, papers with more than 1,000 authors are not uncommon. In these instances, the published article is an inadequate object through which to capture complex forms of collaboration and distill the individual contribution and impact of a researcher.

At the same time, there are projects that are conducted by a small number of researchers or even a single author, and the single-authored article or book remains commonplace in many fields, especially the humanities. It’s obvious nonsense to assess the contribution of an author with dozens or hundreds of co-authors the same way we assess the work of a single author. But that is exactly what Google Scholar does when it shows lifetime citation numbers, which are not discounted by the number of co-authors, or the h-index, which does not differentiate between a paper that is single-authored and one that has 1,000 authors. By subsuming different levels of contribution and forms of expertise under the umbrella concept of authorship, we compare apples with oranges.
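To make the distortion concrete, here is a minimal sketch in Python using invented citation figures: raw lifetime counts and the h-index credit a 1,000-author paper exactly like a single-authored one, whereas fractional counting, which divides each paper’s citations by its number of authors, tells a rather different story. Fractional counting is only one possible correction; the point is that the common metrics apply none at all.

```python
# Minimal sketch with hypothetical numbers: raw citation counts and the
# h-index ignore co-authorship entirely; fractional counting does not.
# Each record is (citations, number_of_authors) -- invented for illustration.
papers = [(120, 1), (120, 1000), (45, 3), (10, 22)]

def h_index(citation_counts):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citation_counts, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

raw_total = sum(c for c, _ in papers)             # full credit to every co-author
fractional_total = sum(c / n for c, n in papers)  # credit split among co-authors

print(f"raw citations:        {raw_total}")             # 295
print(f"fractional citations: {fractional_total:.1f}")  # 135.6
# The single-authored paper and the 1,000-author paper count identically here:
print(f"h-index: {h_index([c for c, _ in papers])}")    # 4
```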

Our understanding of impact dilutes the idea of authorship and fails to capture meaningful collaborations.

Shift two: Impact comes in different shapes

Researchers increasingly produce results that come in forms other than articles or books. They produce scientific software that allows others to do their research, they publish datasets that lay the foundation for entire research projects, or they develop online resources such as platforms, methodological tools, and explanatory videos that can play a considerable role in their respective fields. In other words: research outputs are becoming increasingly diverse.

While many researchers have considerable impact with outputs other than research articles or books, our conventional understanding of impact fails to record and reward this. Take as an example Max Roser, the economist behind the platform Our World in Data, which shows how living conditions are changing over time. The platform is a popular resource for researchers and journalists alike. Roser has an avid Twitter following and is a sought-after expert on global developments. His academic work clearly has societal impact. Judged by conventional impact metrics, however, his impact is relatively small. Another example is the programming language R, which benefits from the work academics put into it. The versatility of the available packages has contributed to R’s popularity among data analysts, in and outside of the academic system. However, the undeniable value that a researcher creates when programming a popular piece of software (or more generally contributing to the development of critical research infrastructure) is not captured by our understanding of impact. Scholars who invest time and effort in alternative products or even public goods (as in the case of R) face disadvantages when it comes to the assessment of their work and, ultimately, career progression.

For this reason, researchers are compelled to produce scientific outputs that are in line with mainstream measures of impact, such as the number of articles published in specific outlets or the number of citations, despite the fact that many peer-reviewed articles receive marginal attention. Larivière and colleagues found that 82 percent of articles from the humanities, 27 percent of natural science articles, and 32 percent of social science articles remain uncited even five years after publication (Larivière et al., 2009). At the same time, researchers are deterred from other meaningful activities and motivated to withhold potentially useful research products in order to maximize the number of articles they can publish (Fecher et al., 2017).

Our understanding of impact perpetuates an analogue mainstream, neglects the diverse forms of impact and scientific work, and demotivates innovation.

Shift three: Impact is dynamic

We live in a world in which our biggest encyclopedia is updated seconds after important news breaks. Research products are essentially information goods and therefore equally prone to constant change (e.g., tables and graphs that are updated with live data, or blog posts that are revised). Even a conventional article, a seemingly static product, changes in the publication process as reviewers and editors ask for clarifications, additional data, or a different methodological approach.

Traditional impact measures fail to capture the dynamic nature of novel scholarly products; many of them are not even considered citable. For example, the radiologist Sönke Bartling maintained a living document that covered the opportunities blockchain technology holds for the scientific community. As interest in the technology grew, Bartling’s frequently updated document attracted considerable attention from researchers and policymakers. His work certainly had impact, as he maintained a key resource on a novel technology. However, Bartling stopped updating the document when he came across several instances in which authors had copied aspects of it without referencing it.

The web allows researchers to produce and maintain dynamic products that can be updated and changed regularly. The traditional measurement of scientific impact, however, expects academic outputs to remain static. Once an article is accepted for publication, it becomes a fixed object. If a change is needed, it is published as a separate item in a dedicated section of the journal: the “errata”, Latin for “errors”. Thus, the only way to update a traditional journal publication is by publicly admitting to an error (see Rohrer 2018).

Our understanding of impact neglects the dynamic nature of research and research outputs.

Open Scholarship as a framework for impact assessment

While it seems impossible to capture the full picture of research impact, it is absurd that we are neglecting valid and important pathways to scientific and societal impact. Impact is not monolithic; it comes in different shapes, differs across disciplines, and is subject to change, in part due to modern communication technology. In an academic world that is increasingly adopting open scholarship, bibliometric impact measures capture an ever-shrinking share of the impact that actually occurs.

Here, we see significant room for improvement. Impact assessment needs to capture the bigger picture of scholarship, including new research practices (data sharing), alternative research products (software), and different forms of expertise (conceptual, empirical, technical, managerial). We believe that open scholarship is a suitable framework to assess research.

In this respect, impact arises if an output is not only accessible but reusable, if a collaboration is not only inclusive but leads to greater efficiency and effectiveness, and if a result is not only transparent but reproducible (or at least comprehensible). This entails adapting our quality assurance mechanisms to the reality of academic work, allowing for modular review techniques (open peer review) for different research outputs (data and code). In many respects, the hyper-quantification we experience in the quest to identify scientific impact would be better suited to safeguarding scientific quality and integrity.

Change is therefore necessary to motivate academics to focus on actual impact, instead of on the outdated assumptions behind its measurement, and now is the time to renegotiate academic impact in light of open scholarship.

