Size Still Matters: Discoverability, Impact and ‘Big’ Journals

June 19, 2020
Even as the academic world increasingly embraces open access, the shadow cast by the ‘big’ journals is still outsize. (Photo: Maasaak / CC BY-SA)

‘Sometimes you have to do what you don’t like, to get to where you want to be.’ 

Tori Amos

We recently completed a citation analysis of 10 years of randomised controlled trials published by the UK National Institute for Health Research (NIHR). The biggest public funder of trials in the UK publishes its own monograph series, the wholly open-access Health Technology Assessment (HTA) journal, and we wanted to analyse the reach and impact of these works when republished in subsequent commercial journals. Such trials cost a great deal of money, so there is a clear interest in determining the impact of this research.

Impact can of course be assessed in many ways, but we decided to assess the health policy impact of these trials using citation analysis: by looking at how many times they were cited in key policy documents, or in the types of research known to inform policy, that is, systematic reviews and meta-analyses. We also looked closely at how the trials were used in these documents. After all, trials are conducted to help decision-making by health professionals and policy-makers. They need to be easily discoverable and useful.

This article by Chris Carroll and Andy Tattersall originally appeared on the LSE Impact of Social Sciences blog as “You can publish open access, but ‘big’ journals still act as gatekeepers to discoverability and impact” and is reposted under the Creative Commons license (CC BY 3.0).

The sample was 133 trials published by the NIHR from 2006 to 2015. As noted above, these trials were all published in the NIHR’s own open-access HTA journal (a model of making publicly-funded research available to the public since its first volume in 1997). The HTA monograph is a peer-reviewed journal with each issue dedicated to a single project – such as a randomised controlled trial – and contains the full report of each trial. This might include not only the trial’s effectiveness findings, but also an economic evaluation and, in some cases, additional but related work, such as a qualitative study. These separate elements of the project might also be published in other peer-reviewed journals, which have paywalls and more restrictive word-counts, but also have the potential to increase the visibility and discoverability of the research.

Some of the trials in our sample (82/133) had elements of the trial research published separately in traditional ‘subscription’ journals, such as The Lancet and the British Medical Journal (BMJ). These are the ‘big’ journals in medicine and public health, with massive readerships and impact factors (from around 25 to 60, compared with approximately 4 or 5 for the HTA journal). When conducting the analysis, we included these additional publications of the trials in the ‘impact’ assessment.

The citation analysis findings for these additional publications outstripped the impact numbers for the HTA journal. We found that these related publications achieved twice the mean number of citing reviews, and more than four times the mean number of citing policy documents, of the equivalent HTA journal publication: 125 vs 25 citations per trial; 7.16 vs 3.32 reviews per trial; 3.59 vs 0.80 policy documents per trial. These additional publications therefore appeared to generate much larger numbers of key citations, in policy documents and reviews, than their equivalent HTA journal publications.
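The comparison above rests on simple arithmetic means per trial. As a minimal sketch of that calculation (the per-trial counts below are hypothetical, chosen only so that the means reproduce the reported 125 vs 25 figure; they are not the study's underlying data):

```python
# Hypothetical per-trial citation counts, for illustration only.
# These are NOT the study's actual data; only the resulting means
# (125 and 25) match the figures reported in the article.
hta_citations = [10, 30, 20, 40, 25]          # citations of the HTA monograph
journal_citations = [90, 150, 110, 160, 115]  # citations of the 'big'-journal paper


def mean(counts):
    """Arithmetic mean of a list of citation counts."""
    return sum(counts) / len(counts)


hta_mean = mean(hta_citations)          # 25.0 citations per trial
journal_mean = mean(journal_citations)  # 125.0 citations per trial
ratio = journal_mean / hta_mean         # how many times more often the
                                        # 'big'-journal version is cited
```

The same per-trial averaging applies to the review and policy-document counts (7.16 vs 3.32 and 3.59 vs 0.80 respectively).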

So, can we conclude that the additional publication of elements of these trials in subscription journals, including possibly paying their open access charges, enhanced their impact? Well, not really. Good quality research and guidance documents should have found the HTA publication and its data anyway (a proportion cited both the HTA and its additional publication). A direct comparison of citation data to answer this question unequivocally is not possible. However, the numbers for the additional publications, including unique citations in policy documents, are large enough to be compelling. 

Publishing trial data in big journals such as The Lancet and BMJ might make the data far more ‘discoverable’, and thus enhance the potential impact of publicly-funded research. There might therefore be value for researchers, policy-makers and the public in a publishing model that combines full open-access publication (a must for publicly-funded research, surely) with additional publication in a select few influential subscription journals (while being aware that ‘salami slicing’ publication strategies do not necessarily represent ‘good practice’).

Indeed, public funders could maintain their own lists of appropriate journals for publication of their research, in addition to the open-access versions, though of course, for some, that raises questions concerning academic freedom. The ideal would be for funders to develop their own high-impact, wholly open-access journals that can compete with the ‘big’ journals but, even if this could be realised, it is some way off. In the meantime, public funders of research should very selectively exploit the system that is there, by ‘using’ the publishers (and certain journals only). This is arguably a novel ‘reversal’ of the currently perceived relationship between publishers and academics, where the author supplies what is in effect ‘second-hand’ content to be considered by the journal. In normal circumstances, such works might be declined because the publisher did not have first access to the research. Yet it is the high intellectual value of these works that ensures not only that they are published again, but that they are given a place in high-impact journals.

See the findings
The full analysis and findings of Chris Carroll and Andy Tattersall’s study are available here.

Chris Carroll is reader in systematic review and evidence synthesis at The School of Health and Related Research at The University of Sheffield. Andy Tattersall is an information specialist at The School of Health and Related Research who writes, teaches and gives talks about digital academia, technology, scholarly communications, open research, web and information science, apps, altmetrics, and social media, particularly their applications for research, teaching, learning, knowledge management and collaboration.
