Is Everything a Scholar Writes Automatically Scholarly?
What is it that sets academic publications apart from articles on The Conversation? Peer review might be your first answer. While The Conversation is built around a journalistic model, online, open-access journals are growing rapidly, each with its own approach to peer review. But peer review is hard to define, and reviewing research before it is published can be fraught with problems.
This is part of the reason why so many published research findings are false. Alternative publishing models have developed in response to this. Open access and post-publication peer review are now common.
This new regime raises questions about what defines academic publishing. Blog posts and journalistic articles can be open access and subject to post-publication peer review, but are they scholarly? New publishing models have also developed their own shortcomings. One problem is the proliferation of predatory open-access publishers. Some of these appear happy to accept randomly generated articles for publication, apparently following peer review.

The importance of transparency
So, what should be considered scholarly output? The key to quality research is that we know what went into producing the reported results. All empirical work should be preceded by a published protocol, which should set out, transparently, the methods to be used.
Without one, it’s difficult to reproduce research findings and identify errors. Plenty of journals now publish protocols, such as BMJ Open, PeerJ and SpringerPlus. But publication of a protocol in an open-access repository would be sufficient; it isn’t necessary for it to appear in a peer-reviewed journal.
It’s important to make any present or potential conflicts of interest clear. This should apply to authors, reviewers and editors. Journals’ disclosure rules are a start, though they have their limitations. We need more sophisticated mechanisms for use alongside initiatives like ORCID, which provides each researcher with a unique identifier.
In most cases, scholars can share the data they have collected and analysed. Making data and analysis files available can help uncover simple errors. The Reinhart-Rogoff-Herndon incident is a case in point: research findings by two Harvard economists were used to justify austerity policies, but those findings were undermined when a basic error, a spreadsheet formula that omitted several rows of data, was found in their Excel file.
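As a rough illustration of why shared analysis files matter, the sketch below shows how a formula that silently averages an incomplete range can shift a headline figure. The country labels and growth rates are invented for the example and have nothing to do with the actual Reinhart-Rogoff dataset; only the mechanism, an average computed over too few rows, echoes the incident.

```python
# Hypothetical illustration only: the labels and figures below are invented,
# not the Reinhart-Rogoff data. The point is the mechanism, not the numbers.

growth_by_country = {
    "A": 2.2,
    "B": 1.9,
    "C": 2.5,
    "D": -0.1,
    "E": 3.0,
}

# Intended calculation: average growth across every country in the table.
full_average = sum(growth_by_country.values()) / len(growth_by_country)

# Spreadsheet-style slip: the formula's range stops early, so rows D and E
# are silently excluded from the average.
included = ["A", "B", "C"]
partial_average = sum(growth_by_country[c] for c in included) / len(included)

print(f"Average over all rows:        {full_average:.2f}")    # 1.90
print(f"Average over truncated range: {partial_average:.2f}")  # 2.20
```

With the file in the open, anyone can recompute the full average and spot the discrepancy in minutes; behind a closed spreadsheet, the truncated range stays invisible.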
In my own field of research, health economics, cost-effectiveness models should be made open. These models often form the basis of decisions about whether or not a particular drug will be available to patients, and yet the methods are often unclear to everyone but the authors. Where data relates to individual participants and cannot be anonymised, this should be made clear to readers and reviewers.
Shine a light on peer review
Evidence suggests that two or three peer reviewers will not be able to identify all errors in a manuscript. This is one of the main problems with pre-publication peer review. It’s also one reason why open access is so important to the definition of good science. Paywalls on traditional academic journals restrict the number of people who can check the quality of a publication and can allow published errors to harden into a mistaken consensus. All scholarly output must be open access.
And so peer review itself should also be transparent. Pre-publication peer review reports should be open and accessible through the journal or through a service like Publons, which lets researchers record their peer review activity. Mechanisms for post-publication peer review should also be supported, and reviewers should be identifiable as experts in their field. PubMed Commons is an example of such a tool.
Peer review is important, but I believe that post-publication approaches can be more effective. An additional benefit of open evaluation is the potential for better metrics.
Redefining scholarly output
Scholarly writing should be distinguishable from other forms of publication by its transparency. We should know exactly how authors arrive at their findings. Findings published in academic journals should be given special credence because of this.
Academic publishing should be defined by the presence of strict regulations to maximise transparency. Articles that do not meet transparency criteria should not be eligible for research quality assessments, such as the UK’s Research Excellence Framework. Journalists and academic bloggers will not be subject to such strict rules, and their output will differ accordingly.
Make “good” science clearer
I am by no means the first to call for such measures. But previous calls have focused on ideas for improving scholarly writing rather than the more fundamental challenge of defining it.
Transparency no doubt has its costs, at least in the short term. But without it, true scholarly output will become increasingly indistinguishable from academics’ other forms of writing.
Good science should not be defined by whether or not pre-publication peer review takes place, but by the transparency of the research. Some fear that abandoning our current system might allow more “bad science” to get through. But we have bad science now, and lots of it. Sunlight is the best disinfectant.