
Modernizing the Monograph Ecosystem Can Save Them From Extinction

August 16, 2019
Monographs have a long and honorable history in Western scholarship, but, argues Mike Taylor, the infrastructure that surrounds them must modernize for monographs to thrive. (Photo: Tom Woodward/Flickr)

That monographs might not have a future seems absurd. For many disciplines, the monograph is a central part of scholarly communications. Humanities, social sciences and the arts all take this form very seriously: they’re often central to the way that we think about academic contributions. Monographs are an academic’s opportunity to introduce new ways of thinking to their colleagues, to have the time and space to thoroughly explore a topic. They are especially important in the non-English language world. Monographs are personal, they are slow, they are long-lasting: I must have read Bruno Latour’s Reassembling the Social two or three times, cover to cover.

This article by Mike Taylor originally appeared on the LSE Impact of Social Sciences blog as “Do monographs have a future? Publishers, funders and research evaluators must decide” and is reposted under the Creative Commons license (CC BY 3.0).

When I worked in publishing, working with authors to shepherd their book from manuscript to marketing, there was no mistaking the passion that authors had for their work, or the level of their commitment. And yet, when a group of monograph enthusiasts gathered last year to talk about the future of the monograph – and especially the monograph in a world increasingly dominated by open access – we found ourselves increasingly concerned about the challenges that they face.

Scholarly infrastructure, from discovery tools and metrics to business models, is moving on, and is in danger of leaving the monograph behind. In “The State of Open Monographs,” the authors (Peter Potter – Virginia Tech, Charles Watkinson – University of Michigan, and Sara Grimme, Cathy Holland and I, of Digital Science) came to a number of worrying conclusions.

Firstly, most monographs do not get a digital object identifier (DOI) assigned to them. DOIs for monographs may seem like a trivial technical detail to many, but the reality runs a lot deeper. The act of assigning a DOI to a piece of work means that the metadata and location of that book are freely available to multiple systems around the world. Most importantly, it means that a citation to a book can be recognised and counted – thus giving an author that all-important credit for their work.
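To make that concrete: once a book has a DOI, its metadata – including its citation count – becomes available to any system through open infrastructure such as the Crossref REST API. The sketch below, with an invented placeholder DOI and only the field names the Crossref work record is known to expose, shows how little code a discovery tool needs once registration has happened.

```python
import json
import urllib.request

CROSSREF_API = "https://api.crossref.org/works/"

def metadata_url(doi: str) -> str:
    """Build the Crossref REST API URL that resolves a DOI to its public metadata."""
    return CROSSREF_API + doi

def summarize(record: dict) -> dict:
    """Pull the fields most relevant to discovery and credit from a Crossref work record."""
    return {
        "title": record.get("title", [""])[0],
        "type": record.get("type"),
        "publisher": record.get("publisher"),
        "cited_by": record.get("is-referenced-by-count", 0),
    }

def fetch(doi: str) -> dict:
    """Live lookup (requires network); any registered book DOI works here."""
    with urllib.request.urlopen(metadata_url(doi)) as resp:
        return summarize(json.load(resp)["message"])
```

Without a DOI, none of this is possible: there is no registered record for a system to resolve, and a citation to the book cannot be matched and counted.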

Secondly, books do get cited and shared. They have citation data, they have Altmetric data. But this data accrues at a much slower pace than that of articles, and for altmetrics, it occurs in different places. A three-year window is an absurd period in which to assess the impact of a book. That Latour monograph? I don’t think I’ve cited it yet – despite it informing the way I think about impact, I simply haven’t written a paper about it – yet. Books are influential in policy papers – but it can take 10 years for a policy citation to be generated. My personal view is that books need to be evaluated over the entire span of a researcher’s career, not arbitrarily discarded on a timescale customized to suit articles.

Thirdly, many book publishers have not yet invested in workflows that enable full-text XML tagging, management and archiving of the content. In contrast, articles have been largely XMLed for twenty years. As a consequence, book content often gets ‘trapped’ inside PDF files, which makes the cost of repurposing the content into new electronic forms – innovative apps, ebooks and chapter downloads – prohibitive.

Fourthly, many players in the monograph arena either don’t talk to each other, or don’t trust each other. This affects open monographs disproportionately. For example, in the absence of sales figures, we (publishers, authors, evaluators) need other data to understand impact. But citations are under-reported, and altmetrics are not yet central. So we’d love to turn to usage figures, but obtaining compatible data from the myriad distributors of ebooks, platform aggregators and other distribution providers is virtually impossible – all despite this problem having been solved by COUNTER many years ago.
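The consolidation step itself is trivial once the data are compatible – which is the point. The sketch below (not a COUNTER implementation; the platform names, DOIs and counts are invented for illustration) shows the one-liner-level reduction a publisher would want, and that only works if every platform reports usage on the same basis.

```python
from collections import defaultdict

# Hypothetical per-platform usage rows: (platform, doi, monthly_requests).
# In practice each aggregator exports a different layout; a shared standard
# such as COUNTER exists precisely so rows like these are directly comparable.
rows = [
    ("PlatformA", "10.1234/book-1", 40),
    ("PlatformB", "10.1234/book-1", 25),
    ("PlatformA", "10.1234/book-2", 7),
]

def total_usage(rows):
    """Sum usage per DOI across platforms, assuming the counts are comparable."""
    totals = defaultdict(int)
    for _platform, doi, requests in rows:
        totals[doi] += requests
    return dict(totals)
```

The hard part is not this code; it is getting every distributor to emit rows that mean the same thing.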

Fifthly, and finally, there’s the problem of funding open monographs. We know that academics value monographs – they write them, they read them, they eventually cite them – but funders have yet to offer sufficient funds to support ‘book processing charges.’ I alluded to the care taken by book publishers to ‘shepherd’ the manuscripts through production. Each one is its own special case, and each relationship between an author and a publisher, unique. In contrast, a major journal publisher might process dozens of submitted articles per hour, entirely automatically, at a very low cost. Book publishing, like book authoring, is a slower and more curated process. It costs more to publish a good quality book than the equivalent number of articles, and funders (and universities) need to get real about these costs.

The evidence is that the scholarly world marches to the drum of the article: in particular, the English-language, northern/western hemisphere, STEM drum – and everything else is in danger of falling behind. We know that this is not intentional: it’s simply that articles represent 60-70 percent of the world’s research output and – relatively speaking – are cheaper and faster to publish. Everything else … is a special case.

We estimate that monographs represent – in terms of raw titles (not pages!) – about 3 percent of global scholarly output. For some countries, this could be as high as 40 percent. Solving the problems that confront monographs is no small challenge – but the good news is that momentum is growing, and these challenges are solvable.

We need to fix issues around existing metadata – it’s serving monographs poorly and is a drag on discovery and citation counts. The number of books with DOIs seems stuck at about 25 percent. Work can be done to increase this number, and it needs to be done in order for monographs to remain discoverable by a community increasingly using apps and web technologies to find content. We need better reporting of usage, sharing and citations, to show both the importance and the value of the monograph – DOIs will help with this. But none of this will happen if books are allowed to wither on the vine: it’s time for funders, publishers and evaluators to come together and develop an infrastructure to support the monograph of the future.

Mike Taylor is head of data insights at Digital Science, where he specializes in quantitative and qualitative analyses of academic trends using Dimensions, Altmetric and other data sources. Before joining Digital Science, he had a long career at Elsevier, working in various groups. He is on Twitter @herrison.

