Modernizing the Monograph Ecosystem Can Save Monographs From Extinction
That monographs might not have a future seems absurd. For many disciplines, the monograph is a central part of scholarly communications. The humanities, the social sciences, and the arts all take the form very seriously: monographs are often central to the way those fields think about academic contributions. Monographs are an academic’s opportunity to introduce new ways of thinking to their colleagues, and to have the time and space to explore a topic thoroughly. They are especially important in the non-English-language world. Monographs are personal, they are slow, and they are long-lasting: I must have read Bruno Latour’s Reassembling the Social two or three times, cover to cover.
When I worked in publishing, working with authors to shepherd their books from manuscript to marketing, there was no mistaking the passion that authors had for their work, or the depth of their commitment. And yet, when a group of monograph enthusiasts gathered last year to talk about the future of the monograph – and especially the monograph in a world increasingly dominated by open access – we found ourselves increasingly concerned about the challenges that monographs face.
Scholarly infrastructure, from discovery tools and metrics to business models, is moving on, and is in danger of leaving the monograph behind. In “The State of Open Monographs,” the authors (Peter Potter of Virginia Tech; Charles Watkinson of the University of Michigan; and Sara Grimme, Cathy Holland, and myself, of Digital Science) came to a number of worrying conclusions.
Firstly, most monographs do not get a digital object identifier (DOI) assigned to them. DOIs for monographs may seem like a trivial technical detail to many, but the implications run a lot deeper. Assigning a DOI to a book means that the metadata and location of that book are freely available to multiple systems around the world. Most importantly, it means that a citation to the book can be recognised and counted – thus giving the author that all-important credit for their work.
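To make the point concrete, here is a minimal sketch, in Python, of what DOI assignment buys: any system can resolve the identifier to the book’s registered metadata via the public Crossref REST API. The DOI used here is a placeholder, not a real book.

```python
import json
import urllib.request

# Fetch the registered metadata for a book DOI from the public Crossref
# REST API. The DOI below is a placeholder: substitute any real book DOI.
doi = "10.9999/example-monograph"  # hypothetical DOI, for illustration only
url = f"https://api.crossref.org/works/{doi}"

with urllib.request.urlopen(url) as response:
    record = json.load(response)["message"]

# Because the DOI was registered, basic metadata and citation counts are
# openly available to any discovery tool, evaluator, or reader who asks.
print(record.get("title"))                   # registered title
print(record.get("publisher"))               # registered publisher
print(record.get("is-referenced-by-count"))  # citations known to Crossref
```

A book without a DOI simply never appears in responses like this, and any citation to it goes uncounted.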
Secondly, books do get cited and shared: they have citation data, and they have Altmetric data. But this data accrues at a much slower pace than it does for articles, and, in the case of altmetrics, it accrues in different places. A three-year window is an absurd period in which to assess the impact of a book. That Latour monograph? I don’t think I’ve cited it yet – despite it informing the way I think about impact, I simply haven’t written a paper about it. Books are influential in policy papers too – but it can take 10 years for a policy citation to appear. My personal view is that books need to be evaluated over the entire span of a researcher’s career, not arbitrarily discarded on a timescale designed to suit articles.
Thirdly, many book publishers have not yet invested in workflows that enable full-text XML tagging, management, and archiving of their content. Articles, by contrast, have been routinely tagged in full-text XML for twenty years. As a consequence, book content often gets ‘trapped’ inside PDF files, which makes the cost of repurposing the content into new electronic forms – innovative apps, ebooks, and chapter downloads – prohibitive.
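By way of contrast, here is a minimal sketch of what full-text XML enables, assuming a book tagged in the NISO BITS schema (the book counterpart of the JATS schema used for articles); the file name is hypothetical:

```python
import xml.etree.ElementTree as ET

# Parse a book marked up in the NISO BITS schema (the book counterpart of
# the JATS article schema). "book.xml" is a hypothetical file name.
tree = ET.parse("book.xml")

# In BITS, chapters are <book-part> elements; with structured XML, pulling
# them out for chapter downloads, ebooks, or apps is a simple traversal.
for chapter in tree.iter("book-part"):
    title = chapter.findtext(".//title")  # first title within the chapter
    print(title)
```

The same content ‘trapped’ in a PDF would need costly extraction and restructuring before any such reuse were possible.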
Fourthly, many players in the monograph arena either don’t talk to each other or don’t trust each other. This affects open monographs disproportionately. In the absence of sales figures, we (publishers, authors, evaluators) need other data to understand impact. But citations are under-reported, and altmetrics are not yet central to evaluation. We would love to turn to usage figures, but obtaining compatible data from the myriad ebook distributors, platform aggregators, and other distribution providers is virtually impossible – despite the equivalent problem having been solved for journals by COUNTER many years ago.
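For illustration, here is a minimal sketch of what standardised usage reporting looks like where it does exist, assuming a platform that implements the COUNTER Release 5 SUSHI API; the endpoint URL and credentials are placeholders, not a real service:

```python
import json
import urllib.request
from urllib.parse import urlencode

# Request a year of book usage from a platform's COUNTER Release 5 SUSHI
# endpoint. The base URL and credentials here are placeholders.
BASE = "https://sushi.example-platform.org/r5"
query = urlencode({
    "customer_id": "DEMO",
    "requestor_id": "DEMO",
    "begin_date": "2019-01",
    "end_date": "2019-12",
})

# tr_b1 is COUNTER's standard "Book Requests" view, so figures returned by
# any compliant distributor or aggregator are directly comparable.
with urllib.request.urlopen(f"{BASE}/reports/tr_b1?{query}") as resp:
    report = json.load(resp)

for item in report.get("Report_Items", []):
    print(item.get("Title"), item.get("Performance"))
```

The standard exists; what is missing is consistent implementation across the many channels through which monographs are distributed.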
Fifthly, and finally, there’s the problem of funding open monographs. We know that academics value monographs – they write them, they read them, and they eventually cite them – but funders have yet to offer sufficient funding to support ‘book processing charges.’ I alluded above to the care that book publishers take to ‘shepherd’ manuscripts through production. Each book is its own special case, and each relationship between an author and a publisher is unique. By contrast, a major journal publisher might process dozens of submitted articles per hour, largely automatically and at very low cost. Book publishing, like book authoring, is a slower and more curated process. It costs more to publish a good-quality book than the equivalent number of articles, and funders (and universities) need to get real about these costs.
The evidence is that the scholarly world marches to the drum of the article – in particular, the English-language, northern/western-hemisphere, STEM drum – and everything else is in danger of falling behind. We know that this is not intentional: it’s simply that such output represents 60-70 percent of the world’s research and is, relatively speaking, cheaper and faster to publish. Everything else … is a special case.
We estimate that monographs represent – in terms of raw titles (not pages!) – about 3 percent of global scholarly output. For some countries, this could be as high as 40 percent. Solving the problems that confront monographs is no small challenge – but the good news is that momentum is growing, and these challenges are solvable.
We need to fix the issues around existing metadata – it serves monographs poorly and is a drag on discovery and citation counts. The proportion of books with DOIs seems stuck at about 25 percent. That number can be increased, and it needs to be, if monographs are to remain discoverable by a community that increasingly uses apps and web technologies to find content. We need better reporting of usage, sharing, and citations, to show both the importance and the value of the monograph – DOIs will help with this. But none of this will happen if books are allowed to wither on the vine: it’s time for funders, publishers, and evaluators to come together and develop an infrastructure to support the monograph of the future.