Research Ethics

Hype Terms In Research: Words Exaggerating Results Undermine Findings

June 23, 2023

The claim that academics hype their research is not news. The use of subjective or emotive words that glamorize, publicize, embellish or exaggerate results and promote the merits of studies has been noted for some time and has drawn criticism from researchers themselves. Some argue hyping practices have reached a level where objectivity has been replaced by sensationalism and manufactured excitement. By exaggerating the importance of findings, writers are seen to undermine the impartiality of science, fuel skepticism and alienate readers. In 2014 the editor of Cell Biology International, for example, bemoaned an increase in ‘drama words’ such as ‘drastic decrease’, ‘new and exciting evidence’ and ‘remarkable effect’, which he believed had turned science into a ‘theatrical business’. Nor are the humanities and social sciences unaffected by this, as writers have been observed to explicitly highlight the significance of their work in literary studies and applied linguistics.

All this, of course, is the result of an explosion of publishing fueled by intensive audit regimes, where individuals are measured by the length of their resumes as much as by the quality of their work. Metrics, financial rewards and career prospects have come to overwhelm and dominate the lives of academics across the planet, creating greater pressure, more explicit incentives and fiercer competition to publish. The rise of hyperbole in medical journals has been illustrated by Vinkers, Tijdink and Otte, who found a nine-fold increase in 25 ‘positive-sounding words’ such as novel, amazing, innovative and unprecedented in PubMed journals between 1974 and 2014. Looking at four disciplines and more hype terms, Kevin Jiang and I found twice as many ‘hypes’ in every paper compared with 50 years ago. While increases were most marked in the hard sciences, the two social science fields studied, sociology and applied linguistics, out-hyped the sciences in their use of these words.

This article by Ken Hyland is taken from The London School of Economics and Political Science’s LSE Impact Blog, a Social Science Space partner site, under the title “Crucial! New! Essential! – The rise of hype in research and impact assessment.” It originally appeared at the LSE Press blog.

While hype now seems commonplace in the scramble for attention and recognition in research articles, we shouldn’t be surprised to find it appearing in other genres where academics are evaluated. Foremost among these are the impact case studies submitted in bids for UK government funding. Now familiar to academics around the globe, the evaluation of research to determine funding to universities was extended beyond ‘scientific quality’ to include real-world ‘impact’ with the Research Excellence Framework in 2014. More intrusive, time-consuming, subjective and costly than judging the contribution of published outputs, the ‘impact agenda’ seeks to ensure that funded research offers taxpayers value for money in terms of social, economic, environmental or other benefits.

For some observers, the fact that impact is being more highly valued is a positive step away from the Ivory Tower perception of research for research’s sake. A serious problem, however, is how the evaluating body, the University Grants Council, chose to define and ‘capture’ impact. The structure they hit upon, the submission of a 4-page narrative case study supporting claims for positive research outcomes, was almost an invitation to embellish submissions. With over £4 billion on offer, this is an extremely competitive and high-stakes genre, so it isn’t surprising that writers rhetorically beef up their submissions.

Our analysis of 800 ‘impact case studies’ submitted to the 2014 REF shows the extent of hyping. Using the cases on the REF website, we searched eight target disciplines for 400 hype terms. We found that hyping is significantly more common in these impact cases than in research articles, with 2.11 hype terms per 100 words compared with 1.55 in leading articles, using the same inventory of items. Our spectrum of eight focus disciplines shows that writers in the social sciences were prolific hypers, but by no means the worst offenders. In fact, there is a marked increase in hyping along a scale, with the most abstract and rarefied fields taking greater pains to ensure that evaluators get the message. Research in physics and chemistry, for instance, is likely to be several steps further removed from informing real-world applications than research in social work and education.
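To make the per-100-words figure concrete, here is a minimal sketch of how such a rate can be computed. The handful of terms listed is a hypothetical sample for illustration only; the 400-item inventory and the corpus used in the study are not reproduced in this post.

import re

# Illustrative only: a tiny, hypothetical subset standing in for the study's 400-term inventory.
HYPE_TERMS = {"novel", "unique", "crucial", "unprecedented", "remarkable"}

def hype_rate_per_100_words(text):
    """Count hype-term occurrences and normalise per 100 running words."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in HYPE_TERMS)
    return 100 * hits / len(words)

# Example: a short, deliberately hyped sentence.
print(hype_rate_per_100_words("This novel and unique method yields remarkable, crucial gains."))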

We also found differences in the ways that fields hype their submissions. Terms emphasising certainty dominate the most frequent items, comprising almost half the forms overall. Words such as significant, important, strong and crucial serve to boost the consequences of the claims made with a commitment that almost compels assent. Apart from certainty, the fields deviated in their preferred hyping styles. The STEM disciplines tended to employ novelty as a key aspect of their persuasive armoury, with references to the originality and inventiveness of the work (first, timely, novel, unique). Social scientists, on the other hand, stressed the contribution made by their research, referring to its value, outcomes or take-up in the real world (essential, effective, useful, critical, influential). So, while social scientists may not be the most culpable players, they have, unsurprisingly, bought into the game and are heavily invested in rhetorically boosting their work.
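As a rough illustration of how terms might be tallied by rhetorical category, the sketch below groups a few of the words quoted above under certainty, novelty and contribution. The grouping is illustrative and does not reproduce the classification used in the study.

from collections import Counter
import re

# Hypothetical grouping built from the example words quoted above; not the study's full scheme.
CATEGORIES = {
    "certainty": {"significant", "important", "strong", "crucial"},
    "novelty": {"first", "timely", "novel", "unique"},
    "contribution": {"essential", "effective", "useful", "critical", "influential"},
}

def category_counts(text):
    """Tally hype terms by the rhetorical category they fall under."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for word in words:
        for category, terms in CATEGORIES.items():
            if word in terms:
                counts[category] += 1
    return counts

print(category_counts("Our novel, effective approach yields significant and crucial benefits."))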

The study shows something of how impact criteria have been interpreted by academics and how they influence their narrative self-report submissions. The results point to serious problems in the use of individual, evidence-based case studies as a methodology for evaluating research impact, as they not only promote the selection of particularly impressive examples but encourage hyping in presenting them. The obvious question that arises is whether we should regard authors as being in the best position to make claims for their work. Essentially, impact is a receiver experience: how target users understand the effect of an intervention. We have to wonder, then, how far, even in a perfect world, the claims of those with a vested interest in positive outcomes should be relied on. While the impact agenda is undoubtedly well intentioned, perhaps there are lessons to be learnt here. One might be that it is never a good idea to set up evaluative goals before you are sure how they might best be measured.

Ken Hyland is an honorary professor at the University of East Anglia School of Education and Lifelong Learning.
