Research Ethics

Hype Terms In Research: Words Exaggerating Results Undermine Findings

June 23, 2023

The claim that academics hype their research is not news. The use of subjective or emotive words that glamorize, publicize, embellish or exaggerate results and promote the merits of studies has been noted for some time and has drawn criticism from researchers themselves. Some argue hyping practices have reached a level where objectivity has been replaced by sensationalism and manufactured excitement. By exaggerating the importance of findings, writers are seen to undermine the impartiality of science, fuel skepticism and alienate readers. In 2014 the editor of Cell Biology International, for example, bemoaned an increase in ‘drama words’ such as drastic decrease, new and exciting evidence and remarkable effect, which he believed had turned science into a ‘theatrical business’. Nor are the humanities and social sciences unaffected by this, as writers have been observed to explicitly highlight the significance of their work in literary studies and applied linguistics.

All this, of course, is the result of an explosion of publishing fueled by intensive audit regimes, where individuals are measured by the length of their resumes as much as the quality of their work. Metrics, financial rewards and career prospects have come to dominate the lives of academics across the planet, creating greater pressure, more explicit incentives and fiercer competition to publish. The rise of hyperbole in medical journals has been illustrated by Vinkers, Tijdink and Otte, who found a nine-fold increase in 25 ‘positive-sounding words’ such as novel, amazing, innovative and unprecedented in PubMed journals between 1974 and 2014. Looking at four disciplines and more hype terms, Kevin Jiang and I found twice as many ‘hypes’ in every paper compared with 50 years ago. While increases were most marked in the hard sciences, the two social science fields studied, sociology and applied linguistics, out-hyped the sciences in their use of these words.

This article by Ken Hyland is taken from The London School of Economics and Political Science’s LSE Impact Blog, a Social Science Space partner site, under the title “Crucial! New! Essential! – The rise of hype in research and impact assessment.” It originally appeared at the LSE Press blog.

While hype now seems commonplace in the scramble for attention and recognition in research articles, we shouldn’t be surprised to find it appearing in other genres where academics are evaluated. Foremost among these are the impact case studies submitted in bids for UK government funding. Now familiar to academics around the globe, evaluation of research to determine funding to universities was extended beyond ‘scientific quality’ to include its real-world ‘impact’ with the Research Excellence Framework in 2014. More intrusive, time-consuming, subjective and costly than judging the contribution of published outputs, the ‘impact agenda’ seeks to ensure that funded research offers taxpayers value for money in terms of social, economic, environmental or other benefits.

For some observers, the fact that impact is being more highly valued is a positive step away from the Ivory Tower perception of research for research’s sake. A serious problem, however, is how the evaluating body, the University Grants Council, chose to define and ‘capture’ impact. The structure they hit upon, the submission of a 4-page narrative case study supporting claims for positive research outcomes, was almost an invitation to embellish submissions. With over £4 billion on offer, this is an extremely competitive and high-stakes genre, so it isn’t surprising that writers rhetorically beef up their submissions.

Our analysis of 800 ‘impact case studies’ submitted to the 2014 REF shows the extent of hyping. Using the cases on the REF website, we searched eight target disciplines for 400 hype terms. We found that hyping is significantly more common in these impact cases than in research articles, with 2.11 terms per 100 words compared with 1.55 in leading articles using the same inventory of items. Our spectrum of eight focus disciplines shows that writers in the social sciences were prolific hypers, but by no means the worst offenders. In fact, hyping increases along a scale of abstraction, with the most rarefied fields taking the greatest pains to ensure that evaluators get the message. Research in physics and chemistry, for instance, is likely to be several steps further removed from real-world applications than research in social work and education.
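The density measure described above (hype terms per 100 words of text) can be sketched in a few lines of Python. This is purely illustrative: the five-item term list below is a hypothetical sample standing in for the study’s 400-term inventory, and the tokenization is a simplification of whatever the actual corpus tools used.

```python
# Illustrative sketch of a hype-terms-per-100-words density measure.
# HYPE_TERMS is a tiny hypothetical sample, not the study's 400-item inventory.
import re

HYPE_TERMS = {"novel", "unprecedented", "crucial", "unique", "remarkable"}

def hype_density(text: str, terms: set = HYPE_TERMS) -> float:
    """Return the number of hype terms per 100 words of `text`."""
    words = re.findall(r"[a-z]+", text.lower())  # crude word tokenizer
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in terms)
    return 100 * hits / len(words)

sample = "This novel and unprecedented method yields a crucial, unique result."
print(round(hype_density(sample), 2))  # 4 hype terms in 10 words -> 40.0
```

Applied to a full case study rather than a toy sentence, a value around 2.11 would match the average the study reports for impact submissions.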

We also found differences in the ways that fields hype their submissions. Terms emphasising certainty dominate the most frequent items, comprising almost half the forms overall. Words such as significant, important, strong and crucial serve to boost the consequences of the claims made with a commitment that almost compels assent. Apart from certainty, the fields deviated in their preferred hyping styles. The STEM disciplines tended to employ novelty as a key aspect of their persuasive armoury, with references to the originality and inventiveness of the work (first, timely, novel, unique). Social scientists, on the other hand, stressed the contribution made by their research, referring to its value, outcomes or take-up in the real world (essential, effective, useful, critical, influential). So, while social scientists may not be the most culpable players, they have, unsurprisingly, bought into the game and are heavily invested in rhetorically boosting their work.

The study shows something of how impact criteria have been interpreted by academics and how they influence their narrative self-report submissions. The results point to serious problems in the use of individual, evidence-based case studies as a methodology for evaluating research impact, as they not only promote the selection of particularly impressive examples but encourage hyping in presenting them. The obvious question that arises is whether we should regard authors as being in the best position to make claims for their work. Essentially, impact is a receiver experience: how target users understand the effect of an intervention. We have to wonder, then, how far, even in a perfect world, the claims of those with a vested interest in positive outcomes should be relied on. While the impact agenda is undoubtedly well intentioned, perhaps there are lessons to be learnt here. One might be that it is never a good idea to set up evaluative goals before you are sure how they might best be measured.

Ken Hyland is an honorary professor at the University of East Anglia School of Education and Lifelong Learning.
