
Impact and Assessing Public Engagement

July 18, 2019

A survey team photo taken at a public engagement workshop in Salavan, Lao People’s Democratic Republic. (Photo: Amphayvone Thepkhamkong)
This article by Marco J. Haenssgen originally appeared on the LSE Impact of Social Sciences blog as “Developing a finer grained analysis of research impact: Can we assess the wider effects of public engagement?” and is reposted under the Creative Commons license (CC BY 3.0).

Authors writing for the LSE Impact Blog have often argued for the relevance and importance of public engagement, which remains high on researchers’ and funders’ agendas, especially in the medical sciences. The UK Medical Research Council (MRC), for instance, advises that “effective public engagement is a key part of the MRC’s mission and all MRC-funded establishments are encouraged to dedicate resources to support this area of work.” Over the 2005–2018 period, the Wellcome Trust also awarded more than £30 million for dedicated public engagement projects.

However, much can go wrong in public engagement. Some observers have stressed the risks to researchers through the misrepresentation of scientific research, the possible reputational consequences of an active social media presence, or the harm that can be caused by toxic comments online. Target and non-target groups can also experience negative consequences and outright harm. As a form of health communication, public engagement can likewise create misunderstanding, resistance, or actions with problematic and unanticipated consequences. Notably, in Denmark, efforts to raise public awareness of drug resistance led to a leafleting campaign urging readers not to have sex with pig farmers.

After several years of practice, can we say with confidence what public engagement has achieved, where it may be a good or a bad use of money, and what design principles we should employ to minimise its unintended consequences? I would argue the answer is no.


This post draws on the author’s co-authored paper, “Translating antimicrobial resistance: a case study of context and consequences of antibiotic-related communication in three northern Thai villages,” published in Palgrave Communications.


Methods for the evaluation of public engagement do exist, have been advocated on this blog, and initiatives like The Global Health Network have even established comprehensive evaluation databases. But the practical implementation of evaluation designs is often rudimentary (e.g. based on “evaluation forms” handed out during an event) and typically limited to the positive and intended outcomes of an activity. What if the seemingly successful activity was financially wasteful, undermined the coherence of a broader public engagement programme, led people to behave worse in areas that were not of interest to the researchers, or saw its positive effects evaporate immediately after the event? We should not only measure “impact”, with its positive connotations, but also “grimpact”: the unintended negative side-effects of research and public engagement.

To improve evaluation practice in health-related public engagement, we can look for guidance from development aid evaluation, which routinely uses five criteria to assess development projects and programmes (a brief illustrative sketch follows the list):

  1. Effectiveness: To what extent have our objectives been achieved? These objectives can pertain to the target population, but they can also address for instance collaborative relationships or new research insights.
  2. Efficiency: Operational efficiency considers whether resources were used appropriately to produce the activity; cost-effectiveness considers total costs relative to the population reached or per effective engagement; and allocative efficiency considers whether resources could have been employed more usefully to achieve the same goal.
  3. Impact: What are the positive and negative, intended and unintended consequences of the project, and the associated equity implications? Larger-scale programmes may also be assessed against broader societal-level impacts such as mortality or enrolment rates.
  4. Relevance: Do the engagement objectives correspond to target group requirements, national and global priorities and partner/donor policies? Relevance also addresses whether the activity suggested a plausible mechanism to achieve its objectives, and whether it aligned with parallel engagement activities.
  5. Sustainability: Are the effects and impacts likely to persist beyond the end of the activity?
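
To make the framework more tangible, here is a minimal, purely illustrative sketch (in Python) of how a single activity might be recorded against the five criteria. The field names and the weak/mixed/strong rating scale are assumptions of this sketch, not part of the criteria themselves.

```python
from dataclasses import dataclass

@dataclass
class EngagementEvaluation:
    """One engagement activity rated against the five evaluation criteria."""
    activity: str
    effectiveness: str   # were the stated objectives achieved?
    efficiency: str      # operational, cost- and allocative efficiency
    impact: str          # intended and unintended consequences, equity
    relevance: str       # fit with target-group needs and wider priorities
    sustainability: str  # will effects persist beyond the activity?

    def summary(self) -> str:
        criteria = ("effectiveness", "efficiency", "impact",
                    "relevance", "sustainability")
        return self.activity + ": " + ", ".join(
            f"{c}={getattr(self, c)}" for c in criteria)

# Hypothetical ratings mirroring the mixed assessment described below;
# the weak/mixed/strong scale is an assumption of this sketch.
workshops = EngagementEvaluation(
    activity="knowledge exchange workshops",
    effectiveness="strong",  # stated objectives were achieved
    efficiency="mixed",      # no reference values for £35 per participant
    impact="mixed",          # some detrimental behaviour change observed
    relevance="mixed",       # a global priority, less clearly a local one
    sustainability="weak",   # isolated activity, persistence doubtful
)
print(workshops.summary())
```

The value of such a record lies less in the ratings themselves than in forcing an explicit judgement on all five criteria rather than on effectiveness alone.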

To illustrate the application of these criteria, let us take the example of a recent interdisciplinary health behaviour research project about drug resistance in Southeast Asia, which involved knowledge exchange workshops with 150 participants in five villages in Thailand and Laos, an international photo exhibition showcasing traditional healing in Thailand with 500+ visitors, and social media work that reached 350,000 impressions on Facebook, Twitter, LinkedIn, and Reddit. The project collected survey data, interviews, observations, and oral and written feedback, all of which enabled an informal review of effectiveness, efficiency, relevance, impact, and sustainability. Our objectives were to (1) share information about drug resistance and local forms of treatment with our research participants, (2) learn from them about medicine use and health behaviours locally and internationally, and (3) spark interest in our research among the non-academic public.


On the face of it, we achieved these objectives (effectiveness). For example, survey data showed that workshop participants’ awareness of drug resistance was 30 percentage points higher three months after the event (compared with 17 percentage points in the villages more generally), and we received positive event feedback and extensive engagement with our social media campaigns (e.g. 12,900 engagements on Facebook and Twitter). The engagement also enabled us to formulate new research hypotheses based on insights from the workshop participants, and testimonials from exhibition visitors included statements such as “So enlightening and so inspiring – who knew medicine was so fun!” Yet, if we adhere to the five evaluation criteria, we cannot automatically consider the engagement a success merely because it achieved its stated goals.
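
As a hedged illustration of what that effectiveness figure does and does not tell us, the sketch below reads the quoted numbers as before-and-after changes and treats the village-wide figure as a rough comparison group; both readings are assumptions of this sketch.

```python
# Naive effectiveness check: difference between the awareness change among
# workshop participants and the village-wide change over the same period.
# This ignores selection effects and sampling error entirely.
participant_change_pp = 30  # percentage points, three months after the event
village_change_pp = 17      # percentage points, villages more generally

net_change_pp = participant_change_pp - village_change_pp
print(f"Net awareness change associated with the workshops: "
      f"~{net_change_pp} percentage points")
```

Even on this generous reading, the calculation speaks only to goal achievement, not to any of the other four criteria.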

The broader assessment was indeed more mixed once we went beyond effectiveness as goal achievement. For example, we also observed negative impacts: some villagers increased their antibiotic use in a potentially detrimental way, and one workshop participant even felt sufficiently informed about antibiotics to start selling them in her local grocery store. The relevance of the activities, against the backdrop of drug resistance being listed among the 10 threats to global health in 2019, might seem obvious to global health researchers and practitioners. In principle, this would entail a positive assessment on the relevance criterion, but drug resistance is less clearly a priority issue for rural populations that often face a range of livelihood constraints such as fluctuating incomes, discrimination, or the risk of droughts and floods. Nor can isolated engagement activities easily claim sustainable outcomes, which again weakens the overall assessment. (The costs of reaching the target groups ranged from £0.85 per 1,000 social media impressions and £16 per exhibition visitor to £35 per workshop participant, but we cannot judge efficiency in the absence of more extensive reference values.)
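
For readers curious how those unit costs translate into budgets, here is a quick back-of-the-envelope sketch; the implied totals are derived from the quoted unit costs and reach figures, so they are approximations rather than reported budget lines.

```python
# Implied total spend per activity, reconstructed from the unit costs and
# reach figures quoted above. The "500+" exhibition figure is a lower bound.
reach = {
    "social media (per 1,000 impressions)": (350_000 / 1_000, 0.85),
    "photo exhibition (per visitor)":       (500, 16.0),
    "workshops (per participant)":          (150, 35.0),
}
for activity, (units, unit_cost_gbp) in reach.items():
    print(f"{activity}: ~£{units * unit_cost_gbp:,.0f}")
# social media: ~£298; photo exhibition: ~£8,000; workshops: ~£5,250
```

The exercise underlines the author’s point: without reference values for what comparable activities usually cost, these figures alone cannot settle the efficiency question.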

Goal achievement, or “effectiveness”, should therefore be only one of the criteria, alongside efficiency, impact, relevance, and sustainability, against which we evaluate public engagement. To improve evaluation practice and build a knowledge base of the benefits and risks of public engagement, funders and academic institutions should support researchers with teams of experienced external evaluators who accompany public engagement projects from the design phase onward, if only for a sample of projects. While these evaluations should be independent, researchers and evaluators could work closely together to inform each other’s work, subsequently co-own the evaluation findings, and publish them jointly to add to the body of public engagement knowledge.


Marco J Haenssgen is an assistant professor in global sustainable development at the University of Warwick and an associate fellow at the Institute of Advanced Study. He is a social scientist with a background in management and international development and experience in aid evaluation, intergovernmental policy making, and management consulting. His research emphasizes marginalization and health behavior in the context of health policy implementation, technology diffusion, and antimicrobial resistance with a geographical focus on Southeast Asia.
