Getting a Handle on Both Societal and Scientific Impact
Impact assessments are widespread, from policy and business to scientific research. Increasingly, universities and other knowledge institutions feel the need (or pressure) to demonstrate their societal value, and there is plenty of scientific and policy literature that proposes how to go about this. Researchers and knowledge transfer professionals publish evaluation ‘frameworks’, ‘methods’ and ‘approaches’, and discuss their use in specific areas, such as health, agriculture or the educational sciences. Taking stock of all this are the almost annual review publications that sum up the state of the field.
What ‘impact’ is and how it works depends on many factors, ranging from disciplines and sectors to culture and geography. Specific methods have been developed in particular contexts, for example for a French agricultural public research institute (ASIRPA) or for English research council funding programs (Flows of Knowledge). This is desirable, as the criteria and methods of an evaluation should match the expectations and routines of particular scientific practices, as proposed, for example, in a recent blogpost on guiding principles for choosing between frameworks.
Besides context, there is another aspect of impact assessment that requires attention: the performativity of evaluation, or in other words, the way in which specific evaluation methods assume, and thereby produce, different understandings of research and impact. In a recent paper (open access) we explore this very question, building on previous constructivist studies that have argued that evaluation significantly shapes the contours of the object it evaluates. For 10 impact assessment approaches, we assessed what evaluation object of ‘research with societal value’ each creates. This is significant because it clarifies not only which method fits your situation best, but also what that method does to the situation. Evaluations are not innocent: by rewarding particular activities or achievements, they influence the perceptions and behavior of the people evaluated.
To guide thoughts and choices in the theory and practice of research evaluation, we therefore created an analytical framework that allows comparison of different methods. It consists of four aspects (a brief code sketch of how they might be recorded follows the list):
- Types and roles of actors in the production, exchange and evaluation of research: which types of people, things and organizations does the method assume participate in this process?
- Interaction mechanisms that are considered crucial to the production of societal value. Taking our cue from knowledge utilization and transfer studies, we distinguished between linear, cyclical and co-production models, with increasingly blurry boundaries between knowledge producers and users.
- Concepts of societal value that assessment methods imply. Whether they operationalize it as research results with potential use (product or output), as adoption or implementation in practice (use or outcome), or as benefits resulting from this use matters a great deal to what counts as impact.
- Lastly, we use these three aspects to highlight how a method shapes the relationships between the scientific value and the societal value of research. Whereas some methods treat these as distinct characteristics, others deal with them in an integrated way because the underlying processes strongly overlap.
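To make the framework more tangible, here is a minimal sketch of how its four aspects could be recorded for side-by-side comparison of methods, written in Python. All class, field and enum names, and the two example entries, are our own illustrative assumptions, not the paper's actual coding scheme or data.

```python
from dataclasses import dataclass
from enum import Enum


class InteractionModel(Enum):
    """Knowledge-utilization models, ordered by how blurry the boundary
    between knowledge producers and users becomes (aspect 2)."""
    LINEAR = "linear"
    CYCLICAL = "cyclical"
    CO_PRODUCTION = "co-production"


class ValueConcept(Enum):
    """Operationalizations of societal value (aspect 3)."""
    PRODUCT = "product/output"   # research results with potential use
    USE = "use/outcome"          # adoption or implementation in practice
    BENEFIT = "benefit"          # gains realized from that use


@dataclass
class AssessmentMethod:
    """One row in a side-by-side comparison of impact assessment methods."""
    name: str
    actors: set[str]                   # aspect 1: assumed participants
    interaction: InteractionModel      # aspect 2: how societal value arises
    value_concepts: set[ValueConcept]  # aspect 3: what counts as impact
    integrated_values: bool            # aspect 4: scientific and societal
                                       # value treated as one overlapping
                                       # process?


# Purely illustrative entries: not the paper's assessment of these methods.
methods = [
    AssessmentMethod(
        name="ASIRPA",
        actors={"research institute", "industry", "government"},
        interaction=InteractionModel.CO_PRODUCTION,
        value_concepts={ValueConcept.PRODUCT, ValueConcept.USE,
                        ValueConcept.BENEFIT},
        integrated_values=True,
    ),
    AssessmentMethod(
        name="Flows of Knowledge",
        actors={"funding program", "researchers", "knowledge users"},
        interaction=InteractionModel.CYCLICAL,
        value_concepts={ValueConcept.PRODUCT, ValueConcept.USE},
        integrated_values=False,
    ),
]

# Filtering the list then makes comparative questions mechanical, e.g.:
integrated = [m.name for m in methods if m.integrated_values]
```

Encoding methods this way turns comparison into simple filtering and grouping, which is how patterns like those discussed below can be surfaced.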
Comparing 10 methods along these lines led to some surprising insights. For example, it appeared that methods which look at scientific practice at higher levels of aggregation (e.g. entire institutes or funding programs) take more dimensions of societal value into account than those that stay closer to a particular research process. This points to a pertinent and open question: can different concepts of societal value (product, use and/or benefit) apply at all scales, from individual researchers and research groups to institutions and funding programs?
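As a toy illustration of how such a pattern can be surfaced, the snippet below groups hypothetical methods by their level of aggregation and averages how many value concepts each covers. The level labels, method names and counts are invented for the example; they are not data from our study.

```python
from collections import defaultdict

# Hypothetical coverage data: method name -> (aggregation level,
# number of value concepts covered: product, use and/or benefit).
coverage = {
    "Method A": ("individual project", 1),
    "Method B": ("research group", 2),
    "Method C": ("institute", 3),
    "Method D": ("funding program", 3),
}

by_level: dict[str, list[int]] = defaultdict(list)
for _name, (level, n_concepts) in coverage.items():
    by_level[level].append(n_concepts)

for level, counts in by_level.items():
    print(f"{level}: mean value concepts covered = "
          f"{sum(counts) / len(counts):.1f}")
```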
We also found that epistemological assumptions about knowledge production (whether it takes place in isolation from societal parties, in interaction with them, or even as co-creation between academics and non-academics) strongly shape the boundary between the scientific and societal value of research. We summarized this correlation in the figure below.
It is not surprising that many hold on to an analytic distinction between the scientific and the societal value of research. There is a strong scientometric tradition of mapping the scientific value of science based on citation data, and policy bodies have explicitly requested societal impact assessments of research units in addition to this. Nevertheless, this distinction is at odds with many empirical studies of scientific practice in the constructivist tradition of science and technology studies, which show a much less clear divide between science and society.
The heterogeneity of the actor-networks responsible for producing new knowledge complicates the relationship between the scientific and societal value of research. However, the practical accountability needs behind many evaluation methods color views of the impact process: evaluation typically focuses first on research practices, and only a few methods place the process of societal change at the center. An integrated concept of scientific and societal value could help encourage doing what we value most, instead of doing what counts.
The importance of the societal impact of publicly funded scientific research is likely to increase in the coming years. This calls for a shared discourse in policy, practice and science studies research that enables collective critical reflection. If researchers, research managers, policy makers and other stakeholders fail to discuss what societal value entails, and what effects evaluating it will have, the potential for unintended consequences is magnified. Ideally, evaluation is not separated from research and impact as an afterthought, but serves as an important learning moment in knowledge (co)production. We hope that our work can help guide such discussions and create an environment that enables collective learning: not only about how impact works, but also about what evaluation objects and behaviors impact assessments produce.