How Was Social Media Cited in Impact Case Studies?

June 7, 2018

Our previous post discussed the gap between rhetoric and reality regarding social media and its role in achieving impact, wondering whether institutions were investing too much hope in it as a solution to the problem of demonstrating research impact. Concerned by this gap, we began an investigation into how social media was invoked in impact case studies for REF 2014. We reasoned that the best way to understand how the relationship between social media and impact was being conceived within the academy was to look to the case studies as institutional records in which formal claims were made about a causal relationship. Here, we share some of our initial findings and our worries about the trends illustrated by this evidence. Our interest is in how social media is being used by researchers in pursuit of impact, how this use is described, and how the causal relationship between the activity and the outcome is accounted for. Which platforms are invoked in case studies? What claims are being made about them? How are these claims being substantiated?

This article by Katy Jordan and Mark Carrigan originally appeared on the LSE Impact of Social Sciences blog as “How was social media cited in 2014 REF Impact Case Studies?” and is reposted under the Creative Commons license (CC BY 3.0).

To this end, we conducted an exploratory analysis using the online database of REF 2014 impact case studies. The database comprises a total of 6,637 non-redacted case studies. A series of queries was run in February 2017 to identify case studies containing references to a wide range of social media platforms. The list of search terms was constructed by combining lists of popular social media platforms from a range of sources, including studies with a focus on academic social media. In total, 42 terms were searched, though 13 yielded no records. The results were exported, combined, and duplicate records removed, leaving a sample of 1,675 case studies, approximately 25% of the database. This is a mixed-methods project: initially, descriptive statistics were used to gain an overview of the platforms mentioned in case studies and of differences according to categories within the data. This was followed by exploratory qualitative analysis of a small sub-sample of cases; an open coding approach was applied to a random sub-sample of 100 case studies in order to explore the ways in which social media platforms were referred to in this context.
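For readers curious how a sample like this might be assembled in practice, the sketch below combines per-term search exports and removes duplicate case studies. It is a minimal illustration only: the results/ directory, file layout, and CaseStudyId column are our assumptions for the sake of the example, not the REF database's actual export format.

```python
# Minimal sketch of assembling the sample: combine the per-term search
# exports and drop duplicate case studies. Assumes (hypothetically) that
# each term's results were saved to results/<term>.csv with a
# "CaseStudyId" column identifying the case study.
from pathlib import Path

import pandas as pd

frames = []
for csv_path in Path("results").glob("*.csv"):
    df = pd.read_csv(csv_path)
    df["search_term"] = csv_path.stem  # record which query matched
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)

# One row per (case study, term) pair, for per-platform counts later.
mentions = combined.drop_duplicates(subset=["CaseStudyId", "search_term"])

# Unique case studies across all terms (1,675 in the study itself).
sample = combined.drop_duplicates(subset=["CaseStudyId"])
print(f"{len(sample)} unique case studies in the sample")
```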

Which social media are being used?

While “social media” has become a near-universally recognised category, we still encounter a striking lack of unanimity about what falls within it. The most popular platforms are widely recognised, but some would question whether communications tools like WhatsApp or Telegram should be categorised in this way, with the same applying to predecessors of contemporary social media such as email lists and discussion forums. For this reason, we avoided a strict definition and cast our net wide, taking inspiration from the platforms specified in a range of online lists to arrive at a (loose) operational definition: online software involving profiles and/or the sharing of user-generated content.

As well as “social media” itself, we searched for 41 social media platforms and types of platform. While 29 of these produced results, we found seven keywords which occurred in 200 case studies or more: “blog” (678), “Google Scholar” (352), “YouTube” (348), “social media” (278), “Twitter” (233), “Facebook” (227), and “podcast” (214). Beyond these terms we saw a steep drop in the occurrence of relevant keywords, indicating a “long tail” distribution.
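Continuing the hypothetical sketch above, the counts behind this distribution can be derived from the `mentions` frame; sorting them makes the long tail visible.

```python
# Count distinct case studies per search term (using the hypothetical
# `mentions` frame from the earlier sketch) and sort to expose the
# long-tail distribution described above.
term_counts = (
    mentions.groupby("search_term")["CaseStudyId"]
    .nunique()
    .sort_values(ascending=False)
)
print(term_counts.head(7))  # blog, Google Scholar, YouTube, ...
print((term_counts >= 200).sum(), "terms appear in 200 or more case studies")
```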

[Figure: Number of case studies within the REF 2014 impact case studies database in which each social media term is mentioned.]

While the overall trend of approximately 25% of case studies containing references to social media was broadly consistent across different institutions, striking differences emerged according to discipline, with social media featuring in a much greater proportion of case studies in Panel D, the Arts and Humanities.

[Figure: Number of case studies included in the sample by REF panel (subject area), compared to the database overall.]

The prevalence of certain platforms also differed according to different panels, with the most widely used platforms showing the clearest differences (Table 2).

[Table 2: Percentage of case studies within the sample, by REF panel, that mention particular social media terms.]

Given that a substantially higher proportion of Panel D (Arts and Humanities) case studies mention social media overall, it is not surprising that this panel also leads in terms of mentioning most of the main platforms. However, when broken down according to specific platforms rather than “social media” as a whole, starker contrasts emerge. In the cases of blogging and podcasting, Panel D is clearly ahead, while Panels D and A (Biological and Medical Sciences) show similar levels of use of Twitter and YouTube. Panel B (Physical and Mathematical Sciences) shows some of the lowest levels overall, while making extensive use of Google Scholar.
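A Table 2-style per-panel breakdown can be reconstructed along the same hypothetical lines, assuming a full-database frame (here called all_cases) records each case study's panel. Both that frame and its column names are our assumptions about the export, not its actual schema.

```python
# Continues the earlier sketches. Reconstructs a Table 2-style breakdown:
# the percentage of each panel's case studies mentioning each term.
# `all_cases` and its "Panel" column are hypothetical assumptions about
# the full-database export, not its actual schema.
all_cases = pd.read_csv("ref2014_all_case_studies.csv")  # hypothetical export

panel_totals = all_cases.groupby("Panel")["CaseStudyId"].nunique()

by_panel = (
    mentions.merge(all_cases[["CaseStudyId", "Panel"]], on="CaseStudyId")
    .groupby(["search_term", "Panel"])["CaseStudyId"]
    .nunique()
    .unstack(fill_value=0)
)

# Dividing by the Series aligns on the panel columns (A-D).
pct = (by_panel / panel_totals * 100).round(1)
print(pct.loc[["blog", "Twitter", "YouTube", "Google Scholar"]])
```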

What claims are being made about them?

We are still undertaking the qualitative component of our investigation. However, several recurrent themes have emerged from our initial analysis, including:

  • Tracking of traditional scholarly publishing: citation counts and rankings through sites such as Google Scholar and Microsoft Academic. These are being incorporated into the evaluative infrastructures of higher education.
  • Mainstream media reflected through social media: for example, television coverage secondarily made available through YouTube, or academic work being featured in a newspaper’s blog. Social media is being used to expand upon and archive coverage in broadcast and print media.
  • Other social media channels: a wide range of third-party organisations (not led by the academics involved in the case studies themselves) which may have featured or referred to the research underpinning the case study. Examples include institutional, political, and corporate social media, and Wikipedia pages.
  • Academic-led dissemination strategies: a wide range of social media engagement led by the academics themselves, through either personal or project-based accounts. The main examples include blogs, Twitter accounts, and YouTube channels.
  • Social media used as a way of involving participants in research: instances found in the subsample involved using social media to directly communicate with participants, such as holding online discussions and soliciting feedback through social media, and also in co-production of research outputs including blog posts and YouTube videos.
  • Social media as an application of research: a small but distinct theme, where social media platforms were cited as benefitting from the research reported in the case study; for example, YouTube videos made using technology developed through the underpinning research.
  • Quantifying impact: figures were often associated with social media mentions in the case studies. Metrics were wide-ranging, and several different metrics could be associated with a single platform in different cases; examples included numbers of comments, followers, views, downloads, visits, participants, likes, and mentions (a rough extraction sketch follows this list).

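As an illustration of how such figures might be pulled out of case study text at scale, the sketch below matches numbers adjacent to common metric words. The pattern and metric vocabulary are illustrative heuristics of ours, not the coding scheme used in the study.

```python
# Rough sketch of the "quantifying impact" theme: extract numeric
# figures that sit next to a metric word in case study text. The
# pattern and vocabulary are illustrative, not the study's own coding.
import re

METRIC = re.compile(
    r"([\d,]+)\s*(views|followers|downloads|comments|likes|visits|mentions|participants)",
    re.IGNORECASE,
)

def extract_metrics(text):
    """Return (count, metric) pairs such as (50000, 'views')."""
    return [
        (int(num.replace(",", "")), kind.lower())
        for num, kind in METRIC.findall(text)
    ]

print(extract_metrics("The video attracted 50,000 views and 1,200 comments."))
# -> [(50000, 'views'), (1200, 'comments')]
```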
While the coding approached saturation, these themes may not be exhaustive, as they are derived from a sub-sample of cases at present. Within the 100 cases analysed, the relative prevalence of the themes varies considerably. For example, relatively few mention social media as a way of involving participants in research, while a surprisingly high proportion simply use social media mentions and metrics as a reflection of traditional scholarly impact or media appearances. We hope to build upon this initial sample analysis through a fuller exploration of the themes and their prevalence within the full body of case studies.

It is far from a surprise to find metrics featuring so prominently within the case studies. Each of the platforms mentioned within them offers a range of measures through which visibility and engagement are quantified, inevitably producing a temptation to use these to make claims about the impact of activity on the platform. This only becomes problematic when these measures are cited without context, as if the number alone provides sufficient grounds to establish the changes brought about by the digital engagement. At risk of stating the obvious, digital engagement is at most a preliminary to social impact. These platforms offer exciting possibilities for those seeking to get their research “out there”, but doing this effectively requires a nuanced understanding of when online dissemination constitutes meaningful engagement, as well as of the potential for impact to ensue from it. The metrics which platforms provide can act as useful indicators, but this requires moving beyond the headline figures and analysing the composition of audiences and the trajectories of engagement.

What conclusions can we draw from this?

Regular Impact Blog readers will be familiar with complaints about the difficulty of operationalising the concept of “impact”. However, our fear is that the easily accessible measures which platforms provide, built into their architecture in order to encourage ever-increasing user engagement, offer a faux-objectivity that will be drawn upon to solve the problem of impact. The evidence we have compiled does not provide a definitive basis upon which to establish that this is taking place, but it certainly gives weight to the initial concerns which motivated the project. Going forward, we intend to drill further into how social media figures in the language of justification for claiming impact, as well as how this varies between disciplines. We can see clear differences between the panels even at this stage, reflecting different values and conceptions of impact as well as the relative strengths and weaknesses of the component disciplines. How social media is invoked within these case studies represents a crucial vector through which it is being incorporated into the evaluative infrastructure of the academy.

It is also important to consider the time lag likely at work. The impact case studies were submitted for REF 2014 in late 2013, and were based on research which had already been undertaken and completed. This goes some way to explaining the frequency with which different social media platforms were invoked in the case studies. Snapchat was not mentioned at all, Pinterest was mentioned only twice, and Instagram only once. Snapchat was only founded in 2011 and did not hit 100 million active users until the end of 2014. While it has recently been the subject of intense hype, largely due to its penetration amongst a younger demographic increasingly turning away from Facebook, it was either non-existent or in its infancy at the time much of the research represented in the case studies was conducted. Pinterest was founded slightly earlier, in March 2010, although sustained growth in its popularity did not begin until 2013. Instagram was founded a little later, in October 2010, though its popularity was at a fraction of its current levels until it was acquired by Facebook for $1 billion in April 2012. There are prima facie grounds to expect all these platforms to be invoked in greater numbers during the next round of impact case studies, if for no other reason than that they have hundreds of millions of additional users who constitute an inviting target for engagement activity. It is also notable that WhatsApp, itself acquired by Facebook for $19.3 billion in February 2014, does not figure in the case studies we analysed. While some would question its categorisation as “social media”, its user base of over one billion and its widely used, if limited, group interaction capacities mean it could figure more prominently in future case studies, at least if anecdotal evidence of its use by academics is to be believed. It is also worth noting that the weighting given to impact case studies has been increased from 20% to 25% for the forthcoming REF 2021.

It matters how we talk about the impact of social media in these exercises because this becomes the business case for encouraging digital engagement by researchers and supporting them in this activity. If a narrow, instrumental understanding of social media takes hold, it risks becoming the common-sense view within the sector. This is a problem because it misses much of what is valuable in researchers using social media, particularly the possibility these platforms offer to rebuild collegiality within institutions dominated by competitive individualism. There are clear lessons to be found in the emerging literature on digital engagement within the academy; see Carrigan (2016), Daniels and Thistlethwaite (2016), Mollett et al. (2017), and Reed (2016), for instance. Effective engagement is a matter of building relationships over time, as opposed to simply making material public via social media. An excessive fixation on metrics “bakes in” a dissemination model of social media, obscuring the relational value it can create (and the capacity for impact ensuing from this) in pursuit of ever more impressive engagement metrics, in spite of the ambiguity about what, if anything, these entail for real-world impact.

Underlying our inquiry is a belief that social media represent powerful tools, capable of facilitating a more publicly orientated and impactful scholarship if deployed well. However, the often naive and inaccurate claims being made about social media and impact seem likely to hinder this uptake, creating unrealistic expectations liable to generate disappointment amongst researchers and managers alike. The rush to adapt to the impact agenda, seeking to transform the research culture within institutions, risks establishing a limited (and limiting) view of social media for academics as tools to get research “out there”. If framed in such a narrow, technical way, as devices for dissemination, then the more subtle possibilities they offer for building relationships, generating solidarity, and facilitating co-production are liable to be marginalised in a rush for bigger, better, and more.

If social media is to be part of the research assessment landscape, it should be an object of serious discussion in its own right. Platforms are increasingly on the political agenda around the world, yet debate about their effective institutionalisation within higher education remains in its infancy. The issues we have raised here are a small part of a much bigger picture, encompassing matters such as scholarly publishing, critical social science, and human agency. However the institutionalisation of social media as part of research assessment is a crucial factor in the unfolding of this new landscape of research. Furthermore, it is one that has tended to be overlooked until now and we are convinced that much hinges on whether it is done well or badly.


Katy Jordan is currently a visiting fellow in the Institute of Educational Technology at the Open University, UK. Her research interests focus on the intersection of the internet and higher education, and she has published research on topics including academic social networking sites, openness in education, massive open online courses, and semantic technologies for education.

Mark Carrigan is Digital Engagement Fellow at The Sociological Review Foundation and Postdoctoral Research Associate in the Faculty of Education at the University of Cambridge. His research explores the institutionalisation of social media within higher education, as well as the theoretical and methodological challenges posed by the proliferation of platforms. He is the author of Social Media for Academics, published by Sage in 2016.
