
How can Social Media Help After Terror Attacks?

June 13, 2019

The deadly attack on two mosques in Christchurch, New Zealand, in which 50 people were killed and many others critically injured, was streamed live on Facebook by the man accused of carrying it out. It was then quickly shared across social media platforms.

Versions of the live-stream attack video stayed online for a worrying amount of time. A report by the Guardian found that one video stayed on Facebook for six hours and another on YouTube for three. For many, the quick and seemingly unstoppable spread of this video typifies everything that is wrong with social media: toxic, hate-filled content which goes viral and is seen by millions.

This article by Bertie Vidgen originally appeared at The Conversation, a Social Science Space partner site, under the title “Four ways social media platforms could stop the spread of hateful content in aftermath of terror attacks.”

But we should avoid scapegoating the big platforms. All of them (Twitter, Facebook, YouTube, Google, Snapchat) are signed up to the European Commission’s #NoPlace4Hate program. They are committed to removing illegal hateful content within 24 hours, a time period which is likely to come down to just one hour.

Aside from anything else, they are aware of the reputational risks of being associated with terrorism and other harmful content (such as pornography, suicide, paedophilia) and are increasingly devoting considerable resources to removing it. Within 24 hours of the Christchurch attack, Facebook had banned 1.5m versions of the attack video, 1.2m of which it blocked at the point of upload.

Monitoring hateful content is always difficult and even the most advanced systems accidentally miss some. But during terrorist attacks the big platforms face particularly significant challenges. As research has shown, terrorist attacks precipitate huge spikes in online hate, overrunning platforms’ reporting systems. Lots of the people who upload and share this content also know how to deceive the platforms and get round their existing checks.

Mourning the victims of the Christchurch mosque attacks. EPA-EFE

So what can platforms do to take down extremist and hateful content immediately after terrorist attacks? I propose four special measures that specifically target the short-term influx of hate.

1. Adjust the sensitivity of the hate detection tools

All tools for hate detection have a margin of error. The designers have to decide how many false negatives and false positives they are happy with. False negatives are bits of content which are allowed online even though they are hateful, and false positives are bits of content which are blocked even though they are non-hateful. There is always a trade-off between the two when implementing any hate detection system.

The only way to truly ensure that no hateful content goes online is to ban all content from being uploaded – but this would be a mistake. Far better to adjust the sensitivity of the algorithms so that people are allowed to share content but platforms catch a lot more of the hateful stuff.
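In practice, this comes down to a decision threshold on the classifier's output. As a rough illustration only (the classifier below is a toy keyword heuristic and the threshold values are assumptions, not any platform's real settings), here is how such a threshold might be loosened during a crisis:

```python
# A rough illustration, not any platform's real system: the classifier is a
# toy keyword heuristic and the threshold values are assumptions.

NORMAL_THRESHOLD = 0.90  # everyday operation: keep false positives low
CRISIS_THRESHOLD = 0.60  # after an attack: accept more false positives
                         # in exchange for fewer false negatives


def hate_probability(text: str) -> float:
    """Stand-in for a trained model; returns a score between 0 and 1."""
    strong_signals = {"kill them", "exterminate"}
    weak_signals = {"attack video", "invaders"}
    lowered = text.lower()
    if any(phrase in lowered for phrase in strong_signals):
        return 0.95
    if any(phrase in lowered for phrase in weak_signals):
        return 0.70
    return 0.05


def should_block(text: str, crisis_mode: bool = False) -> bool:
    """Lowering the threshold during a crisis catches more borderline content."""
    threshold = CRISIS_THRESHOLD if crisis_mode else NORMAL_THRESHOLD
    return hate_probability(text) >= threshold


borderline_post = "Everyone needs to see the attack video"
print(should_block(borderline_post, crisis_mode=False))  # False: score 0.70 < 0.90
print(should_block(borderline_post, crisis_mode=True))   # True: score 0.70 >= 0.60
```

The borderline post slips through under the everyday threshold but is caught once the sensitivity is raised, at the cost of also catching more innocuous content.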

2. Enable easier takedowns

Hateful content which does get onto the big platforms, such as Twitter and Facebook, can be flagged by users. It is then sent for manual review by a content moderator, who checks it against predefined guidelines. Content moderation is a fundamentally difficult business, and the platforms aim to minimize inaccurate reviews. Often this is done by using the “stick”: according to some investigative journalists, moderators working on behalf of Facebook risk losing their jobs unless they maintain high moderation accuracy scores.

During attacks, platforms could introduce special procedures so that staff can quickly work through content without fear of low performance evaluation. They could also introduce temporary quarantines so that content is flagged for immediate removal but then re-examined at a later date.
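A temporary quarantine could be sketched as a simple queue: flagged content is hidden straight away and stamped with a date for re-review. The Python below is a minimal, hypothetical illustration; the 48-hour window and the pipeline around it are assumptions, not a description of any platform's actual system.

```python
# A minimal sketch of a temporary quarantine queue, assuming a hypothetical
# moderation pipeline: flagged content is hidden immediately and scheduled
# for a second, unhurried review once the crisis period has passed.

from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone


@dataclass
class QuarantinedItem:
    content_id: str
    flagged_at: datetime
    reexamine_after: datetime


@dataclass
class QuarantineQueue:
    review_delay: timedelta = timedelta(hours=48)  # assumed re-review window
    items: list[QuarantinedItem] = field(default_factory=list)

    def quarantine(self, content_id: str) -> None:
        """Take the content down now; schedule a full manual review later."""
        now = datetime.now(timezone.utc)
        self.items.append(QuarantinedItem(content_id, now, now + self.review_delay))

    def due_for_review(self) -> list[QuarantinedItem]:
        """Content whose quarantine window has elapsed and needs a second look."""
        now = datetime.now(timezone.utc)
        return [item for item in self.items if item.reexamine_after <= now]


queue = QuarantineQueue()
queue.quarantine("post-12345")
print(len(queue.due_for_review()))  # 0 until the 48-hour window has passed
```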

3. Limit the ability of users to share

Sharing is a fundamental part of social media, and platforms actively encourage sharing both on their sites (which is crucial to their business models) and between them, as it means that none of them miss out when anything goes viral. But easy sharing also brings with it risks: research shows that extreme and hateful content is imported from niche far-right sites and dumped into the mainstream where it can quickly spread to large audiences. And during attacks it means that anything which gets past one platform’s hate detection software can be quickly shared across all of the platforms.

Platforms should limit the number of times that content can be shared within their site and potentially ban shares between sites. This tactic has already been adopted by WhatsApp, which now limits the number of times content can be shared to just five.
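As a rough sketch of what such a cap might look like (the in-memory counter and the per-item limit are simplifications; real platforms track shares per user and per conversation), consider:

```python
# A minimal sketch of a per-item share cap, loosely modelled on the five-share
# limit mentioned above. The in-memory counter is a simplification; a real
# platform would track shares in a shared datastore with per-user context.

from collections import defaultdict

SHARE_LIMIT = 5  # assumed cap, mirroring the WhatsApp example

share_counts: defaultdict[str, int] = defaultdict(int)


def try_share(content_id: str) -> bool:
    """Allow the share only while the content is still under the cap."""
    if share_counts[content_id] >= SHARE_LIMIT:
        return False
    share_counts[content_id] += 1
    return True


for attempt in range(1, 8):
    print(attempt, try_share("video-abc"))  # attempts 6 and 7 are refused
```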

4. Create shared databases of content

All of the big platforms have very similar guidelines on what constitutes “hate” and will be trying to take down largely the same content following attacks. Creating a shared database of hateful content would ensure that content removed from one site is automatically banned from another. This would not only avoid needless duplication but enable the platforms to quickly devote resources to the really challenging content that is hard to detect.
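Such a database would work by fingerprinting removed content and checking every new upload against the shared set of fingerprints. The sketch below is purely illustrative: production systems rely on perceptual hashing so that re-encoded copies still match, whereas the plain cryptographic hash used here only catches identical files.

```python
# A minimal sketch of a cross-platform takedown database. Production systems
# use perceptual hashes so that re-encoded copies still match; a plain
# SHA-256 digest is used here purely for illustration.

import hashlib

shared_banned_hashes: set[str] = set()  # stands in for a database shared across platforms


def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()


def register_takedown(content: bytes) -> None:
    """Called by whichever platform removes the content first."""
    shared_banned_hashes.add(fingerprint(content))


def is_banned(content: bytes) -> bool:
    """Checked by every platform at upload time."""
    return fingerprint(content) in shared_banned_hashes


register_takedown(b"raw bytes of the attack video")
print(is_banned(b"raw bytes of the attack video"))  # True: identical copy is blocked
print(is_banned(b"slightly re-encoded copy"))       # False: why perceptual hashing matters
```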

Removing hateful content should be seen as an industry-wide effort and not a problem each platform faces individually. Shared databases like this already exist in a limited form, but efforts need to be hugely stepped up and their scope broadened.

In the long term, platforms need to keep investing in content moderation and developing advanced systems which integrate human checks with machine learning. But there is also a pressing need for special measures to handle the short-term influx of hate following terrorist attacks.


In my research I study the drivers and patterns of xenophobic behaviour on social media. I am currently a PhD candidate at the University of Oxford and the Alan Turing Institute. Before that, I completed a BA in History and Politics at the University of Warwick (2013) and an MA in Ideology and Discourse Analysis at the University of Essex (2014).
