Research Ethics

No Longer the Age of Consent: Facebook’s Emotional Manipulation Study

July 1, 2014

Significant concerns have been raised about the ethics of research carried out by Facebook after it emerged that the company had manipulated the news feeds of thousands of users.

In 2012 the social media giant conducted a study on 689,003 users, without their knowledge, to see how their posting changed when it systematically removed either some positive or some negative posts by others from their news feeds over a single week.


This article by David Hunter originally appeared at The Conversation, a Social Science Space partner site, under the title “Consent and ethics in Facebook’s emotional manipulation study”

At first Facebook’s representatives seemed quite blasé about anger over the study, treating it primarily as a data privacy issue which they considered had been handled well.

“There is no unnecessary collection of people’s data in connection with these research initiatives and all data is stored securely,” a Facebook spokesperson said.

In the paper, published in the Proceedings of the National Academy of Sciences, the authors say they had “informed consent” to carry out the research as it was consistent with Facebook’s Data Use Policy, which all users agree to when creating an account.

One of the authors has this week defended the study process, although he did apologize for any upset it caused, saying: “In hindsight, the research benefits of the paper may not have justified all of this anxiety.”

Why all the outrage?

So why are Facebook, the researchers and those raising concerns in academia and the news media so far apart in their opinions?

Is this just standard questionable corporate ethics in practice or is there a significant ethical issue here?

I think the source of the disagreement really is the consent (or lack thereof) in the study, so I will disentangle the concerns about consent and why they matter.

There are two main things that would normally be taken as needing consent in this study:

  1. accessing the data
  2. manipulating the news feed.

Accessing the data

This is what the researchers and Facebook focused on. They claimed that agreeing to Facebook’s Data Use Policy when you sign up to Facebook constitutes informed consent. Let’s examine that claim.

We use the information we receive about you […] for internal operations, including troubleshooting, data analysis, testing, research and service improvement.

It’s worth noting that this in no way constitutes informed consent since it’s unlikely that all users have read it thoroughly. While it informs you that your data may be used, it doesn’t tell you how it will be used.

But given that the data was provided to the researchers in an appropriately anonymized format, it is no longer personal data, and hence this mere consent is probably sufficient.

It’s similar to practices in other areas, such as health practice audits, which are conducted with similar mere consent.

So insofar as Facebook and the researchers are focusing on data privacy, they are right. There is nothing significant to be concerned about here, barring the misdescription of the process as “informed consent.”

Manipulating the news feed

This was not a merely observational study but instead contained an intervention – manipulating the content of users’ news feed.

Informed consent is likewise lacking for this intervention, placing this clearly into the realm of interventional research without consent.

This is not to say it is necessarily unethical, since we sometimes permit such research on the grounds that worthwhile research aims cannot be achieved any other way.

Nonetheless there are a number of standards that research without consent is expected to meet before it can proceed:

1. Lack of consent must be necessary for the research

Could this research be done another way? It could be argued that it could have been done in a purely observational fashion, by simply picking out users whose news feeds were naturally more positive or negative.

Others might say that this would introduce confounding factors, reducing the validity of the study.

Let’s accept that it would have been challenging to do this any other way.

2. Must be no more than minimal risk

It’s difficult to know what risk the study posed – judging by the relatively small effect size probably little, but we have to be cautious reading this off the reported data for two reasons.

First, the data is simply what people posted to Facebook, which only indirectly measures the impact – really significant effects, such as someone committing suicide, wouldn’t be captured by this.

And second, we must look at this from the perspective of before the study is conducted where we don’t know the outcomes.

Still for most participants the risks were probably minimal, particularly when we take into account that their news feed may have naturally had more or less negative/positive posts in any given week.

3. Must have a likely positive balance of benefits over harms

While the harms caused directly by the study were probably minimal, the sheer number of participants means that on aggregate they can be quite significant.

Likewise, given the number of participants, unlikely but highly significant bad events may have occurred, such as the negative news feed being the last straw for someone’s marriage.

This will, of course, be somewhat balanced out by the positive effects of the study for participants which likewise aggregate.

What we further need to know is what other benefits the research may have been intended to have. This is unclear, though we know Facebook has an interest in improving its news feed which is presumably commercially beneficial.

We probably don’t have enough information to make a judgment about whether the benefits outweigh the risks of the research and the disrespect of subjects’ autonomy that it entails. I admit to being doubtful.

4. Debriefing & opportunity to opt out

Typically in this sort of research there ought to be a debriefing once the research is complete, explaining what has been done, why and giving the participants an option to opt out.

This clearly wasn’t done. While this is sometimes justified on the grounds of the difficulty of doing so, in this case Facebook itself would seem to have the ideal social media platform that could have facilitated this.

The rights and wrongs

So Facebook and the researchers were right to think that in regards to data access the study is ethically robust. But the academics and news media raising concerns about the study are also correct – there are significant ethical failings here regarding our norms of interventional research without consent.

Facebook claims in its Data Usage Policy that: “Your trust is important to us.”

If this is the case, they need to recognize the faults in how they conducted this study, and I’d strongly recommend that they seek advice from ethicists on how to make their approval processes more robust for future research.


David Hunter is a philosopher by background and training but now based in the Southgate Institute for Health, Society and Equity in the Medical School at Flinders University where he coordinates and delivers the Ethics, Law and Professionalism strand within the MD. Previously he spent eight years in the UK at the University of Birmingham, Keele University and the University of Ulster respectively. Prior to that Hunter was based in New Zealand at Massey University and the University of Auckland. He is the editor of the journal Research Ethics, editor in chief of the series Research Ethics Forum, book review editor for Public Health Ethics and an editor for the Journal of Medical Ethics blog.

