On the Ethics of Facebook – and Drawing the Right Conclusions

July 16, 2014

Are Facebook’s ethics all wet? Maybe not, argues Robert Dingwall. (Photo: mkhmarketing/Flickr)
The research ethics community has been stirred out of its summer torpor by the chance to debate a paper by Facebook and Cornell researchers that investigated the phenomenon of emotional contagion by manipulating Facebook news feeds. When these were biased towards positive content, there was a small but detectable tendency for users to employ more positive language in their own posts – and vice versa. The design and methods have been broadly accepted as robust, although there are some important theoretical issues about how emotion is understood. Nevertheless, the findings have generally been considered interesting: even if the effect is small, at Facebook’s scale it represents some hundreds of thousands of users each day.

The real controversy, however, was provoked by the apparent lack of regulatory jurisdiction. As a private organization using its own funds, Facebook is not subject to the Common Rule that governs most Federally funded research in the US, and which many universities extend to cover all research with human subjects. This is the legal foundation for the system of Institutional Review Boards (IRBs). Facebook’s academic collaborators received only anonymized data, which exempted them from IRB review. It does not seem that any US laws or regulations have been broken, although other countries are investigating whether data protection offenses have been committed under their domestic legislation. Facebook’s terms of service allow the company to do research with user data, although there is some question about whether these clauses were actually in effect when this study was carried out.

There is, however, a difference between acting legally and acting ethically, which some ethicists have been quick to seize on. This response demands a critical analysis. The immediate response of many bioethicists to something that they think is wrong is to demand more regulation. This might, however, be a time to argue the contrary – that this case really demonstrates the over-regulation of most university social science research due to inappropriate generalization from biomedical models.

Most of the criticism turns on the issue of informed consent, which is conventionally a bedrock principle for research ethics. In practice, as many ethnographers have been pointing out for years, this has been fetishized to the point of absurdity. The atrocity stories about IRBs are legion and I will not bother repeating them. Facebook users – I am not one – voluntarily participate in the company’s service just as they might voluntarily hold supermarket or other loyalty cards. In exchange for the benefits offered, we forgo a certain measure of privacy – just how much and on what terms has been a matter of debate for some years, but the principle is not news. The company does all kinds of things with the data that are designed to manipulate our behavior. Supermarkets send us coupons that are intended to encourage us to spend money in their stores on the items that they would like to sell us. Presumably enough of us do this for that to be a productive strategy. Do we explicitly consent to have our behavior influenced in this fashion? Probably not – but we certainly assent to it by continuing to hold our loyalty cards.

It is hard to see a real difference between this project and, for example, the kind of well-established psychological study that leaves a 10 dollar bill on a park bench and monitors what people do when they see it. On the scale of a typical psychology experiment, participants can be debriefed afterwards but it makes nonsense of the study to ask consent first. The publication of Facebook’s findings could, though, be considered as a mass debriefing. The choice of a journal with a liberal Open Access policy furthers this goal.

What are the risks of this intervention? One commentator observes that some of the people receiving the negative condition might have been clinically depressed and pushed over the edge into suicide. Unfortunately, this argument rests on misquoting US government data: the figure that approximately 6.7 percent of Americans suffer from a major depressive disorder in the course of a year is treated as if it applied to any specific week. With such a small observable effect, how plausible is it really to suppose that gloomy posts in one’s news feed will be the last straw?

Empirically, a good many ethicists do not much like the world they happen to live in. Events like this are a good opportunity to release a visceral dislike of corporations, simply because they are large and make profits. The same response is evident in much of what is written about the pharmaceutical industry. This is absolutely not to say that big companies are saintly actors. Clearly there is much potential for harm from pharmaceutical research and this is rightly regulated in both private and public sector environments. However, as many commentators have observed over the years, this does not justify the generalization of that model to the social sciences, where, for the most part, risks of harm are minimal.

In this case, as a respected group of bioethicists have observed in Nature, we might well want to applaud Facebook’s attempt to get a better understanding of its impact on users and its willingness to share that information in the public interest. The real novelty is Facebook’s decision to publish its findings in an academic journal rather than treating them as a trade secret. In doing so, Facebook has demonstrated the irrelevance of much of the current regulatory regime that crushes social science research in universities. The challenge is not to level the field by regulating Facebook but to deregulate public social science more generally.


Robert Dingwall is an emeritus professor of sociology at Nottingham Trent University. He also serves as a consulting sociologist, providing research and advisory services particularly in relation to organizational strategy, public engagement and knowledge transfer. He is co-editor of the SAGE Handbook of Research Management.

