Interview

A Behavioral Scientist’s Take on the Dangers of Self-Censorship in Science

February 14, 2024

The word censorship might bring to mind authoritarian regimes, book-banning, and restrictions on a free press, but Cory Clark, a behavioral scientist at the University of Pennsylvania who has been studying censorship in science, is interested in another kind. In a recent paper published in the Proceedings of the National Academy of Sciences, Clark and 38 co-authors argue that, in science, censorship often flows from within, as scientists choose to avoid certain areas of research or to avoid publishing controversial results. As they write in the paper: “Our analysis suggests that scientific censorship is often driven by scientists, who are primarily motivated by self-protection, benevolence toward peer scholars, and prosocial concerns for the well-being of human social groups.”

Disagreements between and among scientists are not new, nor is the struggle to maintain public trust in science. Still, Clark believes many disputes could be resolved if scientists with differing viewpoints worked together. To that end, she serves as director of the Adversarial Collaboration Project at Penn.

The project, based on ideas first articulated by Daniel Kahneman, a Princeton professor and winner of the 2002 Nobel Prize in economics, seeks to bring scientists with differing ideologies together. As the project’s website puts it, the goal is “to stimulate a culture shift among social and behavioral scientists whose work touches on polarizing topics with policy significance by encouraging disagreeing scholars to work together to make scientific progress.”

Our interview was conducted over both Zoom and email, and has been edited for length and clarity.

Undark: Your recent paper in PNAS looks at censorship in science, including censorship by scientists. Why would scientists censor their own work?

Cory Clark: Scientists are part of a community of scholars and they care a lot about their status within that community. And they really care about avoiding ostracism. So scientists gain status within science by their peers essentially thinking they’re good scientists, because scientists evaluate each other’s work in peer review; scientists are the ones who decide who to hire, who to give awards to, who to promote.

Pretty much everything a scientist is trying to achieve depends upon positive evaluations from their peer group. And so, consequently, they’re very vulnerable to conformity pressure among their peer group, and they’re particularly terrified of saying or doing something that’s going to cause them to pay a huge reputational price.

So scholars are afraid of offending the political views of their peers, or the moral preferences of their peers, and sometimes even the theoretical perspectives of their peers, especially if those are perceived as the majority opinion.

This article by Dan Falk was originally published by Undark and is reposted with permission. Undark is a non-profit, editorially independent digital magazine exploring the intersection of science and society. It is published with generous funding from the John S. and James L. Knight Foundation, through its Knight Science Journalism Fellowship Program in Cambridge, Massachusetts.

UD: You write about self-censorship being “prosocially motivated” — what does that mean?

CC: I think there are probably three main reasons scientists censor themselves and each other: to protect their own reputation; to protect the reputation of their peers, so they might recommend that a colleague or a student not study a particular topic because it’s going to be too controversial; or because they’re afraid a particular scientific finding is going to cause harm to society.

So a lot of censorship, at least in my discipline, which is the behavioral sciences, is on the topics of race and gender. People are afraid of publishing findings that have the potential to portray groups that historically have been oppressed or marginalized negatively, afraid of publishing any information that might make those groups look bad in any way.

That’s where a lot of censorship is happening. So people avoid topics that are controversial, so that they don’t pay a social cost themselves. They advise their peers not to study those topics. And then some people don’t want any scholars studying those topics at all, regardless of whether they’re their friends or themselves.

UD: In the paper, you noted some variance in terms of age and also gender with regard to who is most likely to self-censor. Can you give me some examples?

CC: In a related paper that I discuss in this PNAS paper, we find that male, more conservative, and older scholars self-censor more. And I think that’s because they tend to hold views that are more discordant with what they perceive to be the desired reality, of psychology in this case. But then we also see that younger scholars, women, and more left-leaning scholars are more supportive of censoring other people. So you have one group that’s pushing for more censorship and, consequently, precisely the opposite group that is self-censoring more.

UD: If scientists self-censor because they fear causing harm, isn’t that in fact laudable?

CC: It potentially could be laudable. I think the problem is that we don’t know if their harm concerns are accurate. We don’t know what the true consequences of censoring controversial conclusions really are.

On the one hand, if scholars have 100 percent accurate predictions about how information is going to impact broader society, then maybe you would say it’s a good thing. But that’s a really complex issue. I don’t know that scholars can have 100 percent accurate knowledge about how a scientific finding is going to impact society, and at the same time, we don’t know how withholding the truth could have negative consequences, too. And we point out a handful of potential negative consequences of censorship in the paper.

UD: What might some of the negative consequences be of withholding certain kinds of findings or certain areas of research?

CC: The most obvious one would just be that we are missing out on potentially accurate information, which means that when we design interventions or policies, the knowledge we’re using to design them is unreliable. Those policies and interventions are probably not going to work, because they’re based on false information.

So that’s an obvious consequence. But there are other potential consequences that I think are larger now that scientists can tweet their paper if they want. Even if a paper gets censored and rejected, and no journal will publish it, the scientist can go online and say, “I was censored, here’s my paper.”

It can cause the public to distrust science and think that science journals are not really interested in pursuing the truth. Rather, they’re interested in publishing whatever is fashionable at the time. And so, if the public comes to distrust science, because science isn’t just reporting the truth, like reporting the news, then science potentially could lose authority; people will not listen to scientific recommendations — and then you have even bigger problems for public health, and really anything that scholars are consulting on.

UD: Where do the journals fit into this discussion?

CC: In a few different ways. One, you have peer review, and scholars can use peer review to censor work that they don’t like. Peer review is very subjective, and because all scientific papers have some flaws, reviewers can exaggerate those flaws, or make them seem more devastating than they really are, in order to prevent a paper from getting published.

So in the peer review process, people can use that method as a means to censor other people’s work. But on top of that, just in the past few years, we’ve had a handful of journals explicitly saying that they will reject or retract scientific papers that they perceive as having potentially harmful implications or applications. So they’re adding a sort of moral criterion to their evaluations of science, which I would suggest is a type of censorship, because it’s not based on the quality of the science, but rather the perceived consequences.

UD: You note in the paper that a scientific monoculture can stifle progress. Can you expand on that?

CC: People who would challenge the perceived orthodoxy, I suppose, of their peer group — they self-censor more, which creates an even stronger impression that their views are unwelcome. And then those people — either they self-censor completely, and that turns academia into a really unpleasant place to be, or they leave academia altogether, and what you’re left with is a group of people who all share the same beliefs and all have the same desired truths about the world.

Which can really distort what the scientific consensus would be if you had more representation from a broad range of perspectives.

UD: Is there a political dimension to this? Or are you concerned that some people will view your work through a political lens?

CC: I would say those are two different questions. People definitely do view my work through a political lens, and there is a political element to it. Because most behavioral scientists are left-leaning, that tends to be the direction of the biases in science. And that’s where you’re going to see problems: because people on the left are the majority, they have the power to censor ideas that contradict left-wing perspectives.

But if scientists were overwhelmingly right-leaning, I think we’d have the exact same problem; it would just be different issues being targeted. So it’s not anything about liberals in particular that is concerning; it’s just that in science, liberals have power, so right-leaning perspectives are going to get censored more.

And consequently, because of that, people might say that I’m trying to defend conservatism or right-wing perspectives. But that’s only an artifact of the community that I’m talking about. If scholars were right-leaning, then I would be talking about censorship of left-leaning ideas, and then people would consider this research left-leaning.

UD: Toward the end of your new paper, you note that even among your co-authors, there were some disagreements. What kind of arguments did you have amongst yourselves?

CC: Yeah, we had a lot of emails going around for a long time. We disagreed on a handful of things. Some members of our co-author team thought that there are really no good cases where scientific censorship is the better option, and no obvious cases where a scientific paper was the direct cause of something harmful in society; rather, science is often used by people who already have malicious goals.

And so, consequently, we disagreed about whether censorship is ever appropriate. And some people would say, there probably are some cases where censorship of science is actually the best for society. And some of us thought, there are probably no cases where censorship is actually best for society.

UD: In 2020, you and four colleagues had a paper in Psychological Science dealing with religiosity, crime, and IQ, which you chose to withdraw. Did that experience influence your approach to this new paper at all?

CC: Not really. I have been interested in this topic since around 2012, and I consider data and cultural trends more interesting than personal anecdotes.

Dan Falk is a science journalist based in Toronto. His books include The Science of Shakespeare and In Search of Time.
