Communication

Citizen Social Scientists Edit Day’s News with New Tool

October 10, 2018

Nick Adams

When he was teaching at the University of California Berkeley, sociologist Nick Adams had an assignment in his social science methods class that he dubbed “Calling Bullshit.” (The name was later sanitized slightly to protect the delicate.) The point was to find and dissect a science news story, examining the author’s thinking, checking the methodology, and discussing how the article could be improved or spiked. It was a fun exercise that students enjoyed, Adams recalls.

Jump forward a bit, to just before fake news and intentional misinformation became a public trope. Adams developed a crowd-sourced content analysis system known as Text Thresher (“human-powered machinery to process these data in high quantity with high quality”). Later renamed TagWorks, it showed promise as the engine for a citizen-science version of his classroom assignment. That new version was named PublicEditor: volunteer editors assess the day’s most-shared news articles, hunting for (or applauding the lack of) “inferential mistakes, psychological biases, or argumentative fallacies.” After the assessment, the tool assigns the article a credibility score and badge.

“The badges,” according to the PublicEditor website, “allow readers of all skill levels to instantly assess the credibility of the content they are consuming, providing clear signals of information quality that will drive reader traffic to news sources most dedicated to high-quality content.” Over time, individual authors and news outlets will accumulate their own credibility scores.

“We’re always getting help from others to actually know our reality,” Adams explained in a conversation with Social Science Space editor Michael Todd. “What this system does is take that social epistemological process that we engage in so frequently and formalizes it into an Internet-based technology.”

Adams earned his Ph.D. in sociology from UC Berkeley, where he founded the Computational Text Analysis Working Group at the university’s D-Lab and the interdisciplinary Text Across Domains (Text XD) initiative at the Berkeley Institute for Data Science. He is currently the CEO of Thusly, Inc., which provides TagWorks (as Text Thresher was renamed) as a service, and is president and founder of the California nonprofit Goodly Labs. He serves on the Social Science Research Council’s Committee on Digital Culture and as a ‘catalyst’ for the Berkeley Initiative for Transparency in the Social Sciences.

The conversation was condensed and edited for clarity.

Michael Todd: What was the origin of what became PublicEditor? Why did you do it?

Nick Adams: I invented a piece of software, TagWorks, starting back in 2012, so that I could analyze thousands of newspaper articles about the Occupy movement. I was trying to understand police and protester interactions across 184 cities, and the project looked like it might take a decade and involve teams of undergraduate research assistants. That just wouldn’t do. But I could imagine a better way to do that work, like getting the whole of the crowd involved, and that’s what TagWorks does. It makes these projects, where you have a very complex theory and huge amounts of documents, tractable: something you can do in a year instead of a decade.

Meanwhile, Saul Perlmutter, the Nobel Prize-winning physicist who discovered that the universe is expanding at an increasing rate, had become an ambassador of science and had been teaching a very popular course on scientific critical thinking for a few years. He wanted to see if he could have his students use our technology to find examples of various inferential mistakes, argumentative fallacies, and cognitive biases. That’s how PublicEditor began, and that was back in late 2014, early 2015. We began drafting a design for the system, and that design isn’t too different from what we see today.

So that gets us to 2015, which was a different time psychologically from today or even 2016.

We pushed it along, but it was a back-burner project for both of us until the 2016 elections, when everyone became very concerned about fake news. We decided we should really get this going in earnest. So I established a working group at the Berkeley Institute for Data Science, and we pulled in people from across campus, including the journalism school, some cognitive scientists and social scientists, and some computer science folks. I started going to various meetups of technologists interested in improving our democratic apparatus, stuff like that.

Actually, that helped get the Credibility Coalition going. They’re a coalition of researchers and technologists trying to tackle these problems. I helped write their mission statement, I helped write their initial data model, and we participated with them in a research paper that uses TagWorks to demonstrate that volunteers can analyze these articles and apply what we call credibility indicators to them.

Did you feel there were people in society who were crying out for this?

I think many, many, many of us have had the thought, ‘Damn it, they’re trying to confuse us or to fool us, and I wish there was something I could do about it. I wish I could point this out to other people, because some people will be fooled by this.’ Yeah, I think there must be thousands of people who have had that thought at some point in their lives, and now we’re going to provide that solution for them.

What about the biases that your annotators bring in? How do we guard against that in PublicEditor?

First of all, we’re going to be recruiting a crowd of participants that reflects the diversity of political thought and demographics in the United States. (Or, when we go into other countries, we’ll try to make sure the volunteer community reflects the demographics and political constitution of whatever polity is operating the system.)

The contributors then just have a little passage and a particular protocol, or assessment, that they’re supposed to apply to it. They don’t know where it came from, they don’t know who’s saying it, they don’t see that the article came from Huffington Post or Breitbart. They just have a little passage of text, and they follow a protocol to assess its veracity: how appropriately it follows the rules of probabilistic thinking or the use of evidence, how consistently it avoids psychological errors, and so forth.

Any given assessment of a passage via this protocol is going to be completed by at least five people, and those five people will be relatively representative of the whole population. Let’s say four out of five people see the same errors, and that’s reflected in their answers to the questions. It’s also reflected by their highlights of some portion of the passage to justify their answers. Follow-up questions then ask, “How misleading or problematic is the colorful language?” If everyone says, “This is very mildly problematic,” then we’re not going to take off any points. But if four out of five people say, “This is a very problematic, really slanted article,” then we’re going to take off some points for that.
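For readers who want a concrete feel for this majority rule, here is a minimal TypeScript sketch. The four-of-five threshold, the severity scale, and all field names are illustrative assumptions, not PublicEditor’s actual implementation.

```typescript
// Illustrative sketch of majority-based point deduction (assumed
// names and thresholds; not PublicEditor's real code).

interface Assessment {
  contributorId: string;
  flaggedError: boolean; // did this assessor see the error?
  severity: number;      // 0 = not problematic ... 3 = very problematic
}

// Deduct points only when a clear majority both sees the error and
// rates it as more than mildly problematic.
function scorePenalty(assessments: Assessment[], maxPenalty = 10): number {
  if (assessments.length === 0) return 0;

  const flaggers = assessments.filter(a => a.flaggedError);
  if (flaggers.length / assessments.length < 0.8) return 0; // e.g. 4 of 5

  const meanSeverity =
    flaggers.reduce((sum, a) => sum + a.severity, 0) / flaggers.length;
  if (meanSeverity <= 1) return 0; // "very mildly problematic": no deduction

  // Scale the deduction with how problematic the majority found it.
  return Math.round((meanSeverity / 3) * maxPenalty);
}
```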

If I become part of the community and I’m consistently an outlier, to put it nicely, or a crank, to put it rudely, the system will recognize this fact and weight my responses accordingly?

We keep track of how much any given contributor tends to be in alignment with the majority on these assessments. And so they’re developing their own credibility and their own reputation score in the system over time.
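The outlier-weighting idea can likewise be sketched in a few lines. Again, the neutral prior and the agreement-rate formula are assumptions made for illustration, not the project’s actual method.

```typescript
// Illustrative sketch of alignment-based reputation weighting
// (assumed formulas; not PublicEditor's real code).

interface Contributor {
  agreements: number; // assessments that matched the majority
  total: number;      // assessments completed
}

// Reputation is a running rate of agreement with the majority, so a
// consistent outlier converges toward a low weight.
function reputation(c: Contributor): number {
  return c.total === 0 ? 0.5 : c.agreements / c.total; // 0.5 = neutral prior
}

interface Vote {
  contributor: Contributor;
  flaggedError: boolean;
}

// Tally the majority with each vote weighted by reputation: a crank's
// dissent moves the needle less than a reliable assessor's agreement.
function weightedFlagShare(votes: Vote[]): number {
  const total = votes.reduce((s, v) => s + reputation(v.contributor), 0);
  const flagged = votes
    .filter(v => v.flaggedError)
    .reduce((s, v) => s + reputation(v.contributor), 0);
  return total === 0 ? 0 : flagged / total;
}
```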

How do you determine what stories PublicEditor assesses?

Initially, we’re just going to be doing the most-shared articles on social media, on Facebook and Twitter. There’s an organization called BuzzSumo that tracks what’s being shared the most, and we think that’s a great set of articles to be processing. Whatever is being shared the most is what most needs to be vetted by our system.

What is the speed of this assessment? I see some article in the morning and it’s all I want to talk about at the water cooler. Come noon, something else pops up and that earlier article “was so an hour ago.”

There’s a quote misattributed to Mark Twain that says, “A lie is halfway around the world while the truth is still trying to put its shoes on.” One of our slick bits of marketing is that with PublicEditor, the truth just got Velcro. We actually process an article in half an hour.

How will PublicEditor add to what fact-checking initiatives like FactCheck.org or Snopes already do?

We think the days an article is out there before it’s corrected are a period when people are rehearsing and reinforcing misinformation in their own minds. They’re literally having conversations with friends and family members, saying, “I read this thing in the news today,” and it’s something that confirms their biases, and they’re talking it through and talking about how terrible the other side is, or whatever. They’re reinforcing that misinformation as they speak, and a fact check might come through a week later, and then it has to try to filter through those same networks and correct something.

Walking all of that back with a fact check, I think, is unrealistic. On the other hand, if the first time someone reads an article they see a label from PublicEditor, then they’re questioning the veracity of that misinformation the very first time they encounter it. And so maybe they’re not sharing it with other people, or maybe what they’re sharing is that this might be misinformation. We think we can get ahead of all that rehearsal and reinforcement and conversation that makes the backfire effect so strong.

Metaphorically, the concrete hasn’t set yet.

That’s right.

How do I discover the PublicEditor credibility score?

People can read through articles at our website if they want to. We’re working on a browser extension that will, if you go to the New York Times or Breitbart or wherever, check to see if the article has been processed by our system. If it has, it automatically decorates the article with our layer of labels and our little hallmark.

But that’s if I’ve downloaded the extension.

That’s right. We’ll see over the next year or two if we can get various news outlets and Facebook and everybody to build this straight into their websites, but for now it will have to be a browser extension.
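A browser extension of the kind Adams describes typically boils down to a content script that looks up the current page and decorates it. The sketch below assumes a hypothetical lookup endpoint and response shape; PublicEditor’s real API may look nothing like this.

```typescript
// Content-script sketch: ask a hypothetical PublicEditor API whether
// the current article has been processed, then prepend a label.
// The endpoint URL and response fields are assumptions.

async function annotateCurrentArticle(): Promise<void> {
  const url = encodeURIComponent(window.location.href);
  const res = await fetch(
    `https://api.publiceditor.example/v1/articles?url=${url}` // hypothetical
  );
  if (!res.ok) return; // article not processed yet; leave the page alone

  const { credibilityScore, badge } = await res.json();
  const label = document.createElement("div");
  label.textContent = `PublicEditor credibility: ${credibilityScore} (${badge})`;
  document.body.prepend(label);
}

annotateCurrentArticle();
```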

One concern I have is that people are willing participants in confirmation bias …

So the people who most need this might be the people who are least willing to download our browser extension — that’s the concern?

Yes. You may not even accept that on its face, but I think it might be legitimate.

I think it might be true, too. We’re going to have to have some kind of campaign around that, to give people a sense of responsibility for their family members and for what they’re sharing. We also want to give people a sense that the person who’s on their [ideological] team needs to do a good job, because that makes their argument stronger.

Everything we’ve looked at so far has been in English and somewhat American. What are the plans for expansion?

The TagWorks technology can analyze and process any language. And the assessments we’re doing on reasoning and language and evidence and probabilistic thinking and so forth, we think that stuff generalizes fairly well to other contexts. But we’ll want to hire translators who know not just how to translate these particular questions and answer choices, but how to translate them into different cultural contexts. That will take some work.

We anticipate the first two attempts at internationalization to be somewhere a little simpler, like Canada, and maybe a country where the use case is a bit more urgent, like Myanmar. Those are the two big factors we’re considering: the urgency and how easily we can do it.

Do you anticipate PublicEditor will mostly be used for political things or will it also cover scientific and medical news, or climate change, or celebrity gossip, or whatever?

Yeah, celebrity gossip is not a high priority for us, but we understand that it could be a good way to popularize PublicEditor! We’re certainly very aware that we need to make slightly different versions for different genres of reporting. There are errors that are pretty egregious in a science context but less egregious in a straight news piece, and so we would need to take off more points in the science context.

Similarly, we’d even look at op-eds as a different genre from straight news. We expect some rhetorical flourish in an op-ed. We’re going to have to have different weighting systems.

You mentioned health news, and this is an area where we’ve gotten expressed interest independently from Facebook, from Google, and from the National Academy of Sciences. That’s an area where the reporting can really be a life-and-death concern, so it’s important for us to be contributing in that space with a strong sense of our responsibility to get it right.

What are your hopes for PublicEditor?

We hope that PublicEditor becomes a household name. Like, Person A is talking about what’s going on in the news, and Person B says, “That sounds fishy. Has that been run through PublicEditor?”

Over time, over months or years of using the tool, we hope the entire population becomes a lot more resilient to bullshit and a lot more informed about all the different varieties of misinformation. We want that, but also, everything is computable. We have all of these numbers, and we would really like to see Facebook and Google and Twitter and others using these credibility scores to promote quality content and demote low-quality content.


Social Science Space editor Michael Todd is a long-time newspaper editor and reporter whose beats included the U.S. military, primary and secondary education, government, and business. He entered the magazine world in 2006 as the managing editor of Hispanic Business. He then joined the Miller-McCune Center for Research, Media and Public Policy and its magazine Miller-McCune (renamed Pacific Standard in 2012), where he served as web editor and later as senior staff writer covering the environmental and social sciences. During his time with the Miller-McCune Center, he regularly participated in media training courses for scientists in collaboration with the Communication Partnership for Science and the Sea (COMPASS), Stanford’s Aldo Leopold Leadership Institute, and individual research institutions.
