A Few Caveats for Budding Social Media Research Mavens
Behavioral scientists have seized on social media and their massive data sets as a way to quickly and cheaply figure out what people are thinking and doing. But some of those tweets and thumbs-ups can be misleading. Researchers must figure out how to make sure their forecasts and analyses actually represent the offline world.
Big Data’s overwhelming appeal
Imagine you’re interested in analyzing society to learn the answers to questions like: How bad is the flu this year? How will people vote in an upcoming election? How do people talk about and cope with diabetes? You could interview people on the street or call them on their phones. That’s what traditional polling firms do – but it takes time and can be quite costly. A promising alternative involves collecting and analyzing social media data – quickly and for free.
Hundreds of millions of people use social media platforms like Facebook and Twitter every day. Individually, they create traces of their activities when they tweet, like and friend each other. Collectively, these users have produced massive, real-time streams of data that offer minute-by-minute updates on social trends – where people are, what people are doing and what they are thinking about. For the last several years, researchers in academia and industry have been developing ways to utilize this flood of data in their investigations and have published thousands of papers drawing on it.
A typical Twitter study might look like the following. Imagine you’re interested in how information spreads after a tragic event. The moment you hear about such an event – for instance, the Boston Marathon bombing – you activate software on your computer that collects, in real time, tweets containing your keywords of interest – perhaps “Boston” in this case. Since there are no Twitter archives available to researchers, you’d use Twitter’s data interface and collect whatever data come in for free. After a couple of hours or days you stop the data collection and begin the analysis.
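To make that workflow concrete, here is a minimal sketch of keyword-based collection in Python. It assumes the Tweepy library (version 4.x), Twitter’s v2 filtered-stream endpoint and a valid bearer token – none of which are specified above – and Twitter’s access terms have changed repeatedly since this kind of free collection was the norm.

```python
# A minimal sketch of keyword-based tweet collection, assuming the Tweepy
# library (4.x) and Twitter's v2 filtered-stream endpoint. The bearer token,
# output file and keyword are placeholders.
import json

import tweepy

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # assumed credential, not from the article


class KeywordCollector(tweepy.StreamingClient):
    """Write each matching tweet to a local file as it arrives."""

    def on_tweet(self, tweet):
        with open("boston_tweets.jsonl", "a") as f:
            f.write(json.dumps({"id": tweet.id, "text": tweet.text}) + "\n")


collector = KeywordCollector(BEARER_TOKEN)
collector.add_rules(tweepy.StreamRule("boston"))  # keyword of interest
collector.filter()  # runs until interrupted, e.g. after hours or days
```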
What to watch out for
Not surprisingly, this effort to measure and predict human behavior from social media data is fraught with pitfalls – both obvious and very subtle. For instance, we know that different social media platforms are preferred by different demographic groups. However, most social media studies don’t carefully account for the fact that Twitter is used mostly in cities or that most Pinterest users are upper middle-class and female. This oversight can introduce serious errors into predictions and measurements.
Many of the “individuals” that populate social media platforms are actually accounts managed by public relations companies (think Justin Bieber or Nike) – or aren’t human at all, but automated bots. Because these accounts don’t portray anything that even approximates normal human behavior, studies need to remove them before making predictions. Finding bot accounts, however, can be quite hard.
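As a rough illustration of why this is hard, here is a toy rule-of-thumb filter in Python. Every field name and threshold below is an invented assumption; serious studies rely on trained classifiers and manual validation rather than a handful of cut-offs like these.

```python
# A toy illustration of rule-of-thumb bot filtering, not a real detector.
# Field names and thresholds are illustrative assumptions only.

def looks_automated(account):
    """Flag accounts whose activity patterns are implausible for a person."""
    tweets_per_day = account["tweet_count"] / max(account["age_days"], 1)
    follower_ratio = account["followers"] / max(account["following"], 1)

    if tweets_per_day > 100:     # posting around the clock
        return True
    if follower_ratio > 10_000:  # celebrity/brand accounts run by PR teams
        return True
    if account["default_profile_image"] and account["followers"] == 0:
        return True              # throwaway or spam account
    return False


accounts = [
    {"tweet_count": 90_000, "age_days": 300, "followers": 12,
     "following": 4_000, "default_profile_image": True},
    {"tweet_count": 1_200, "age_days": 900, "followers": 150,
     "following": 180, "default_profile_image": False},
]
humans = [a for a in accounts if not looks_automated(a)]
print(f"kept {len(humans)} of {len(accounts)} accounts")
```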
Another big issue is how the data being studied are collected. Academic researchers need free – or at least very cheap – access to social media data to perform their studies. Few social media outlets provide this, with Twitter being the exception. And because social media studies are often based on sampled data (researchers get roughly 1 percent of tweets from the free Twitter interface), what’s available to researchers may not be a representative sample of the platform’s overall activity.
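A small simulation can show how a skewed sample distorts even a simple measurement. All of the numbers below are made up for illustration; the point is only that when one group of users both mentions a topic more often and is over-represented in the sample, the naive estimate drifts away from the true rate.

```python
# Made-up simulation of a biased ~1% sample, for illustration only.
import random

random.seed(0)

# Hypothetical full "population" of tweets: 70% from urban users, 30% rural.
# Urban users mention the topic in 10% of tweets, rural users in 2%.
population = (
    [("urban", random.random() < 0.10) for _ in range(700_000)]
    + [("rural", random.random() < 0.02) for _ in range(300_000)]
)

true_rate = sum(mentioned for _, mentioned in population) / len(population)

# A biased ~1% "free" sample that picks up urban tweets about twice as often.
sample = [t for t in population
          if random.random() < (0.013 if t[0] == "urban" else 0.006)]
sample_rate = sum(mentioned for _, mentioned in sample) / len(sample)

print(f"true mention rate:    {true_rate:.3f}")    # close to 0.076
print(f"sampled mention rate: {sample_rate:.3f}")  # noticeably higher
```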
How to do it better
In order to realize the immense potential of social media-based studies of human populations, research must tackle these kinds of issues head-on. In our recent paper in Science on caveats for social media researchers, we discuss the need to control for bias in all the ways it appears – through platform-specific population makeup, data collection and user sampling. This will involve improvements both in how data are collected and in how they are processed: for example, better methods for identifying non-human accounts on social media are needed.
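One common processing improvement of this kind is reweighting: once the demographic makeup of a platform’s users is known or estimated, each group can be counted in proportion to its share of the real population rather than its share of the sample. The sketch below uses invented numbers purely to illustrate the mechanics of such a post-stratification adjustment.

```python
# A minimal post-stratification sketch: reweight responses so the sample's
# demographic mix matches known population shares. All numbers are invented;
# real weighting would use census or survey benchmarks.

# Assumed population shares by age group.
population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}

# Observed sample: a Twitter-style skew toward younger users.
sample_share = {"18-29": 0.45, "30-49": 0.40, "50+": 0.15}

# Share of each group in the sample that, say, supports some proposal.
support_in_sample = {"18-29": 0.70, "30-49": 0.50, "50+": 0.30}

# Unweighted estimate simply mirrors the skewed sample.
unweighted = sum(sample_share[g] * support_in_sample[g] for g in sample_share)

# Weighted estimate: each group counts in proportion to the real population.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}
weighted = sum(sample_share[g] * weights[g] * support_in_sample[g]
               for g in sample_share)

print(f"unweighted support estimate: {unweighted:.2f}")  # about 0.56
print(f"weighted support estimate:   {weighted:.2f}")    # about 0.45
```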
Ultimately, researchers must be more aware of what is being analyzed when they work with social media data. What data are actually being collected? What systems are actually being studied? What social processes are actually being observed? Through greater awareness of and attention to these questions, the research community will be better able to realize the great promise of social media-based studies.
***
Jürgen Pfeffer receives funding from NSF, DOD. Derek Ruths receives funding from SSHRC, NSERC, NSF, Public Safety Canada. He consults for Facebook.