(Image: Jorge.maturana, CC BY 3.0 (http://creativecommons.org/licenses/by/3.0), via Wikimedia Commons)

Nick Seaver on Dissecting the Algorithmic Organism

February 16, 2018

When people discuss the nexus of computer science and social science, the transaction usually runs in one direction – what can computer scientists do for social scientists? But a recent paper from Tufts University anthropologist Nick Seaver reverses that flow, using the tool of ethnography to interrogate the tools of engineering.

(Image: Cover of the journal Big Data & Society)

His “Algorithms as culture: Some tactics for the ethnography of algorithmic systems” in the journal Big Data & Society has been widely shared – its current Altmetric score is an enviable 288 – and is proving influential outside the confines of his discipline (and the popularity imbalance between mighty tech and not-so-mighty social science definitely figures in Seaver’s equation). For example, Ephrat Livni and David Gershgorn over at the tech-and-business-oriented website Quartz wrote about Seaver’s insights and linked them to very current real-world concerns about how algorithms aren’t impartial and implacable, but reflect the attitudes and biases of their creators.

Seaver argues that even the word “algorithm” has moved beyond its computer-science definition and no longer refers simply to computer code compiled to perform a task. The term now belongs to everyone and is fair game for social scientists like him to define in more complex terms.

To reach his conclusion, Seaver talked to technologists, engineers, marketers, sociologists, and more. He asked coders at technology companies about their relationship to the word “algorithm” and discovered that even they feel alienated from the projects they work on, in part because they often tackle small pieces of bigger, impersonal projects—tentacles of an algorithmic organism, if you will—and end up without any real closeness to the whole.

Seaver, who is also the co-chair of the American Anthropological Association’s Committee for the Anthropology of Science, Technology, and Computing, has listed “the kinship of academic disciplines” as among the subjects he’s focused on at present, and so we decided to dig a little deeper into that thought through the lens of his paper.

From your perch as an anthropologist/ethnographer and not necessarily a computer scientist, what is an algorithm? 

(Image: Headshot of Nick Seaver)

Nick Seaver: A common starting point definition for “algorithm” is something like “a recipe” or “a sequence of well-defined operations.” You see this both in Computer Science 101 settings and in writing aimed at non-technical audiences. As an ethnographer, I’m biased toward people, so I tend to pay attention to all the people working in and on the things we call “algorithms” that the simple definition doesn’t talk about. CS 101, for instance, won’t help you recognize that an engineering team is picking one algorithm over another, making choices about how to represent the data that algorithm processes, or possibly implementing the algorithm incorrectly!

I’m also more interested in descriptive, rather than prescriptive, approaches to definition: I care more about what people talk about when they talk about “algorithms” than whether they’re talking correctly. So that’s my extremely loose starting definition of an algorithm: it’s whatever someone says it is. Now, taking that approach to definition brings along some challenges, but it also opens up opportunities for studying them.
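For readers who want a concrete picture of the textbook sense Seaver contrasts with his own, a minimal sketch follows. The binary search below is the kind of recipe-like, well-defined procedure CS 101 has in mind; the function and example data are illustrative only, not drawn from Seaver’s paper or fieldwork.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2          # look at the middle of the remaining range
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1                # discard the lower half
        else:
            high = mid - 1               # discard the upper half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))  # prints 3
```

The CS 101 toolkit can tell you that this procedure runs in O(log n) time on sorted data; what it cannot tell you, as Seaver notes, is who chose it over an alternative, how the data came to be represented this way, or whether the team implemented it correctly.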

What do you mean by saying “algorithms are multiple”?

Nick Seaver: If we think that algorithms are whatever people say they are, and many people are saying that they’re many different things, then we have an apparent problem: someone’s got to be wrong, right? Not necessarily. In practice, people have different ideas about the systems they’re working with all the time—those definitions and assumptions inform the actions they take, and what you get in the end is a big mess. That mess, where the algorithm simultaneously seems to be many things to many people, is what I mean by it being “multiple.”

To borrow an example from Laura Devendorf and Elizabeth Goodman, who inspired part of the article, take a dating website: you have many different people interacting with “the algorithm,” ranging from engineers of various sorts to users with various tendencies. These people have different ideas about what the algorithm is and how it works—they have different responsibilities and desires, too. Say I’m a user, and I have some assumptions about how the matching algorithm works, and I adjust my behavior accordingly. You may do something like this on Facebook, for instance, trying to like or not like things in an effort to influence the algorithm. Now our assumptions about how the system works may be technically wrong, but they’re influencing our actions anyway. And, because the system is personalized, these behavioral changes may end up having some kind of effect on the way the system works, if not the one we intend.

For the people engineering this system, our weird behaviors have to be taken into account, even though they’re based on technically faulty assumptions. Voilà, we’ve changed the algorithm, in a sort of roundabout way, even though we were nominally wrong about how it worked and those changes may not have been what we anticipated.

In practice, there are many more people involved in these systems than just one user and one engineer, on both sides of the algorithm, so to speak (so, lots of users with lots of ideas, and lots of people in companies with lots of ideas and different roles). So our multiple multiplies even more: this is not anything like the simple, well-defined CS 101 algorithm. Facebook’s newsfeed, Netflix’s recommender, and so on operate in much more complicated and dynamic ways than those simple recipe-like algorithms. They’re constantly changing, they’re buggy, they’re sensitive to all sorts of unanticipated inputs, and there are people making little decisions all in and around them. So, we have a set of loosely connected ideas about what “the algorithm” is, practices that are informed by those ideas, and nowhere concrete to point at and say “there, that right there is the real algorithm—the thing with the social consequences.”

The philosopher Annemarie Mol, who came up with the “multiple” concept, would say that these various people “enact” different versions of the algorithm. Although these enactments are loosely coordinated with each other—people try to get on the same page, to adjust what they’re doing to fit with what other people are doing—they are not all working on the same underlying object. There is a deeper philosophical point here about the nature of objects and how they relate to practices, but I think the plainer version as I’ve described it here is quite useful for thinking about weird objects like algorithms.

How does that understanding or ‘definition’ compare to a traditional computer scientist’s definition? I ask because questions of ‘correctness,’ currency, mutation and universality seem important to understanding your own call to action.

Nick Seaver: One of the key arguments in the article is that different people use different definitions for different purposes: it’s useful to some computer scientists to understand algorithms in that narrow, recipe-like sense to do things like assess how efficiently they will perform given certain qualities of the data they’re working on. For critical social scientists, this definition is distinctly not useful: we care about things like how technical systems affect people’s lives, how cultural ideas are embedded in them, and so on. The CS 101 definition of “algorithm” (and, along with it, the CS 101 set of techniques for analyzing algorithms) quite precisely cuts these concerns out. So, if we want to know about them, we need to write them back in.

That was the impetus for this article: social scientists like me have become concerned lately that we don’t know what we’re talking about, because when you look at algorithms in practice, they really don’t look like CS 101. We know from classic work in the sociology of science that definitional disputes are a way for disciplines to police their boundaries (so, if I wanted to ridicule someone from my position as an anthropologist, I could say that they don’t understand what “culture” is, for instance). And we see the same thing with algorithms: a computer scientist or practicing software engineer can say to me “you don’t even know what an algorithm is,” and that’s a way to cut me out. And, because no one likes being cut out, some of my fellow critical social scientists and humanists have tried to get back on board with the CS 101 definition, even though it seems poorly suited to the kinds of systems and concerns we’re interested in.

But if we take the approach to definitions I laid out above, then we shouldn’t be so concerned with “correctness”—or at least, we should recognize that people disagree about these terms, even within expert spaces. So, you might have a situation where an engineer uses “algorithm” in the less precise way I’ve been describing, and no one bats an eyelash. If I, as an anthropologist, did the same thing in the same space, then it might be used as evidence that I don’t know what I’m talking about, because the enforcement of definitions is really more about maintaining social boundaries than talking precisely or accurately.

[T]he enforcement of definitions is really more about maintaining social boundaries than talking precisely or accurately

That is why, in a broad sense, I argue for defining algorithms *as* culture, as collections of human practices. Because, in a very concrete empirical sense, they are: the current state of the Facebook newsfeed algorithm is the result of a great pile of human decisions, and when that algorithm works differently next week, it will be because of human choices to change it. That’s not to say that computer scientists are wrong to define algorithms differently—they have different concerns! (Or at least, they used to: there is now a nicely growing field of computer scientists concerning themselves with “social” issues around algorithms like fairness or accountability. Of course, what “fairness” and “accountability” mean will also vary from group to group.)

I realize this is a large part of the paper, but could you briefly outline some of the specific mechanics for conducting an ethnography of algorithms?

Nick Seaver: I end the article with a set of tactics for would-be ethnographers of algorithmic systems, which I wrote mostly to draw together methods literature I wish I had known about when I started doing my own research as a graduate student. Mostly, given the corporate settings these kinds of algorithms get built in, these tactics have to do with dealing with issues of access and secrecy.

But, the key thing here is that these problems are not new or unique to algorithms: there is a large body of anthropological and ethnographic methods literature on dealing with access concerns, distributed cultural phenomena, and secretive practices. (A couple of my favorite recent ethnographies deal with Freemasons and stage magicians, neither of which seems particularly high-tech, but which offer useful models for thinking about how to study knowledge practices that are intentionally hidden.) So, in broad strokes, the methods are nothing new: lots of interviews with people working in different positions relative to the practices you’re interested in, as much participation as you can manage, and if you can’t get much, find socially adjacent sites (hackathons and conferences are particularly good for this topic) where conversations and practices leak out from behind those corporate walls. There is a lot to study if we don’t define our object of study so narrowly as “that thing we can’t get access to.”

Could you give an anecdote (or two) about the process as you conducted it (maybe once as an interviewer and once as a scavenger)? What is it like in general working with tech-oriented subjects, corporations, etc. as compared to other subjects?

Nick Seaver: Fieldwork in “tech” is often superficially boring: I sat in a lot of meetings, watched presentations at conferences, and killed time at a desk in an office. Contrasted with friends of mine who conducted their fieldwork in the Andes or on Indonesian coffee plantations, my research was not that interesting. But one of the nice things about anthropological fieldwork is how it reorients your ideas about what is interesting: if you spend enough time bored out of your mind at a conference where you can’t understand half the technical content because you don’t have the right kind of graduate training (don’t feel bad: half the audience doesn’t understand it any better than you do, because their degrees are in slightly different specialties), you start to find interesting things to focus on.

My research focused on people trying to build music recommender systems, and one thing that started to catch my attention was how, in talks, people used different musicians to demonstrate different qualities of their systems. In their choice of artists, you could learn a bit about their personal tastes, but also their very cultural ideas about what music was like and how it varied—so you’d see a graph with Britney Spears at one end and Finnish black metal at the other, for instance. If you wanted to know about these folks’ cultural life, then this was a nice way to see a bit of that and to see how it interfaced with their technical work. [Ed. – Seaver’s current book project is Computing Taste: The Making of Algorithmic Music Recommendation.]

And one of my favorite tactics was to ask interviewees to explain basic recommendation techniques to me, even when I already had a good sense of how they worked: when people try to teach something, they tend to draw in lots of comparisons to other domains as they try to make sense to you. Those comparisons are very informative, because they tell you something about how the person making them thinks about their work—not in the precise, technical way that they might talk if they were trying to impress people, but in the ordinary, analogical way people think when they’re just going about their day. That’s the kind of thinking that I find under-studied and incredibly important to how these systems work: if an algorithm is the sum of many, many ordinary human decisions, then we need to know something about the frames of reference those decisions are made in if we want to understand why the algorithm has come to work the way it does. People like to guess about those frames—we have plenty of stereotypes about what engineers are like that get used to do this all the time—but in practice, they are much weirder than we might imagine. (I’ve had people describe their work to me as being like gardening, plumbing, architecture, or policing, all of which carry quite different implications for the choices those people make.)
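For readers wondering what one of those “basic recommendation techniques” might look like, here is a hypothetical sketch of user-based collaborative filtering in Python: suggest artists favored by the listener whose play counts most resemble yours. The listening data and function names are invented for illustration; nothing here comes from Seaver’s interviewees or the systems he studied.

```python
import math

# Invented play counts: listener -> {artist: number of plays}
listens = {
    "ana":   {"Britney Spears": 12, "Robyn": 8, "Sigur Ros": 1},
    "bert":  {"Britney Spears": 10, "Robyn": 6},
    "carla": {"Finnish black metal": 9, "Sigur Ros": 7},
}

def cosine(u, v):
    """Cosine similarity between two sparse play-count vectors."""
    dot = sum(u[a] * v[a] for a in set(u) & set(v))
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(target, listens, k=1):
    """Suggest artists the most similar listener plays that the target listener has not."""
    others = [name for name in listens if name != target]
    neighbor = max(others, key=lambda name: cosine(listens[target], listens[name]))
    unseen = [a for a in listens[neighbor] if a not in listens[target]]
    return sorted(unseen, key=lambda a: listens[neighbor][a], reverse=True)[:k]

print(recommend("bert", listens))  # ['Sigur Ros']: bert's nearest neighbor is ana
```

Real systems layer far more machinery on top of a sketch like this, and, as Seaver emphasizes, the interesting ethnographic material is less the arithmetic than the analogies people reach for when they explain it.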

How are computer scientists and the tech community receiving your argument, and by extension, the new field of critical algorithm studies in general?

Nick Seaver: This is an ongoing concern of mine. I had a nice response to the article from many people working on the more humanistic side of these organizations, but of course, software engineers have a reputation for liking strict definitions, and this approach does not work well for them. I don’t think we all need to be working from the same definition to make progress on these issues, but trying to coordinate our various concerns in a way that makes sense to everyone involved is a real challenge. We can’t ignore the fact that, at the moment, the social and economic prestige attached to being a technical expert is much, much higher than that attached to being an anthropologist. So the tech community has the power to push their definitions more than I do, not only about algorithms, but even about things like culture or taste! The social dynamics between our fields are something that more researchers need to take into account, because the playing field is absolutely not level. (And this is not a new concern: see Diana Forsythe’s excellent work on the rise of “expert systems” from the early 1990s, for instance.)

So, I don’t think the goal of critical work on algorithms is to get everyone to agree with our critique or to start using our methods. Rather, I think one of the most important things scholars in critical algorithm studies can do is to try and blow up the narrow worlds of reference people use to design, build, and talk about these systems. Overly narrow and homogeneous worlds of reference are the cause of many of our problems in this domain (on this, see the critical work of Safiya Noble, for instance). So opening up those worlds means bringing in previously ignored voices, like minority groups affected by carelessly designed software, and it also means looking at the empirical reality of how these systems get made and understood by the people who make them. Not everyone who builds algorithmic systems thinks about them in the same way, and too many critiques presume that there is some huge, singular logic that these systems embody. But in practice, there are all sorts of weird ideas about the world lurking in the corners of these organizations, and critics can locate those and draw them out, to add to the mix as we try to figure out a way forward that transforms the algorithmic status quo.

Social Science Space editor Michael Todd is a long-time newspaper editor and reporter whose beats included the U.S. military, primary and secondary education, government, and business. He entered the magazine world in 2006 as the managing editor of Hispanic Business. He joined the Miller-McCune Center for Research, Media and Public Policy and its magazine Miller-McCune (renamed Pacific Standard in 2012), where he served as web editor and later as senior staff writer focusing on covering the environmental and social sciences. During his time with the Miller-McCune Center, he regularly participated in media training courses for scientists in collaboration with the Communication Partnership for Science and the Sea (COMPASS), Stanford’s Aldo Leopold Leadership Institute, and individual research institutions.

