Interview Describes Biases That Manifest In Artificial Intelligence Systems

(Photo: Gerd Altmann/Pixabay)

June 7, 2023

Meredith Broussard, a data journalist at New York University, is concerned about the Hollywood version of artificial intelligence — and the public’s readiness to embrace the fictionalized AI that’s often portrayed on screen.

“People tend to over-dramatize the role of AI in the future and imply that there’s some glorious AI-driven future where humans are not going to have to talk to each other, and computers are going to take care of mundane activities, and it’s all going to be sleek and seamless,” she says. “I think that’s unreasonable. I think that our narratives around AI should center not on what’s imaginary, but what’s real.”

What’s all too real, she maintains, is that AI is causing many kinds of harm here and now. Broussard, one of the few Black women doing research in artificial intelligence, would like to see us tackling the problems that have been shown to be prevalent in today’s AI systems, especially the issue of bias based on race, gender, or ability. Those concerns are front and center in her recent book, “More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech.”

“When technology reinforces inequality, it tends to be called a glitch, a temporary blip that is easily fixed in the code,” Broussard wrote to Undark in an email. “But it’s much more than this.” The biases that permeate our society are embedded in the data that our computer programs train on, she notes, and ultimately the imperfections of our world are reflected in the AI systems we create. “All of the systemic problems that exist in society also exist in algorithmic systems,” she wrote. “It’s just harder to see the problems when they’re embedded in code and data.”

Our interview was conducted over Zoom and has been edited for length and clarity.

This article by Dan Falk was originally published by Undark under the title “Interview: Why AI Needs To Be Calibrated For Bias,” and is reposted with permission. Undark is a non-profit, editorially independent digital magazine exploring the intersection of science and society. It is published with generous funding from the John S. and James L. Knight Foundation, through its Knight Science Journalism Fellowship Program in Cambridge, Massachusetts.

Undark: I thought we could begin with OpenAI’s ChatGPT, and their latest offering, GPT-4, which came out last month. As you see the headlines, and see their seemingly impressive capabilities, what goes through your mind?

Meredith Broussard: I would like our conversation to start with not only the potential benefits, but also the potential risks of new technologies. So, for example, with ChatGPT, it is fed with data that is scraped from the open web. Well, when we think about what is on the open web, there’s a lot of really great stuff, and there’s a lot of really toxic stuff. So, anybody who’s expecting GPT technology to be positive has an unreasonable impression of what’s available out there on the internet.

UD: There’s a long list of things that people are concerned about — for example a student will hand in an essay, and the professor will wonder, did the student write this themselves, or did they get help from an AI system? But it’s more complicated than that, right?

MB: There are all kinds of subtle biases that manifest inside AI systems. For example, I just read an article about some Hugging Face researchers who had a generative AI generate pictures based on prompts. And when they put in the prompt for “CEO,” they got mostly male images. So, there are these very human biases that manifest inside technological systems.

For a very long time, the dominant ethos in Silicon Valley has been a kind of bias that I call “technochauvinism” — the idea that computational solutions are superior, that computers are somehow elevated, more objective, more neutral, more unbiased.

What I would argue instead is that computers are really good at making mathematical decisions, and not as good at making social decisions. So when we create systems like ChatGPT, or DALL-E, or Stable Diffusion, or whatever, you’re going to get bias in the outputs of these systems, because you’ve got bias in the inputs, in the data that’s used to construct these systems. And there’s no way of getting away from it, because we do not live in a perfect world. And data represents the world as it is, and our problematic past.

UD: You point out that some of the problems in our algorithms go all the way back to the 1950s. Can you expand on that? What was going on way back then that still manifests today?

MB: 1950s ideas about gender are still embedded in today’s technological systems. You see it in something like the way that forms are designed — the kinds of forms that you fill out all the time — that go into databases.

When I was taught how to program databases in college, back in the dawn of the Internet era, I was taught that gender should be a binary value, and that it was fixed. We know now that gender is a spectrum, and the best practice right now is to make gender an editable field, a field that a user can edit themselves, privately, without talking to customer service or whatever. But it’s not just a matter of, “Oh, I’m going to have to change the way this field is represented on this Google form that I’m making” — because not everything is a Google form.

When you enroll in school, for example, you are making an entry into the student information system. Student information systems are generally these monoliths that were set up decades and decades ago, and just keep getting added on to. People don’t tend to go in and revise their large-scale enterprise systems. It’s the same deal in banking, it’s the same situation in insurance.

The other thing to consider when it comes to gender is that when we talk about the gender binary in the context of computing, it’s literally about zeros and ones — it’s about the memory space in the computer. A binary takes up a small amount of space, and a letter or a word takes up a larger amount of space. And we used to have to write our programs to be very, very small, because memory was really expensive — computers were expensive.

So there was an economic imperative around keeping gender represented as a binary, as well as a dominant social concept that gender was a binary.

Things are different now. We have lots of cheap memory. And we have a different understanding of gender. But our new systems also have to talk to legacy systems, and the legacy systems have this normative aesthetic that dates from the very earliest days of computing. So it’s not inclusive for trans, nonbinary, or gender non-conforming folks.
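For readers who build these systems, here is a minimal sketch, in Python with SQLite, of the schema shift Broussard describes: a legacy design that packs gender into a single fixed bit versus an editable, self-described field. The table and column names are hypothetical, not taken from any real student information system.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Legacy-style design: gender squeezed into one bit to save scarce memory,
# reflecting the old assumption that gender is binary and fixed.
conn.execute(
    "CREATE TABLE legacy_students ("
    "  id INTEGER PRIMARY KEY,"
    "  gender_bit INTEGER CHECK (gender_bit IN (0, 1))"
    ")"
)

# The practice Broussard describes as current best practice: an optional,
# free-text field the person can revise themselves, without calling support.
conn.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, gender TEXT)")

conn.execute("INSERT INTO students (id, gender) VALUES (1, 'nonbinary')")
conn.execute("UPDATE students SET gender = 'genderqueer' WHERE id = 1")

print(conn.execute("SELECT gender FROM students WHERE id = 1").fetchone()[0])
```

The harder problem she points to is not this change itself but the legacy systems the new table still has to exchange data with.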

UD: In your book, you look at how the impact of AI is being felt in the justice system, and in policing. What are you particularly concerned about when AI makes its way into that realm?

MB: I’m pretty concerned about Hollywood images of AI and the way these dominate people’s imaginations. People imagine that “Minority Report” is an actual future that they want to make happen. And that is not a future that I particularly want to make happen. And in a democracy we get to talk about this, we get to decide collectively, what is the future we want. I do not co-sign on a future of increased surveillance; of using AI tools for policing that more frequently misidentify people with darker skin. AI tools often just do not work, period. They often work better for people with lighter skin than people with darker skin.

And this is true across the board. So when we take these problematic tools and then use them in something like policing, it generally exacerbates the problems that we already have in America around overpolicing of Black and Brown neighborhoods, and the carceral crisis in general.

UD: You also mention the idea of algorithmic auditing. What is that, and how might it be useful?

MB: Two things I’m really excited about are algorithmic auditing, and policy changes on the horizon. Algorithmic auditing is the process of opening up a “black box” and evaluating it for problems.

We have an explosion of work on mathematical conceptions of fairness, and methods for evaluating algorithms for bias. The first step is, obviously, knowing that algorithmic auditing exists. The second step is being willing to have hard conversations inside organizations in which people confront the fact that their algorithms are probably discriminating.

I think that it’s important to note that we all have unconscious bias. We’re all trying to become better people every day. But we all have unconscious bias — we embed our unconscious bias in the things that we make, including our technologies. And so when you start looking for problems inside algorithmic systems, you’re going to find them.

We can incorporate bias audits into ordinary business processes. People already have testing processes for software. When you are testing your software for whether it works, it’s a good idea to also test it for bias. And we know about a lot of kinds of bias that exist. There are likely going to be additional kinds of bias that are discovered in the future. We should test for those as well. And if something is so biased as to be discriminatory, maybe it shouldn’t be used.
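As an illustration of folding a bias check into routine software testing, here is a minimal, hypothetical sketch in Python: a pytest-style check that compares approval rates across groups and fails if they diverge too far. The toy model, the records, and the 0.8 threshold are illustrative assumptions, not a standard prescribed in the interview.

```python
def approval_rate(decisions):
    """Share of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def test_demographic_parity(model, records):
    """Fail if the lowest group approval rate is below 80% of the highest."""
    by_group = {}
    for record in records:
        decision = model.predict(record)  # assumed interface: 1 = approve, 0 = deny
        by_group.setdefault(record["group"], []).append(decision)

    rates = {group: approval_rate(d) for group, d in by_group.items()}
    ratio = min(rates.values()) / max(rates.values())
    assert ratio >= 0.8, f"Possible disparate impact, approval rates: {rates}"

if __name__ == "__main__":
    class FairToyModel:
        def predict(self, record):
            return 1 if record["income"] > 40000 else 0  # ignores group entirely

    records = [
        {"group": "A", "income": 50000}, {"group": "A", "income": 30000},
        {"group": "B", "income": 60000}, {"group": "B", "income": 35000},
    ]
    test_demographic_parity(FairToyModel(), records)
    print("Parity check passed.")
```

In practice a check like this would run alongside ordinary functional tests in continuous integration, which is the point Broussard makes about building bias audits into ordinary business processes.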

UD: You used the phrase “black box.” Can you expand on that?

MB: If we’re going to talk about fairness, and we’re going to talk about whether particular computer programs should be used in particular contexts, we do need to get into the math, and we do need to talk more about what’s actually happening inside the software system. And so we need to open up the black box a little bit.

This is one of the things that algorithmic accountability journalists do. Algorithmic accountability journalism is a kind of data journalism. It was pioneered by Julia Angwin in her “Machine Bias” investigation for ProPublica. Julia later went on to found The Markup, which is an algorithmic accountability investigative shop. And what we do as algorithmic accountability reporters is, we interrogate black boxes: We figure out what are the inputs, what are the outputs, and what must be going on inside the system.

It’s a kind of algorithmic auditing. Because when you know the inputs and the outputs, you can figure out what is on the inside. That’s called an external audit. But if you are inside a company, you can do an internal audit, which is much easier because you have access to the model and the code as well as the training data and the test data.

I would also say to any folks reading this who work at corporations, you probably want to do internal audits, algorithmic accountability audits, and bias audits. Because that way, you avoid getting investigative journalists interested in doing external audits of your systems.
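To make the external-audit idea concrete, here is a minimal, hypothetical sketch in Python: probe a black-box scorer with paired inputs that differ only in one sensitive attribute and compare the outputs. The scorer, field names, and values are all made up for illustration.

```python
def audit_paired_inputs(score, base_profile, attribute, values):
    """Query a black-box scorer with inputs that differ only in `attribute`."""
    results = {}
    for value in values:
        probe = {**base_profile, attribute: value}
        results[value] = score(probe)
    return results

# A toy stand-in for a system you can query but not inspect.
def toy_scorer(profile):
    return 700 if profile["zip_code"] != "10456" else 620

print(audit_paired_inputs(
    toy_scorer,
    {"income": 52000, "zip_code": "10001"},
    "zip_code",
    ["10001", "10456"],
))
```

A large gap between otherwise identical probes is the kind of input-and-output evidence an external auditor or algorithmic accountability reporter would then investigate further.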

Dan Falk is a science journalist based in Toronto. His books include “The Science of Shakespeare” and “In Search of Time.”

