Industry

NIST Report: There’s More to AI Bias Than Biased Data

March 28, 2022
The United States’ National Institute of Standards and Technology contributes to the research, standards and data required to realize the full promise of artificial intelligence to enable American innovation. (Image: NIST)

As a step toward improving our ability to identify and manage the harmful effects of bias in artificial intelligence (AI) systems, researchers at the U.S. National Institute of Standards and Technology (NIST) recommend widening the scope of where we look for the source of these biases — beyond the machine learning processes and data used to train AI software to the broader societal factors that influence how technology is developed.

The recommendation is a core message of a revised NIST publication, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), which reflects public comments the agency received on its draft version released last summer. As part of a larger effort to support the development of trustworthy and responsible AI, the document offers guidance connected to the AI Risk Management Framework that NIST is developing. 

This post originally appeared on the news feed for the National Institute of Standards and Technology.

According to NIST’s Reva Schwartz, the main distinction between the draft and final versions of the publication is the new emphasis on how bias manifests itself not only in AI algorithms and the data used to train them, but also in the societal context in which AI systems are used. 

“Context is everything,” said Schwartz, principal investigator for AI bias and one of the report’s authors. “AI systems do not operate in isolation. They help people make decisions that directly affect other people’s lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI. Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organizations emphasized this point.”

Bias in AI can harm humans. AI can make decisions that affect whether a person is admitted into a school, authorized for a bank loan or accepted as a rental applicant. It is relatively common knowledge that AI systems can exhibit biases that stem from their programming and data sources; for example, machine learning software could be trained on a dataset that underrepresents a particular gender or ethnic group. The revised NIST publication acknowledges that while these computational and statistical sources of bias remain highly important, they do not represent the full picture.

A more complete understanding of bias must take into account human and systemic biases, which figure significantly in the new version. Systemic biases result from institutions operating in ways that disadvantage certain social groups, such as discriminating against individuals based on their race. Human biases can relate to how people use data to fill in missing information, such as a person’s neighborhood of residence influencing how likely authorities would consider the person to be a crime suspect. When human, systemic and computational biases combine, they can form a pernicious mixture — especially when explicit guidance is lacking for addressing the risks associated with using AI systems. 

To address these issues, the NIST authors make the case for a “socio-technical” approach to mitigating bias in AI. This approach involves a recognition that AI operates in a larger social context — and that purely technically based efforts to solve the problem of bias will come up short. 

“Organizations often default to overly technical solutions for AI bias issues,” Schwartz said. “But these approaches do not adequately capture the societal impact of AI systems. The expansion of AI into many aspects of public life requires extending our view to consider AI within the larger social system in which it operates.” 

Socio-technical approaches in AI are an emerging area, Schwartz said, and identifying measurement techniques to take these factors into consideration will require a broad set of disciplines and stakeholders.

“It’s important to bring in experts from various fields — not just engineering — and to listen to other organizations and communities about the impact of AI,” she said.

NIST is planning a series of public workshops over the next few months aimed at drafting a technical report for addressing AI bias and connecting the report with the AI Risk Management Framework. For more information and to register, visit the AI RMF workshop page.

Chad Boutin is a public affairs specialist with the National Institute of Standards and Technology. Boutin's focus at NIST centers on advanced communications, cybersecurity, information technology, biometrics, cryptography, and neutron research.
