
Why Social Science? Because It Can Help Contribute to AI That Benefits Society

May 28, 2024

Artificial Intelligence – “AI” – continues to be the subject of hot debate around the world as governments seek ways to regulate it to protect the public and developers push toward AI with more human-like capabilities. What’s at stake depends on who you listen to: some extol AI’s potential to “transform” the way we live and work, downplaying the possibility of negative impacts on society, while others warn of an existential threat to humanity. Most perspectives land somewhere in between. We see AI, like other technological advances before it, as an exciting tool with tremendous potential. As such, it is not inherently helpful or harmful: its impacts depend on how it is used. Now is the perfect time for the thoughtful and extensive integration of social science evidence and expertise into AI development, deployment, implementation, and use, so that AI’s benefits can be realized while risks of harm to society are minimized.

AI has existed in various forms for decades and, until recently, was developed under tightly constrained parameters to perform specific tasks. However, the 2022 launch of easy-to-access Large Language Model (LLM) tools such as ChatGPT, which are trained on massive amounts of data and generate conversational responses to questions, had leaders across many sectors – from education to business – scrambling to set guidelines and parameters for AI’s use in their domains. Moreover, AI and other technologies are not typically implemented in isolation but as parts of larger systems. Social science approaches can help us understand and address these technologies’ reach, implications, and impact within these “AI systems.”

What happens when we put AI into use without proper safeguards or design? Unfortunately, we can point to numerous examples of unintended negative consequences, including machine learning algorithms that persistently devised ways to avoid hiring female applicants, biased real estate and financial lending guidance, algorithm-related financial market crashes, biased sentencing recommendations, biases in facial recognition and predictive policing, and automated vehicles with struck-from-behind crash rates more than four times those of human drivers, to name a few.

This article was drawn from the Why Social Science? blog from the Consortium of Social Science Associations, and was originally titled “Because It Can Contribute to AI that Benefits Society.”

The good news is that the social and behavioral sciences are poised to inform and improve AI and AI systems through a variety of approaches: interdisciplinary team and human-centered design contributions to AI development; contributions to ethical frameworks and guidelines for AI development and deployment; assessment of risk and impact and mitigation of bias; and development of social science-informed policy recommendations.

In his Why Social Science? blog post, National Academy of Engineering President John Anderson stated, “We should be well past the days when the development of technology is separated from human needs, desires, and behavior.” Yet this is precisely how much of today’s AI technology has been developed. We envision interdisciplinary teams in which social and behavioral scientists work side by side, on equal footing, with engineers and technologists to develop AI that benefits society. These social and behavioral scientists would bring deep expertise and insights on human diversity, needs, preferences, and capabilities to ensure that new AI tools are more usable, accessible, and implementable in ways that reduce bias, mitigate risk, benefit people, and have positive societal impact once integrated into systems.

Social sciences can also inform the design and creation of ethical frameworks and guidelines for AI development and for deployment into systems. Social scientists can contribute expertise: on data quality, equity, and reliability; on how bias manifests in AI algorithms and decision-making processes; on how AI technologies impact marginalized communities and exacerbate existing inequities; and on topics such as fairness, transparency, privacy, and accountability. Further, social scientists can use evaluation and assessment methods to determine societal risks and biases in AI systems, then work with a range of stakeholders to address these challenges at multiple levels.

The vision of the Division of Behavioral and Social Sciences and Education (DBASSE) at the National Academies of Sciences, Engineering, and Medicine includes advancing knowledge and understanding of the behavioral and social sciences to make significant contributions to public policy and to a thriving society. Indeed, we are thrilled to already be doing work in the AI and society space and are looking forward to doing significantly more. For example, our Board on Human-Systems Integration (BOHSI) released a consensus study report on Human-AI Teaming in 2022, is contributing to an upcoming collaborative event, Human and Organizational Factors in AI Risk Management: A Workshop, and is hosting a webinar entitled AI for the Rest of Us: How Equitable Is the Future of Work for Front-Line Workers? on April 2, 2024. Our Societal Experts Action Network (SEAN) is hosting a webinar, Navigating the AI Landscape: Strategies for State and Local Leaders, on April 9, 2024. Our Committee on National Statistics (CNSTAT) will host AI Day for Federal Statistics: CNSTAT Public Event on May 2, 2024. Our Committee on Law and Justice (CLAJ) contributed to the 2024 National Academies consensus study report on Facial Recognition Technology and will host the Law Enforcement Use of Person-based Predictive Policing Approaches: A Workshop later this year.

We are poised to examine AI in education systems through our Board on Science Education (BOSE), and perhaps leverage AI technology to model science, engineering, and technology ecosystems for our nascent Science, Engineering, and Technology Equity Roundtable. Our other units are scoping work in this space as well.

Innovative AI and AI systems hold huge promise for improving quality of life and contributing to societal thriving. Emerging AI technologies may, for example, enable personalized learning for diverse students and smart home technologies for aging adults. However, these benefits can only be fully realized when the potential risks and harms posed by AI are addressed. Social science can help by putting people at the center of AI and AI systems, and by doing so through an equity lens.

Carlotta Arthur (pictured) is executive director of the Division of Behavioral and Social Sciences and Education (DBASSE) at the National Academies. Before this role, she served as director of the Clare Boothe Luce Program for Women in STEM at the Henry Luce Foundation. She has also held assistant and adjunct assistant professor positions at Meharry Medical College and the Dartmouth Geisel School of Medicine. Arthur was the first African American woman to earn a B.S. in metallurgical engineering from Purdue University, and she holds an M.A. in psychology and a Ph.D. in clinical psychology from the State University of New York at Stony Brook. Emanuel Robinson is director of the Board on Human-Systems Integration (BOHSI), part of the Division of Behavioral and Social Sciences and Education (DBASSE) at the National Academies of Sciences, Engineering, and Medicine. He received his M.S. and Ph.D. from the Georgia Institute of Technology.

