How Intelligent is Artificial Intelligence?

September 27, 2023

"There is a danger that criticism of the AI project in these terms is seen as romantic resistance to progress, like John Ruskin’s hostility to railways," writes Robert Dingwall. (Image: Hathi Digital Library Trust and the University of Minnesota Library / Victorian Web)

Cryptocurrencies are so last year. Today’s moral panic is about AI and machine learning. Governments around the world are hastening to adopt positions and regulate what they are told is a potentially existential threat to humanity – and certainly to a lot of middle-class voters in service occupations. However, it is notable that most of the hype is coming from the industry, allied journalists and potential clients, and then being transmitted to the politicians. The computer scientists I talk to – and this is inevitably a small and potentially biased sample – are a good deal more skeptical.

One of them suggested a useful experiment that anyone can carry out on ChatGPT. Ask it to write your obituary in the style of a serious newspaper. Who could resist the chance to read their own obituary? So I invited the site to produce one. The software had no difficulty in distinguishing me from the handful of other Robert Dingwalls on the Internet. However, it got my birth year and age wrong in the first paragraph and then went on to describe my professional contributions at such a high level of generalization and vagueness as to be pretty meaningless. It knew I had a wife, children and grandchildren but nothing about them: for various reasons, both my wife and two of my children minimize their web presence. It correctly identified the areas of sociology to which I had contributed, but not in any specific way. It thought my first degree was from Oxford rather than Cambridge and made no mention of my PhD from Aberdeen. It also credited me with a position at a US university; I have never held one, though I have been a visiting fellow at the American Bar Foundation, a non-profit that collaborates with Northwestern. The punchline was a rather trite quote from John Donne that could have been used about anybody.

Maybe the approach would work better if I had asked it to write an obituary for a higher-profile public figure like a president or prime minister. On the other hand, I would argue that my web presence is probably greater than that of many academics, and that it is a fairer test of AI to ask it to write about the Z-list rather than the A-list. The obituary exercise is one area where we can almost guarantee to be more expert than the tool we are using, which makes its limitations visible.
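For anyone who would rather repeat the exercise programmatically than through the ChatGPT website, a minimal sketch follows. It assumes the openai Python package, an API key in the environment, and the model name gpt-4o; all of these are illustrative choices on my part, not details of the exercise described above, and the subject named in the prompt is hypothetical.

```python
from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set and that
# "gpt-4o" is an available model; both are illustrative assumptions.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Write my obituary in the style of a serious newspaper. "
            "My name is Jane Doe and I am a sociologist."  # hypothetical subject
        ),
    }],
)

print(response.choices[0].message.content)
```

The point of the exercise is to check the output against what you actually know: birth dates, degrees, family details and career specifics are exactly where the confident-sounding errors tend to appear.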

What I have yet to see is any acknowledgment that the enthusiasm for AI and Large Language Models rests on a particular stance in relation to philosophy and the social sciences that has been contested since at least the beginning of the 20th century. Specifically, there is a continuing demand for models of human action and social organization that assume both are governed by rules or laws akin to those of Newtonian physics. These models have consistently failed to deliver, but the demand does not go away. Big Data is the latest iteration: if only we can devote enough computing power to the task, and assemble large enough data sets, then we will be able to induce the algorithms that generate the observable world of actions, ideas and material reality.

I learned social science methods just as SPSS was coming into use. Our first caution was against ‘data dredging’ – correlating every variable with every other variable and then making up a story about the ones that were significant. The mathematics of probability mean that some relationships will inevitably be significant by chance rather than causation: at the conventional p < 0.05 threshold, roughly one test in twenty will pass even when the data are pure noise, so we might make some dangerous mistakes. Machine learning has a distinct whiff of the same problem.
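A short simulation makes the point concrete. The sketch below (in Python with NumPy and SciPy, my choice of tools rather than anything from the SPSS era) dredges 20 columns of pure random noise for pairwise correlations; with 190 pairs tested at p < 0.05, around nine or ten will come out ‘significant’ despite there being no causality anywhere in the data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# 100 observations of 20 variables of pure noise: no real relationships exist.
n_obs, n_vars = 100, 20
data = rng.standard_normal((n_obs, n_vars))

# Dredge: correlate every variable with every other variable.
spurious, n_tests = 0, 0
for i in range(n_vars):
    for j in range(i + 1, n_vars):
        r, p = stats.pearsonr(data[:, i], data[:, j])
        n_tests += 1
        if p < 0.05:
            spurious += 1

print(f"{spurious} of {n_tests} correlations are 'significant' at p < 0.05")
# Expect roughly 5 percent of the 190 tests to pass by chance alone.
```

Any one of those chance hits could be dressed up with a plausible story, which is exactly the mistake those early methods teachers warned against.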

Readers of this blog may have seen my comments on the film Oppenheimer and its representation of the relativist revolution in 20th-century thought. AI takes the opposite side. It assumes that meanings and representations are stable and unidimensional. But Large Language Models do not escape the indeterminacy of language simply by scale. It is not an accident that the US Navy funded Harold Garfinkel to study the ‘etcetera clause,’ the implicit extension to any rule which acknowledges that its operation inevitably reflects the circumstances of its use. Corpus linguistics has told us many interesting things about bodies of text and the history of language, but the ability of computing power to generate meaningful talk outside fairly narrow and highly rule-governed contexts is limited. This is a feature, not a bug. Popular resentment at ‘Computer says no’ responses from large organizations reflects precisely the algorithm’s insensitivity to context and the local features that humans attend to.

Set against the AI revolution, we have the traditions of understanding humans as skilled improvisers, capable of creating spontaneous order in the moment, taking account of the actions of others and the specifics of the material and non-material resources at hand. As Stanley Lieberson and Howard Becker proposed, from very different starting points, this may offer us a probabilistic view of the possibilities of planned social actions, much as Oppenheimer’s generation moved on from Einstein’s hankering for a rule-governed, wholly predictable universe.

There is a danger that criticism of the AI project in these terms is seen as romantic resistance to progress, like John Ruskin’s hostility to railways. That would be a serious mistake. The fad for AI takes one side in a debate between serious people in philosophy and the sciences, social and natural, that has been conducted for the last century or so. If the limits of language are not properly understood, all we are left with is the hype of a snake oil industry trying to extract profits from naïve investors and favorable regulation from gullible politicians.

Whatever happened to cryptocurrencies…?

Robert Dingwall is an emeritus professor of sociology at Nottingham Trent University. He also serves as a consulting sociologist, providing research and advisory services particularly in relation to organizational strategy, public engagement and knowledge transfer. He is co-editor of the SAGE Handbook of Research Management.
