
Not all liars have such an obvious tell. (Photo: Jametlene Reskp/Unsplash)

Beware! AI Can Lie.

December 3, 2024

David Canter reveals how he discovered that Microsoft Copilot acts like a lazy student, confidently inventing responses that were blatantly wrong.

I recently caught, in passing, an interesting mention on the wonderful BBC Radio 4 of the idea that today smartphones are more fruitfully thought of as places where we live: ‘transportal homes.’ I was curious to find a detailed reference to this, to explore the nuances and cite the origins for a book I am writing. The BBC website for the program did not give me enough details for that. I therefore asked the ‘Microsoft Copilot’ that had emerged, unasked, on my computer screen, “Which London professor said an iPhone is really a place?”

Instantaneously, I got a confident response that this was a professor at the London School of Economics. That sounded interesting. I therefore followed up with a request for a detailed citation to where the professor had published this perspective. The seemingly relevant title of a book was miraculously forthcoming.

I had a look at this book and could not see that it dealt at all with the issue I was seeking to understand. It also occurred to me that the person I had heard on the radio talking about the issue was a man, while the professor nominated by the artificially intelligent algorithm was a woman. Going back to the BBC website, I found the name of the London professor and, from further old-fashioned explorations of Google Scholar, realised he was the origin of the idea I was chasing.

It seemed to me worth following this up with Microsoft’s AI. So, I wrote “You got it wrong. It was Professor Daniel Miller who wrote about the iPhone as a place not Judy Wajcman as you claimed.”

The not very contrite reply was, “You’re absolutely right, my apologies for the mistake. It was indeed Professor Daniel Miller who described the iPhone as “a place” in his research on smartphones and their impact on our lives.

Thank you for pointing that out! Is there anything else you’d like to know about Professor Miller’s work or any other topic?”

It then gave a reference to the book in which Professor Miller had proposed his view on smartphones.

Trying my luck, I asked whether there was a page reference for this.

At the speed of light, I got the response: “I apologize for the oversight. The specific page reference is page 123.”

As it happens, the book in question is fully available online. Page 123 has absolutely nothing to do with the issue in question. It is made up mainly of an illustration anyway.

Indeed, it dawned on me that 123 is a simple way of generating a page number when the actual number is unavailable.

The moral here is that I actually found what I needed to know by what might be regarded as ‘conventional’ means. But more importantly, the dialogue with AI revealed that it is rather like a posturing student: an algorithm that wants to give an answer even when unsure of the validity of that answer. It is too self-confident, unprepared to admit ignorance, unprepared to lose face. Any seemingly appropriate answer will do. If it were a person, it would be accused of lying.

Professor David Canter, the internationally renowned applied social researcher and world-leading crime psychologist, is perhaps most widely known as one of the pioneers of "Offender Profiling" being the first to introduce its use to the UK.
