
Philosophy Has Been – and Should Be – Integral to AI

August 6, 2024

New scientific understanding and engineering techniques have always impressed and frightened. No doubt they will continue to. OpenAI recently announced that it anticipates “superintelligence” – AI surpassing human abilities – this decade. It is accordingly building a new team and devoting 20 percent of its computing resources to ensuring that the behavior of such AI systems will be aligned with human values.

It seems they don’t want rogue artificial superintelligences waging war on humanity, as in James Cameron’s 1984 science fiction thriller, The Terminator (ominously, Arnold Schwarzenegger’s Terminator is sent back in time from 2029). OpenAI is calling for top machine-learning researchers and engineers to help it tackle the problem.

This article by Anthony Grayling and Brian Ball originally appeared on The Conversation, a Social Science Space partner site, under the title “Philosophy is crucial in the age of AI.”

But might philosophers have something to contribute? More generally, what can be expected of the age-old discipline in the new technologically advanced era that is now emerging?

To begin to answer this, it is worth stressing that philosophy has been instrumental to AI since its inception. One of the first AI success stories was a 1956 computer program, dubbed the Logic Theorist, created by Allen Newell and Herbert Simon. Its job was to prove theorems using propositions from Principia Mathematica, a three-volume work from 1910 by philosophers Alfred North Whitehead and Bertrand Russell, aiming to reconstruct all of mathematics on one logical foundation.
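To give a flavor of what such symbolic theorem proving involves, here is a minimal sketch of forward chaining by modus ponens – our toy illustration, far simpler than Newell and Simon’s program, with hypothetical placeholder axioms rather than propositions from Principia:

```python
# A toy forward-chaining prover, offered as an illustrative sketch only;
# the axioms and rules below are hypothetical, not drawn from Principia.
facts = {"P"}                      # propositions taken as given
rules = [("P", "Q"), ("Q", "R")]   # implications: P -> Q, Q -> R

changed = True
while changed:
    changed = False
    for antecedent, consequent in rules:
        # Modus ponens: from A and "A implies B", conclude B.
        if antecedent in facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print(sorted(facts))  # ['P', 'Q', 'R']: "R" has been derived from the axioms
```

The Logic Theorist searched for proofs far more cleverly than this, but the basic currency – symbols manipulated by inference rules – is the same.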

Indeed, the early focus on logic in AI owed a great deal to the foundational debates pursued by mathematicians and philosophers.

One significant step was the German philosopher Gottlob Frege’s development of modern logic in the late 19th century. Frege introduced the use of quantified variables – rather than names of objects such as people – into logic. His approach made it possible to say not only, for example, “Joe Biden is president” but also to systematically express such general thoughts as that “there exists an X such that X is president,” where “there exists” is a quantifier, and “X” is a variable.
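The gain in expressive power is easy to see if we borrow a programming language as our metalanguage. Here is an illustrative sketch – ours, with a made-up domain and a toy predicate – of the difference between the singular claim and the quantified one:

```python
# An illustrative sketch (ours, not Frege's notation): the quantified claim
# "there exists an X such that X is president" over a hypothetical domain.
domain = ["Joe Biden", "Alan Turing", "Bertrand Russell"]

def is_president(x: str) -> bool:
    # A toy predicate standing in for "X is president".
    return x == "Joe Biden"

# The singular claim "Joe Biden is president":
print(is_president("Joe Biden"))              # True

# The general claim: the quantifier ranges over the variable X.
print(any(is_president(x) for x in domain))   # True
```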

Other important contributors in the 1930s were the Austrian-born logician Kurt Gödel, whose completeness and incompleteness theorems concern the limits of what can be proved, and the Polish logician Alfred Tarski, whose “proof of the indefinability of truth” showed that “truth” in any standard formal system cannot be defined within that particular system, so arithmetical truth, for example, cannot be defined within the system of arithmetic.

Finally, the abstract notion of a computing machine, introduced by the British pioneer Alan Turing in 1936, drew on these developments and had a huge impact on early AI.

It might be said, however, that even if such good old-fashioned symbolic AI was indebted to high-level philosophy and logic, the “second-wave” AI, based on deep learning, derives more from the concrete engineering feats associated with processing vast quantities of data.

Still, philosophy has played a role here, too. Take large language models, such as the one that powers ChatGPT, which produces conversational text. They are enormous models, with billions or even trillions of parameters, trained on vast datasets (typically comprising much of the internet). But at their heart, they track – and exploit – statistical patterns of language use. Something very much like this idea was articulated by the Austrian philosopher Ludwig Wittgenstein in the middle of the 20th century: “the meaning of a word,” he said, “is its use in the language.”
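To see the idea in miniature, consider this toy sketch – ours, not how an LLM is actually built, since real models use neural networks with billions of parameters rather than raw counts – of extracting statistical patterns of word use from a tiny, made-up corpus:

```python
# A toy illustration of "meaning is use": count which words follow which
# in a made-up corpus. Real LLMs learn far richer patterns, at vast scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1   # tally each bigram occurrence

# Statistical patterns of use: "sat" is always followed by "on" here, and
# "cat" and "dog" occur in the same slots, so they pattern alike.
print(following["sat"])   # Counter({'on': 2})
print(following["the"])   # Counter({'cat': 1, 'mat': 1, 'dog': 1, 'rug': 1})
```

Words that pattern alike in such statistics (“cat” and “dog” above) end up being treated alike – a crude, countable echo of Wittgenstein’s dictum.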

But contemporary philosophy, and not just its history, is relevant to AI and its development. Could an LLM truly understand the language it processes? Might it achieve consciousness? These are deeply philosophical questions.

Science has so far been unable to fully explain how consciousness arises from the cells in the human brain. Some philosophers even believe that this is such a “hard problem” that it lies beyond the scope of science, and may require a helping hand from philosophy.

In a similar vein, we can ask whether an image-generating AI could be truly creative. Margaret Boden, a British cognitive scientist and philosopher of AI, argues that while AI will be able to produce new ideas, it will struggle to evaluate them as creative people do.

She also anticipates that only a hybrid (neural-symbolic) architecture – one that combines logical techniques with deep learning from data – will achieve artificial general intelligence.

Human values

To return to OpenAI’s announcement, when prompted with our question about the role of philosophy in the age of AI, ChatGPT suggested to us that (amongst other things) it “helps ensure that the development and use of AI are aligned with human values.”

In this spirit, perhaps we can propose that if AI alignment is the serious issue that OpenAI believes it to be, it is not just a technical problem to be solved by engineers or tech companies but also a social one. That will require input from philosophers, social scientists, lawyers, policymakers, citizen users, and others.

Indeed, many people are worried about the rising power and influence of tech companies and their impact on democracy. Some argue we need a whole new way of thinking about AI – taking into account the underlying systems supporting the industry. The British barrister and author Jamie Susskind, for example, has argued it is time to build a “digital republic” – one which ultimately rejects the very political and economic system that has given tech companies so much influence.

Finally, let us briefly ask, how will AI affect philosophy? Formal logic in philosophy dates to Aristotle’s work in antiquity. In the 17th century, the German philosopher Gottfried Leibniz suggested that we may one day have a “calculus ratiocinator” – a calculating machine that would help us derive answers to philosophical and scientific questions in a quasi-oracular fashion.

Perhaps we are now beginning to realize that vision, with some authors advocating a “computational philosophy” that literally encodes assumptions and derives consequences from them. This ultimately allows factual and/or value-oriented assessments of the outcomes.

For example, the PolyGraphs project simulates the effects of information sharing on social media. This can then be used to computationally address questions about how we ought to form our opinions.
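To illustrate the general approach, here is a minimal opinion-dynamics sketch – our toy model with made-up parameters, not the PolyGraphs project’s actual code – in which agents on a ring network repeatedly average their credences with their neighbors’:

```python
# A minimal opinion-dynamics sketch (our toy, not PolyGraphs itself):
# agents share their credences and update by simple averaging.
import random

random.seed(0)
N, ROUNDS = 10, 25
credence = [random.random() for _ in range(N)]   # initial opinions in [0, 1]
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # ring network

for _ in range(ROUNDS):
    updated = []
    for i in range(N):
        reported = [credence[j] for j in neighbors[i]]
        # Each agent moves halfway toward the average of its neighbors.
        updated.append(0.5 * credence[i] + 0.5 * sum(reported) / len(reported))
    credence = updated

print([round(c, 2) for c in credence])   # opinions converge under sharing
```

Varying the network structure, the update rule, or the reliability of what is shared is how such simulations probe normative questions about opinion formation.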

Certainly, progress in AI has given philosophers plenty to think about; it may even have begun to provide some answers.

Anthony Clifford Grayling CBE FRSA FRSL is a British philosopher, author, and professor at Northeastern University London. He has written some 30 books on philosophy, biography, the history of ideas, human rights, and ethics, including The Refutation of Scepticism (1985), The Future of Moral Values (1997), Wittgenstein (1992), What Is Good? (2000), The Meaning of Things (2001), The Good Book (2011), The God Argument (2013), The Age of Genius: The Seventeenth Century and the Birth of the Modern Mind (2016) and Democracy and its Crises (2017). Grayling was a trustee of the London Library and a fellow of the World Economic Forum, and is a fellow of the Royal Society of Literature and the Royal Society of Arts.

Brian Ball is an associate professor of philosophy, AI and information ethics at Northeastern University London. His primary research interest is in the metaphysics of intentional states and acts – which is to say that he explores the natures of such things as knowledge, belief, judgment, and assertion, all of which have the distinctive characteristic of being about something.

