
Why Don’t Algorithms Agree With Each Other?

February 21, 2024

There is a tendency to think of automatic online processes as neutral and mathematically logical. They are usually described as algorithms, meaning that they consist of a series of pre-planned calculations that can be reduced to the most elementary operations, such as plus, minus, and if/then. It would seem obvious to assume that these ethereal calculations are derived from rational procedures firmly based on mathematical principles.
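To make that idea concrete, here is a minimal, purely illustrative sketch of an 'algorithm' in this elementary sense. The rules, figures, and the toy_premium function are invented for illustration only and do not come from any real insurer.

```python
# A purely illustrative sketch of an "algorithm" in the elementary sense above:
# a fixed sequence of simple operations (add, multiply, compare).
# All rules and figures are invented for illustration only.

def toy_premium(base: float, driver_age: int, accidents: int) -> float:
    premium = base
    premium = premium + accidents * 150   # plus: a charge for each past accident
    if driver_age < 25:                   # if/then: surcharge for young drivers
        premium = premium + 300
    if driver_age > 70:                   # if/then: surcharge for older drivers
        premium = premium + 400
    return premium

print(toy_premium(base=500, driver_age=72, accidents=0))  # 900
```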

Such assumptions about algorithms are important to examine in light of the explosion of artificially intelligent systems. These are based on the seemingly exotic process referred to as 'machine learning.' As much as IT experts wish to insist their systems are almost human, hijacking words like 'intelligence' and 'learning,' their machines are still just frighteningly fast automatons. They do not understand, conceptualize, or even perceive as a human being would. Their 'learning' is based on building millions of connections between inputs and outputs.

These associations, and the associations between the associations, and the links between them, may open up surprising possibilities for achieving the goals the machine is set. But the machine is not surprised. It will not know or comprehend what it has discovered in any way that is analogous to the thoughts of a person.

Given all that, would the role of human input not seem to be negligible? If the algorithms are just churning through endless connections, derived from millions of examples, where is the possibility for values, preferences, biases, and all those aspects of human thoughts and actions that make those thoughts and actions human? Do these algorithms inevitably produce the neutral, unprejudiced results that would be expected of a mere machine?

I had the unexpected opportunity to test this hypothesis recently when I set out to get motor insurance for the car I was about to buy. These days it is rare to be able to talk to an insurance agent. All searches for insurance cover consist of filling in forms online. Even if you do manage to speak to someone, there is no conversation. That person is just filling in a form on your behalf. The algorithms rule.

Going through this process with several different insurance companies, it quickly became clear that they all ask the same questions: age, history of motoring accidents, marital status, previous insurance history, details of the car to be insured, and so on. I'm sure some of these questions have no bearing on the calculation of how much to charge for the insurance premium. They are probably using the opportunity to derive information about the demographics of potential customers. One company, having been told I was retired, wanted to know about previous employment. But otherwise, the basic information being asked for was the same, even if the format varied.

To my surprise, the resulting premiums varied enormously. The first company, one I'd insured with previously, declared it would not insure a person of my age! They suggested I contact an insurance broker, who came up with a premium of over £2,000. I therefore approached a company that advertised widely. Their figure was £1,500. Both were way beyond the average figure I'd paid in the past. Undaunted, I filled in the form for another well-known insurer. They came up with an offer close to £800. Interestingly, all four of the forms I'd filled in were somewhat different, even though they asked me the same questions. Out of curiosity, I filled in the form for a fifth organisation. This form was remarkably similar to the fourth one, and it offered a premium just £2 more expensive.

This empirical study therefore showed very clearly that the algorithms these companies used had somewhat different biases built into them. They were all huge companies, presumably with access to vast amounts of data on the risks associated with insuring different cars and different owners. Could that data have been so different from one to the other? What variations must have been built in by some human agency to generate such a variety of different outcomes?
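As a purely hypothetical illustration of that point, the sketch below shows how two pricing rules, fed identical customer details, can produce very different premiums simply because the people who wrote them chose different weightings. The insurer_a and insurer_b functions, their weights, and all the figures are invented and do not describe any actual company's calculation.

```python
# Hypothetical illustration: identical inputs, different human-chosen weightings,
# very different premiums. All weights and figures are invented.

applicant = {"age": 72, "accidents": 0, "car_value": 15000}

def insurer_a(a: dict) -> float:
    # This rule penalises age heavily
    return 400 + a["car_value"] * 0.02 + max(0, a["age"] - 65) * 180

def insurer_b(a: dict) -> float:
    # This rule weights claims history far more than age
    return 450 + a["car_value"] * 0.02 + a["accidents"] * 400 + max(0, a["age"] - 65) * 10

print(f"Insurer A: £{insurer_a(applicant):.0f}")  # £1960
print(f"Insurer B: £{insurer_b(applicant):.0f}")  # £820
```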

The algorithms for car insurance must be much simpler than many of the processes now carried out by the impressively complex artificially intelligent systems which are storming the ramparts of daily activities. The results here are a clear warning that no matter how sophisticated the programming, no matter how many interactions have been used to 'educate' the algorithms, they are generated by human beings: people who have values and biases, undeclared prejudices, and unconscious habits. We regard them as neutral machines at our peril.


Professor David Canter, the internationally renowned applied social researcher and world-leading crime psychologist, is perhaps most widely known as one of the pioneers of "Offender Profiling," being the first to introduce its use to the UK.
