Innovation

Why Don’t Algorithms Agree With Each Other?

February 21, 2024

There is a tendency to think of automatic online processes as neutral and mathematically logical. They are usually described as algorithms. That means they consist of a series of pre-planned calculations that can be reduced to the most elementary operations, such as plus, minus, and if-then. It would seem obvious to assume that these ethereal calculations derive from rational procedures firmly based on mathematical principles.

Such assumptions about algorithms are important to examine in light of the explosion of artificially intelligent systems. These systems are based on the apparently exotic process referred to as ‘machine learning.’ As much as IT experts wish to insist their systems are almost human, hijacking words like ‘intelligence’ and ‘learning,’ their machines are still just frighteningly fast automatons. They do not understand, conceptualize, or even perceive as a human being would. Their ‘learning’ is based on building millions of connections between inputs and outputs.

These associations, and associations between the associations, and the links between them, may open up surprising possibilities for achieving the goals the machine is set. But the machine is not surprised. It will not know or comprehend what it has discovered in any way that is analogous to the thoughts of a person.

Given all that, the role of human input would seem to be negligible. If the algorithms are just churning through endless connections, derived from millions of examples, where is the possibility for values, preferences, biases, and all those aspects of human thoughts and actions that make those thoughts and actions human? Do these algorithms inevitably produce the neutral, unprejudiced results that would be expected of a mere machine?

I had the unexpected opportunity to test this hypothesis recently when I set out to get motor insurance for the car I was about to buy. These days it is rare to have the possibility of talking to an insurance agent. All searches for insurance cover consist of filling in forms online. Even if you do manage to speak to someone, there is no conversation. That person is just filling in a form on your behalf. The algorithms rule.

Going through this process with several different insurance companies, it quickly became clear that they all ask the same questions: age, history of motoring accidents, marital status, previous insurance history, details of the car to be insured, and so on. I’m sure some of these questions have no bearing on the calculation of how much to charge for the insurance premium. They are probably using the opportunity to derive information about the demographics of potential customers. One company, having been told I was retired, wanted to know about previous employment. But otherwise, the basic information being asked for was the same, even if the format varied.

To my surprise, the resulting premiums requested varied enormously. The first company, one I’d insured with previously, declared it would not insure a person of my age! They suggested I contact an insurance broker. The broker came up with a premium of over £2,000. I therefore approached a company that advertised widely. Their figure was £1,500. Both were way beyond the average figure I’d paid in the past. Undaunted, I filled in the form for another well-known insurer. They came up with an offer close to £800. Interestingly, all four of the forms I’d filled in were somewhat different, even though they asked me the same questions. Out of curiosity, I filled in the form for a fifth organisation. This form was remarkably similar to the fourth organisation’s, and it offered a premium just £2 more expensive than the fourth one.

This empirical study therefore showed very clearly that the algorithms these companies used had somewhat different biases built into them. They were all huge companies, presumably with access to vast amounts of data on the risks associated with insuring different cars and different owners. Could that data have been so different from one company to another? What variations must have been built in by some human agency to generate such a variety of outcomes?

The algorithms for car insurance must be much simpler than many of the processes now carried out by the impressively complex artificially intelligent systems that are storming the ramparts of daily activities. The results here are a clear warning that no matter how sophisticated the programming, no matter how many interactions have been used to ‘educate’ the algorithms, they are generated by human beings: people who have values and biases, undeclared prejudices, and unconscious habits. We regard them as neutral machines at our peril.


Professor David Canter, the internationally renowned applied social researcher and world-leading crime psychologist, is perhaps most widely known as one of the pioneers of "Offender Profiling" being the first to introduce its use to the UK.
