The Trouble with Bootlaces: Treading on Artificial and General Intelligence
Pulling yourself up by your bootlaces is a process that often happens in statistical calculations. In effect, the data are put through a series of analyses to try to reveal interesting patterns within them. Nothing new is added or taken away. The numbers are just crunched and re-crunched until they give up whatever secrets they may be hiding. For example, if you look at the relationship every intelligence test question has to every other question, you can show that all these relationships have something in common. This gives rise to the idea of there being ‘general intelligence’, which underlies all other forms of intelligence.
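For readers who like to see the machinery, here is a minimal, purely illustrative sketch of that internal process. The item scores below are simulated, not drawn from any real test; it simply shows how correlations between items can be ‘crunched’ into a single general factor without anything new ever being added to the data.

```python
# A sketch of the statistical 'bootstrapping' described above, using
# simulated (hypothetical) test-item scores: correlate every item with
# every other item, then read off the first principal component of the
# correlation matrix as a 'general intelligence' factor.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 test-takers answering 10 items that all share one latent ability.
latent_ability = rng.normal(size=(500, 1))
item_scores = latent_ability + rng.normal(scale=0.8, size=(500, 10))

# The relationship of every item to every other item.
corr = np.corrcoef(item_scores, rowvar=False)

# The leading eigenvector of the correlation matrix plays the role of 'g';
# its eigenvalue shows how much of the shared variation it accounts for.
eigenvalues, eigenvectors = np.linalg.eigh(corr)
g_loadings = np.abs(eigenvectors[:, -1])        # each item's loading on 'g'
explained = eigenvalues[-1] / eigenvalues.sum()

print("Item loadings on the general factor:", np.round(g_loadings, 2))
print(f"Share of variance explained by 'g': {explained:.0%}")
```

The point of the sketch is the one made in the text: everything the ‘general factor’ contains was already in the numbers fed in; the analysis only rearranges it.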
The problem with this pulling yourself up by your bootstraps is that it is a totally internal process. Unless you add other forms of question, say about emotional issues, you will only get results that relate to ‘the number you first thought of’, as in the conjurer’s trick. Furthermore, going beyond the setting of test questions to look for intelligence in other ways may open up totally different ways of dealing with the world, ways not open to the ‘general intelligence’ that is fed by intelligence tests. A prime example of the limits of such tests was the testing of street children in Brazil with standard mathematical questions. They did very poorly. But set them the numerical tasks they faced daily, dealing with whatever currencies they could get their hands on, and they did very well indeed.
The moral of this problem with bootstrapping procedures is significant for considering the strengths and weaknesses of the new natural language bots, notably ChatGPT, which is causing such a stir around the world. ChatGPT is essentially a very sophisticated bootstrapping procedure. It searches the text available on the web and puts it through its own mangle to draw out some essence of what is there.
This mangle is given the exciting name of ‘artificial intelligence.’ Computer scientists and engineers have always been very good at selling their creations by putting magical names on them, often drawn from the creative wiles of science fiction writers. As noted with the measurement of ‘intelligence’ by psychologists, the term is a fraught one. Although it has been snaffled by computer folk, its more profound meaning requires that it be a property of a person. What all the robotic engineers and their associates have still not acknowledged is that to be a person you have to be born into some sort of social grouping and grow up within a related context. We do not arrive on earth as fully formed human beings, but literally and figuratively grow into being a person. Our intelligence is an aspect of that developing experience.
What computer boffins work with is not intelligence, artificial or otherwise. It is a set of rigid procedures, algorithms, that work through the existing material available to them. It is very complex bootstrapping.
This was all brought home to me when, for the first time, I questioned ChatGPT. I thought I’d ask it about something I had lived through and so knew a lot about. So, egocentrically, I asked it:
I remember well the day in 1990 when I wrote various terms on a blackboard so that, together with a couple of colleagues, I could choose the best term. In that moment investigative psychology was created.
ChatGPT gave this answer.
This is bootstrapping at its worst. The overall feel of the response is reasonable, but the details are sorely misleading. As I said, it was the 1990s, and the term ‘including’ does not reflect the actual process. Donna Youngs was not around at that time, as I recall. She did go on to author with me the major textbook on the field, so her later involvement is not totally inaccurate. But Paul Britton had nothing to do with this. He may have since claimed to have contributed, but I have yet to see any publication of his that contributes to investigative psychology. The many other students and colleagues of mine who have developed the field, to the extent that some have been given national awards for their contributions, were not picked up by ChatGPT, presumably because they take investigative psychology as a given and thus do not often mention it in their writings.
This is the problem with the bootstrapping at the heart of AI: it can only work with what it is given. It has no body (and nobody) in place to add direct experience to the swirling search of the mega-terabytes that make up what people have put online. My example is a small, merely annoying one, but one can only imagine what ChatGPT might pick up about, for example, the MMR vaccination or the U.S. election that Trump badmouthed. Indeed, this opens an intriguing area of research. Studying what ChatGPT produces could be regarded as digging into the atavistic collective unconscious that is the Wild West of the World Wide Web.
Of course, once some sort of adjudication (filters, censors and the like) is put in place, it is no longer a bootstrapping process. External human experience, and even emotion, come into play. The intelligence that is being used is no longer Artificial.