AI May Usurp the Market in Guiding Public Policy Decisions
Following the successful Brexit campaign, Dominic Cummings, the then campaign director of Vote Leave, published a series of blog posts describing how the campaign was run and what his plans were for a successful civil service. The last of these posts was released on June 26, 2019, just before he became special adviser to the current prime minister, Boris Johnson. The idea this post resurrects is a promise in public policy that has lain dormant since the 1970s: the use of hard scientific (knowledge-based) methods to guide policy choices.
In what looks to be Cummings' version of public policy, an elite group of administrators trained in the disciplines of pure thought (mathematicians and philosophers) would run society based on evidence. Collected data points would be used to create a machine simulation (often called "the model"). Policy makers would then be able to probe the simulation with hypothetical policies ("what if drugs were legal?") and adjust public policy according to the results.
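To give a caricature of that workflow, imagine "the model" as a single function mapping policy levers to an outcome score, which policy makers query under different scenarios. The sketch below is entirely hypothetical: every effect and number is invented purely to show the shape of the probe-and-compare loop, not to model any real policy.

```python
# A toy caricature of "probing the model" (all numbers invented): a
# one-equation simulation of an outcome under a hypothetical policy.
def simulate(enforcement_budget: float, drugs_legal: bool) -> float:
    """Return a made-up 'social harm' score for a policy scenario."""
    base_harm = 100.0
    # Invented effects: legalization removes black-market harm but adds
    # a consumption effect; enforcement spending reduces harm with
    # diminishing returns.
    market_harm = 0.0 if drugs_legal else 40.0
    consumption_harm = 15.0 if drugs_legal else 5.0
    enforcement_effect = 30.0 * (1 - 1 / (1 + enforcement_budget / 10))
    return base_harm + market_harm + consumption_harm - enforcement_effect

# Probe the "model" with hypothetical policies and compare the outcomes.
for legal in (False, True):
    print(f"drugs_legal={legal}: harm={simulate(20.0, legal):.1f}")
```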
A complete cybernetic version of economic policy was advocated, though never practiced, in the Soviet Union by the likes of Nobel prize-winning economist Leonid Kantorovich and mathematician and computer scientist Victor Glushkov. They hypothesized taking things a step further: getting the machines to identify what actions to take to reach optimal outcomes. That is, policy makers would only need to decide what they wanted to achieve ("maximize the production of butter") and machines would work out how to allocate resources to achieve it.
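To make the flavor of this concrete, here is a minimal sketch of that style of planning expressed as a linear program, the optimization technique Kantorovich pioneered. All quantities and coefficients are invented for illustration, and the solver used (SciPy's linprog) is simply a convenient stand-in.

```python
# A toy Kantorovich-style plan as a linear program (all numbers invented):
# allocate milk and labour between butter and cheese so as to maximize
# butter output while still meeting a minimum cheese quota.
from scipy.optimize import linprog

# Decision variables: x = [tonnes of butter, tonnes of cheese].
c = [-1, 0]  # linprog minimizes, so maximize butter via its negative

# Resource constraints (A_ub @ x <= b_ub), coefficients hypothetical:
A_ub = [[20, 10],  # units of milk needed per tonne of butter / cheese
        [5, 8]]    # hours of labour needed per tonne of butter / cheese
b_ub = [1000,      # total milk available
        400]       # total labour available

# The planner still sets the goal and the floors: butter is unbounded
# above, cheese must hit a quota of at least 20 tonnes.
bounds = [(0, None), (20, None)]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(f"butter: {result.x[0]:.1f} t, cheese: {result.x[1]:.1f} t")
```

The division of labour is the point: humans choose the objective and the constraints, and the machine searches for the best feasible allocation.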
Outside the Soviet Union, this kind of thinking was actually put into practice with Project Cybersyn, an effort led by management consultant Stafford Beer in the 1970s to help the government of Chile, under then-president Salvador Allende, manage the economy (the project was dismantled following General Augusto Pinochet's coup).
Though Cybersyn was never fully operational, it was rushed into use to help break one of the biggest anti-government strikes, instigated by a right-wing union. Beer's vision was far more decentralized and democratic than its Soviet counterpart, but it still fell within the same line of thought.
As you will have gauged by now, the cybernetic vision tends to sit firmly on the left of the political spectrum.
The market
Sitting on the opposite side of the cybernetic vision are the fathers of modern liberal economics, Ludwig von Mises and Friedrich von Hayek. Their arguments, taken broadly, hold that the cybernetic dream is computationally impossible: either the world cannot be modeled efficiently, or there are no appropriate signals with which to evaluate the quality of solutions.
They argued that another mechanism that exists in the real world (in their case, the market) needs to do the heavy lifting by providing a signal, which, in the case of goods and services, is prices. For them, a good policy is not one that lays out the steps to be taken towards a solution, but one that sets up a "game" of sorts with the right incentives and punishments. This leaves room for essentially one public policy, which can be summed up as "privatize everything, create a competitive arena, and let the market sort the problems out."
Leaving all real policy decisions to the market has been a traditional (post-1980s, at least) right-wing idea. This raises the question of why someone advising the current UK government is even discussing concepts that are not purely market-driven. In his latest post, Cummings laments the inability of the British state to do serious modeling. This seems a striking contradiction: shouldn't the market be able to solve everything?
It is worth mentioning that conceptions of planning methods differ a lot across individual thinkers – there are even advocates of socialist markets on the left. Though there is a clear left-right divide, in terms of actual party politics it seems that the idea of some planning has been partially accepted (somewhat grudgingly) by the historical right for some time.
AI and public policy
So, do the progress in AI and the concurrent massive increases in computational power and data availability allow us to circumvent the liberal arguments? I would say yes, but only partially. One can easily envision a solution where the latest AI methods are used to affect policy directly. It's quite plausible that one could plan and re-plan millions of products and services on a daily basis, find the optimal set of actions to tackle social ills, and generally push for an overall brighter future.
This isn't, however, trivial. Delivering causal models to drive simulations is extremely hard, requires significant expertise, and can only be done in a limited capacity. On top of this, current AI methods lack any concept of "common sense": a model created with a specific task in mind might be able to optimize for that task, but is prone to generating unwanted side effects. For example, a factory controller built to maximize production will do exactly that, with no regard for the environment unless the environment is explicitly part of its objective.
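As a cartoon of this failure mode, consider the same kind of planning model solved with and without the side effect made explicit. The numbers below are invented; the point is only that an optimizer given a narrow objective will cheerfully exploit whatever the model leaves out.

```python
# A toy illustration (invented numbers) of a narrow objective producing
# side effects: a production plan solved with and without an emissions cap.
from scipy.optimize import linprog

c = [-12, -9]            # maximize output: 12 and 9 units per run
machine_hours = [1, 2]   # hours per run of each process, 100 available
emissions = [8, 1]       # tonnes of CO2 per run (the ignored side effect)

# Variant 1: the objective the model was given, output only.
naive = linprog(c, A_ub=[machine_hours], b_ub=[100], bounds=[(0, None)] * 2)

# Variant 2: the side effect surfaced as a constraint (cap of 200 tonnes).
capped = linprog(c, A_ub=[machine_hours, emissions], b_ub=[100, 200],
                 bounds=[(0, None)] * 2)

for name, res in (("naive", naive), ("capped", capped)):
    co2 = sum(e * x for e, x in zip(emissions, res.x))
    print(f"{name}: plan={res.x.round(1)}, output={-res.fun:.0f}, CO2={co2:.0f} t")
```

With these invented coefficients, the naive plan runs the dirty process flat out and emits four times the cap; the "common sense" the article refers to is exactly what has to be hand-encoded as extra constraints.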
But the mother of all problems in AI is that many of the more modern probabilistic planning algorithms are unstable without excessive human tuning, for reasons beyond the scope of this article. In practice, this means that outside straightforward, traditional planning (such as the linear programming sketched above), getting value from modern AI requires significant human expertise, which currently sits mostly within private AI research labs and some university departments. Any serious attempt to create a cybernetic state would need both a significant movement of human resources towards the project and some further algorithmic breakthroughs.
Unfortunately, current AI deployments in public policy do not adhere to the ideas above. AI is mostly deployed for simple predictive tasks ("will person X commit crime Y in the future?"), and public bodies are accordingly finding the technology increasingly useless. But technological innovations almost always go through a series of failures before they find their footing, so hopefully AI will eventually be applied properly.
Back to Brexit
What does Brexit have to do with any of this? My understanding is that, for Cummings, Brexit is needed to disrupt the civil service enough that it can be rebuilt. It would then be possible to deploy serious AI public policy solutions, which is another name for scientific planning. The British state would then be running projects that can model the future, with machines or civil servants probing the model for golden paths.
What is truly surprising, in my view, is that such proposals come not from the broad political left (where there are, of course, extremely interesting takes on scientific planning) but from the right. This might imply the use of AI to hasten the free-market agenda, by asking questions like "what is the best propaganda to produce in order to get everyone on board with raising the state pension age to 95, privatizing every public service and accepting a ban on immigration?"
All this AI talk might be a red herring: the more traditional right-wing Brexit party policies are simply an intensification of a deregulation agenda, though again the signals are mixed. Alternatively, there may be a split between One Nation Conservatives and free marketeers across the board.
It's hard to imagine the EU allowing for direct planning (it goes against most of the principles of the internal market), but it's equally hard to envision post-Brexit Britain embracing it, since most institutions see the market as the only legitimate form of organization.
But some cracks in the consensus seem to be appearing. Perhaps we will end up in a position where planning towards a "good society" using AI is actively pursued.