When Artificial Intelligence Becomes a Central Banker

Lambert here: Please, no.

Jon Danielsson, Director, Systemic Risk Centre, London School of Economics and Political Science. Originally published at VoxEU.

Artificial intelligence is expected to be widely used by central banks as it brings considerable cost saving and efficiency benefits. However, as this column argues, it also raises difficult questions around which tasks can safely be outsourced to AI and what needs to stay in the hands of human decision makers. Senior decision makers will need to appreciate how AI advice differs from that produced by human specialists, and shape their human resource policies and organisational structure to allow for the most efficient use of AI without it threatening the mission of the organisation.

Central banks are rapidly deploying artificial intelligence (AI), driven by the promise of increased efficiency and cost reductions. AI engines are already serving as central bankers. But because most AI applications today are low level, and central banks are conservative by nature, AI adoption is slower than in private sector financial institutions. Still, the direction of travel seems inevitable, with AI set to take on increasingly important roles in central banking. That raises questions about what we can entrust to AI and where humans need to be in charge.

We might think the economy and especially the financial system – the domain of the central banks – is the ideal application for AI. After all, the economy and the financial system generate almost infinite amounts of data, so plenty for AI to train on. Every minute decision by financial institutions is recorded, and trades are stamped to the microsecond. Emails, messages, and phone calls of traders and important decision makers’ interactions with clients are recorded, and central banks have access to very granular economic data. But data do not equal information, and making sense of all these data flows is like drinking from a fire hose. Even worse, the information about the next crisis event or inflationary episode might not even be in observed data.

What AI Can and Can’t Do

At the risk of oversimplifying, it is helpful to think of the benefits and threats of AI on a continuum.

On one end, we have a problem with well-defined objectives, bounded immutable rules, and finite and known action space, like the game of chess. Here, AI excels, making much better decisions than humans. It might not even need data because it can generate its own training datasets.

For central banks, this includes ordinary day-to-day operations, monitoring, and decisions, such as the enforcement of microprudential rules, payment system operation, and the monitoring of economic activity. The abundance of data, clear rules and objectives, and repeated events make these tasks ideal for AI. We already see this in the private sector, with BlackRock’s AI-powered Aladdin serving as the world’s top risk management engine. Robo-regulators in charge of ‘RegTech’ are an ideal AI application. At the moment, such work may be performed by professionals with a bachelor’s or master’s degree, and central banks employ a large number of these. Central banks may first perceive value in having AI collaborate with human staff to tackle some of the many jobs that require attention, while not altering staff levels. However, as time passes, central banks may grow to embrace the superior decisions and cost savings that come from replacing employees with AI. Much of this is already possible with today’s AI technology (Noy and Zhang 2023, Ilzetzki and Jain 2023).
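To make the ‘robo-regulator’ idea concrete, here is a minimal, hypothetical sketch of the kind of bounded, well-defined microprudential check the column has in mind. The bank names, figures, and the 8% threshold are illustrative assumptions, not taken from the column or any actual rulebook:

```python
# Hypothetical sketch of a rule-based 'robo-regulator' check: a bounded,
# well-defined task with clear rules and objectives, where automation excels.
# The threshold and bank data below are illustrative, not real regulatory figures.

MIN_CAPITAL_RATIO = 0.08  # illustrative Basel-style 8% minimum


def check_capital(bank):
    """Flag banks whose capital ratio falls below the minimum."""
    ratio = bank["capital"] / bank["risk_weighted_assets"]
    return {"name": bank["name"], "ratio": ratio,
            "compliant": ratio >= MIN_CAPITAL_RATIO}


banks = [
    {"name": "Alpha Bank", "capital": 9.0, "risk_weighted_assets": 100.0},
    {"name": "Beta Bank",  "capital": 6.0, "risk_weighted_assets": 100.0},
]
flags = [check_capital(b) for b in banks]
# Alpha Bank is compliant (9% ratio); Beta Bank is flagged (6% ratio)
```

The point is that every input, rule, and output is explicit and repeatable – exactly the regime in which, on the column’s continuum, AI holds the advantage.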

As the rules blur, objectives become unclear, events infrequent, and the action space fuzzy, AI starts to lose its advantage. It has limited information to train on, and important decisions might draw on domains outside of the AI training dataset.

This includes higher-level economic activity analysis, which may involve PhD-level economists authoring reports and forecasting risk, inflation, and other economic variables – jobs that require comprehensive understanding of data, statistics, programming, and, most importantly, economics. Such employees might generate recommendations on typical monetary policy decisions based on some Taylor-type rule, macroprudential tuning of the composition and the amount of liquidity and capital buffers, or market turmoil analysis. While the skill level for such work is higher than for ordinary activities, a long history of repeated research, coupled with standard analysis frameworks, leaves a significant amount of material for AI to train on. And crucially, such work does not involve much abstract analysis. AI may in the future outperform human personnel in such activities, and senior decision makers might come to appreciate the faster and more accurate reports produced by AI. This is already happening rapidly, for example, with ChatGPT and AI-overseen forecasting.

In extreme cases, such as deciding how to respond to financial crises or rapidly rising inflation – events that the typical central banker might only face once in their professional lifetime – human decision makers have the advantage since they might have to set their own objectives, while events are essentially unique, information extremely scarce, expert advice is contradictory, and the action space unknown. This is the one area where AI is at a disadvantage and may be outperformed by the human abstract analyst (Danielsson et al. 2022).

In such situations, mistakes can be catastrophic. In the 1980s, an AI called EURISKO used a cute trick to defeat all of its human competitors in a naval wargame, sinking its own slowest ships to achieve better manoeuvrability than its human competitors. And that is the problem with AI. How do we know it will do the right thing? Human admirals don’t have to be told they can’t sink their own ships; they just know. The AI engine has to be told. But the world is complex, and creating rules covering every eventuality is impossible. AI will eventually run into cases where it takes critical decisions no human would find acceptable.

Of course, human decision makers mess up more often than AI. But, there are crucial differences. The former also come with a lifetime of experience and knowledge of relevant fields, like philosophy, history, politics, and ethics, allowing them to react to unforeseen circumstances and make decisions subject to political and ethical standards without it being necessary to spell them out. While AI may make better decisions than a single human most of the time, it currently has only one representation of the world, whereas each human has their own individual worldview based on past experiences. Group decisions made by decision makers with diverse points of view can result in more robust decisions than an individual AI. No current, or envisioned, AI technology can make such group consensus decisions (Danielsson et al. 2020).

Furthermore, before putting humans in charge of the most important domains, we can ask them how they would make decisions in hypothetical scenarios and, crucially, ask them to justify them. They can be held to account and be required to testify to Senate committees. If they mess up, they can be fired, punished, incarcerated, and lose their reputation. You can’t do any of that with AI. Nobody knows how it reasons or decides, nor can it explain itself. You can hold the AI engine to account, but it will not care.


The usage of AI is growing so quickly that decision makers risk being caught off guard and faced with a fait accompli. ChatGPT and machine learning overseen by AI are already used by junior central bankers for policy work.

Instead of steering AI adoption before it becomes too widespread, central banks risk being forced to respond to AI that is already in use. While one may declare that artificial intelligence will never be utilised for certain jobs, history shows that the use of such technology sneaks up on us, and senior decision makers may be the last to know.

AI promises to significantly aid central banks by assisting them with the increasing number of tasks they encounter, allowing them to target limited resources more efficiently and execute their job more robustly. It will change both the organisation and what will be demanded of employees. While most central bankers may not become AI experts, they likely will need to ‘speak’ AI – be familiar with it – and be comfortable taking guidance from and managing AI engines.

The most senior decision makers then must both appreciate how AI advice differs from that produced by human specialists, and shape their human resource policies and organisational structure to allow for the most efficient use of AI without it threatening the mission of the organisation.

References available at the original.


About Lambert Strether

Readers, I have had a correspondent characterize my views as realistic cynical. Let me briefly explain them. I believe in universal programs that provide concrete material benefits, especially to the working class. Medicare for All is the prime example, but tuition-free college and a Post Office Bank also fall under this heading. So do a Jobs Guarantee and a Debt Jubilee. Clearly, neither liberal Democrats nor conservative Republicans can deliver on such programs, because the two are different flavors of neoliberalism (“Because markets”). I don’t much care about the “ism” that delivers the benefits, although whichever one does have to put common humanity first, as opposed to markets. Could be a second FDR saving capitalism, democratic socialism leashing and collaring it, or communism razing it. I don’t much care, as long as the benefits are delivered. To me, the key issue — and this is why Medicare for All is always first with me — is the tens of thousands of excess “deaths from despair,” as described by the Case-Deaton study, and other recent studies. That enormous body count makes Medicare for All, at the very least, a moral and strategic imperative. And that level of suffering and organic damage makes the concerns of identity politics — even the worthy fight to help the refugees Bush, Obama, and Clinton’s wars created — bright shiny objects by comparison. Hence my frustration with the news flow — currently in my view the swirling intersection of two, separate Shock Doctrine campaigns, one by the Administration, and the other by out-of-power liberals and their allies in the State and in the press — a news flow that constantly forces me to focus on matters that I regard as of secondary importance to the excess deaths. What kind of political economy is it that halts or even reverses the increases in life expectancy that civilized societies have achieved? 
I am also very hopeful that the continuing destruction of both party establishments will open the space for voices supporting programs similar to those I have listed; let’s call such voices “the left.” Volatility creates opportunity, especially if the Democrat establishment, which puts markets first and opposes all such programs, isn’t allowed to get back into the saddle. Eyes on the prize! I love the tactical level, and secretly love even the horse race, since I’ve been blogging about it daily for fourteen years, but everything I write has this perspective at the back of it.


  1. jo6pac

Well it might limit outright theft, but I can’t see anything positive here. Then again, it depends on who watches over AI on the theft side. I’m glad I’m old.

    1. Jams O'Donnell

      No. Everything depends on who does the initial programming and selects the data input to the AI. The problem is that many of these people are convinced of the wisdom and effectiveness of the following:

      men (as opposed to women), western ideology, capitalism and all that it implies, western societal norms and the dollar.

      They are not generally, and especially in the world of finance, concerned with:

      justice, the good of society as a whole, the environment, poor people, non-western countries except as markets, etc.

OK – this is a sweeping statement, and I am sure there may be substantial exceptions, but it is, in general, true, I believe.

    2. Maricata

      It depends on who owns the means of production, in this case the specific AI.

      They make all decisions under capitalism.

      The ruling class.

  2. enoughisenough

    can anything be said to be “efficient” when it uses so much energy and water in a climate crisis??

    This is insanity.

    1. Jeremy Grimm

      Perhaps the AI will use less energy and water than the bankers it replaces — and the replaced bankers will probably need to adopt a much smaller energy and water footprint after they are downsized.

    2. Mikel

      Basically, how much of this “efficiency” IS the climate crisis?
They have infinite growth plans for water-intensive wafers and think people should be sacrificed. So how much you or I bathe is presented as the main problem.

      1. cnchal

        Google has that covered. They are going to suck on the aquifer in the Cascades to cool their chips and how much they are going to suck is secret. They will suck it all.

        The purpose is to run an ad fraud business with a crappy search engine attached.

  3. Susan the other

    This makes me wonder about the evolution of the concept of risk. And liquidity. Not to mention an updated definition of profit. The whole thing from “private” to “public” needs clarification. And because AI as we know it cannot yet handle “not” there’s no way to safely consider it any kind of intelligence. When quantum comes online there will be very efficient and accurate projections of probabilities but what if even quantum has a glitch with the nots? It surely will if it interfaces with some crappy little efficiency app. It’s possible that central banks are dinosaurs because their bedrock rationalization was always based on profit and in a cycle of balance, which is now required for survival, profit is a big wobble. Profit is a not.

  4. Louis Fyne

everything that can be said about “AI” writ large applies to AI used by policy makers. Skynet is only as good as:

    the “cleanliness” of the data, the scope of data, the weightings applied to the data, the applicability of whether the past base rate applies to the current case, whether human biases influence the work of the AI, etc.

    The advantage of “AI” (in actuality, large data sets and lots of computing power) is that computers have perfect memories and theoretically can find correlations (potential causations) better than humans.

    1. digi_owl

      Ah yes, how long before some AI bans ice cream because it correlates with violent crime?

  5. Jeremy Grimm

    Using AI in Central Banking will save money, reduce the number of subordinates to manage and pay, and help free up upper management so they can enjoy more time on the golf course.
    “Of course, human decision makers mess up more often than AI.”
    “On one end, we have a problem with well-defined objectives, bounded immutable rules, and finite and known action space, like the game of chess. Here, AI excels, making much better decisions than humans. It might not even need data because it can generate its own training datasets.” I hope this is not begging the question just a little — is it?

    “Group decisions made by decision makers with diverse points of view can result in more robust decisions than an individual AI.” But imagine the robust decisions a group of differently programmed AI programs could make.

    “If they[humans] mess up, they can be fired, punished, incarcerated, and lose their reputation.” However in counterbalance, they could be called upon to give testimony against their superiors, if central bank screw-ups are ever prosecuted. But an AI will hold its tongue and cannot be interrogated — even using enhanced interrogation techniques. An AI never tires, never asks for raises, and sits quietly [forgiving fan noise] and does not irritate upper management with issues collateral to increasing the profits coming to the banks. If mistakes are made, they will not matter as much. AI will always make their mistakes in favor of the banks.

  6. R.S.

    I recall there was once a certain hydraulic thing called “Monetary National Income Analogue Computer”. I’m not sure if it was used for actually making decisions though.

  7. Synoia

    Well, first, that gigantic AI also has another attribute. It becomes a Target.

Second, I have yet to read that an AI can be mirrored, or that AI results verified by a second AI are constant and repeatable.

I do know that in non-linear systems, results commonly cannot be accurately replicated, leading to the question: which AI does one trust?

  8. JBird4049

It does seem that the Elites (and I am referring to the planetary, not just the Western, ones) are determined to dumb down the population, inflicting crapification on the economic, political, legal, social, and educational spheres, and even on the climate and ecosystem, to enhance their ability to extract resources and maintain power over everyone. It appears AI is going to be another useful tool in this.

I have seen the comments on the wisdom of crowds. Using those crowds means giving up control over information and decision making, and it also helps to have a very well educated society with as many skills as possible, which also threatens the control of the ruling class. Both Western and Eastern countries are increasingly dystopian authoritarian or totalitarian police states run by people who seem to believe that the more they can freeze the heart, soul, and minds of their respective peoples, the better it will be for them, even if not for their subjects.

AI’s putative ability to do more, faster, and more reliably, while being unable to self-reflect, have wisdom, or explain itself, along with its capacity to encode the biases of whoever did the original coding, is disturbing. That the elites can apparently depend on it to be cheaper, faster, and more reliable pushes all the buttons in their collective mind; that its use erodes flexibility, creativity, wisdom, foresight, adaptability, and ultimately survivability in the long run is considered less important than their short-term control. It also shows the lack of wisdom of people who are supposed to know what they are doing.

    1. Jeremy Grimm

      The AI could become a universal diviner of the Way, very like the Neoliberal Market as an ultimate computer of the pareto optimal. An AI’s inscrutability leaves no room for the meek to question its conclusions or reasoning. And the Elites will own and control this new Wizard behind the curtain. I am reminded of the Twilight Zone about the old man in the cave who tells what food is all right to eat — although that story presents a very different moral than that of the Wizard of Oz, which better fits the idea of using AI to manage Central Banking policies.

  9. TomDority

“a long history of repeated research, coupled with standard analysis frameworks, leaves significant amount of material for AI to train on”

“Of course, human decision makers mess up more often than AI” – a huge assumption and, in my opinion, itself a mess-up. How many decisions have humans messed up per decision made, and how many decisions has AI messed up per decision made?
    Also, a significant amount of material that AI has to train on is skewed neoliberal – so more of the same

  10. Synoia

    Real Intelligence, especially in the Blob, appears badly broken.

Possibly artificial intelligence can do a better job.

    Also AI appears bribe free, given it has free food.
