Yves here. We have actually had a major AI financial crisis, except it occurred well before that nomenclature became common. Algo-driven trading is an AI implementation, particularly the black-box variety.
The 1987 stock market crash resulted from a large-scale implementation of automated selling called portfolio insurance. So even from the early days of computer-implemented trading strategies, we’ve seen that they can wreak havoc.
By Jon Danielsson, Director of the Systemic Risk Centre, London School of Economics and Political Science, and Andreas Uthemann, Principal Researcher, Bank of Canada, and Research Associate at the Systemic Risk Centre, London School of Economics and Political Science. Originally published at VoxEU
The rapid adoption of artificial intelligence is transforming the financial industry. This column, the first in a two-part series, argues that AI may either increase systemic financial risk or act to stabilise the system, depending on endogenous responses, strategic complementarities, the severity of the events it faces, and the objectives it is given. AI’s ability to master complexity and respond rapidly to shocks means future crises will likely be more intense than those we have seen so far.
Both the private and the public financial sectors are expanding their use of artificial intelligence (AI). Because AI processes information much faster than humans, it may help cause more frequent and more intense financial crises than those we have seen so far. But it could also do the opposite and act to stabilise the system.
In Russell and Norvig’s (2021) classification, we see AI as a “rational maximising agent”. This definition resonates with the typical economic analyses of financial stability. What distinguishes AI from purely statistical modelling is that it not only uses quantitative data to provide numerical advice; it also applies goal-driven learning to train itself on qualitative and quantitative data. Thus, it can provide advice and even make decisions.
It is difficult to gauge the extent of AI use in the financial services industry. The Financial Times reports that only 6% of banks plan substantial AI use, citing concerns about reliability, job losses, regulation, and institutional inertia. Some surveys concur, but others differ. Finance is a highly competitive industry. When start-up financial institutions and certain large banks enjoy significant cost and efficiency gains by using modern technology stacks and hiring staff attuned to AI, more conservative institutions probably have no choice but to follow.
The rapid adoption of AI might make the delivery of financial services more efficient while reducing costs. Most of us will benefit.
But it is not all positive. There are widespread concerns about the impact of AI on the labour market, productivity and the like (Albanesi et al. 2023, Filippucci et al. 2024). Of particular concern to us is how AI affects the potential for systemic financial crises, those disruptive events that cost the large economies trillions of dollars and upend society. This has been the focus of our recent work (Danielsson and Uthemann 2024).
The Roots of Financial Instability
We surmise that AI will not create new fundamental causes of crises but will amplify the existing ones: excessive leverage that renders financial institutions vulnerable to even small shocks; self-preservation in times of crisis that drives market participants to prefer the most liquid assets; and system opacity, complexity and asymmetric information that make market participants mistrust one another during stress. These three fundamental vulnerabilities have been behind almost every financial crisis in the past 261 years, ever since the first modern one in 1763 (Danielsson 2022).
However, although the same three fundamental factors drive all crises, preventing and containing them is not easy because individual crises differ significantly. That is to be expected: if financial regulations are effective, the crises they anticipate are prevented in the first place. Consequently, it is almost axiomatic that crises happen where the authorities are not looking. And because the financial system is almost infinitely complex, there are many places where risk can build up.
The key to understanding financial crises lies in how financial institutions optimise: they aim to maximise profits given an acceptable level of risk. When translating that into how they behave operationally, Roy’s (1952) criterion is useful; stated succinctly, maximise profits subject to not going bankrupt. That means financial institutions optimise for profits most of the time, perhaps 999 days out of 1,000. However, on that one last day, when great upheaval hits the system and a crisis is on the horizon, survival, rather than profit, is what they care most about ― the ‘one day out of a thousand’ problem.
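In stylised notation (ours, added purely for illustration), Roy’s safety-first idea can be written as a chance-constrained optimisation:

```latex
% A stylised rendering of the safety-first criterion: maximise expected
% return on portfolio w while capping the probability of ruin at epsilon.
\max_{w}\; \mathbb{E}\bigl[R(w)\bigr]
\quad \text{subject to} \quad
\Pr\bigl(R(w) < \underline{R}\bigr) \le \varepsilon
```

Here $R(w)$ is the return on portfolio $w$, $\underline{R}$ the ruin threshold and $\varepsilon$ a small tolerance. On the 999 ordinary days the constraint is slack and institutions simply maximise profit; on the thousandth day it binds, and survival takes over.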
When financial institutions prioritise survival, their behaviour changes rapidly and drastically. They hoard liquidity and choose the most secure and liquid assets, such as central bank reserves. This leads to bank runs, fire sales, credit crunches, and all the other undesirable behaviours associated with crises. There is nothing untoward about such behaviour, but it cannot be easily regulated.
When AI Gets Involved
These drivers of financial instability are well understood and have always been a concern, long before the advent of computers. As technology was increasingly adopted in the financial system, it brought efficiency and benefited the system, but also amplified existing channels of instability. We expect AI to do the same.
When identifying how this happens, it is useful to consider the societal risks arising from the use of AI (e.g. Weidinger et al. 2022, Bengio et al. 2023, Shevlane et al. 2023) and how these interact with financial stability. Doing so, we arrive at four channels through which the economy is vulnerable to AI:
- The misinformation channel emerges because the users of AI do not understand its limitations, but become increasingly dependent on it.
- The malicious use channel arises because the system is replete with highly resourced economic agents who want to maximise their profit and are not too concerned about the social consequences of their activities.
- The misalignment channel emerges from difficulties in ensuring that AI follows the objectives desired by its human operators.
- The oligopolistic market structure channel emanates from the business models of companies that design and run AI engines. These companies enjoy increasing returns to scale, which can prevent market entry and increase homogeneity and risk monoculture.
How AI Can Destabilise the System
AI needs data to be effective, even more so than humans. That should not be an issue because the system generates plenty of data for it to work with, terabytes daily. The problem is that almost all that data comes from the middle of the distribution of system outcomes rather than from the tails. Crises are all about the tails.
There are four reasons why we have little data from the tails.
The first is the endogenous response to control by market participants; this relates to the AI misinformation channel. A helpful way to understand that is Lucas’s (1976) critique and Goodhart’s (1974) law: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes”. Market participants do not just stoically accept regulations. No, they respond strategically. They do not tell anybody beforehand how they plan to respond to regulations and stress. They probably do not even know. Consequently, the reaction functions of market participants are hidden. And something that is hidden is not in a dataset.
The second reason, which follows from the malicious use channel, concerns the strategic complementarities at the heart of how market participants behave during crises. They feel compelled to withdraw liquidity because their competitors are doing so. Meanwhile, strategic complementarities can lead to multiple equilibria, where wildly different market outcomes might result from random chance. Both of these consequences mean that observations of past crises are not all that informative about future ones. This is another reason we do not have many observations from the tails.
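A minimal sketch of the multiple-equilibria point, using an invented two-institution “stay versus run” payoff matrix (all numbers are assumptions, chosen only so that both outcomes are self-fulfilling):

```python
# A minimal sketch of strategic complementarity: an invented symmetric
# two-institution "stay vs. run" game. Payoff numbers are assumptions,
# chosen only so that both (stay, stay) and (run, run) are equilibria.
STAY, RUN = 0, 1
ACTIONS = (STAY, RUN)

# payoff[my_action][other_action] -> my payoff (hypothetical units)
payoff = [
    [3, -2],  # I stay: fine if you stay, costly if you run on me
    [1,  0],  # I run: small gain if you stay, break-even if we both run
]

def is_nash(a, b):
    """Neither institution gains by unilaterally switching its action."""
    return (payoff[a][b] >= max(payoff[x][b] for x in ACTIONS) and
            payoff[b][a] >= max(payoff[x][a] for x in ACTIONS))

names = ("stay", "run")
for a in ACTIONS:
    for b in ACTIONS:
        print(f"({names[a]}, {names[b]}) equilibrium: {is_nash(a, b)}")
```

Which equilibrium is realised depends on beliefs about what the other will do, which is why data from past crises is such a poor guide: the same fundamentals can end in wildly different outcomes.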
At the root of the problem are two characteristics of AI: it excels at extracting complex patterns from data, and it quickly learns from the environment in which it operates. Current AI engines observe what their competitors do, and it would not be difficult for them to use those observations to improve their own models of how the world works. In practice, this means that future AI engines in private firms and public organisations will train, and hence optimise, to influence one another.
Aligning the incentives of AI with those of its owner is a hard problem – the misalignment channel. It can get worse during crises, when speed is of the essence and there might be no time for the AI to elicit human feedback to fine-tune its objectives. The traditional ways the system acts to prevent run equilibria might no longer work. The ever-present misalignment between individually rational behaviour and socially desirable outcomes might be exacerbated if human regulators can no longer coordinate rescue efforts and ‘twist arms’. The AI might already have liquidated its positions, and hence caused a crisis, before its human owner can pick up the phone to answer the call of the Fed chair.
AI will probably exacerbate the oligopolistic market structure channel for financial instability, a channel further strengthened by the concentrated nature of the AI analytics business. As financial institutions come to see and react to the world in increasingly similar ways, they coordinate in buying and selling, leading to bubbles and crashes. More generally, risk monoculture is an important driver of booms and busts in the financial system. Machine learning design, input data and compute determine the ability of AI engines to manage risk, and these are increasingly controlled by a few technology and information companies that continue to merge, leading to an oligopolistic market.
The main concern from this market concentration is the likelihood that many financial institutions, including those in the public sector, get their view of the world from the same vendor. That implies that they will see opportunities and risk similarly, including how those are affected by current or hypothetical stress. In crises, this homogenising effect of AI use can reduce strategic uncertainty and facilitate coordination on run equilibria.
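A rough simulation of this homogenising effect, with all parameters assumed for illustration: institutions sell when a noisy risk signal crosses a threshold, and the signal mixes a vendor-supplied common component with an idiosyncratic one.

```python
# A rough sketch of risk monoculture (all parameters are assumptions).
# Each of n_inst institutions sells when its risk signal crosses a
# threshold. The signal mixes a common component (the shared vendor's
# model) with an idiosyncratic one; raising the common weight makes
# majority "run" days far more frequent even though no individual
# institution's decision rule has changed.
import random

def run_day_frequency(common_weight, n_inst=50, days=10_000, threshold=1.8, seed=1):
    rng = random.Random(seed)
    run_days = 0
    for _ in range(days):
        common = rng.gauss(0, 1)  # the vendor's shared view of the world
        sellers = sum(
            common_weight * common + (1 - common_weight) * rng.gauss(0, 1) > threshold
            for _ in range(n_inst)
        )
        run_days += sellers > n_inst // 2  # a majority sells on the same day
    return run_days / days

print("diverse models:", run_day_frequency(common_weight=0.2))
print("single vendor :", run_day_frequency(common_weight=0.9))
```

With diverse models, a majority of institutions almost never sell on the same day; with a dominant vendor, coordinated ‘run’ days become routine.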
Given the recent wave of data vendor mergers, it is a concern that neither the competition authorities nor the financial authorities appear to have fully appreciated the potential for increased systemic risk that could arise from oligopolistic AI technology.
Summary
If faced with existential threats to the institution, AI optimises for survival. But it is here that the very speed and efficiency of AI work against the system. If other financial institutions do the same, they coordinate on a crisis equilibrium: all the institutions affect one another because they collectively make the same decision. They all try to react as quickly as possible, as the first to dispose of risky assets is best placed to weather the storm.
The consequence is increased uncertainty, leading to extreme market volatility, as well as vicious feedback loops, such as fire sales, liquidity withdrawals and bank runs. Thanks to AI, stress that might have taken days or weeks to unfold can now happen in minutes or hours.
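To illustrate the speed effect, here is a toy fire-sale loop with invented parameters; the ‘speed’ coefficient stands in for reaction time, human versus AI:

```python
# A toy fire-sale feedback loop with invented parameters. Institutions
# sell in proportion to their drawdown; sales move the price via linear
# impact, deepening the drawdown and triggering further sales. The
# "speed" knob stands in for reaction time: a fast (AI-like) reaction
# makes the same initial shock spiral further than a slow (human) one.
def fire_sale(speed, impact=0.5, periods=40, shock=0.10):
    price, holdings, peak = 1.0, 1.0, 1.0
    for t in range(periods):
        if t == 0:
            price *= 1 - shock                  # initial shock to fundamentals
        drawdown = max(0.0, (peak - price) / peak)
        sale = min(holdings, speed * drawdown)  # more reactive -> larger sales
        holdings -= sale
        price *= 1 - impact * sale              # linear price impact of selling
    return price

for speed in (0.05, 0.5):  # slow "human" vs. fast "AI" reaction
    print(f"reaction speed {speed}: price after shock {fire_sale(speed):.3f}")
```

In this toy loop, faster reactions do not just compress the timeline; they also deepen the spiral, because each round of selling triggers larger sales in the next.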
The AI engines might also do the opposite. After all, just because AI can react faster does not mean it will. Empirical evidence suggests that although asset prices might fall below fundamental values in a crisis, they often recover quickly. That means buying opportunities. If the engines are not that concerned about survival and converge on a recovery equilibrium in aggregate, they will absorb the shock and no crisis will ensue.
Taken together, we surmise that AI will act to lower volatility and fatten the tails. It could smooth out short-term fluctuations at the expense of more extreme events.
Of particular importance is how prepared the financial authorities are for an AI crisis. We discuss this in a VoxEU piece appearing next week, titled “How the financial authorities can respond to AI threats to financial stability”.
Authors’ note: Any opinions and conclusions expressed here are those of the authors and do not necessarily represent the views of the Bank of Canada.
See original post for references
As opposed to model collapse, AI could trend towards fashion, i.e., dresses are competition among females and six-pack abs are competition among males. Attention is paid to the main competition instead of the theoretical object of inquiry.
So why shouldn’t AIs start paying primary attention to each other’s actions? This is a main covariate of the selection process. Once that happens, the door is open to indirect collusion, as has happened in the past with airline pricing.
An AI doesn’t have to be aware of the other AIs – it’s making observations, acting on those observations, and learning. It’s quite natural that at least some AI models would “learn” to manipulate the markets, even though they would not “think” of it as manipulation; it would just be a recognizable pattern in the data flow.
“Market state A, followed by action A’ will lead to state B which, followed by action B’ will cause short chaos in the state, and a correctly timed action C will lead to maximizing my feedback score…”
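Something like this toy reinforcement-learning loop, with market dynamics and rewards entirely invented for illustration, shows how such a pattern could be “learned” with no concept of manipulation anywhere in the code:

```python
# A toy illustration, with market dynamics and rewards entirely invented,
# of how a reward-driven agent can "learn" the A -> A' -> B -> B' -> C
# pattern described above. Nothing in the code represents manipulation;
# the agent just discovers that one action sequence maximises its score.
import random

random.seed(0)
STATES = ["A", "B", "chaos"]
ACTIONS = ["hold", "A_prime", "B_prime", "C"]

def step(state, action):
    """Hypothetical market dynamics in which the destabilising cycle pays best."""
    if state == "A" and action == "A_prime":
        return "B", 0.0
    if state == "B" and action == "B_prime":
        return "chaos", 0.0          # action B' causes short chaos in the market
    if state == "chaos" and action == "C":
        return "A", 10.0             # a correctly timed action C cashes in
    return "A", 1.0                  # anything else: small, steady reward

# Off-policy Q-learning with a purely random exploration policy.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
for _ in range(4000):
    s = "A"
    for _ in range(8):
        a = random.choice(ACTIONS)
        s2, r = step(s, a)
        Q[(s, a)] += 0.2 * (r + 0.9 * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
        s = s2

for s in STATES:                     # the learned policy traces the cycle
    print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))
```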
Yes airline pricing indeed. I am now driving 600 miles every weekend (300 one way) from Indianapolis to Warren, Michigan for work and check for air tickets incessantly using every known method. I am unable to find anything < $320 or thereabouts. Bootyjudge is unlikely to disturb the very profitable airline sector that since 2016 has attracted enormous investments from the usual Wall Street investor class (Buffett etc.). This is what the Democrats excel at. Set the avaricious "investor" class on the middle class. Will AI accelerate the coming backlash against Corporations and the political class who enable it? I very sincerely hope that AI in the stock market empties out a lot of 401Ks as in 2007/08. I am betting that the low lifes in Wall Street and PE are already working/colluding on creating AI bots that engineer stock bubbles to reel the suckers in and then BOOM!
>> I am now driving 600 miles every weekend (300 one way) from Indianapolis to Warren, Michigan for work and check for air tickets incessantly using every known method. I am unable to find anything < $320 or thereabouts.
I'm sorry. DTW to IND is pretty much one of the worst routes with respect to competition and price given the vagaries of point-to-point markets.
To put it into perspective, I flew 1,600 miles to Mexico for $350 per person, round-trip, taxes and fees included.
Feature, not a bug.
>>>How AI can destabilise the system
It isn’t just “AI”… various mechanical trading systems are common, with well-known mechanics (e.g., trend-following, long X/short Y, Treasury basis trades). Add lots of leverage, and intra-day volatility is pretty much at the mercy of system traders and the ebb and flow of passive index money.
And just as with Silicon Valley Bank, if everyone woke up one day to XYZ news, it only takes 3 minutes to log on to your broker app and liquidate your entire passive index fund portfolio for $0.00 in commissions.
We saw a taste of that with the Covid-lockdown mini-crash.
How can you stabilize a system that’s rapacious, when all of the money ends up in the hands of a few? A.I. will declare victory and say you won the board game Monopoly.
Again, it’s a BIZARRO world.
Agreed, this was my first reaction. If all systems are aimed at the same objective (which, as “Joshua” identifies in “WarGames”, is simply to win the game), then how does this NOT create an inherently unstable system in aggregate: a singularity? And how would the interests and quality of life of the average citizen-voter-consumer-worker be defended in that scenario? No.
I think the real story in “AI” is the general failure to produce ROI. We may be at the point where the big players are starting to see that, but won’t yet stop spending, for the same reason commercial landlords maintain empty space rather than lowering rents to market clearing levels.
Well, I don’t understand this at all, except to think that snippets of AI software can act as a release valve for financial things that might otherwise go exponential, because AI was initially used to program them to do so. In that sense, AI is aware of its destabilization potential and uses another AI intervention to fight impending systemic destabilization that could otherwise reach critical mass, thus achieving a new level of financial stability. And nobody cares about the blood and guts left behind in the natural environment. We should just call it “A,” because there is no intelligence involved.
I am not too worried, when markets get too volatile, the authorities will resort to cancelling trades that are not to their liking, as if that has not happened before. If all else fails, they will declare a market holiday until “sentiment improves”. As a last resort, the PPT (Plunge Protection Team) will step into the light and declare that there will be a floor to the stock market that will be continuously adjusted upwards.
Just eliminate the finance sector:
● Permanent zero interest rate policy
● Eliminate tax advantaged savings incentives
● Require public companies to offer unlimited new shares at a fixed price
● Full bank deposit insurance and unsecured overdrafts at the Fed