
A financial crisis more severe than that of 2008 seems more inevitable than a third world war. History has shown, again and again, that financial crises are not a mere possibility; they are a recurring reality. But times have changed: AI has emerged more prominently than ever, and everyone talks about how transformative it is in every aspect of life. Yet for all this consensus, there is a question we are not asking. AI’s rise has transformed the game in financial markets, but can it also rewrite the rules of crisis? You know what I think? Its role in the next financial crisis will be more contentious than critical.
The Flash Crash of May 6, 2010, remains a watershed moment for understanding how automated systems can destabilize financial markets. On that day, the Dow Jones Industrial Average plunged nearly 1,000 points, roughly 9%, in minutes, only to recover almost as quickly. Investigations revealed that a single $4.1 billion sell order triggered a cascade of automated responses from high-frequency trading (HFT) algorithms. There are many lessons to learn from this. For me, the most striking is how tightly interconnected systems, when programmed to react to the same signals, amplify volatility rather than contain it. The flash crash also foreshadowed another danger, what industry leaders now call “flash feedback loops”: algorithms reacting to signals faster than any human can intervene.
AI-driven algorithms now dominate trading volumes, and they have grown vastly more sophisticated since 2010. Yet they remain prone to “herding behavior,” in which models trained on similar datasets and using analogous strategies respond uniformly to market signals.
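To make the herding danger concrete, here is a minimal and entirely hypothetical sketch in Python: a population of trading agents that share nearly identical stop-loss thresholds (because, in this toy setup, they were “trained on similar data”), where one modest dip triggers a self-reinforcing sell-off. Every number here is invented for illustration; real markets are far messier.

```python
import numpy as np

rng = np.random.default_rng(42)

N_AGENTS = 100
SELL_IMPACT = 0.002   # hypothetical: each selling agent knocks 0.2% off price

# "Herding": agents trained on similar data share almost the same
# stop-loss rule (sell if the last return falls below roughly -2%).
thresholds = -0.02 + rng.normal(scale=0.002, size=N_AGENTS)

price, last_return = 100.0, -0.021       # one modest initial shock: a 2.1% dip
path = [price]
active = np.ones(N_AGENTS, dtype=bool)   # agents that have not yet sold

for _ in range(10):
    triggered = active & (last_return < thresholds)
    n_sellers = int(triggered.sum())
    if n_sellers == 0:
        break
    active &= ~triggered
    last_return = -SELL_IMPACT * n_sellers   # selling pressure sets next return
    price *= 1 + last_return
    path.append(price)

print(f"initial 2.1% dip cascades into a {100 * (1 - price / 100):.1f}% drop")
```

Because the thresholds are clustered, the first dip triggers most agents at once, and their combined selling pressure immediately trips the rest: the feedback loop, not the original shock, does most of the damage.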
It all looks fine for now: AI-driven technologies promise efficiency, precision, and scalability in financial operations. However, their vulnerabilities and capacity for unforeseen consequences raise pressing concerns.
Could AI Systems Create a False Sense of Security?

The over-reliance on AI to “manage the chaos” has lulled market participants into believing that technology can prevent all crises. This complacency, which some call “techno-optimism,” can leave markets unprepared for systemic shocks. If everyone assumes AI will stabilize markets, what happens when it doesn’t? If I had an absolute say in it, I’d say: don’t count your chickens before they hatch.
The way I see it, when the next financial crisis hits, artificial intelligence will amplify the risks.
Here are my two cents on why:
- Overreliance on Black-Box Models is a Hidden Threat in Plain Sight
Many financial institutions employ AI models whose inner workings, often referred to as “black boxes,” are opaque even to their creators. Rawashdeh, an Associate Professor of Electrical and Computer Engineering who specializes in artificial intelligence, puts this into perspective: “The black box nature of the system means we can’t trace the system’s thought process and see why it made this decision.” A report from C3 AI further emphasizes: “Coupling such systems with poor interpretability… can create a recipe for disaster.” It is no surprise, then, that this lack of transparency breeds systemic risk. In a crisis, regulators and stakeholders often struggle to pinpoint the root causes of market disruptions, and effective intervention slows to a crawl.
During the COVID-19 market crash of 2020, some AI models failed to adapt to the abrupt shift in market dynamics. Models trained on historical data struggled to account for unprecedented factors like lockdowns and global supply chain disruptions, resulting in flawed predictions and misaligned trading strategies. Wim Naudé made a similar point, noting how AI struggled with the unique challenges posed by the pandemic: “For any prediction algorithm that relies on past behavior, a global outlier event… can be described as ‘the kryptonite of modern Artificial intelligence’.” As such errors persist, it becomes clear that relying on AI without understanding its limitations is akin to building a house on sand.
- Data Dependencies and the GIGO Problem
Mainak Mazumdar is of the opinion that flawed data leads to compromised outputs in AI systems. To me, that sounds more like a statement of fact than of opinion. “It’s not the algorithm, but the biased data,” he says.
AI systems rely on vast amounts of historical data to make predictions. Markets, however, are evolutionary, and past data often fails to capture emerging risks. This issue is compounded by the “garbage in, garbage out” problem: if the data fed into AI models is flawed or biased, the outputs will be similarly compromised. In a crisis scenario, outdated or incomplete data could lead AI systems to make catastrophic errors, such as mispricing assets or misjudging risk exposures. Gopinath, an IMF official, recently warned that if AI models are trained on historical data that do not reflect current realities, they may trigger rapid sell-offs in response to falling asset prices.
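A toy sketch of the “garbage in, garbage out” failure mode, with all numbers invented: a naive model estimates its 99% Value-at-Risk purely from years of calm historical returns; then a regime shift (a pandemic, say) quadruples volatility, and the stale estimate is breached far more often than the model promises.

```python
import numpy as np

rng = np.random.default_rng(7)

# "Historical" training data: ~10 years of calm daily returns (1% volatility).
calm_history = rng.normal(loc=0.0003, scale=0.01, size=2500)

# Naive model: the 99% Value-at-Risk is just the 1% quantile of history,
# i.e. a loss the model expects to exceed on only 1 day in 100.
var_99 = -np.quantile(calm_history, 0.01)

# Regime shift: one year of crisis returns, volatility quadrupled overnight.
crisis_days = rng.normal(loc=-0.002, scale=0.04, size=250)

breaches = int((crisis_days < -var_99).sum())
expected = 0.01 * len(crisis_days)   # ~2.5 breaches if the model were right

print(f"model expected ~{expected:.1f} VaR breaches, observed {breaches}")
```

The point is not the specific numbers but the mechanism: the model’s risk limit is an artifact of the data it was fed, so when the data-generating regime changes, the “1-in-100” guarantee silently becomes meaningless.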
Could AI Reinforce Financial Inequality?

Kristalina Georgieva, Managing Director of the IMF, provides a satisfactory response to this question. She emphasizes that “the adoption of AI in finance is likely to favor larger institutions that can invest in advanced technologies, potentially widening the gap between them and smaller firms.”
Quite frankly, the adoption of AI in finance has created a competitive advantage for institutions with access to advanced technology and data. Smaller firms and emerging markets may be left in the lurch, deepening global financial inequalities. Jamie Dimon, CEO of JPMorgan Chase, paints the picture perfectly: “those who can harness AI effectively will have a significant edge over their competitors, leaving smaller firms struggling to keep pace.” In a crisis, this disparity could widen further, as wealthier institutions use AI to shield themselves while less-equipped players are left holding the bag.
- Speed and Complexity in HFT
High-frequency trading, powered by AI, operates on millisecond timescales. Impressive stuff: under normal conditions it adds market liquidity. But there is a dark side. During periods of extreme volatility, AI-driven HFT systems sometimes withdraw liquidity to avoid losses. A study analyzing HFT’s impact on the Italian market found that a one-standard-deviation increase in HFT activity raises volatility by between 0.5 and 0.8 standard deviations. In plain terms: when HFT activity rises well above its usual level, prices swing noticeably more, though somewhat less than one-for-one.
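The “standard deviations” framing can be made concrete with a quick simulation. The sketch below is purely illustrative, using invented data rather than the study’s: it generates toy HFT-activity and volatility series with a built-in standardized effect of 0.65 (mid-range of the quoted 0.5–0.8) and then recovers that effect by z-scoring both series before fitting a slope, which is exactly what “standard deviations of y per standard deviation of x” means.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data: volatility responds to HFT activity with a
# standardized effect of ~0.65, plus independent noise.
hft = rng.normal(size=n)
vol = 0.65 * hft + rng.normal(scale=(1 - 0.65**2) ** 0.5, size=n)

def standardized_slope(x, y):
    """Slope of y on x after z-scoring both: the effect in std-dev units.
    For a single regressor this equals the sample correlation."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    return float((zx * zy).mean())

beta = standardized_slope(hft, vol)
print(f"one-sd rise in HFT activity -> {beta:.2f} sd rise in volatility")
```

Reporting effects in standard-deviation units is what lets the study compare quantities measured on completely different scales (order counts versus price variance) in a single, interpretable number.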
Events like this leave markets vulnerable to sudden and severe price swings. In such moments, markets are a house of cards—even a slight nudge can bring the entire structure tumbling down.
- Systemic Interconnectedness
AI systems deployed across global financial institutions are highly interconnected. A failure in one system could cascade through the network, much like the 2008 financial crisis, where the collapse of Lehman Brothers spread rapidly through interlinked credit markets. In a system where AI underpins trading, lending, and risk management, a localized failure could set off a chain reaction with global repercussions.
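The contagion logic above can be sketched as a toy balance-sheet cascade. Everything here is hypothetical (a stylized chain of claims, invented exposure and capital numbers): one institution fails, its counterparty absorbs a loss larger than its buffer and fails too, and the cascade propagates until it reaches a firm capitalized well enough to absorb the hit.

```python
import numpy as np

# Hypothetical toy network: institution i holds a claim on institution i-1,
# so each failure wipes out part of the next balance sheet in line.
N = 20
exposure = np.zeros((N, N))
for i in range(1, N):
    exposure[i, i - 1] = 1.5          # size of the claim on the upstream firm
capital = np.full(N, 1.0)             # loss-absorbing buffer at each firm
capital[10] = 2.0                     # one well-capitalized firm in the chain

failed = np.zeros(N, dtype=bool)
failed[0] = True                      # localized failure: the "Lehman moment"

while True:
    losses = exposure @ failed.astype(float)       # losses from failed partners
    newly_failed = (losses > capital) & ~failed
    if not newly_failed.any():
        break
    failed |= newly_failed

print(f"1 initial failure cascaded into {int(failed.sum())} total failures")
```

Note what stops the cascade: not the absence of links, but a single buffer larger than the incoming loss. That is the intuition behind capital requirements as a systemic, not merely individual, safeguard.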
Who is Accountable When AI Fails?

AI systems are becoming increasingly autonomous, which makes it difficult to pinpoint responsibility. If an AI-driven trading algorithm initiates a market collapse, does the blame lie with the developers, the institutions, or the regulators who permitted its use? Without clear accountability, it’s easy for stakeholders to pass the buck, leaving markets and regulators scrambling for answers during a crisis.
To bring this to a close: the question is not whether AI will play a role in the next crisis, but whether it will act as a savior, softening the blow, or a culprit, intensifying the fallout. The financial world continues to walk a tightrope; the balance between innovation and caution will determine whether AI is a boon or a bane to global markets. Call me crazy, but I think it’ll be a bane.