AI algorithms are fully capable of tacitly colluding on prices — Marie Brière and André Lévy-Lang, for Les Échos

A scientific, economic, and strategic challenge for Europe
Apr 9, 2026 14:05

In an op-ed published on March 26, 2026, in Les Échos, Marie Brière, CEO of the Institut Louis Bachelier and Head of Investor Research at Amundi Investment Institute, and André Lévy-Lang, founding chairman of the Institut Louis Bachelier, warn about the emergence of new systemic risks linked to the rise of artificial intelligence, alternative data, and digital infrastructures in finance.

Through this piece, the authors invite us to broaden our understanding of financial risks. Beyond the vulnerabilities already well identified by traditional models, they highlight more diffuse, complex, and sometimes hard-to-detect fragilities: data biases, technological dependencies, opaque interactions between automated systems, and spillover effects associated with the growing use of AI.

To read the full article:
https://www.lesechos.fr/idees-debats/cercle/marches-financiers-les-algorithmes-dia-sont-parfaitement-capables-de-sentendre-tacitement-sur-les-prix-2223048

Risks that are still hard to see, yet already shaping the system

Major financial crises have shown how certain risks can remain underestimated—or even invisible—for a long time before producing significant effects on markets and the real economy. It is precisely this gray area that the op-ed explores.

In an environment where financial actors increasingly rely on non-traditional data—texts, images, sounds, real-time digital signals—analytical capabilities are strengthening, but sources of fragility are also multiplying. The promise of more granular information should not obscure reality: the quality, robustness, and representativeness of such data are far from guaranteed in all cases.

The authors emphasize that these new information sources can also introduce biases, misinterpretations, or new vulnerabilities, particularly in cases of manipulation, data breaches, or adversarial attacks. In this context, the challenge is not merely to use more data, but to develop methodological frameworks capable of testing their reliability.

From model risk to AI system risk

The op-ed also marks an important shift in how technological risk in finance is understood. While markets have long focused on “model risk,” the authors argue that we are now entering a new dimension: risks inherent to AI systems themselves.

These risks are harder to define, as they do not stem solely from parameter errors or poorly calibrated assumptions. They also arise from model opacity, the way systems learn, their interactions with one another, and how they are used in sensitive areas such as financial analysis, automated advisory services, and algorithmic trading.

The rise of generative AI undoubtedly opens up considerable opportunities for finance, but it also raises new questions. What biases do models reproduce or amplify? What happens when multiple systems make decisions in parallel based on similar signals? How can emergent collective behaviors be prevented when they are not explicitly programmed?

Algorithmic collusion: a blind spot to monitor

Among the most striking points raised is the issue of algorithmic collusion. The authors note that some AI systems can, under certain conditions, converge toward coordinated pricing behaviors without any explicit agreement being programmed by their designers.

In other words, algorithms may learn to “tacitly collude” in their strategies simply because it optimizes their outcomes. While such a phenomenon has not yet been formally observed in financial markets, it has been reproduced in experimental settings, making it an increasingly important area of attention for researchers, regulators, and market participants.
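The experimental settings mentioned here typically involve reinforcement-learning pricing agents in a simulated market. As a purely illustrative sketch (not the authors' own experiment), the toy simulation below puts two independent Q-learning agents in a simple duopoly with an assumed linear demand model; each agent only maximizes its own profit, yet neither is ever told to coordinate. All parameters (price grid, demand function, learning rates) are hypothetical.

```python
import itertools
import random

random.seed(0)

PRICES = [1.0, 1.5, 2.0]   # discrete price grid (illustrative)
COST = 1.0                  # marginal cost (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
EPISODES = 20000

def profits(p1, p2):
    # Assumed linear demand: the lower-priced firm captures more demand.
    d1 = max(0.0, 2.0 - p1 + 0.5 * p2)
    d2 = max(0.0, 2.0 - p2 + 0.5 * p1)
    return (p1 - COST) * d1, (p2 - COST) * d2

# State = the pair of prices chosen in the previous round.
states = list(itertools.product(range(len(PRICES)), repeat=2))
Q = [{s: [0.0] * len(PRICES) for s in states} for _ in range(2)]

def choose(agent, s):
    # Epsilon-greedy action selection over the agent's own Q-table.
    if random.random() < EPS:
        return random.randrange(len(PRICES))
    q = Q[agent][s]
    return q.index(max(q))

s = (0, 0)
for _ in range(EPISODES):
    a = (choose(0, s), choose(1, s))
    r = profits(PRICES[a[0]], PRICES[a[1]])
    s_next = a
    for i in range(2):
        best_next = max(Q[i][s_next])
        Q[i][s][a[i]] += ALPHA * (r[i] + GAMMA * best_next - Q[i][s][a[i]])
    s = s_next

# Greedy (exploration-free) prices after learning.
final = tuple(PRICES[Q[i][s].index(max(Q[i][s]))] for i in range(2))
print("learned prices:", final)
```

In research versions of this setup (e.g. with richer demand models and longer training), learned prices have been observed to settle above the competitive level without any explicit agreement, which is exactly the tacit-collusion pattern the op-ed warns about.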

Beyond this, the authors also highlight autonomous agent systems capable of communicating, reasoning, and making decisions among themselves. In such architectures, a minor error or localized misunderstanding can quickly propagate from one agent to another, eventually affecting the entire system. This interaction dynamic, still imperfectly understood, deserves rigorous study.
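The propagation dynamic described above can be made concrete with a deliberately minimal sketch: a chain of agents, each of which slightly amplifies the signal it receives (a stand-in for leverage-like or feedback effects; the gain value is an arbitrary assumption). A 1% error introduced at the first agent grows multiplicatively as it travels down the chain.

```python
def agent(signal, gain=1.2):
    # Hypothetical agent: passes its input along with mild amplification.
    return signal * gain

def run_chain(initial, n_agents=10):
    # Propagate a signal through a chain of identical agents.
    s = initial
    trace = [s]
    for _ in range(n_agents):
        s = agent(s)
        trace.append(s)
    return trace

clean = run_chain(1.00)
perturbed = run_chain(1.01)   # 1% localized error at the source

# The initial 0.01 gap is multiplied by gain**n_agents = 1.2**10 ≈ 6.19,
# so the final divergence is roughly 0.062: a small local error has
# become a system-wide one.
print("final divergence:", perturbed[-1] - clean[-1])
```

Real agent systems communicate in far richer ways than a linear chain, but the qualitative point survives: without damping mechanisms, localized errors need not stay localized.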

A scientific, economic, and strategic challenge for Europe

Finally, the op-ed places these issues within a broader framework of technological sovereignty and infrastructure dependencies. At a time when computing services, AI platforms, and major digital infrastructures are highly concentrated, Europe’s ability to control its data, tools, and technological value chains becomes a strategic concern.

For the authors, financial risks can no longer be considered independently from geopolitical, industrial, and regulatory challenges. The interconnection between markets, technologies, and critical infrastructures calls for a renewed approach to risk—more cross-cutting, forward-looking, and collective.

A contribution to public debate led by the Institut Louis Bachelier

This op-ed fully aligns with the mission of the Institut Louis Bachelier: to shed light on transformations in the financial system through research, foster dialogue between academics, institutions, and practitioners, and develop analytical frameworks suited to contemporary challenges.

By highlighting these “hidden risks” of finance in the age of AI, Marie Brière and André Lévy-Lang call for making visible what remains insufficiently understood: a crucial requirement for building a more robust, transparent financial system, better prepared for the shocks of tomorrow.
