The global financial crisis of 2007-2008 – known as the subprime crisis – was triggered by the inability of American households to repay their mortgages. This credit risk, although well known to banks, had been underestimated; it then spread to the entire international financial sector through the unrestrained securitization of US mortgage portfolios and the transmission of that risk to bank balance sheets in other regions, mainly Europe.

To counter the crisis, global regulators adopted a number of measures, in particular higher regulatory capital requirements for banks. In short, banks are required to hold larger capital cushions than in the pre-crisis period. It was against this backdrop that global regulators adopted the first agreements of the Basel 3 reform in 2010, which was finalized in 2017 and then phased in from 2022. Under this package of prudential measures, banks may calculate their credit risk using either of two approaches (a choice in place since 2006 under the Basel 2 Accords): the regulators’ standardized models or internal models developed by the banks themselves (Internal Ratings-Based, or IRB), subject to validation by the regulators.

According to specialists, internal models enable banks to measure their credit risk exposure more accurately, but they may also lead banks to underestimate that risk.

In this context, the use of machine learning – which has developed considerably in many sectors in recent years – in banks’ internal models raises a number of questions among regulators and financial players themselves. Are these models more effective at identifying risk than the usual statistical models? What data are used in internal machine-learning models? Can the algorithms used in these IRB models be interpreted and explained? Are they commonly used by banks? Do they enable banks to reduce their regulatory capital requirements? Do these models generate productivity gains?

In addressing these and other questions, this new issue of the Opinions & Débats series merits careful reading. Its authors – Christophe Hurlin (Université d’Orléans) and Christophe Pérignon (HEC Paris) – analyse, both theoretically and in practical terms, the use of machine learning in the internal models banks use to calculate credit risk, which in turn determines their regulatory capital. The study summarises the academic literature on the subject and offers recommendations for both financial practitioners and regulators.

Enjoy your reading!