Leveraging Large Language Models in Finance: A New Report on Responsible Adoption

How can large language models (LLMs) be responsibly integrated into the financial sector?

This is the central question explored in a new report, the result of a collaboration between the European Securities and Markets Authority (ESMA), the FaiR (Finance and Insurance Reloaded) programme at the Institut Louis Bachelier, and the FAIR (Framework for Responsible Adoption of Artificial Intelligence in the Financial Services Industry) programme at the Alan Turing Institute.

A Snapshot of Use Cases and Key Challenges

The report, titled Leveraging Large Language Models in Finance: Pathways to Responsible Adoption, presents a summary of the discussions held during a workshop organised in June 2024. The event brought together 38 experts in technology and finance to explore three key topics:

  • The current use and potential applications of LLMs in finance,

  • The risks and challenges linked to their deployment,

  • The conditions required to ensure their responsible adoption.

Towards Ethical and Well-Governed Artificial Intelligence

Generative LLMs are increasingly being used in financial services, particularly to automate text analysis and text generation tasks and to support customer interactions. While these technologies offer significant efficiency gains, they also raise legal, ethical, and reputational concerns.

In this context, the report highlights the need to implement:

  • Industry-wide standards and appropriate evaluation metrics,

  • A proportionate regulatory framework,

  • Specific training programmes for staff.

It also draws attention to the environmental impact of LLMs: their carbon footprint must be carefully assessed as their use becomes more widespread and embedded in the day-to-day operations of financial institutions.

Read and download the full report here

The views expressed in this report are those of the authors and should not be attributed to ESMA, the Institut Louis Bachelier, or the Alan Turing Institute.