AI is rewriting finance: can regulators keep up?

Four years ago, AI in capital markets was a promising experiment. Now it is fundamental: from robo-advice and algorithmic trading to fraud detection and regulatory compliance, AI is becoming embedded throughout financial markets.
Large language models and generative AI are reshaping everything from risk management to financial reporting, analysing vast amounts of structured and unstructured data at speeds no human can match. But with AI’s growing influence comes an urgent question: how do regulators ensure transparency, accountability, and fairness in a world where decisions are increasingly made by machines?
The International Organization of Securities Commissions (IOSCO) is tackling that challenge head-on. Its latest consultation report examines AI’s expanding role in capital markets, highlighting the benefits but also the risks: over-reliance on black-box models, regulatory blind spots, and the dangers of AI-generated market manipulation. The report also explores how firms and regulators are responding, with some applying existing rules and others building AI-specific frameworks.
We’ve said it before, and it bears repeating: AI is only as reliable as the data it’s built on. Structured, high-quality, machine-readable data is the foundation AI needs to deliver accurate, transparent, and trustworthy results, especially in corporate reporting. Without it, AI is just making educated guesses. IOSCO’s focus on AI in markets is an important step, but ensuring AI works effectively in reporting and regulation means doubling down on structured data standards like XBRL and ISO 20022.
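To illustrate the point, here is a minimal, hypothetical sketch in Python (standard library only) of why tagged, machine-readable facts are easier for an AI or analytics pipeline to consume than numbers buried in narrative text. The namespace, element names, and values are invented for illustration; real XBRL filings rely on full taxonomies, contexts, and units, and are typically processed with dedicated tooling.

```python
# Minimal sketch: a tagged fact carries its concept, period and unit with it,
# so a program can read it unambiguously instead of guessing from free text.
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified XBRL-style snippet (illustrative only).
REPORT = """
<report xmlns:ifrs="http://example.com/ifrs">
  <ifrs:Revenue contextRef="FY2024" unitRef="EUR" decimals="0">1250000</ifrs:Revenue>
  <ifrs:ProfitLoss contextRef="FY2024" unitRef="EUR" decimals="0">87000</ifrs:ProfitLoss>
</report>
"""

def extract_facts(xml_text: str) -> dict:
    """Pull tagged facts into a structured dict a downstream model can rely on."""
    root = ET.fromstring(xml_text)
    facts = {}
    for el in root:
        # Strip the namespace to get the concept name; context and unit
        # travel with the value, so no natural-language inference is needed.
        concept = el.tag.split("}")[-1]
        facts[concept] = {
            "value": float(el.text),
            "context": el.get("contextRef"),
            "unit": el.get("unitRef"),
        }
    return facts

if __name__ == "__main__":
    for name, fact in extract_facts(REPORT).items():
        print(f"{name}: {fact['value']:,.0f} {fact['unit']} ({fact['context']})")
```

The same figures scattered through an unstructured PDF would have to be located, interpreted, and assigned a period and currency by the model itself, which is exactly where errors and "educated guesses" creep in.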
IOSCO is now inviting feedback on its report, with comments open until 11 April. Have your say here.