Making sense of the black box

Posted on March 9, 2025 by Editor

AI is everywhere, and it’s getting smarter. But can we trust it? That’s the million—perhaps billion—pound question, and the AI assurance industry is racing to keep up.

Just as financial statements require audit and assurance, AI models—especially those used in regulatory and financial contexts—may also need rigorous oversight. The UK's AI assurance market is already valued at £1bn, according to a report from the Department for Science, Innovation and Technology (DSIT), and with the EU AI Act coming into play, demand for AI assurance is set to grow. But before we rush to audit AI, we need to answer a fundamental question: what exactly are we assuring?

When financial reporting went digital, auditors had to adapt. AI will be no different. But right now, as ICAEW points out, the focus is on figuring out how to assure AI. Standards, accreditation, and clear methodologies are still in flux. Regulators will need to work closely with assurance providers to set the rules—and the accountants and auditors of tomorrow will need to be AI-literate.

AI models are notoriously complex, often operating as “black boxes” that produce results without clear explanations. Are we assuring the AI itself—the algorithms, the model, the biases? Or should we focus on the data and processes that shape the AI’s outputs? Right now, many assurance professionals are still figuring this out. The lack of consistency in outputs and the unpredictable nature of AI models make this a difficult challenge. However, one thing is clear: if we can’t see inside the black box, we should at least be able to trust what goes in.

AI is at its most powerful when working with structured, machine-readable data. Using XBRL-tagged data ensures a transparent, high-quality starting point for AI models, making it easier to assess and verify results. While we may not fully understand every AI decision, we can at least control and assure the integrity of the data feeding it—a crucial step in making AI more accountable, providing provenance and traceability back to source.
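
To make "machine-readable" concrete, here is a minimal sketch in Python that reads a tiny XBRL-style instance fragment using only the standard library. The taxonomy namespace and the concept names (Revenue, Assets) are hypothetical, and a real filing would be handled by a full XBRL processor such as the open-source Arelle; the point is simply that every tagged fact carries its own concept, context, and unit, so it can be traced back to its source.

    import xml.etree.ElementTree as ET

    # A minimal, illustrative XBRL instance fragment. The "eg" taxonomy,
    # concept names, and context/unit identifiers are hypothetical.
    INSTANCE = """<xbrl xmlns="http://www.xbrl.org/2003/instance"
          xmlns:eg="http://example.com/taxonomy">
      <eg:Revenue contextRef="FY2024" unitRef="GBP" decimals="0">1000000</eg:Revenue>
      <eg:Assets contextRef="FY2024" unitRef="GBP" decimals="0">5400000</eg:Assets>
    </xbrl>"""

    root = ET.fromstring(INSTANCE)

    # Each fact is self-describing: the element name identifies the
    # reporting concept, and its attributes record the context (entity
    # and period) and the unit - the provenance an AI pipeline needs
    # to trace a number back to the source filing.
    for fact in root:
        concept = fact.tag.split("}")[1]  # strip the namespace URI
        print(f"{concept}: {fact.text} "
              f"(context={fact.get('contextRef')}, unit={fact.get('unitRef')})")

Running this prints each fact alongside its context and unit, which is the kind of structured, verifiable input that makes AI outputs easier to assess than free-text source documents.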

Read more about the growing market for AI assurance on the ICAEW website – and get up to date on XBRL International's most recent experiments using AI to improve taxonomy interactions on our website.
