Daoud Abdel Hadi, Lead Data Scientist at Eastnets, examines how rising regulatory scrutiny is pushing banks to embed explainability and accountability into AI-driven compliance systems.
From transaction monitoring and sanctions screening to fraud detection and payment controls, AI-driven systems now play a central role in how institutions identify risk, prioritise alerts and manage vast volumes of activity at speed.
For banks and regulators alike, AI has helped compliance teams cope with rising transaction volumes, uncover patterns that would be difficult to detect manually and operate more efficiently in an increasingly complex risk environment. And in many cases, that effectiveness has been the primary benchmark for success. But as AI becomes embedded into critical compliance decisions, the focus is shifting. Attention is moving beyond what these systems can achieve to how they reach their conclusions.
As we move further into 2026, regulators are signalling that performance alone isn’t enough; trust in AI – and trust within the financial system – now depends on transparency, accountability and the ability to evidence AI decision-making. Being able to “prove it” is fast becoming the baseline expectation for AI-enabled compliance.
Why trust in AI is under scrutiny
AI’s growing role in compliance is prompting a shift in focus to how its decisions are reached, and whether that reasoning can be clearly understood, scrutinised and defended. A key part of this shift is the growing awareness around AI hallucinations – where models generate outputs that are false, fabricated or ungrounded in data. Research suggests hallucinations occur in up to 41% of finance-related AI queries, highlighting just how frequently these errors can surface in high-stakes environments.
Many large language models prioritise generating statistically plausible responses, meaning outputs can appear confident while being incorrect. In compliance workflows, this can translate into inaccurately summarised client information, fabricated rationales for risk scores or inconsistent explanations that are difficult to detect without additional safeguards in place. When confidence in an output outpaces confidence in how it was produced, trust in both the technology and the decisions it informs begins to erode.
For regulated firms whose compliance decisions need to be defensible and auditable, these limitations point to a crucial gap in traditional “black box” AI. Many of these models – still the most commonly deployed – prioritise predictive performance and adaptability over transparency. While this makes them powerful pattern-recognition tools, it also means their internal logic can be difficult to interpret or explain. If institutions cannot understand how decisions are produced, or cannot trace them back to verifiable data and explain them through clear reasoning, regulators and auditors are likely to challenge their use, particularly when the “human in the loop” can’t stand behind the decision.
Regulatory expectations are also catching up with this reality. In the EU, for example, the Artificial Intelligence Act introduces binding transparency and documentation requirements for high-risk AI systems, reinforcing expectations that institutions must be able to evidence how automated decisions are made, governed and overseen – oversight can’t simply be left to the AI itself. The Act reflects a broader supervisory shift: effectiveness alone is no longer sufficient if accountability cannot be shown. Yet compliance teams remain overwhelmed and often under-resourced in the face of constant regulatory change, manual workload and expanding oversight expectations – so they turn to AI for support where possible.
Institutions relying on opaque or poorly documented models therefore face mounting challenges, not only in maintaining trust but in simply meeting regulatory expectations. In this environment, the ability to prove how AI works – and who remains accountable for its decisions – is becoming foundational.
Explainability as compliance by design
As scrutiny around AI intensifies, it’s becoming clear that meeting rising expectations won’t come from simply adopting different models or adding controls after deployment. To “prove it”, explainability has to be embedded into compliance itself: it becomes part of compliance by design.
In practical terms, that means making sure AI-driven decisions are fully integrated into existing compliance workflows, rather than operating as isolated or opaque tools. Institutions need to be able to show how decisions are reached from start to finish: what data informed an outcome, how key risk factors were weighted and how those outputs were reviewed. Explainability, in this sense, is not a feature of the model alone, but a property of the wider system in which it operates. It’s about changing the way AI is governed, integrated and operationalised.
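As a rough illustration of what capturing that evidence might look like, the sketch below (in Python, with hypothetical field and class names not drawn from any specific product) records an alert decision alongside the data that informed it, the weighted risk factors behind the score and the model version used, so an explanation can be reproduced later rather than reconstructed after the fact.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RiskFactor:
    """A single weighted contribution to a model's risk score."""
    name: str          # e.g. "unusual_counterparty_country" (illustrative)
    value: float       # the observed feature value
    weight: float      # contribution of this factor to the overall score

@dataclass
class DecisionRecord:
    """Hypothetical audit record for one AI-assisted compliance decision."""
    alert_id: str
    model_version: str              # which model produced the score
    input_data_refs: list[str]      # pointers to the source data that was used
    risk_factors: list[RiskFactor]  # how the score was weighted
    risk_score: float
    recommendation: str             # e.g. "escalate" or "close"
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def explain(self) -> str:
        """Return a human-readable summary of the weighted factors, largest first."""
        ranked = sorted(self.risk_factors, key=lambda f: abs(f.weight), reverse=True)
        lines = [f"Alert {self.alert_id} scored {self.risk_score:.2f} by model {self.model_version}:"]
        lines += [f"  {f.name}: value={f.value}, weight={f.weight:+.2f}" for f in ranked]
        return "\n".join(lines)
```

The point is not the specific structure but that the explanation is generated from the same stored evidence a reviewer or auditor would later inspect.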
When AI is embedded into investigation, case management and reporting processes, decision-making becomes easier both to understand and to defend. Outputs can be contextualised, reviewed and challenged using consistent and accurate information, rather than relying on ad-hoc explanations or after-the-fact justification. Institutions must be able to demonstrate who was involved in a decision, what information was available at the time and how exceptions or overrides were handled. When this information is captured as part of everyday workflows, AI-enabled compliance becomes auditable by default rather than defensible only in hindsight. This shift helps compliance teams move from simply reacting to confidently demonstrating how their controls work in practice.
This evolution also reframes the role of automation in compliance. The future is not about removing humans from decision-making – far from it – but about moving from automation to accountability. AI can accelerate analysis, prioritise alerts and surface complex patterns, but responsibility must remain clearly defined. Human oversight needs to be part of the process by design, with clear points where decisions are reviewed, escalated or challenged, and where accountability is recorded.
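One way to make that oversight concrete, continuing the hypothetical record sketched above, is to log every review, escalation or override as an explicit event tied to a named reviewer and a rationale, so accountability is captured at the point where the decision is taken rather than pieced together later. The names below are illustrative assumptions, not a reference to any particular system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ReviewEvent:
    """Hypothetical record of a human decision point on an AI-generated alert."""
    alert_id: str
    reviewer: str        # who is accountable for this step
    action: str          # "confirm", "override" or "escalate"
    rationale: str       # why the reviewer agreed or disagreed with the model
    timestamp: datetime

def record_review(audit_log: list[ReviewEvent], alert_id: str,
                  reviewer: str, action: str, rationale: str) -> ReviewEvent:
    """Append a review event so overrides and escalations are auditable by default."""
    if action not in {"confirm", "override", "escalate"}:
        raise ValueError(f"Unknown review action: {action}")
    event = ReviewEvent(alert_id, reviewer, action, rationale,
                        datetime.now(timezone.utc))
    audit_log.append(event)
    return event

# Example: an analyst overrides a model recommendation and records why.
log: list[ReviewEvent] = []
record_review(log, "ALERT-1042", "analyst.jdoe", "override",
              "Counterparty verified via updated KYC file; risk score overstated")
```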
Ultimately, making AI explainable is less about opening the technical “black box” and more about building the right operational and governance layers around it. Institutions that focus on how AI is governed, integrated and evidenced across the compliance lifecycle will be far better positioned to meet regulatory expectations and scale AI with confidence. In an environment where trust increasingly depends on proof, this shift is no longer optional.
“Prove it” becomes the baseline
As AI scrutiny increases, this shift presents institutions with a clear opportunity. By moving beyond reactive controls and embedding explainability and accountability into the fabric of compliance operations, financial institutions can create AI frameworks that stand up to regulatory scrutiny while remaining scalable and resilient. The emphasis is now on building systems that earn confidence through evidence.
Institutions that act early to embed compliance into the design and governance of AI will be better positioned to navigate future regulation and realise long-term value. After all, trust increasingly depends on proof. And the next phase of AI adoption will be defined by how clearly that proof can be demonstrated.


