By Alexon Bell, Chief Product Officer, FinCrime & KYC, at Quantexa
AI is transforming every industry at an incredible rate, making applications more effective and efficient. As with any new innovation, there are teething problems and adoption hurdles, especially in highly regulated industries such as banking. As AI's influence grows, regulators have started issuing guidance on its adoption; the UK's recently published AI blueprint, for example, demands safe, explainable, and accountable systems. The challenge is navigating the complex regulatory requirements that make rapid innovation in banking difficult.
It’s worth reminding readers that AI encompasses not just new technologies, such as Generative AI (ChatGPT, Gemini, Claude) and Agentic AI, but also Machine Learning (ML), which has been used in banking and other industries for decades and is well understood.
Traditional ML remains the backbone of bank operations, representing an estimated 85% of AI workloads. This includes the critical infrastructure for fraud prevention, credit risk modelling (probability of default and loss given default, or PD/LGD), and algorithmic trading.
Generative AI is growing in areas such as meeting summaries and virtual assistants, accounting for roughly 10% of workloads. Agentic AI, the newest technology, represents 5% or less, covering applications such as Agentic KYC, Agentic Investigations, and Agentic Sales Relationship Management.

The Data Foundation Problem
The fundamental issue remains data quality. If the data is wrong, AI will automate and accelerate incorrect decision-making. Consider an analyst researching a company: if the AI identifies a UK-based global firm with 11 entities as a small Asian business with only four, the system's utility collapses. In regulated industries, these factual errors destroy the trust required for adoption.
This explains why so few Generative AI proofs of concept reach production: most fail due to data issues. The classic ML adage "rubbish in, rubbish out" applies directly to Gen AI and Agentic AI. About 80% of a data scientist's time is spent preparing data. Gen AI and Agentic AI are no different, except that they mimic human processes through prediction, producing wrong answers when fed incorrect or poorly prepared data.
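To make this concrete, here is a minimal sketch of the kind of pre-flight data-quality gate this implies: records that disagree with a trusted "golden" source are blocked before any model sees them. The company name, field names, and comparison rule are illustrative assumptions, not any particular product's schema.

```python
# Minimal sketch of a pre-flight data-quality gate: block records that
# disagree with a trusted "golden" source before any model sees them.
# Field names and the comparison rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CompanyRecord:
    name: str
    registered_country: str
    linked_entities: int

def passes_quality_gate(candidate: CompanyRecord,
                        golden: CompanyRecord) -> bool:
    """A record must agree with the golden source on country and entity
    count, or downstream AI will reason over a mis-resolved firm."""
    return (candidate.registered_country == golden.registered_country
            and candidate.linked_entities == golden.linked_entities)

# The article's example: a UK firm with 11 entities, mis-resolved
# as a small Asian business with four.
golden = CompanyRecord("Acme Global Ltd", "GB", 11)
candidate = CompanyRecord("Acme Global Ltd", "SG", 4)

if not passes_quality_gate(candidate, golden):
    print("Blocked: entity mismatch - do not feed this record to the model")
```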
Use the Right AI for the Right Problem and Know Its Limitations
Each type of AI has corresponding strengths and weaknesses. ML is great at prediction but struggles with free-form text and language. Gen AI is remarkable at language-based tasks but unreliable at even basic arithmetic. This means Gen AI should not be used for financial forecasting, but is well suited to summarising meeting notes.
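As a toy illustration of this division of labour, the sketch below routes tasks by type. The task names and handler labels are hypothetical, not a real API.

```python
# Toy router that sends each job to the class of AI suited to it.
# Task names and handler labels are hypothetical, not a real API.
def route_task(task_type: str) -> str:
    ml_tasks = {"financial_forecast", "credit_risk", "fraud_score"}
    genai_tasks = {"meeting_summary", "draft_email", "document_qa"}
    if task_type in ml_tasks:
        return "traditional_ml"   # numeric prediction over structured data
    if task_type in genai_tasks:
        return "generative_ai"    # language-heavy, low numeric precision
    return "human_review"         # unknown task: escalate rather than guess

assert route_task("financial_forecast") == "traditional_ml"
assert route_task("meeting_summary") == "generative_ai"
```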
Traditional ML excels when abundant outcomes exist for training. However, when historical examples are limited, organisations must find alternatives. In areas such as sanctions circumvention, where insufficient historical examples exist, expert or knowledge-based systems often work well. An open question remains: can Agentic AI play a part by rapidly asking questions and cycling through scenarios, augmenting experts with data mining and insights? Quite possibly, provided it does not hallucinate, as Gen AI has been shown to do.
Truthful answers matter most when tied to high-impact regulatory actions such as the Biden administration's executive order on secondary sanctions against Russia. These actions create complications across supply chains, requiring new diligence for certain customer groups, countries, and transactions. Such scenarios lack substantial training datasets, and given Gen AI's propensity to hallucinate, agentic approaches are riskier here than expert-based systems coupled with traditional ML and data discovery techniques.
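A knowledge-based system of the kind described can be as simple as a weighted rule set maintained by experts. The sketch below is a minimal, hypothetical example; the rules and weights are invented for illustration and carry no regulatory meaning.

```python
# Minimal expert/knowledge-based scorecard of the kind favoured when
# training data is scarce. Rules and weights are invented for
# illustration only.
RULES = [
    ("ships_via_known_transshipment_hub", 30),
    ("counterparty_incorporated_post_sanctions", 25),
    ("goods_match_dual_use_codes", 35),
    ("payment_routed_through_shell_entity", 40),
]

def circumvention_score(case_facts: set) -> int:
    """Sum the weights of every expert rule the case triggers."""
    return sum(weight for rule, weight in RULES if rule in case_facts)

case = {"ships_via_known_transshipment_hub", "goods_match_dual_use_codes"}
print(circumvention_score(case))  # 65 -> escalate to an analyst above a threshold
```

Because every point of the score traces back to a named rule, an expert can add, remove, or reweight rules the day a new typology emerges, with no retraining required.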
Model Risk Governance: The Regulatory Bottleneck
Banks operate under model risk governance frameworks in which international regulators require detailed explanations of how models work, what decisions inform them, and continuous testing protocols. This creates a fundamental tension: the most powerful AI techniques tend to be opaque, lacking the transparency and explainability required for deployment in highly regulated environments.
The real regulatory concern centres not on finding suspicious activity but on missing it. When models consistently overlook patterns, banks struggle to adjust opaque systems effectively. The constraining factor becomes governance itself: when models miss patterns and regulators require adjustments, opaque systems cannot be easily modified because they operate on learned data patterns rather than explicit rules. Alongside this sits the focus on ethical AI and ensuring that models are not biased, which is hard to determine if they are opaque black boxes.
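To illustrate what explainability buys, here is a deliberately simple sketch: an interpretable model (a logistic regression trained on synthetic, invented data) whose coefficients can be read out line by line for a model-risk reviewer. The feature names and data are assumptions for illustration only.

```python
# A deliberately transparent model: logistic regression on synthetic,
# invented data. Its coefficients can be read line by line for a
# model-risk reviewer - the explanation an opaque model cannot give.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["txn_velocity", "cash_intensity", "high_risk_geo_exposure"]
X = np.array([[0.1, 0.2, 0.0],
              [0.9, 0.8, 1.0],
              [0.2, 0.1, 0.0],
              [0.8, 0.9, 1.0]])
y = np.array([0, 1, 0, 1])  # 1 = confirmed suspicious in past reviews

model = LogisticRegression().fit(X, y)

# Each coefficient states how a feature moves the predicted risk.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```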
The application of model risk governance to anti-money laundering (AML) models is becoming a hotly debated topic: criminals are exploiting AI advancements while banks, constrained by governance requirements from adopting the most powerful techniques, are put at an immediate disadvantage.
Industry Reality: Caution after Consequences
Banks are extremely cautious when adopting new innovations following significant regulatory penalties for past compliance failures. Previous penalties have created risk-averse cultures, and teams already manage extensive task lists while regulators maintain close oversight. Legal constraints often mean organisations can implement only a fraction of what they know to be most effective.
The Path Forward: Gradual Industry Evolution
Some regulatory bodies acknowledge current limitations. Organisations such as the Financial Action Task Force (FATF) have openly recognised that traditional approaches are not working optimally; the current system fails to capture risk effectively.
Regulators should lead by example in adopting these technologies, creating dual benefits: improved regulatory efficiency and better evaluation of supervised institutions.
The most transformative next step is communication and information sharing between law enforcement, intelligence agencies, and regulated entities. Law enforcement agencies already possess knowledge of suspicious actors. Sharing this intelligence would transform the current approach, in which banks monitor everything broadly and regulators act on only a small fraction of the resulting reports.
Agentic AI would excel at processing lists distributed by law enforcement and producing comprehensive intelligence packages. This would also provide banks with valuable training datasets: if these suspects were not previously detected, banks can discover what they look like in their systems and build better AI models or better-informed scorecards.
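A minimal sketch of that labelling step follows, assuming a simple exact name-and-date-of-birth match; real systems would use fuzzy entity resolution, and all records and field names here are invented.

```python
# Sketch of turning a law-enforcement list into supervised labels.
# Exact name-and-DOB matching for brevity; real systems would use
# fuzzy entity resolution. All records and fields are invented.
customers = [
    {"id": "C1", "name": "Ivan Petrov", "dob": "1980-03-02"},
    {"id": "C2", "name": "Jane Doe", "dob": "1975-07-14"},
]
watchlist = [{"name": "Ivan Petrov", "dob": "1980-03-02"}]

def label_customers(customers, watchlist):
    """Mark each customer 1 if they appear on the watchlist, else 0."""
    keys = {(w["name"].lower(), w["dob"]) for w in watchlist}
    return {c["id"]: int((c["name"].lower(), c["dob"]) in keys)
            for c in customers}

print(label_customers(customers, watchlist))  # {'C1': 1, 'C2': 0}
```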
However, success requires serious operationalisation. When humans must act on AI output, they need comprehensive context, not merely risk scores. Effective implementation requires actionable intelligence that guides human decision-making.
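One way to picture such an intelligence package is as a structured record carrying the narrative and linked context alongside the score. The fields below are assumptions about what an analyst might need, not a standard format.

```python
# Illustrative shape of an "intelligence package": the context a human
# investigator needs alongside a bare score. Fields are assumptions,
# not a standard format.
from dataclasses import dataclass, field

@dataclass
class IntelligencePackage:
    subject_id: str
    risk_score: float                  # the number alone is not enough
    triggering_rules: list = field(default_factory=list)
    linked_entities: list = field(default_factory=list)
    narrative: str = ""                # plain-language "why" for the analyst

pkg = IntelligencePackage(
    subject_id="C1",
    risk_score=0.87,
    triggering_rules=["watchlist_match"],
    linked_entities=["Acme Global Ltd"],
    narrative="Name and date of birth match a law-enforcement list; "
              "subject shares an address with two shell companies.",
)
print(pkg.narrative)
```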
Looking Ahead
The transformation will not happen overnight. Typically, early adopters experience implementation challenges first, then others follow proven pathways. Agentic AI offers a pragmatic path forward by augmenting human decision-making rather than replacing it entirely.
Success depends on establishing robust data foundations first. Without quality data, even sophisticated AI will amplify existing problems. The future of banking compliance lies not in revolutionary AI deployment, but in careful, collaborative development of systems that satisfy both operational efficiency and regulatory requirements, building trust through transparency rather than seeking competitive advantage through opacity.
The next generation of AI will thank the institutions that invest in their data now. Artificial Narrow Intelligence, Artificial General Intelligence, and Artificial Super Intelligence all still rely on good data.


