Ben Parker, co-founder and CEO, eflow
AI-powered trade surveillance is no longer a differentiator – it’s becoming a baseline expectation. The accessibility of AI technologies and the ease with which they can now be integrated into compliance workflows mean regulators increasingly assume firms are using them. But as adoption accelerates, scrutiny is shifting. The focus is no longer on whether AI can be used to detect anomalies, but on whether firms can explain how it reaches its conclusions – and defend those decisions under regulatory review.
This is creating a growing explainability gap. Firms may understand their AI outputs internally, but that’s not enough. Regulators must also be able to interpret, challenge and assess those decisions. ‘Black-box’ systems, no matter how effective they appear, will not satisfy regulatory expectations in 2026 and beyond. Accountability remains firmly with firms and individuals, not with the technology itself – making transparency and auditability essential rather than optional.
Without this alignment, firms risk racing ahead technologically while leaving their regulatory processes exposed in ways they cannot defend. That’s why we’re seeing AI-enhanced trade surveillance increasingly underpinned by rules-based frameworks – not as a step backwards, but as a foundation regulators can assess clearly.
Bridging the explainability gap requires more than deploying advanced tools. It demands governance, controls and AI models that regulators can understand just as well as the firms using them.
Explainable AI in surveillance is a growing necessity
Even though the use of AI is becoming a baseline expectation, some compliance teams still have an uneasy relationship with AI-powered trade surveillance. That’s because many systems rely on ‘black-box’ AI to generate outputs and alerts without providing any clear reasoning for doing so. A system might flag a trade as potential insider dealing based on sound logic, for example, but if that logic isn’t accessible, the firm cannot meet a regulator’s requirement to understand how the decision was made.
As a result, instead of AI speeding up the regulatory process, it is likely to create more headaches for a firm that has to justify the rationale behind a decision to the regulator. And unlike a manual process, which would leave an audit trail, this lack of transparency could put the firm at risk of enforcement action regardless of whether any market abuse has taken place. The risk this presents to both the institution and the individual means some compliance teams may be reluctant to hand too many tasks over to AI – yet they also feel pressure to use AI to match regulatory expectations. This conundrum captures the core requirement: AI must add efficiency, but it must also be explainable and defensible.
But how exactly can firms align AI-driven surveillance with regulatory frameworks and expectations?
Aligning practice with expectations
The most clear-cut solution is to look for AI surveillance systems that are explainable by design. If the platform allows you to trace each insight back to its corresponding source data and explain how outcomes were reached, regulators immediately have context and clarity for the alert. The most advanced systems also offer generative AI capabilities that let users investigate alerts through conversational prompts. A user who wants broader context, for example, can ask the system whether a trader has been associated with similar alerts within a given timeframe.
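To make the idea of traceability concrete, here is a minimal Python sketch of what an alert that carries its own evidence might look like. The structure and field names are illustrative assumptions, not any particular vendor’s schema; the point is that every triggered condition is stored alongside references to the source records that satisfied it, so a reviewer or regulator can walk from the alert back to the underlying data.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(frozen=True)
class SourceRecord:
    """A pointer back to the raw data that contributed to an alert."""
    record_id: str      # e.g. a trade or order identifier
    system: str         # the upstream system the record came from
    timestamp: datetime

@dataclass
class ExplainableAlert:
    """An alert that carries its own audit trail."""
    alert_id: str
    alert_type: str                  # e.g. "insider_dealing"
    triggered_conditions: list[str]  # human-readable descriptions of the logic that fired
    evidence: list[SourceRecord] = field(default_factory=list)

    def explain(self) -> str:
        """Produce a plain-English trail from the alert to its source data."""
        lines = [f"Alert {self.alert_id} ({self.alert_type}) fired because:"]
        lines += [f"  - {c}" for c in self.triggered_conditions]
        lines.append("Supporting records:")
        lines += [f"  - {r.record_id} from {r.system} at {r.timestamp:%Y-%m-%d %H:%M}"
                  for r in self.evidence]
        return "\n".join(lines)
```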
But outcomes from using AI won’t truly meet expectations unless the technology is integrated into a system that is already highly effective. Data quality underpins the insights AI can produce, and it depends on how much data the platform can access. Spotting modern forms of abuse, for example, requires the consolidation of trade and communications data – trade data can identify the type of abuse, while comms data explains the motive behind it. Because it enables this level of reasoning, data integration has become a regulatory expectation rather than merely a compliance benefit.
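As a rough illustration of that consolidation step, the sketch below links hypothetical trade records to communications from the same trader within a configurable time window. The record shapes and field names are assumptions made for the example; production platforms work with far richer data, but the join logic is the essence.

```python
from datetime import datetime, timedelta

# Hypothetical, simplified records for illustration only.
trades = [
    {"trader": "T-001", "instrument": "XYZ", "side": "BUY",
     "ts": datetime(2026, 1, 14, 9, 58)},
]
comms = [
    {"trader": "T-001", "channel": "chat",
     "text": "announcement lands at 10am, load up now",
     "ts": datetime(2026, 1, 14, 9, 55)},
]

def link_comms_to_trades(trades, comms, window=timedelta(minutes=30)):
    """Attach any communication from the same trader within `window` of a
    trade, pairing the 'what' (the trade) with the 'why' (the comms)."""
    linked = []
    for t in trades:
        context = [c for c in comms
                   if c["trader"] == t["trader"]
                   and abs((c["ts"] - t["ts"]).total_seconds())
                       <= window.total_seconds()]
        linked.append({**t, "related_comms": context})
    return linked

for row in link_comms_to_trades(trades, comms):
    print(row["trader"], row["instrument"],
          len(row["related_comms"]), "related message(s)")
```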
With an explainable AI system accessing quality data, success hinges on how the technology is used. For now, AI is best placed as an assistant to compliance teams: scanning alerts for signs of manipulation, reducing the ‘noise’ of false positives that swamp compliance teams, and providing additional context to help humans investigate risk more efficiently. Critically, AI should supplement human decision-making and keep compliance team members in the loop for every crucial decision. AI can triage alerts, for instance, but humans still need to escalate and close them.
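A minimal sketch of that division of labour might look like the following, using a hypothetical alert object: the AI supplies a risk score and a summary, but the state transitions that matter – escalating or closing an alert – require a named human reviewer and a recorded rationale.

```python
from enum import Enum
from typing import Optional

class AlertStatus(Enum):
    OPEN = "open"
    ESCALATED = "escalated"
    CLOSED = "closed"

class TriagedAlert:
    def __init__(self, alert_id: str, ai_risk_score: float, ai_summary: str):
        # The model may score and summarise the alert...
        self.alert_id = alert_id
        self.ai_risk_score = ai_risk_score  # e.g. 0.0 (noise) to 1.0 (high risk)
        self.ai_summary = ai_summary
        self.status = AlertStatus.OPEN
        self.decided_by: Optional[str] = None  # set only when a human decides
        self.rationale = ""

    def escalate(self, reviewer: str, rationale: str) -> None:
        # ...but only a named human can change an alert's status,
        # and every decision carries a recorded rationale.
        self.status = AlertStatus.ESCALATED
        self.decided_by = reviewer
        self.rationale = rationale

    def close(self, reviewer: str, rationale: str) -> None:
        self.status = AlertStatus.CLOSED
        self.decided_by = reviewer
        self.rationale = rationale

# Example: AI triages, a human makes the call.
alert = TriagedAlert("A-42", ai_risk_score=0.87,
                     ai_summary="Buy order shortly before price-sensitive news")
alert.escalate(reviewer="j.smith",
               rationale="Pattern consistent with insider dealing; refer to STOR process")
```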
Explainability is the first line of defence
Failing to close the explainability gap creates risks firms cannot defend under regulatory investigation. Explainability and human oversight are fundamental to establishing and maintaining trust in AI-powered surveillance. If an AI system produces an alert that the compliance team or regulator can’t understand, or closes an alert without human investigation, the firm remains accountable for any errors in the eyes of the regulator.
There is no doubt that the use of AI in processing trade surveillance alerts and adding context to suspicious activity will increase in the years ahead. But if the technology can’t explain how it has reached a judgement, then it puts individuals and the firm at risk of enforcement action, irrespective of whether market abuse has taken place. That’s how important demonstrating explainability is.
Ultimately, the use of AI in trade surveillance is all about balance. Over-reliance could create additional regulatory risk, while failing to use it effectively could limit operational efficiency and leave suspicious data trends unspotted. If firms use explainable AI models that regulators can understand, and implement sufficient human-in-the-loop controls, they can start to bridge the explainability gap and build a more robust, efficient and compliant trade surveillance function.