Jonathan Dixon, Head of Trade Surveillance at eflow Global
The predictions made by tech leaders towards the end of last year seemed to agree that while 2024 was the year of AI exploration, 2025 would be the year of its widespread adoption. For now, AI’s practical use in compliance is still at a relatively early stage, but we can certainly expect its development to accelerate this year. Specifically, we will start to see its capabilities in helping teams analyse behaviours accurately, reduce false positives, and detect potentially suspicious behaviour more efficiently.
But AI is not just another tool; it’s an entirely new data source that requires its own compliance framework. Regulators are already watching the space, and there’s no chance that AI-powered compliance systems will escape their scrutiny for long. Their potential to create compliance risks will make them a key regulatory focus. Consequently, firms will need to balance efficiency gains against the need for adequate risk assessments.
Crucially, the growing adoption of AI across all aspects of business raises another question: is AI sophisticated enough to assess market abuse alerts and decide on behalf of compliance teams whether particular behaviour warrants further investigation? While there’s no doubt that AI will be a vital tool in helping financial institutions identify risk more quickly and efficiently, does it actually make the need for a ‘human touch’ more vital than ever?
AI as a compliance assistant
AI stands out in its ability to not only analyse data but also explain the reasoning behind alerts and connect patterns of suspicious behaviour. It can enhance compliance through behavioural analysis and fraud detection and act as a supporting tool to help teams identify risk.
Off-channel electronic communications (eComms) were a big talking point last year, with NatWest being the notable institution to ban their use outright. But a total ban is counterproductive: first, because it is likely to move the problem elsewhere, and second, because the tools are already available to monitor these channels.
Compliance systems, for example, can ingest data from a range of eComms sources such as emails, Teams and Slack and link communications to specific trading events. Large language models (the branch of AI behind tools such as ChatGPT) are now demonstrating their ability to analyse large volumes of unstructured data. They have the potential to learn intent from communications and flag any suspicious behaviour as alerts to compliance teams.
Impressively, while traditional systems provide archiving and search functionalities, AI can learn and adjust to a firm’s unique jargon and vocabulary, as well as place this in the context of broader linguistic and industry trends. This kind of contextual and sentiment analysis can significantly reduce the number of false positives and therefore the load on compliance resources.
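To make this tangible, the sketch below is a purely illustrative Python example rather than a description of any vendor’s system: the data structures, the time window and the placeholder intent scorer are my own assumptions about how communications might be linked to trading events and turned into alerts.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Message:
    sender: str
    channel: str          # e.g. "email", "teams", "slack"
    sent_at: datetime
    text: str

@dataclass
class Trade:
    trader: str
    instrument: str
    executed_at: datetime

def link_messages_to_trades(messages, trades, window_minutes=30):
    """Pair each message with trades by the same person executed within a
    time window around the message (an illustrative heuristic only)."""
    window = timedelta(minutes=window_minutes)
    linked = []
    for msg in messages:
        nearby = [
            t for t in trades
            if t.trader == msg.sender
            and abs(t.executed_at - msg.sent_at) <= window
        ]
        if nearby:
            linked.append((msg, nearby))
    return linked

def score_intent(message_text):
    """Placeholder for an LLM call returning a 0-1 'suspicious intent' score;
    a real system would prompt a model tuned to the firm's vocabulary."""
    suspicious_phrases = ("keep this off the record", "before the announcement")
    return 1.0 if any(p in message_text.lower() for p in suspicious_phrases) else 0.1

def generate_alerts(messages, trades, threshold=0.8):
    """Raise an alert when a communication linked to trading scores highly."""
    alerts = []
    for msg, related_trades in link_messages_to_trades(messages, trades):
        score = score_intent(msg.text)
        if score >= threshold:
            alerts.append({"message": msg, "trades": related_trades, "score": score})
    return alerts
```

In practice, the placeholder score_intent function is where the language model would sit, reading the message in context rather than matching fixed phrases.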
This ability to learn and improve is key to refining the surveillance process and enhancing the quality of alerts generated by compliance systems. For example, AI can provide risk scores for alerts and dynamically adjust the risk parameters of models by analysing historical data and client-specific needs. Firms can then respond more efficiently and accurately to risks.
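As a hypothetical illustration of how such dynamic adjustment might work, the rule below nudges an alert threshold up or down according to how many historical alerts turned out to be genuine. The function, the target rate and the step size are illustrative assumptions, not an established algorithm.

```python
def adjust_threshold(current_threshold, historical_alerts,
                     target_true_positive_rate=0.5, step=0.05):
    """Nudge an alert threshold based on confirmed historical alerts.

    Each historical alert is a dict such as {"score": 0.83, "confirmed": True}.
    """
    if not historical_alerts:
        return current_threshold
    confirmed = sum(1 for a in historical_alerts if a["confirmed"])
    true_positive_rate = confirmed / len(historical_alerts)
    if true_positive_rate < target_true_positive_rate:
        # Too many false positives: raise the bar so fewer alerts fire.
        return min(current_threshold + step, 0.99)
    # Alerts are mostly genuine: lower the bar slightly to catch more risk.
    return max(current_threshold - step, 0.5)
```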
AI as a compliance risk
Despite these benefits, over-reliance on AI tools can introduce a different kind of compliance risk. As AI becomes more prevalent in regulatory compliance, dependence on these tools may hinder compliance professionals’ ability to independently assess whether behaviour constitutes potential abuse and, crucially, whether the insights are accurate.
LLMs in particular are known for sometimes producing wildly inaccurate statements, yet delivering them with such conviction that they can be mistaken for fact. These ‘hallucinations’ could lead to wrongful investigations and impede efficiency. What if the AI learns certain behaviours in a biased way? That would certainly present a risk to compliance processes. There are also data privacy concerns around how sensitive information entered into LLMs might compromise a firm’s security.
To tackle these risks, regulators will, in due course, implement standards targeting the use of AI. Therefore, as well as advocating for the use of automation and AI tools to manage compliance, they are also likely to introduce requirements for humans to assess these tools and oversee models for accuracy, reliability and security.
The value of the human touch
While AI and automation can bring efficiency to the process, market surveillance is a sensitive area. Humans remain the essential compliance component, continuing to assess risks and to exercise quality control over processes. So, despite some industry experts predicting that AI could take over the assessment of alerts from compliance teams altogether, that currently represents a premature and precarious move.
If a firm were to automate the process entirely, I would argue there is little point generating abuse alerts in the first place. If alerts are of such low quality that they require no human intervention at any level, in other words false positives, there is no need for them to be sent at all.
But even if AI is used to automate and assess thousands of abuse alerts, from a quality assurance perspective humans would still be required to check at least 10-15 per cent of them. If a risk were missed, for example, a firm cannot turn around to the regulator and say ‘we didn’t look at an alert because the AI said so’. So firms with greater human expertise will be able to get the most out of AI and use it to improve their processes.
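As a rough sketch of that quality-assurance step, with assumed field names and a sampling rate that simply mirrors the 10-15 per cent figure above rather than any prescribed standard, a firm could route a random portion of AI-assessed alerts back to a human reviewer:

```python
import random

def sample_for_human_review(alerts, sample_rate=0.15, seed=None):
    """Select a random subset of AI-assessed alerts for human QA review.

    Each alert is a dict such as {"id": 1, "ai_decision": "dismissed"}.
    """
    if not alerts:
        return []
    rng = random.Random(seed)
    sample_size = max(1, round(len(alerts) * sample_rate))
    return rng.sample(alerts, sample_size)
```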
What’s more, humans are vital not only to the compliance process and to acting on AI insights, but also to mitigating dangerous AI development. There will be bad actors using AI to outwit AI compliance models. Human expertise will be fundamental in assessing what these risks look like and in training AI compliance tools to counteract growing forms of AI-fuelled market abuse.
Humans safeguard compliance
There’s no doubt AI is becoming a valuable assistant – a tool to enhance compliance processes and aid decision-making. Where it is proving increasingly beneficial is in improving the efficiency and accuracy of market surveillance: firms can spot risk faster and reduce the number of false positive alerts.
What it is not, however, is a compliance professional. Without the human touch, AI can instead become a compliance risk. Ultimately, firms are responsible for making decisions and using the experience and knowledge of their regulatory experts.
AI will not be fined or go to prison for non-compliance; firms and people will. The human touch is what will safeguard compliance.