The compliance tightrope: balancing innovation and risk in an era of tech disruption

Ben Parker, CEO, eflow Global

The tension between innovation and risk continues to intensify. The refusal of the US and the UK to sign the declaration at this year’s AI Action Summit in Paris highlighted the disconnect between the majority of countries pushing for robust AI regulation and a dissenting minority of major players fearing its impact on innovation.

DeepSeek, China’s new ChatGPT rival, has been the other notable development to cause a stir in this debate. Is it a cheaper, more effective option to spur innovation and efficiency? Or does it risk exposing confidential data?

Our research reveals that a majority (64%) of global regulatory leaders believe that tech-driven risks, such as the accelerated use of AI, are the market force most likely to cause compliance issues in the next year.

There is also the ongoing volatility of the economic and geopolitical landscape to consider. The unpredictability of global events was ranked as the second most significant market force that firms need to consider, as cited by nearly three-fifths (58%) of compliance professionals.

In this era of tech disruption and instability, walking the compliance tightrope between innovation and risk is an increasingly difficult challenge. But with the right tools and processes, firms can push forward without exposing themselves to unwanted threats.

What are the risks associated with AI adoption?

There’s AI in trading, and then there’s AI in compliance. Both carry risks, for different reasons.

When it comes to trading, while AI’s accelerated use is causing uncertainty, its impact is in fact similar to previous technological shifts in the sector. Algorithmic high-frequency trading, for example, took hold around 15 years ago. It is orders of magnitude faster than human traders, allowing firms to execute orders and trading strategies in milliseconds. AI hasn’t materially advanced on those speeds; it operates at a similar pace.

The risk, however, lies in how it is programmed and its potential to act in unscripted ways. Will it manipulate trades to achieve profit-driven outcomes, whatever the cost? Generative AI also opens the door to trading practices that execute orders with a sophistication beyond the limits of current rules-based systems.

For compliance, AI tools are now being used for tasks like monitoring and assessing market abuse alerts. But as AI becomes more commonplace for these activities, another risk emerges: over-reliance on the technology. If compliance professionals become wholly dependent on AI, it could erode their ability to independently evaluate whether trading activity is genuinely abusive and whether the AI’s conclusions are accurate.

What’s more, there have been many high-profile examples of large language models (LLMs) like ChatGPT confidently presenting inaccurate statements as fact. Using LLMs to detect market abuse could therefore result in wrongful investigations and harm, rather than aid, efficiency.

The AI-human balancing act

The balancing act between innovation and risk centres on finding the right division of labour between AI and human expertise.

There’s no question that AI can be a valuable assistant. As in many other sectors, it excels at analysing vast volumes of data and explaining the reasoning behind alerts. This can greatly improve the efficiency of behavioural analysis and the detection of suspicious trading patterns.

The introduction of AI into trade surveillance systems, for instance, can now deliver risk scores for market abuse alerts. These scores are based on real-time analysis of news events and their potential impact on market movements (something even more pertinent with the latest tariffs imposed by President Trump), identifying potential abuse more accurately and efficiently. This is where innovation can dramatically improve a firm’s compliance processes and help teams identify risk.
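To make this concrete, here is a minimal Python sketch of what such a scoring step might look like. The AbuseAlert structure, the blending weight and the numbers are illustrative assumptions for this article, not a description of any vendor’s actual model.

```python
# Hypothetical sketch: blend an alert's rules-based severity with a
# news-sensitivity factor to produce a single risk score.
# Weights and inputs are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class AbuseAlert:
    alert_id: str
    base_severity: float  # 0.0-1.0, from the rules-based surveillance engine
    news_impact: float    # 0.0-1.0, modelled impact of concurrent news events

def risk_score(alert: AbuseAlert, news_weight: float = 0.4) -> float:
    """Weighted blend of rules-based severity and news context."""
    return round((1 - news_weight) * alert.base_severity
                 + news_weight * alert.news_impact, 2)

alert = AbuseAlert("A-1043", base_severity=0.7, news_impact=0.9)
print(risk_score(alert))  # 0.78 - elevated by a market-moving news event
```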

The key point here, however, is that it supports teams and doesn’t automate the process entirely. AI and automation significantly speed up the process, but market surveillance is a nuanced area. Human expertise is still essential for quality assurance and for ongoing risk assessment, especially where that risk is significant or more subjective.

For example, rather than auto-closing alerts entirely, AI can escalate high-value cases to a corresponding compliance team member who can then assess them. And while the technology can significantly reduce false positive alerts – another key pain point from our survey – humans still need to check a portion of these alerts to ensure AI is performing the process accurately.
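A minimal sketch of that triage logic, with an assumed escalation threshold and sampling rate, might look like this:

```python
# Hypothetical triage sketch: escalate high-scoring alerts to a human,
# auto-close the rest, but sample a fraction of closures for human QA.
# The threshold and sample rate are illustrative assumptions.
import random

ESCALATION_THRESHOLD = 0.75  # scores at or above this go to a reviewer
QA_SAMPLE_RATE = 0.10        # share of auto-closed alerts spot-checked

def triage(score: float) -> str:
    """Decide an alert's route based on its risk score."""
    if score >= ESCALATION_THRESHOLD:
        return "escalate"       # routed to a compliance team member
    if random.random() < QA_SAMPLE_RATE:
        return "close_with_qa"  # closed, but flagged for a human spot-check
    return "auto_close"

for alert_id, score in [("A-1043", 0.78), ("A-1044", 0.31)]:
    print(alert_id, triage(score))
```

Keeping a sampled human check on auto-closed alerts is what guards against the over-reliance risk described earlier.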

Innovation relies on managing risk

Ultimately, the firms with the greatest awareness of how to apply AI to their processes will be the ones that derive the greatest value. That comes from knowing how to manage their risk most effectively. Failing to do so will leave them dealing with the consequences, whether that’s falling foul of regulations or failing to manage AI models properly.

When it comes to AI, firms need to assess their own risk and understand the reasoning behind their controls and processes. Can they, for example, mitigate the risk of an AI trader circumventing controls in pursuit of profit by training their own AI systems to spot that behaviour? In particular, they will need to evaluate their compliance tools and monitor AI models for accuracy, reliability and security.
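As one illustration of that monitoring, a firm might track how often the model’s escalations are confirmed by human reviewers and flag the model when that precision drifts too low. The sketch below is hypothetical; the 0.8 floor and the figures are assumptions, not a recommended benchmark.

```python
# Hypothetical model-monitoring sketch: flag the AI for review when the
# share of its escalations confirmed by humans falls below a floor.
def escalation_precision(confirmed: int, escalated: int) -> float:
    """Share of AI-escalated alerts that human reviewers confirmed."""
    return confirmed / escalated if escalated else 0.0

PRECISION_FLOOR = 0.8  # illustrative assumption

precision = escalation_precision(confirmed=41, escalated=60)
if precision < PRECISION_FLOOR:
    print(f"Precision {precision:.2f} below floor - flag model for review")
```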

With so much broader uncertainty, controls have become even more important in mitigating risk while enabling effective innovation. Walking this compliance tightrope depends on blending AI integration with robust human oversight.
