A(I) time for change: the pros and cons of AI in trading

Ben Parker, CEO of eflow Global

AI is single-handedly changing how traders trade. So much so that the online trading market is expected to reach a value of around $12 billion by 2028, largely driven by the technology’s increased use. Its ability to analyse millions of data points to identify market trends, generate investment ideas and execute trades is powering new levels of data-driven trading.

But there’s a more dangerous side to this development. A study released towards the end of last year suggested that AI can not only carry out illegal financial trades, but also cover them up. This was demonstrated at the UK’s AI Safety Summit, where “a bot used made-up insider information to make an ‘illegal’ purchase of stocks without telling the firm”.

Now, new research has revealed that this threat is causing widespread concern in the financial services industry: a survey of 250 senior compliance professionals found that three quarters are worried about bots manipulating markets and covering up their actions. On top of this, a massive 94% acknowledged that the use of AI bots by financial professionals to manipulate markets is a challenge.

These fears align with the findings of a forthcoming report into global trade surveillance, in which regulated firms cite AI as the most likely cause of compliance issues over the coming year.

So, with such widespread acknowledgement of the threat, how does the industry manage it?

For regulators, it becomes a case of fighting fire with fire: AI itself is needed to combat the potentially darker side of its use.

Challenges regulating AI market manipulation

The rapid, large-scale deployment of AI in trading is delivering efficiency and advanced trading capabilities. But it is also creating unique risks, increasing the threat of market abuse and manipulation. Algorithmic trading is nothing new, but backed by these technologies it could take on new forms altogether.

This combination of AI and algorithmic trading poses major challenges for market integrity. It creates room for inadvertent or deliberate market abuse, and adds further complexity and unpredictability to market dynamics. How so?

AI-driven trading lends itself to market manipulation. Machine learning (ML) models are built to optimise for their objectives, so if those objectives can be hit through manipulative strategies, a model might take that route inadvertently or explicitly. What’s more, if ML algorithms can see there is a profit to be made, they can learn and adapt manipulative strategies. Without adequate supervision, objectives such as maximising profit might quietly align with manipulative behaviours.
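To make the point concrete, the toy sketch below (written in Python purely for illustration; the strategy names, numbers and penalty are all hypothetical, not taken from any real system) shows how an objective that optimises for profit alone can rank a manipulative route above legitimate ones, and how an explicit compliance penalty changes that ranking.

```python
# Toy illustration: an unconstrained profit objective can favour a manipulative
# strategy, while an explicit compliance penalty rules it out. All values are made up.

strategies = [
    {"name": "passive_market_making", "expected_profit": 1.0, "manipulative": False},
    {"name": "momentum_following",    "expected_profit": 1.4, "manipulative": False},
    {"name": "spoof_then_trade",      "expected_profit": 2.1, "manipulative": True},
]

def naive_objective(s):
    # Optimises for profit only -- nothing here penalises manipulative behaviour.
    return s["expected_profit"]

def constrained_objective(s, penalty=100.0):
    # Adds a compliance penalty so manipulative routes are never the optimum.
    return s["expected_profit"] - (penalty if s["manipulative"] else 0.0)

print(max(strategies, key=naive_objective)["name"])        # spoof_then_trade
print(max(strategies, key=constrained_objective)["name"])  # momentum_following
```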

Their complexity and lack of explainability also create extra complications for regulators. ML trading algorithms can be hard to understand and explain, with internal adjustments making their behaviour unpredictable. If regulators and market participants don’t have a clear understanding of how these algorithms work, it becomes very hard to distinguish between legitimate trading activity and potential market manipulation.

Future implications for market surveillance

This use of AI in trading decisions has the potential to transform the market entirely. The uncertainty around the legitimacy of certain transactions plays into the hands of bad actors, facilitates illegal activity and makes it harder for regulators to pin them down. However, there are broader future implications of its use as well.

It could further amplify misinformation and discrimination. For one, AI is known to exacerbate bias, and biased algorithms could result in discriminatory trading practices that create an unfair trading environment. Moreover, algorithmic trading platforms might also spread misinformation, misleading genuine market participants and triggering suboptimal trading decisions.

There is also a new and growing risk. The accessibility of LLMs like OpenAI’s GPT-4 gives individuals with little or no technical background the opportunity to build trading strategies. This increases the chance that non-professional investors, by accident or intention, become perpetrators of market abuse.

The Apollo Research study, which was explored at the UK’s AI Safety Summit, epitomises these concerns and shows that, in that instance, the bot judged helping the company to be more beneficial than maintaining honesty.

While the Apollo Chief Executive recognised that current models are not powerful enough to be deceptive “in any meaningful way”, he added that “it’s not that big of a step from the current models to the ones that I am worried about, where suddenly a model being deceptive would mean something”.

Fighting AI with AI

With these growing risks and future threats, where do regulators currently stand on managing them?

Regulators are recognising that evolving market dynamics, driven by AI but also by wider socio-economic and geopolitical factors, are leaving the surveillance methods of old deficient. As a result, these bodies are turning to AI, RegTech and SupTech to better monitor and combat market abuse. What better way to combat AI abuse than with AI?

These evolving regulatory methods mean regulators increasingly expect firms to deploy digital tools as part of their compliance strategy. Not only should this incentivise firms to adopt more sophisticated regulatory technology to meet compliance requirements, it also means their systems and controls will be more closely scrutinised.

For firms, incorporating advanced technology like AI can not only improve protection against market abuse but also offer vast improvements to existing surveillance systems. For example, quietening the ‘noise’ of false positive alerts can be a huge task. RegTech platforms can use AI and ML to learn the nuanced patterns of market abuse, enhancing the precision with which transactions are identified and risk-scored.
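As a rough illustration of that idea (a minimal sketch, not a description of any particular platform), the example below assumes scikit-learn is available and uses an isolation forest to risk-score transactions described by hypothetical order features, so the most anomalous activity rises to the top rather than drowning in uniform alerts.

```python
# Minimal sketch of ML-based risk scoring for surveillance alerts.
# Assumes scikit-learn; the features and distributions are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-order features: [order_size, cancel_ratio, time_to_cancel_sec]
normal_orders = rng.normal(loc=[100, 0.2, 30], scale=[20, 0.05, 5], size=(1000, 3))
suspect_orders = rng.normal(loc=[5000, 0.95, 0.5], scale=[500, 0.02, 0.2], size=(5, 3))

# Fit on historical activity, then score new transactions:
# higher risk_scores indicate more anomalous (higher-risk) behaviour.
model = IsolationForest(random_state=0).fit(normal_orders)
risk_scores = -model.decision_function(np.vstack([normal_orders[:5], suspect_orders]))
print(np.round(risk_scores, 3))  # the suspect orders should score noticeably higher
```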

A time for change

As AI drives new levels of data-driven trading, its expansion also creates vast opportunities for market abuse. This has become a major concern for both firms and regulators in financial services. AI-driven trading lends itself to market manipulation and makes it hard for companies to discern the legitimate from the illegitimate. Its use could also bring wider consequences, such as increased bias and deception in the trading market.

Faced with these fears, regulators are fighting AI with AI. The technology has great potential to improve outdated surveillance systems, reduce ‘noise’ and spot patterns of illicit behaviour. This use of AI for compliance marks a step-change in the industry and sets a new standard for market oversight. For the sector, it symbolises a time for change.
