By Stuart Tarmy, Global Director of Financial Services Industry Solutions at Aerospike
The financial services industry has taken enormous strides in its deployment of AI in recent years to manage the explosive growth of data, particularly real-time data. This work has been challenging, given that companies have been operating in an AI wild west where an 'anything goes' attitude prevailed while regulatory bodies lagged in reviewing what regulation, if any, might be needed. This is not a new phenomenon; the pace of technology has often raced ahead of the regulators. Now, however, governments around the world are putting in place legal frameworks and regulations designed to make AI systems more transparent and the companies behind them accountable.
The EU Artificial Intelligence Act is the first major initiative to regulate artificial intelligence. Its goal is to make the use of AI more transparent, explainable and accountable. It takes a risk-based approach, applying different rules to AI systems according to the risk they pose, and it is being used as a model by other countries developing their own guidelines. Notably, companies that do business with the EU will need to adhere to its requirements regardless of where they are based.

The EU Artificial Intelligence Act came into force across all 27 EU Member States at the beginning of August 2024, and the regulators have provided two years for companies to become compliant, with enforcement of the majority of its provisions starting in August 2026.
The EU is taking AI regulation very seriously and will impose even higher penalties than those under the GDPR data-privacy rules it introduced in 2016. For comparison, the EU AI Act can impose penalties of the greater of €35 million or 7% of a company's previous year's global turnover, versus the GDPR, whose maximum penalties are smaller: the greater of €20 million or 4% of the company's previous year's global turnover.
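The "greater of" structure means the percentage cap dominates for large firms while the fixed floor dominates for smaller ones. A minimal sketch of that arithmetic (the €2 billion turnover figure is purely illustrative):

```python
def eu_ai_act_penalty(global_turnover_eur: float) -> float:
    """Maximum fine under the EU AI Act: the greater of
    EUR 35 million or 7% of prior-year global turnover."""
    return max(35_000_000, 0.07 * global_turnover_eur)

def gdpr_penalty(global_turnover_eur: float) -> float:
    """Maximum fine under the GDPR: the greater of
    EUR 20 million or 4% of prior-year global turnover."""
    return max(20_000_000, 0.04 * global_turnover_eur)

# For a hypothetical firm with EUR 2 billion in annual turnover,
# the percentage term dominates in both cases:
turnover = 2_000_000_000
print(eu_ai_act_penalty(turnover))  # 140000000.0 (7% of turnover)
print(gdpr_penalty(turnover))       # 80000000.0  (4% of turnover)
```

At that turnover the AI Act exposure is €140 million against €80 million under the GDPR; for a small firm well below €500 million in turnover, the €35 million floor is what applies.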
It has always been important for companies to understand how best to leverage AI and advanced algorithms, such as neural networks, and the technical architecture that enables them. Now it is even more imperative, as companies must ensure their technology, processes and tools adhere to the regulations. They will need to implement system audits, risk assessments to classify AI systems, data protocols and AI monitoring, and they will need to seek out trustworthy vendors who can help them comply with the rules and certify their AI use.
It will be incumbent on CIOs within financial companies, for example, to keep up to date on how existing laws apply to new AI algorithms, establish robust internal governance policies to manage AI-related risks, and guard against litigation over misuse such as copyright infringement. At the very least, they should be developing compliance plans with timelines and resources. AI-based systems will need to be transparent about how they collect and use data. The AI systems themselves must be designed to be 'explainable', meaning they can show a user, regulator or auditor the key information behind how they arrived at a decision, such as approving or denying a loan.
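As a sketch of what 'explainable' can mean in practice, the example below scores a hypothetical loan application with a simple linear model and reports which inputs drove the outcome, the kind of 'reason codes' a reviewer or auditor could inspect. The feature names, weights and threshold are illustrative assumptions, not a real credit model:

```python
import math

# Illustrative weights for a toy loan-scoring model
# (assumptions for this sketch, not real underwriting rules)
WEIGHTS = {"income_to_debt_ratio": 1.2, "years_employed": 0.4, "missed_payments": -1.5}
BIAS = -2.0
THRESHOLD = 0.5

def score_application(features: dict) -> tuple[bool, list[tuple[str, float]]]:
    """Return an approve/deny decision plus per-feature contributions,
    so a reviewer can see which inputs drove the outcome."""
    contributions = [(name, WEIGHTS[name] * features[name]) for name in WEIGHTS]
    logit = BIAS + sum(c for _, c in contributions)
    probability = 1 / (1 + math.exp(-logit))
    # Rank contributions by absolute impact: these act as the 'reason codes'
    reasons = sorted(contributions, key=lambda c: abs(c[1]), reverse=True)
    return probability >= THRESHOLD, reasons

approved, reasons = score_application(
    {"income_to_debt_ratio": 3.0, "years_employed": 5, "missed_payments": 1}
)
print("approved" if approved else "denied")
for name, impact in reasons:
    print(f"  {name}: {impact:+.2f}")
```

The design point is that every decision carries its own audit trail: instead of only a yes/no answer, the system surfaces which factors pushed the score up or down, which is the kind of information a regulator or a declined applicant is entitled to see.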
Advances in AI are creating enormous opportunities for financial services firms to improve and automate their business functions. Fundamental to this is the use of best-in-class, real-time technologies that can process enormous amounts of data. At the same time, companies must work within these new AI regulations to ensure compliance and avoid very large fines.