How AI regulation will affect fintech

Elliott Hoffman, Co-founder of AI Tool Tracker

 

The advent of artificial intelligence (AI) has brought with it great benefits across multiple industries. It has made processes both quicker and more efficient, while the insight it provides has vastly improved businesses’ analysis and decision-making.

The technology’s development has accelerated in recent years, with the launch of ChatGPT in November 2022 one of the biggest advances to date. With the AI market expected to reach $46.8 billion by 2030, the potential for this technology is huge.

Despite all this, however, serious concerns have been raised over the risks AI tools (https://aitooltracker.com/) pose to businesses, their customers and wider society. The potential for the technology to go wrong, as in the case of ChatGPT, has been well documented and, given that it’s in its infancy, it is still relatively untested.

The upshot is that there have been growing calls for AI to be regulated. One of the areas where the technology is already widely used is in fintech, which has come under particular scrutiny in recent months.

Key legislation


As a result of this mounting pressure, governments and regulators across the world have been hastily drawing up new legislation and regulation to tackle the problem. However, this is a huge challenge, considering that AI has a global reach and that every region or country has its own set of rules to comply with.

The most significant piece of legislation to be brought in so far is the European Union’s (EU) AI Act, approved this month by the European Parliament. When it comes into force, it will tighten the current rules on data quality, transparency, accountability and human oversight. The new law will also deal with implementation and ethical challenges within sectors such as finance, energy, healthcare and education.

Using a classification system, it will rank the level of risk an AI technology could pose to a person’s fundamental rights or health and safety, as either minimal, limited, high or unacceptable. The penalties for not adhering to the rules are high, bringing with them fines of €10 to 30 million or two to six percent of a company’s global annual turnover, whichever is higher. And they don’t just apply to entities within the EU, but also those outside that supply AI systems to them or that use systems whose output is used within the Union.
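The "whichever is higher" penalty rule described above can be sketched in a few lines of code. This is a purely illustrative calculation using the ranges cited in this article; the actual fine depends on the infringement tier and the final legal text, and the function name and default figures here are assumptions for the example.

```python
def applicable_fine(global_turnover_eur: float,
                    fixed_fine_eur: float = 30_000_000,   # upper end of the €10-30m range
                    turnover_pct: float = 0.06) -> float:  # upper end of the 2-6% range
    """Return the higher of a fixed fine and a percentage of global annual turnover."""
    return max(fixed_fine_eur, turnover_pct * global_turnover_eur)

# A firm with €1bn global turnover: 6% is €60m, which exceeds the €30m fixed fine
print(applicable_fine(1_000_000_000))  # 60000000.0

# A firm with €100m global turnover: 6% is only €6m, so the €30m fixed fine applies
print(applicable_fine(100_000_000))    # 30000000.0
```

The point of the rule is visible in the two examples: for large firms the turnover-based percentage dominates, so the penalty scales with company size rather than being capped at the fixed amount.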

But it’s not just in the EU where this is happening. The UK Government has launched a consultation on establishing a regulatory regime for AI, while Brazil’s Congress has already passed a bill that creates a legal framework for the technology.

Implications for fintech

So what does all this mean for fintech firms? Essentially, the AI Act will govern how they provide, import, distribute or use software for a person’s credit assessment, biometric identification or human capital management. Software providers and users will also be obliged to be transparent in their use of AI, while software that adopts subliminal techniques or is used to exploit vulnerable people based on disability or age will also be banned.

While this key legislation looks good and makes sense on paper, the reality is that it will be extremely difficult to implement in practice. That’s because there are so many grey areas within the definition of AI that it’s hard to determine exactly who should be subject to these obligations and which software is classed as an AI system.

Consequently, because of all this added burden, many entrepreneurs and startups may decide to move elsewhere instead. It’s already on their radar, with 50% of AI startups claiming that the legislation will slow down innovation in Europe and 16% considering whether to stop developing the technology or relocate to another region.

The writing is on the wall. AI regulation is coming and businesses need to be ready for it.

 

