HOW FINANCIAL SERVICES CAN MEET THE EU'S NEW AI REGULATIONS

Alix Melchy, VP of AI, Jumio


The European Commission recently proposed a new regulatory framework for the use of AI. The draft legislation seeks to regulate the use of AI in proportion to the level of risk each AI system presents, in order to promote the ethical and trustworthy use and development of AI in Europe. The legislation will ban AI systems that present unacceptable risk and impose strict requirements on those considered high risk.

Although the UK has left the EU, some sectors, such as lending, remain closely linked to the European banking sector. Financial services organisations, even those based in the UK, will therefore need to keep abreast of these regulatory requirements or risk severe consequences.

Let’s look at how organisations in the financial services sector can better prepare themselves.


AI is no magic wand

AI is regularly positioned as an essential investment: a way to stay ahead of the competition, provide better customer service, deliver more relevant services and offerings, and transform many back-end processes. Its potential use cases have only multiplied as more bank branches than ever have closed due to the coronavirus pandemic and more consumers have become dependent on digital banking services.

However, this raises a serious question: what would happen if the algorithms used in financial decisions were tinged with bias? Such biases could negatively affect the way millions of consumers and businesses borrow, save and manage their money.

We need to be aware of AI's limitations and learn to set reasonable expectations for it. To do this, we must take a step back, separate AI's actual technological capabilities from the magic often attributed to it, and remind ourselves that AI is a tool, not a solution to everything, and must be used responsibly. Businesses that set out to implement AI must specify the exact problem they are trying to solve in order to select the best-suited options, and they should return to this initial goal throughout the project to ensure the work still aligns with it.


Eliminating bias

Another important factor to consider is the data that will underpin an AI model. AI systems are built on sets of algorithms that “learn” by reviewing large datasets to identify patterns, on the basis of which they make decisions. In essence, they are only as good as the data they are fed.

Financial organisations must ask themselves whether they have enough of that data and, if so, whether it is representative of the population. Algorithms are data hungry, and that data needs to be well stratified. It is vital that the data represents society fairly so that the model does not reproduce historical biases. While it is possible to buy datasets to speed up the process of building AI models, that data must meet the required criteria rather than simply being large. Getting this right allows firms in the financial services sector to treat customers fairly and, combined with appropriate modelling and processes, to maintain transparency and accountability in their decision-making, avoiding legal claims or regulatory fines that can cause deep reputational damage.
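
To make that stratification check concrete, here is a minimal sketch in Python, assuming a pandas DataFrame with a hypothetical demographic column and placeholder (not real) population benchmarks:

```python
import pandas as pd

# Placeholder population benchmarks (illustrative figures, not real census data)
POPULATION_BENCHMARKS = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}

def check_representativeness(df: pd.DataFrame, column: str,
                             benchmarks: dict, tolerance: float = 0.05) -> list:
    """Flag demographic groups whose share of the training data deviates
    from the population benchmark by more than `tolerance`."""
    observed = df[column].value_counts(normalize=True)
    flags = []
    for group, expected in benchmarks.items():
        share = float(observed.get(group, 0.0))
        if abs(share - expected) > tolerance:
            flags.append((group, share, expected))
    return flags

# Usage: any flagged group is a signal the data may reproduce historical bias
# flags = check_representativeness(train_df, "age_band", POPULATION_BENCHMARKS)
```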


Getting it right

Another process that businesses must build into their AI practices is a pilot testing phase, to confirm that an algorithm is working as expected. This allows companies to assess feasibility, duration, costs and adverse events, and to better understand why an algorithm makes a certain decision, before it is put into a real-world scenario.
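
As one way to structure such a pilot, the sketch below (hypothetical names, assuming a fitted scikit-learn-style classifier and NumPy arrays) reports accuracy overall and per demographic group, so that unexpected behaviour surfaces before deployment:

```python
import numpy as np
from sklearn.metrics import accuracy_score

def pilot_report(model, X_pilot, y_pilot, groups: np.ndarray) -> None:
    """Evaluate a fitted model on a held-out pilot set, overall and per
    demographic group, so surprises surface before real-world use."""
    preds = model.predict(X_pilot)
    print(f"overall accuracy: {accuracy_score(y_pilot, preds):.3f}")
    for group in np.unique(groups):
        mask = groups == group
        acc = accuracy_score(y_pilot[mask], preds[mask])
        print(f"  {group}: accuracy={acc:.3f} (n={mask.sum()})")
```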

It’s also important to note that the EU’s guidelines state that AI software and hardware systems must be human-centric and that a machine cannot be in full control. There should therefore always be human oversight, and humans should always be able to override a decision made by a system. Every algorithm has limitations, so when designing an AI product or service, financial organisations should consider the technical measures needed to ensure human oversight. This is vital for understanding how the AI is working, finding ways to train it and ensuring that it behaves as expected.
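
One common pattern for this kind of oversight is confidence-based routing: the model decides only when it is confident, and everything in between is referred to a human reviewer who can override it. A minimal sketch, with an illustrative threshold:

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90  # illustrative confidence cut-off; tune per use case

@dataclass
class Decision:
    outcome: str       # "approve", "decline" or "refer"
    confidence: float
    decided_by: str    # "model" or "human"

def decide(score: float) -> Decision:
    """Let the model decide only when it is confident; refer everything
    else to a human reviewer, who can always override the outcome."""
    if score >= REVIEW_THRESHOLD:
        return Decision("approve", score, "model")
    if score <= 1 - REVIEW_THRESHOLD:
        return Decision("decline", score, "model")
    # Middle band: the system defers to a person rather than deciding alone
    return Decision("refer", score, "human")
```

In practice, the threshold would be calibrated on pilot data and revisited whenever the model is retrained.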


Establishing ethical AI

As mentioned earlier, a key driver of the European Commission's risk-based approach to AI is to promote trustworthy AI systems that are lawful (complying with all applicable laws and regulations) and ethical (adhering to ethical principles and values) in order to avoid causing unintentional harm.

If financial organisations are to reap the benefits of AI, they must first minimise the potential harms of algorithms by thinking about how machine learning can be meaningfully applied. This means having a discussion about AI ethics and the distrust that many people have toward machine learning.


There are several key areas to consider when ensuring AI is ethical:

  • Usage consent: Make sure that all the data you are using has been acquired with proper consent.
  • Diversity and representativeness: AI practitioners should consider how diverse their programming teams are and whether those teams undertake relevant anti-bias and discrimination training. Drawing on the perspectives of individuals of different genders, backgrounds and faiths increases the likelihood that decisions on purchasing and operating AI solutions are inclusive and unbiased.
  • Transparency and trust building: Accurate and robust record keeping is important so that those affected by a model’s decisions can understand how an outcome was reached; a minimal logging sketch follows this list.
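
As an illustration of that record keeping, here is a minimal sketch using a simple JSON-lines log; the field names are hypothetical, and a production system would add access controls and retention policies:

```python
import json
import time
import uuid

def log_decision(model_version: str, inputs: dict, outcome: str,
                 confidence: float, path: str = "decision_log.jsonl") -> str:
    """Append an auditable record of each automated decision so the
    reasoning behind any outcome can be reconstructed later."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,        # the features used, or a reference to them
        "outcome": outcome,
        "confidence": confidence,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]
```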

In the financial services industry, there are many ways AI can be leveraged. For example, in the document-centric identity proofing space, where a document (such as a passport) is matched with a corresponding selfie to connect real-world and digital identities, proving that AI is being used ethically is becoming crucial. Gartner predicts that by 2022, more than 95% of RFPs for document-centric identity proofing will contain clear requirements regarding minimising demographic bias, an increase from fewer than 15% today. There is a real opportunity to leverage AI solutions to provide the best service, but financial institutions must ensure they do so in an ethical and accurate way by focusing on the key areas discussed above. By following this guidance, businesses can ensure that their AI projects start off on the right foot and pave the way for regulatory compliance.
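
To illustrate the kind of measurement such bias requirements imply, this sketch (hypothetical column names, illustrative threshold) computes the false non-match rate of a document-to-selfie matcher within each demographic group:

```python
import pandas as pd

MATCH_THRESHOLD = 0.80  # illustrative similarity cut-off, not a vendor value

def false_non_match_rates(df: pd.DataFrame) -> pd.Series:
    """For genuine document/selfie pairs, compute the share wrongly rejected
    (similarity below the threshold) within each demographic group."""
    genuine = df[df["is_genuine_pair"]]
    rejected = genuine["similarity"] < MATCH_THRESHOLD
    # Large gaps between groups would indicate demographic bias in the matcher
    return rejected.groupby(genuine["demographic_group"]).mean()
```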

