How to encourage ethical AI in financial services

Following publication of the Bank of England and FCA’s Artificial Intelligence Public-Private Forum (AIPPF) final report on AI in financial services, Gery Zollinger, Head of Data Science & Analytics at Avaloq, assesses the best ways to support ethical use of AI by financial firms.

Artificial intelligence can handle key business processes efficiently by combining machine learning with large volumes of real-world data. But that data may carry human biases and prejudices – explicit or implicit – which an AI system can learn and reinforce. What can financial firms do to mitigate the risks while reaping the rewards?
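
One concrete way to surface such bias is to compare outcome rates across groups before a model goes live. The Python sketch below computes a simple disparate impact ratio on hypothetical loan decisions; the column names, the data and the "four-fifths" threshold are illustrative assumptions, not a prescribed regulatory standard.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, outcome: str, group: str,
                     privileged, unprivileged) -> float:
    """Ratio of favourable-outcome rates between two groups.

    A value near 1.0 suggests parity; values below ~0.8 are often
    flagged for review (the informal "four-fifths" rule).
    """
    rate_priv = df.loc[df[group] == privileged, outcome].mean()
    rate_unpriv = df.loc[df[group] == unprivileged, outcome].mean()
    return rate_unpriv / rate_priv

# Hypothetical loan decisions produced by a model
decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "gender":   ["m", "f", "m", "m", "f", "f", "f", "m"],
})

ratio = disparate_impact(decisions, "approved", "gender", "m", "f")
print(f"Disparate impact ratio: {ratio:.2f}")  # well below 0.8 here
```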

The evolution of AI in financial services

The traditional use case for AI in finance is automating and standardizing routine tasks, allowing businesses such as wealth managers to focus on enhancing their value proposition and strengthening client relationships. But today, AI is capable of much more.

Financial institutions can now use AI to generate personalized portfolio recommendations instantly, based on an investor's risk appetite, goals and preferences. Another innovative area is conversational banking, where AI systems use natural language processing (NLP) to understand clients' intent and respond far faster than a human agent could.
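
To make the intent-recognition side of conversational banking more tangible, the sketch below trains a toy classifier that maps client messages to banking intents. It is a minimal illustration assuming scikit-learn is available; the phrases, intent labels and model choice are stand-ins for the far richer NLP stacks used in production.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative training phrases mapped to hypothetical banking intents
phrases = [
    "what is my account balance",
    "show me my balance please",
    "transfer 100 to my savings account",
    "send money to John",
    "block my card it was stolen",
    "freeze my credit card",
]
intents = ["balance", "balance",
           "transfer", "transfer",
           "block_card", "block_card"]

# A simple bag-of-words pipeline stands in for the NLP layer
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(phrases, intents)

print(model.predict(["please freeze my card"]))  # e.g. ['block_card']
```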

Capabilities like these go beyond merely improving efficiency – they enhance the client experience and boost engagement. This expanded role further underlines the need for an ethical framework governing how financial firms deploy AI.

The future of AI regulation 

The use of AI is becoming increasingly widespread in financial services, but regulation has lagged considerably behind innovation, making it difficult for financial institutions to find guidance on AI best practice.

To maximize the value of AI, financial firms need to understand how the technology fits into the regulatory landscape. The European Commission (EC) is one of the first regulatory bodies in the world to produce a draft proposal on the use of AI. It classifies AI activity by risk, from unacceptable to minimal risk, with credit lending, for example, classified as high risk due to the potential for discriminatory outcomes. The EC's proposal will likely influence similar regulations in jurisdictions around the world.

The Bank of England and the FCA, through the AIPPF, have announced that they will publish a Discussion Paper on AI later this year to clarify how the current regulatory framework applies to AI and to gather industry views on how policy can best support safe adoption. This paper will be key to the development of AI regulation in the UK.

What should financial firms do now?

AI needs to be coupled with a robust monitoring framework that continuously improves performance and identifies and rectifies any shortcomings, including unethical outcomes. In line with EC recommendations, AI systems should primarily be deployed in low-risk areas – such as investment recommendations, client churn prediction and chatbots – to limit the severity of any unfair bias.

Where higher-risk AI systems are needed, firms must be particularly careful and implement even more stringent monitoring measures, as sketched below. By combining these safeguards, financial institutions can harness the efficiency of AI to gain a competitive edge while ensuring fair outcomes for their clients.
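
As a rough illustration of what such monitoring could look like in practice, the sketch below checks each scored production batch both for accuracy degradation and for a drop in outcome parity between groups. The thresholds, field names and metrics are illustrative assumptions; a real framework would be defined with the firm's risk and compliance functions.

```python
# A minimal sketch of a batch monitoring check. Thresholds and the
# fairness metric are illustrative, not a prescribed standard.
ACCURACY_FLOOR = 0.85   # investigate or retrain below this
FAIRNESS_FLOOR = 0.80   # informal "four-fifths" parity rule

def check_batch(y_true, y_pred, group, privileged):
    """Return a list of issues found in one scored production batch."""
    issues = []

    # Performance check: share of correct predictions in this batch
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    if accuracy < ACCURACY_FLOOR:
        issues.append(f"accuracy degraded: {accuracy:.2f}")

    # Fairness check: favourable-outcome rates across groups
    fav_priv = [p for p, g in zip(y_pred, group) if g == privileged]
    fav_unpriv = [p for p, g in zip(y_pred, group) if g != privileged]
    rate_priv = sum(fav_priv) / len(fav_priv)
    rate_unpriv = sum(fav_unpriv) / len(fav_unpriv)
    parity = rate_unpriv / rate_priv if rate_priv else 1.0
    if parity < FAIRNESS_FLOOR:
        issues.append(f"parity ratio below floor: {parity:.2f}")

    return issues

# Hypothetical labels, predictions and group membership for one batch
issues = check_batch(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 1, 1, 1, 0],
    group=["a", "a", "a", "b", "b", "b"],
    privileged="a",
)
for issue in issues:
    print("ALERT:", issue)
```

In production, checks like these would run automatically on every scoring batch, with alerts routed to model owners and to the compliance team for investigation.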
