How to implement ethical AI in financial services

By Matt Peake, Global Director of Public Policy at Onfido

 

From unlocking smartphones to accessing online applications and opening bank accounts, biometric technology has quickly become part of our day-to-day lives. With research showing that 80% of people find biometrics both secure and convenient, it’s no surprise that we are seeing widespread adoption across financial services.

Biometric verification is powered by artificial intelligence (AI) models trained on large datasets to quickly and accurately recognise, categorise and classify facial images. But with 68% of large companies in the UK having adopted at least one AI application, it is crucial that the technology continues to be implemented correctly – otherwise, it can have serious consequences for real people.

This means that in industries like financial services, where banks and payment service providers play a key role in financial inclusion and building trust within communities, AI has to be subject to ethical parameters. In fact, there are six key considerations that the industry must pay attention to when building ethical AI: fairness and bias, trust and transparency, privacy, accountability, security, and social benefit.

Failure to address any of these can have serious consequences for customers and businesses alike, including financial exclusion, obstacles to accessing global markets, and non-compliance with existing and upcoming regulations. That's why passing the responsibility to engineering, compliance or legal teams, or simply ignoring the issue, is no longer an option. Financial services leaders across all departments must take an active role in the performance of AI in their applications.

The importance of ethical AI

AI is used across multiple functions in finance – from fraud detection and risk management to credit ratings – and so plays an essential part in the processes that underpin everyday life. If AI is not ethical, it damages trust in the system and erodes the value of financial services.

When issues with automation arise, human intervention is often the solution. But a manual fallback isn't always the best answer, as humans are prone to biases of their own. It is well documented that bias can exist in systems seeking to distinguish the faces of people from ethnically diverse backgrounds. Left unaddressed, this can lead to the development of sub-optimal products, increased difficulty expanding into global markets, and an inability to comply with regulatory standards.

Where discrimination occurs, the consequences can be severe and include alienation from essential services. This is why Onfido takes a proactive stance on reducing bias, having published guidance on defining, measuring, and mitigating biometric bias, and participated in the UK Information Commissioner's Office sandbox, which published a report pioneering research into the data protection concerns associated with AI bias.
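In practice, measuring biometric bias often starts with comparing error rates across demographic groups: if one group of genuine customers is rejected far more often than another, the system warrants investigation. The sketch below illustrates the idea only; the group labels, data, and function name are hypothetical and do not represent Onfido's methodology.

```python
from collections import defaultdict

def frr_by_group(genuine_attempts):
    """False rejection rate (FRR) per demographic group.

    genuine_attempts: list of (group, accepted) pairs for verification
    attempts by legitimate customers only.
    """
    totals = defaultdict(int)
    rejections = defaultdict(int)
    for group, accepted in genuine_attempts:
        totals[group] += 1
        if not accepted:
            rejections[group] += 1
    return {g: rejections[g] / totals[g] for g in totals}

# Hypothetical outcomes: group B's genuine customers are rejected twice
# as often as group A's -- a gap that would merit investigation.
attempts = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", True)]
print(frr_by_group(attempts))  # {'A': 0.25, 'B': 0.5}
```

Real evaluations are more involved (statistical significance, intersectional groups, controlled capture conditions), but a per-group error-rate comparison of this kind is a common starting point for defining and measuring bias.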

Elsewhere, ethical AI is at the heart of regulation. The UK's AI governance proposals and the EU's AI Act outline how trust should be at the centre of how businesses develop and use AI. Following the considerations of ethical AI will not only be a requirement for financial services, but central to future growth. There is also an ongoing requirement for compliance with Anti-Money Laundering (AML) and Know Your Customer (KYC) regulations, holding financial institutions accountable for how they verify customers' identities. With an investment in ethical AI, financial services will improve the accuracy and reliability of their KYC processes and reduce both false acceptance and false rejection rates across the board.
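The two error rates mentioned above are standard verification metrics: the false acceptance rate (FAR) is the share of impostor attempts the system wrongly accepts, and the false rejection rate (FRR) is the share of genuine customers it wrongly turns away. A minimal sketch of how they are computed from labelled outcomes (the data and function are illustrative, not any vendor's API):

```python
def far_frr(results):
    """Compute (FAR, FRR) from verification outcomes.

    results: list of (is_genuine, accepted) pairs, where is_genuine is
    True for a legitimate customer and accepted is the system's decision.
    """
    impostors = [accepted for is_genuine, accepted in results if not is_genuine]
    genuines = [accepted for is_genuine, accepted in results if is_genuine]
    far = sum(impostors) / len(impostors)          # impostors wrongly accepted
    frr = sum(not a for a in genuines) / len(genuines)  # genuines wrongly rejected
    return far, frr

# Hypothetical outcomes: 4 genuine attempts (1 wrongly rejected) and
# 4 impostor attempts (1 wrongly accepted).
outcomes = [(True, True), (True, True), (True, False), (True, True),
            (False, False), (False, True), (False, False), (False, False)]
print(far_frr(outcomes))  # (0.25, 0.25)
```

The two rates trade off against each other via the match threshold, which is why an investment in model quality, rather than threshold tuning alone, is needed to reduce both at once.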

Successfully tackling ethical AI

There's no doubt that ethical AI is an evolving challenge that requires financial services to stay on top of their applications as new use cases emerge and deployment grows.

Developing and deploying ethical AI should be a company-wide initiative. It requires a top-down commitment to ensure ethical practices are embedded into every stage of application development and implementation. Such an approach is necessary to keep up with the challenges of developing and maintaining ethical AI. To achieve optimal outcomes, businesses must bring teams together to identify problems, define and formulate solutions, implement them, and track and monitor their progress.

Executive teams must understand the risks of developing AI that is not ethical and the long-term financial and reputational repercussions it could have. But they must also recognise that ethical AI is the gateway to innovation, driving accurate and efficient financial services that can lead to positive social outcomes – for the benefit of all customers, no matter who or where they are.

The impact of ethical AI

Following the six considerations of ethical AI will not only help financial service providers meet their regulatory obligations but will also help them build fair, transparent, and secure systems. It also demonstrates an ongoing commitment to safeguarding their customers.

However, failure to do so may result in long-term issues. It can lead to products and services that discriminate against customers and ultimately lack regulatory compliance. Therefore, keeping ethical considerations front of mind during each stage of AI development and implementation will ensure that customers are treated fairly and, in the long-term, will protect and improve brand reputation, building trust and loyalty. It’s a worthwhile goal that creates a better world for everyone – both in terms of the performance of AI systems and the impact of building them.
