UNLEASHING AI’S POTENTIAL IN FINANCIAL SERVICES

Luis Rodriguez, Chief Product & Innovation Officer at Strands, explores how financial institutions can overcome the barriers to AI deployment and capitalise on its true potential

 

The idea that machines have the potential to make better decisions than we humans do is nothing new. The recent explosion of available data, coupled with advances in technology, has elevated AI’s commercial credibility, which, unsurprisingly, the financial services industry is keen to take advantage of.

Many banks have already begun their AI journey, albeit with relatively narrow applications. At the front end, for example, chatbots have started to replace humans in digital customer interactions. Internally, the industry is investing heavily in AI-enabled solutions for automated threat intelligence and prevention, and for fraud analysis and mitigation.

Despite these steps, however, considerable challenges remain before AI’s real potential can be unleashed.

 

Luis Rodriguez

Data, data everywhere

Much of this potential lies in unlocking the value hidden in the vast amounts of data banks hold, by translating it into individualised products and services for consumers. Doing so requires financial institutions to break down silos, strategically evaluate existing and new data sources, and create a world-class governance model to prevent ‘black box’ AI.

The ‘black box’ problem is a key challenge. It describes the situation in which no one can explain how data has been processed to reach a specific outcome. This is particularly pertinent to financial services, where such outcomes affect the lives of customers. As AI is deployed in credit scoring, mortgage applications and lending practices, for example, banks and other lenders must be able to evidence the rationale behind their decision making.
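To illustrate the kind of transparency this implies, the sketch below fits a deliberately simple, interpretable credit-scoring model and surfaces how each input contributed to a single decision. The feature names, data and logic are hypothetical, and a real deployment would sit behind far more rigorous data and model governance.

```python
# Illustrative sketch only: a simple, interpretable credit-scoring model whose
# decisions can be traced back to individual inputs. All names and data here
# are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "missed_payments"]

# Hypothetical training data: 1,000 past applicants with approve/decline labels.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - X[:, 1] - 2 * X[:, 2] + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For one applicant, each feature's contribution is coefficient * value, so the
# rationale behind the decision can be shown to a customer or a regulator.
applicant = np.array([0.4, 1.2, -0.3])
contributions = dict(zip(features, model.coef_[0] * applicant))
decision = "approve" if model.predict(applicant.reshape(1, -1))[0] == 1 else "decline"
print(decision, contributions)
```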

They must also avoid the pitfalls of data bias and discrimination. Data sets can harbour hidden biases, just like the humans who create them and the algorithms trained on them. If banks lose even partial sight of how their AI solutions arrive at decisions, how can they guarantee impartiality in the process?
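One way to keep sight of impartiality, even when a model itself is opaque, is to monitor outcomes directly. The sketch below shows a minimal check of approval rates across a hypothetical protected attribute; the data and the threshold are invented, and a genuine fairness audit would use more rigorous metrics than this.

```python
# Illustrative sketch only: comparing approval rates across a hypothetical
# protected attribute to flag possible bias in model outcomes.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=5000)                            # hypothetical attribute (0 or 1)
approved = rng.random(5000) < np.where(group == 0, 0.62, 0.55)   # simulated model decisions

rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()
gap = abs(rate_0 - rate_1)

print(f"approval rate, group 0: {rate_0:.2%}")
print(f"approval rate, group 1: {rate_1:.2%}")
print(f"demographic parity gap: {gap:.2%}")
if gap > 0.05:  # invented threshold; a real policy would set this deliberately
    print("Warning: gap exceeds threshold - review the model and its training data.")
```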

Should such instances occur, accountability will remain with the bank’s people, so the development of ethical policies that can steer the implementation of decision-making AI solutions is becoming increasingly important. Banks that champion transparency here will be the first to succeed.

 

Getting personal

Outside of financial services, AI has been powering increasingly seamless and customised consumer experiences for years, raising the stakes for what consumers expect from other services. Historically, banks have enjoyed the uncontested loyalty of their customers, thanks to their reputations for trust and security. But the drivers of loyalty are changing quickly. A new generation of digital challengers continues to gather momentum, luring customers to their ranks with new user experiences that are powered, at least in part, by AI. Banks now need to think carefully about how they respond. Leveraging behavioural economics to enable the deep personalisation of their banking services is one avenue under widespread consideration.

Despite concerns around data privacy, a study by Accenture* found that 60% of consumers would be willing to share personal data, such as location data and lifestyle information, with financial services providers if it resulted in lower pricing on products or delivered other benefits, such as faster turnaround on loan approvals.

But just because consumers are willing to hand over data doesn’t mean it is right for the bank to use it. Where should the line be drawn? As data converges, it will become easier to create individual consumer profiles and assemble a tailored offering dependent on each consumer’s circumstances. But should FIs really have this level of access to a consumer’s personal life? And what about those who don’t want to share these details? Will they be penalised?

 

Human skills still needed for AI success

Digital transformation has not yet changed the fact that businesses are still built, managed and kept alive by people and, reassuringly, the true potential of AI cannot (yet) be realised without us. That said, serious investment in AI skills is needed if banks are to make proper use of its commercial potential. Many banks, particularly resource-stretched tier two and tier three institutions, may need to look to collaborative partnerships with specialist fintechs if they are to keep pace with AI’s development.

 

What about the regulators?

Right now, there is no specific regulation or legislation governing the use of AI. As with any emerging technology, regulators apply existing frameworks to ensure that any firm engaging with the technology has the right risk controls in place. It is likely that regulation will adapt to address the new issues surrounding AI, but no one can be sure exactly how that will play out, or at what speed. Financial institutions should think ahead here and try to forecast how future regulation may affect their AI services. Collaborating with other institutions to create a general framework and set of guidelines could help alleviate the impact of future regulation. Firms should also look at how AI processes are implemented internally so that they remain compliant throughout their operations.

 

What’s next?

Making sense of AI for FIs requires a collaborative industry effort. To explore this topic further, Strands, in collaboration with the global industry association Mobey Forum and ESADE, is hosting an event from 27 to 29 November to bring together players from across the ecosystem. Titled AI In Financial Services: Unleashing the Potential, it will cover five key themes related to AI: ethics; infrastructure; personalisation; regulation and legislation; and the challenges with data. Speakers include Elena Alfaro, Global Head of Data and Open Innovation, BBVA; Jason Mars, CEO, Clinc; and Oriol Pujol, Vice President, University of Barcelona.

 
