BANKING ON AI TO DELIVER FRICTIONLESS CUSTOMER JOURNEYS

Rob Mason, Chief Technology Officer, Applause

Banks and financial services companies play their cards close to their chest when it comes to IT, and Artificial Intelligence (AI) is no different. They prefer to rely on bespoke systems built internally, which can be a drawback for chatbots and smart voice assistants designed to interact with customers. The Machine Learning (ML) algorithms driving these systems need exposure to diverse data sets to prepare them for real-world interactions. Building AI systems behind closed doors might make them exclusive and secure, but without the proper training and testing, they could cause friction with the customer.

Today, most digital experiences, such as streaming or online shopping, are simple and intuitive, but digital banking is perceived to be slow and complex. Smart apps, chatbots and voice assistants are designed to streamline customer services, but all too often, they lack the training data ML algorithms need to respond to real-world situations. This can be corrected by sourcing vast amounts of data to improve an ML algorithm’s capacity to learn, interact successfully with human beings and make accurate predictions. The source of that data is people: large numbers of them are required to cover the full range of customer use cases. For a bank or a financial services company, that could mean any of the services that cater for a customer’s digital finance needs.

Financial services companies place their trust in crowds

The pandemic has accelerated digitalization across all walks of life, and the financial services space is no exception. As already mentioned, there are many areas of cloud and IT where financial services companies excel. You only have to look at the innovations in mobile and online banking: the constant stream of new digital banking products, and the countless links to merchants and other service providers through API platforms that make paying for goods simple and straightforward. However, with so many banking, insurance and credit card customers now dependent on digital channels, self-service capabilities have become central to the digital banking experience. That process relies on the performance of AI chatbots. If they’re unable to interpret customer requests correctly or fast-track them to a specific website, application or customer service agent, they can have a detrimental effect on a brand’s reputation.

Financial services companies can reduce that risk and increase levels of customer engagement through a process of rigorous crowdtesting that identifies any gaps or flaws across the different digital channels customers use. This technique involves a global community of vetted testers trialing new products before they’re released or helping to refine current applications and services. Either way, crowdtesting provides businesses with valuable insights into how customers interact with their products, helping them improve user experiences and deliver frictionless customer journeys. It also augments in-house QA and testing resources within financial services companies and boosts the smaller, more disruptive fintech brands that tend to have limited resources.

A one-size-fits-all approach doesn’t really apply here. Banks and financial services companies have different tiers of customers, each with complex requirements and different expectations of what they want from digital services. Crowdtesting can be tailored to suit the needs of a specific company by selecting vetted testers that match the profile and characteristics of its customers. They can quickly identify any bugs or processes that are causing friction and feed that data directly back to in-house QA teams. However, the impact of crowds on AI/ML is much more far-reaching.

 

Data diversity is key to training ML algorithms

It takes time and effort to train and test an ML algorithm, and the training process shouldn’t be restricted to a lab either. For a start, in-house teams of developers, data scientists and QA specialists tend to share the same age range, gender and background. They’re not representative of the wider population, and despite their best intentions, their inherent biases will feed into the underlying algorithm. The best way to avoid this is to widen the net and ensure the training data has a high degree of quantity, quality and diversity.
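One simple, hypothetical way to check whether a training set leans too heavily on one group is to score the balance of each demographic attribute. The sketch below (illustrative only; the attribute names and thresholds are assumptions, not a description of any particular vendor’s process) uses normalised entropy, where 1.0 means perfectly even representation and values near 0 mean a single group dominates:

```python
from collections import Counter
from math import log2

def balance_score(values):
    """Normalised entropy of a categorical attribute.
    Returns 1.0 when every category is equally represented,
    and approaches 0.0 when one group dominates the data set."""
    counts = Counter(values)
    total = sum(counts.values())
    if len(counts) < 2:
        return 0.0  # a single group has no diversity to measure
    entropy = -sum((c / total) * log2(c / total) for c in counts.values())
    return entropy / log2(len(counts))

# Illustrative sample: age bands of contributors to a training set,
# heavily skewed towards one group.
ages = ["18-30"] * 80 + ["31-50"] * 15 + ["51+"] * 5
print(round(balance_score(ages), 2))
```

A low score on any attribute is a signal to recruit more contributors from the under-represented groups before the algorithm is trained further.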

Crowdtesting allows financial services companies to source data at scale, enabling them to select from a diverse pool of participants made up of specific demographics, including gender, race, native language, location, skill set and any other filters that apply. The key is to expose the algorithm to different data sets and inputs, made up of authentic voices, documents, images and sounds. This tried-and-tested model ensures that the AI doesn’t suffer from bias and has the capacity to continuously learn and improve.
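In practice, matching testers to a project comes down to filtering a profiled pool on the attributes that matter. The minimal sketch below is an assumption about how such a filter might look, with hypothetical field names rather than any real platform’s API:

```python
# Hypothetical tester records; the field names are illustrative only.
testers = [
    {"id": 1, "native_language": "es", "location": "MX", "device": "iOS"},
    {"id": 2, "native_language": "en", "location": "UK", "device": "Android"},
    {"id": 3, "native_language": "es", "location": "ES", "device": "Android"},
]

def select_testers(pool, **criteria):
    """Return the testers whose profile matches every requested
    attribute, e.g. the demographic filters a project specifies."""
    return [t for t in pool if all(t.get(k) == v for k, v in criteria.items())]

# Example: recruit native Spanish speakers for a voice-assistant project.
spanish_speakers = select_testers(testers, native_language="es")
print([t["id"] for t in spanish_speakers])  # -> [1, 3]
```

The same filter composes naturally: adding `device="Android"` to the call narrows the match further, which is how a project can be scoped to the exact customer profile it needs to represent.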

For example, a recent project to train a smart voice assistant required over 100,000 different voice utterances, delivered by 972 different people assembled remotely to train the algorithm. For another project, 1,000 people contributed handwritten documents to provide the unique samples needed to ensure an algorithm could read human handwriting.

 

AIs show real intelligence

When you combine the diverse sets of training data with the community’s ability to spot any bias or other deficiencies inherent in AI-powered services during the test phase, the value of the crowdtesting model becomes clear. It can help banks and financial services companies develop ML algorithms that adapt to real-world conditions, reduce testing costs and release faster. By drawing on real-world experiences, it’s possible to deliver exceptional levels of quality to customers across global markets, paying careful attention to local language and culture, and gaining the well-earned trust and loyalty of those customers in return.
