AI AND HOW IT’S LEADING THE FIGHT AGAINST FRAUD IN THE FINANCIAL SECTOR

Geoff Clark, Managing Director, Aerospike EMEA

Much like many other sectors, financial institutions have accelerated their digital transformation projects since the beginning of the pandemic. Lockdown meant that customers could no longer visit local branches or meet in person with their financial advisor, so financial institutions had no choice but to find alternative ways to serve their customers.

We saw banks quickly adapt and improve their automation tools to interact with their customers online. Technologies that enable chatbots, credit card brokerage, contactless payment cards, digital verification for onboarding, online insurance applications, mobile apps, recommendation engines, robo-investing and robotic process automation (RPA) were just some of the many solutions deployed. Here in Europe, Ernst & Young (EY) reported a 72% increase in the use of FinTech apps since the start of COVID-19.


Cybercriminals typically opt for the lowest-hanging fruit, and as financial institutions scrambled to expand their digital services, criminals looked to identify and exploit any weakness in the infrastructure providing the backbone for these technologies. Exploiting the vulnerabilities of financial institutions is nothing new; they have long been a coveted target for fraudsters, mainly because of the wealth of sensitive personal and financial information they hold. Throw into the mix pandemic relief funds, increased unemployment benefits, and stimulus payments, and you have the perfect playground for fraudsters.

A recent report found that every dollar lost to fraud costs financial services companies as much as $3.78, up from $3.25 in 2019. But fraud's impact goes much deeper than the financial loss. Investigating and prosecuting fraud drains company resources, damages reputations, and puts customer retention at risk. For these reasons alone, it is imperative that the appropriate systems and processes are in place to combat fraud.

 

Analysing Fraud

The majority of financial institutions still rely on dated rule-based systems to mitigate fraud risk. These systems can consist of thousands of predefined rules that store, sort, and manipulate data to find fraud patterns. For example, a rule might state that if a credit card is used in one state and then again in a different state within a 30-minute window, the transaction is likely fraudulent and should be declined.
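As a rough illustration, here is what such a rule might look like in code. This is a minimal sketch only; the field names, card identifier, and 30-minute threshold are illustrative assumptions, not taken from any real system.

from datetime import datetime, timedelta

# Illustrative rule: decline if the same card is used in two different
# states within a 30-minute window. All field names are hypothetical.
RULE_WINDOW = timedelta(minutes=30)

def violates_location_rule(prev_txn: dict, new_txn: dict) -> bool:
    """Return True if the new transaction should be declined under the rule."""
    same_card = prev_txn["card_id"] == new_txn["card_id"]
    different_state = prev_txn["state"] != new_txn["state"]
    within_window = new_txn["timestamp"] - prev_txn["timestamp"] <= RULE_WINDOW
    return same_card and different_state and within_window

prev = {"card_id": "4111-01", "state": "NY", "timestamp": datetime(2021, 3, 1, 12, 0)}
new = {"card_id": "4111-01", "state": "CA", "timestamp": datetime(2021, 3, 1, 12, 20)}
print(violates_location_rule(prev, new))  # True -> decline under this rule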

Rule-based systems are static, hard-coded, and time-consuming to update, and are often one step behind the sophisticated techniques fraudsters use. When fraud occurs, the typical response is to create another rule that prevents another attack, but it’s often too late.

Fraudsters continue to find new ways to commit fraud that rules don’t capture.

The trend we’re seeing from financial institutions is to replace rule-based systems with AI and machine learning-based systems because they’re more effective. These systems are largely self-learning, and with far more data now available, the more information they’re fed, the more effective they become. Rather than using tens of data attributes, as rule-based systems do, AI and machine learning-based systems can analyse hundreds of data attributes over enormous data sets and longer time frames to automatically detect, with higher accuracy, unusual behaviours that indicate fraud. For example, Barclays Bank has implemented AI systems to detect and mitigate fraud, improving the customer experience in the process by reducing false positives and false negatives.
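To make the contrast with hand-written rules concrete, the sketch below trains a simple machine-learning classifier on synthetic transaction data using scikit-learn. The features, data, and choice of model are illustrative assumptions, not a description of any bank’s production system.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Synthetic stand-ins for the many attributes a real system would use:
# amount, hour of day, distance from home, merchant risk score, and so on.
X = rng.random((10_000, 6))
y = (X[:, 0] * X[:, 2] + 0.3 * X[:, 5] + rng.normal(0, 0.05, 10_000) > 0.55).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Precision and recall here reflect false positives and false negatives.
print(classification_report(y_test, model.predict(X_test)))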

AI and machine learning-based systems are heading toward explainable AI (XAI), an emerging field of machine learning that addresses how AI systems arrive at their black-box decisions. Financial institutions know the inputs and outputs of these systems, but they lack visibility into how the results were reached.

Building XAI into AI systems enables banks to understand how decisions are made and create better models to improve their systems by removing bias. For example, suppose a fraud system declines a legitimate customer’s credit card transaction. In this situation the financial institution needs to understand why the false positive has occurred so it can further refine its model.
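One common way to add this kind of visibility is to measure how much each input attribute contributed to the model’s decisions. The sketch below does this with permutation importance on a synthetic model; the feature names and data are hypothetical stand-ins for the attributes a real fraud model would use.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["amount", "hour", "distance_from_home", "merchant_risk"]  # illustrative

X = rng.random((5_000, 4))
y = (X[:, 0] + 2 * X[:, 2] + rng.normal(0, 0.1, 5_000) > 1.4).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")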

XAI also has data privacy in its favour, particularly when it comes to compliance. Under the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), and with other data privacy laws coming, financial institutions need to comply with specific mandates. They must be able to explain how they use a customer’s personal information and how they came to a decision such as declining a credit card transaction. Overlaying XAI on top of their AI systems ensures they have far greater visibility into how decisions are being made by AI/ML systems.

 

Constructing a Fraud System Architecture

To emulate some of the industry’s more innovative organisations, financial institutions must understand and pursue best practices when building their AI-based fraud systems. They should work alongside technology organisations, but also with their line-of-business managers, to understand how fraud is impacting their business, where their greatest weaknesses lie, how customer satisfaction can be improved, and how they can incorporate customer fraud/risk metrics into their customer analytics to improve their omnichannel marketing campaigns. Customer data collected and analysed by fraud teams is among the most robust repositories of customer information, making it invaluable to marketers.

When looking to build a world-class system, financial services firms should consider the following steps:

  • The fraud system will likely need to consume hundreds of terabytes of data, perhaps even petabytes for the largest firms.
  • Data must be continuously updated in real time from many sources such as internal customer and transaction data from storefronts, web pages, and mobile devices, as well as third-party demographic, behavioural, geo-location, identity management, credit bureau, and other data types.
  • This data will usually need to be prepared, e.g., cleansed, standardised, and normalised, to convert it into a form that AI/ML models can more easily digest and understand.
  • The data needs to move back to the central data platform to be further enriched.
  • At this point the financial institution can fine-tune the model parameters, test and select the optimal machine learning algorithms, feed them with data to learn the underlying patterns, and validate the model’s accuracy on data that was not part of the training set (a minimal sketch of this flow follows the list).
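Below is a minimal sketch of the preparation, training, and validation flow described above, using scikit-learn and synthetic data. The attributes, model, and metric are assumptions chosen for illustration rather than a prescribed implementation.

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic stand-in for cleansed transaction attributes and fraud labels.
X = rng.random((20_000, 8))
y = (X[:, 1] + X[:, 4] + rng.normal(0, 0.1, 20_000) > 1.2).astype(int)

# Hold back data that is not part of the training set for validation.
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.25, random_state=0)

# Preparation (standardisation) and the model are chained in one pipeline.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)

# Validate accuracy on unseen data before deployment.
auc = roc_auc_score(y_valid, pipeline.predict_proba(X_valid)[:, 1])
print(f"Validation ROC AUC: {auc:.3f}")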

Once these steps are completed and the institution is satisfied with the results, the model can be deployed to act in the microsecond moments that are necessary to fight fraud.

As technology evolves at such a fast pace, all organisations must aim to implement a fraud solution that can combat increasingly sophisticated fraudsters while incorporating the following key elements:

  1. Large data sets (terabytes, even petabytes) consisting of internal company data supplemented with third-party data;
  2. Highly optimised and validated AI/ML algorithms that detect fraud and minimise false positives and false negatives;
  3. A real-time data platform capable of running these AI/ML algorithms across enormous data sets with sub-millisecond response times, providing customers with the fast experience they expect (a minimal sketch of such a lookup follows the list).
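As a simple illustration of the low-latency lookups such a platform performs at decision time, the sketch below stores and retrieves a per-card feature record with the Aerospike Python client. The namespace, set, bin names, and cluster address are hypothetical and would differ in a real deployment.

import aerospike

# Hypothetical cluster address, namespace ('fraud'), set ('card_features'),
# and bin names; adjust for a real deployment.
config = {"hosts": [("127.0.0.1", 3000)]}
client = aerospike.client(config).connect()

key = ("fraud", "card_features", "card-4111-01")

# Write the latest engineered features for this card.
client.put(key, {"txn_count_24h": 17, "avg_amount": 42.5, "risk_score": 0.12})

# Read them back at decision time; a single-record read like this is the
# kind of operation expected to complete in well under a millisecond.
_, _, bins = client.get(key)
print(bins)

client.close()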

 

 

 
