Why banks and PSPs must use adaptive thinking to tackle fraud and scams in 2024

Steve Goddard, Fraud Subject Matter Expert, Featurespace

As we start a new working year in 2024, tackling fraud remains a key issue for banks and payment service providers (PSPs). 2023 closed out with several high-profile cybercrimes high on the news agenda, including a scam targeting the partners and customers of Booking.com.

The good news, though, is that methods of fighting fraud are also developing, and banks and PSPs that get on the front foot can give themselves the best chance of protecting their customers in 2024. Firstly, let’s look at what is on the fraud horizon over the next 12 months – and how this can be tackled.

The scams we'll see in 2024 have deep roots

There isn’t ever going to be a single solution that will stop all scams. As soon as the industry moves to shut one type of fraud down, criminals change tactics. We saw this with Chip and PIN, for example; when this was mandated in 2006 to tackle debit and credit card fraud, scammers adapted their strategy, finding new ways to commit financial crimes through methods such as online scams and card-skimming techniques. As a result, the financial services industry at large must also adopt new ways of tackling new types of scams.

Scams evolve as technology evolves; the age-old Spanish Prisoner letters, for example, developed into emails from a wealthy prince offering you money. And as people become more and more reliant on technology, they become more susceptible to these scams.

Investment scams that offer victims a way to improve their lifestyle will always appeal to people. Maybe they will be drawn in by the promise of the latest crypto token where they can make a 200% return on their investment, or perhaps they’ll fall victim to a purchase scam on Facebook Marketplace promising goods and services that never arrive. As long as there are trusting individuals, there will be scam victims – but victims deserve to be protected, and by utilising technology and better collaboration, banks and PSPs can work to minimise the risk of fraud during 2024.

How banks and PSPs can fight back

Firstly, education should continue to be central to every financial institution’s anti-fraud strategy. Banks and PSPs are doing a good job in educating customers about the risks of fraud. A strong anti-fraud programme is good for consumers – but it also warns criminals that we’re on to them, forcing them to change tactics. This means banks need to be ready to adapt to the fraudsters’ next moves, and to update their educational efforts for customers accordingly.

Key to combating fraud is better collaboration between banks, social media companies, telcos and law enforcement agencies. Most fraud originates on social media sites, so it’s vital they are part of the fight, while it is just as important for banks and governments to share what they are seeing with law enforcement.

Additionally, deploying the right technology plays a vital role for banks wanting to better protect their customers. Advanced machine learning techniques level the playing field, as they allow for analysis of a huge pool of data and transactions. This means banks can quickly understand individual customers’ spending patterns and what is normal for them, as well as any out-of-the-ordinary behaviour.
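To make the idea concrete, here is a deliberately simplified sketch in Python – not a production model, and not how any particular vendor’s system works – of scoring a new transaction against a customer’s own spending history using a basic z-score:

```python
from statistics import mean, stdev

def anomaly_score(history: list[float], new_amount: float) -> float:
    """Score how unusual a new transaction amount is for this customer,
    measured as a z-score against their own spending history."""
    if len(history) < 2:
        return 0.0  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0  # identical amounts so far; no basis for a z-score
    return abs(new_amount - mu) / sigma

# A customer who normally spends 25-55 suddenly sends 950: a very high score.
history = [25.0, 40.0, 32.5, 55.0, 48.0, 30.0]
print(round(anomaly_score(history, 950.0), 1))  # large value -> worth a closer look
```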

Scammers attempt to play on human emotions, understanding our fallibility when making decisions; we can easily fall into a trap and authorise a fraudulent transaction. Artificial intelligence (AI) and machine learning remove emotion from the decision, assessing the situation objectively, and enable banks to ask the customer if they’re sure they want to go ahead with a transaction.
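As a simple illustration of how an objective risk score can translate into that kind of customer check, the sketch below maps a score to an action; the thresholds are purely illustrative, not recommended values:

```python
def next_action(risk_score: float,
                confirm_threshold: float = 0.6,
                block_threshold: float = 0.9) -> str:
    """Map a model risk score (0 = benign, 1 = near-certain fraud)
    to an intervention. Thresholds here are illustrative only."""
    if risk_score >= block_threshold:
        return "hold the payment and refer it to the fraud team"
    if risk_score >= confirm_threshold:
        return "pause and ask the customer to confirm they want to proceed"
    return "approve"

print(next_action(0.72))  # -> pause and ask the customer to confirm they want to proceed
```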

The use of Generative AI to minimise fraud

Generative AI is artificial intelligence that can create something new from existing data, but it is also great at discovering and summarising complex information from huge datasets. It is a useful new technology that banks and PSPs can utilise in the fight against fraud.

The potential uses of Generative AI for banks and PSPs are limitless – whether generating training data for machine learning models to speed up implementations, identifying and filling gaps in analytics strategies, or summarising consumer behaviour – and we are currently only scratching the surface of its usability. Fraud teams can lean on Generative AI to do the heavy lifting: an additional team member that works 24/7 to interrogate billions of data points, allowing human operatives to make informed decisions more quickly.
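As a toy illustration of the synthetic-training-data idea, the sketch below uses simple resampling with jitter as a stand-in for a true generative model; the field names and values are invented for the example:

```python
import random

# Known fraud cases are often scarce; a generative approach can augment them.
# Simple resampling with jitter stands in here for a real generative model.
fraud_cases = [
    {"amount": 900.0, "hour": 2, "new_payee": 1},
    {"amount": 1200.0, "hour": 3, "new_payee": 1},
    {"amount": 750.0, "hour": 1, "new_payee": 1},
]

def synthesise(cases, n=5, jitter=0.15):
    """Sample n synthetic fraud-like rows around the observed cases."""
    rows = []
    for _ in range(n):
        base = random.choice(cases)
        rows.append({
            "amount": round(base["amount"] * random.uniform(1 - jitter, 1 + jitter), 2),
            "hour": base["hour"],
            "new_payee": base["new_payee"],
        })
    return rows

print(synthesise(fraud_cases, n=3))
```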

Often, the customer service team will be the ones speaking to victims in the first instance, so they must know how these scams work in order to provide better aftercare. Too often, victims are shocked that they have fallen for a scam; but these are insidious, deceptive crimes that can be very tricky to spot, perpetrated by career criminals. To help combat the feelings of shame or blame felt by customers who have been victimised, banks should also change the narrative in terms of the words they use: these customers haven’t simply fallen for a scam or been duped. They were the victims of a crime.

Banks would be well advised to have flexible, adaptive AI-based solutions in place to help protect their customers. These systems spot fraud more quickly, even as fraudsters change their tactics, offering proactive – rather than reactive – protection. AI can scale up as attack volumes increase and make faster and better decisions than legacy solutions can. Fighting fraud used to be rules-based; machine learning is much more adaptive. While this flexible approach can make some banks and PSPs wary, it’s a very effective, innovative way of combating scams and should be embraced.

Better data sharing can curb fraud

It is common practice for criminals to share data and collaborate. So how can banks do the same? Adhering to regulations is important, but there are technologies available that can be used to share intelligence and data safely and securely without breaching GDPR, the Data Protection Act or any other data protection legislation.

This sharing enables stakeholders to learn from each other: if a social media company sees a particular individual has signed up to 20 different accounts, or if a telco knows that an individual has 15 phone numbers associated with them, that’s useful information for a bank. Banks and PSPs would be well advised to look at using Privacy Enhancing Technologies (PETs) as a way to share and gain intelligence from data to make better decisions in a secure way. Techniques such as k-anonymity and local differential privacy allow AI models to work with sensitive data without organisations having to reveal, share, or combine their raw data.
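To give a flavour of the kind of technique involved, the sketch below shows local differential privacy via classic randomised response: each individual report is noisy, so no single answer is exposed, yet the aggregate rate can still be estimated. The parameters and the risk indicator are illustrative only:

```python
import random

def randomised_response(true_value: bool, p_truth: float = 0.75) -> bool:
    """Report the true answer with probability p_truth, otherwise a coin flip,
    so no single report reveals any one customer's real answer."""
    if random.random() < p_truth:
        return true_value
    return random.random() < 0.5

def estimate_rate(reports: list[bool], p_truth: float = 0.75) -> float:
    """Unbiased estimate of the underlying rate from the noisy reports:
    observed = p_truth * true_rate + (1 - p_truth) * 0.5, solved for true_rate."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# 10,000 customers, of whom roughly 3% genuinely match some risk indicator.
true_flags = [random.random() < 0.03 for _ in range(10_000)]
reports = [randomised_response(flag) for flag in true_flags]
print(round(estimate_rate(reports), 3))  # close to 0.03, without exposing individuals
```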

Data, intelligence, knowledge and experience must also be shared internally between different teams in a bank. Our advice to financial institutions is for the customer service team and the fraud team to work more closely together to understand what fraud is and know what to look for. Then, as an industry, we can start to have more collaborative working groups to discuss best practices.

Tackling first-party fraud in 2024

Everyday people can be fraudsters too. First-party fraud is effectively when someone gives false information, or misrepresents themselves, in order to make money – for example, they order something online and claim it hasn’t turned up when it has. They keep the goods and claim the payment back as well; you could think of it as online shoplifting. It’s a type of crime that is on the rise. A recent Cifas survey reported that one in eight people (12% of the UK adult population) were perpetrators of first-party fraud in the last 12 months – an increase from 10% in 2022 and 8% the year before.

That’s because there’s so much ecommerce traffic these days, and often delivery drivers leave the package outside rather than knocking on the door. Recipients can just say they didn’t receive it and there’s no evidence that it was delivered; it seems almost like the perfect crime. Some courier companies will take pictures of delivered items, but people can still claim that the package was stolen from outside their house.

Banks can help fight this type of fraud by monitoring chargebacks and incoming and outgoing payments. Through understanding and analysing their customers’ transactional behaviour, they can predict which transactions might be first-party fraud. Good internal collaboration is important, and we encourage banks and PSPs to continue it, as it helps in fraud detection and prevention. And if the increase in cases of first-party fraud becomes unmanageable from a human-interaction perspective, we encourage banks to look at AI and machine learning solutions to carry out the analysis.
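As a toy illustration of chargeback monitoring, the sketch below flags customers with repeated “item not received” claims; the reason codes and threshold are hypothetical, and a real programme would weigh this alongside much wider behavioural signals:

```python
from collections import Counter

def flag_repeat_claimants(chargebacks, min_claims=3):
    """Flag customers whose 'item not received' chargebacks recur often enough
    to warrant review. The reason codes and threshold are illustrative only."""
    counts = Counter(c["customer_id"] for c in chargebacks
                     if c["reason"] == "item_not_received")
    return [customer for customer, n in counts.items() if n >= min_claims]

chargebacks = [
    {"customer_id": "C1", "reason": "item_not_received"},
    {"customer_id": "C1", "reason": "item_not_received"},
    {"customer_id": "C1", "reason": "item_not_received"},
    {"customer_id": "C2", "reason": "fraudulent_card_use"},
]
print(flag_repeat_claimants(chargebacks))  # ['C1']
```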

Takeaway: the human (and robot) foot can’t come off the gas

Fraud is an ongoing fight – but the good news is that banks and PSPs have cutting-edge technology at their disposal to help them combat it. When a new technology comes to market, our advice is for banks to examine it straight away and understand how it can be exploited by criminals, and how they can educate and forewarn their customers and internal teams about it. AI and machine learning will help us to adapt and put protective systems in place quickly; but it’s up to us humans to be on the ball as well.
