AI-Powered Fraud: How Financial Institutions Can Fight the Deepfake Era in 2025

By Simon Horswell, Senior Fraud Specialist at Entrust

Financial institutions face mounting pressure to tackle the evolving world of cybercrime and fraud. Armed with the growing power of generative AI, fraudsters are turning from traditional tactics like physical counterfeit documents to hyper-realistic deepfakes and synthetic identities.

The Entrust 2025 Identity Fraud Report uncovers a stark reality: deepfake attacks are now happening every five minutes, while incidents of digital document forgery have surged by an astonishing 244% over the last year, overtaking physical counterfeits for the first time.

With the financial sector a primary target for sophisticated fraud attacks, and with GenAI-empowered cyber threats on the rise, it has never been more important for financial institutions to protect digital identities and transactions.

The Rising Tide of AI-Powered Fraud

Entrust’s Identity Fraud Report showed that digital forgeries now account for 57% of fraud cases. What was once a game run by professional hackers and criminals has become a trading network where seasoned fraudsters monetise their tools and sell their stolen data, all whilst sharing knowledge with amateur cybercrooks online. With sites like Onlyfakes rising in popularity, financial services now face an increasingly complex fraud-as-a-service ecosystem.


Whilst organisations use generative AI tools like ChatGPT and Gemini to streamline communication and boost productivity, cybercriminals are harnessing the same technology, building malicious clones like WormGPT to run sophisticated scams. Not only are digital forgeries like fake passports and phishing emails becoming easier and cheaper to make, but the rise in AI tools has made digital fraud more scalable. Fraudsters can now manipulate data with a click of a mouse, using AI editing tools and readily available templates found on the dark web or on encrypted messaging platforms.

A criminal might take a stolen image of a victim’s genuine driving licence and use the portrait, together with generative AI, to project a deepfake over their own face. This kind of face-swap attack now accounts for the majority of selfie biometric fraud, with deepfake videos a close second at 40%. By making these scams almost imperceptible, AI has opened the door for bad actors to exploit our trust.

Entrust’s Identity Fraud Report also reveals the three main targets of cyber fraud: cryptocurrency, lending and mortgages, and traditional banks. Cryptocurrency topped the list, with fraud rates reaching 9.5% in 2024, meaning roughly one in 10 account activities, such as onboarding, was expected to be fraudulent. Lending and mortgages, and traditional banks, follow closely, with fraud rates of 5.4% and 5.3% respectively, alongside a 13% rise in fraudulent onboarding, likely driven by economic instability, high inflation, and easier access to AI tools for fraudsters.

Strategies to Fortify Security Defenses

The key to fighting fraud lies in implementing robust identity verification (IDV) processes at onboarding, where fraud is easiest to prevent. By stopping the cybercrook at the first knock at the door, businesses can prevent their house from being robbed or going up in flames. In other words, stopping fraud at onboarding strengthens security down the line.

The second step is to make sure you have a house alarm. A multifaceted approach to security, combining document verification, repeated-attribute detection and biometric checks, means you can eliminate a risk before it ever becomes a threat. Fraud prevention should not stop at onboarding, either; it is integral to monitor the entire customer lifecycle. Fighting fraud begins with understanding who your customer is: confirming someone’s identity is a must to prevent money laundering and keep customers safe.
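To make this layered approach concrete, here is a minimal sketch of an onboarding check that gates an applicant through each layer in turn. The field names, scores and thresholds are hypothetical placeholders for illustration; they are not any vendor’s actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Applicant:
    document_score: float   # output of a document-forensics model (0-1)
    attribute_reuse: int    # prior applications sharing this phone/device/face
    face_match: float       # selfie-to-document-portrait similarity (0-1)
    liveness: float         # liveness / deepfake-detection score (0-1)

# Each layer is a named predicate; an applicant must clear every one.
# Thresholds are illustrative, not tuned production values.
LAYERS: list[tuple[str, Callable[[Applicant], bool]]] = [
    ("document verification",        lambda a: a.document_score >= 0.85),
    ("repeated-attribute detection", lambda a: a.attribute_reuse == 0),
    ("biometric face match",         lambda a: a.face_match >= 0.90),
    ("liveness / deepfake check",    lambda a: a.liveness >= 0.80),
]

def verify(applicant: Applicant) -> tuple[bool, str]:
    """Run the applicant through every layer, failing fast on the first miss."""
    for name, check in LAYERS:
        if not check(applicant):
            return False, f"rejected at layer: {name}"
    return True, "all layers passed"

# A convincing forgery may pass document checks yet still fail liveness.
print(verify(Applicant(document_score=0.91, attribute_reuse=0,
                       face_match=0.93, liveness=0.42)))
```

The fail-fast ordering is deliberate: cheaper document and attribute checks run before the more expensive biometric ones, and any single layer can block a fraudulent account before it ever exists.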

Ironically, AI is a coin with two sides. On one side, it is being exploited to orchestrate identity scams; on the other, it can be used to combat those same crimes. Entrust’s fraud prevention solution, for example, uses a micro-model architecture combining over 10,000 machine learning models, each trained to detect specific fraud markers. This approach can detect up to 50% more document fraud than a more generalised model. AI can also compare live selfies against government-issued IDs like passports with unparalleled accuracy, whilst simultaneously detecting subtle signs of manipulation or synthetic image creation.
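Setting the specific figures aside, the micro-model idea is easy to picture: instead of one monolithic classifier, many small specialist detectors each score a single fraud marker, and an aggregator combines their outputs. The sketch below is a toy illustration of that pattern only; the marker names, scores and uniform weighting are invented for the example and do not describe Entrust’s actual architecture.

```python
# Toy illustration of a micro-model ensemble: each specialist scores one
# narrow fraud marker, and a simple aggregator combines the scores.
# Marker names, scores and weights are hypothetical, for illustration only.

micro_model_scores = {
    "font_inconsistency":   0.12,
    "portrait_resampling":  0.88,  # strong hint of a swapped-in face
    "mrz_checksum_anomaly": 0.05,
    "screen_recapture":     0.73,
    "template_reuse":       0.91,  # layout matches a known dark-web template
}

# Uniform weights keep the sketch simple; in practice each specialist
# would be weighted by its measured precision on labelled fraud data.
weights = {marker: 1.0 for marker in micro_model_scores}

def flag_for_review(scores: dict[str, float],
                    model_weights: dict[str, float],
                    threshold: float = 0.5) -> bool:
    """Flag the document when the weighted mean of specialist scores crosses the threshold."""
    weighted = sum(model_weights[m] * s for m, s in scores.items())
    return weighted / sum(model_weights.values()) >= threshold

print("flag for review:", flag_for_review(micro_model_scores, weights))
```

One appeal of many narrow models over a single broad one is that each specialist can be retrained quickly when a new forgery technique appears, without disturbing the rest of the ensemble.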

The battle against fraud will rage on as the digital landscape continues to shift. Financial institutions can stay ahead of the game by embracing AI technology, implementing multi-layered security processes and adapting to counter ever more sophisticated attacks.
