AI is making financial fraud less predictable and far more damaging. With access to new tools such as FraudGPT, deepfakes and large-scale automated, agentic decision-making to supercharge methods such as spearphishing, fraudsters can now target their activity more accurately, more convincingly and at higher volumes than ever before. Add the use of AI to flood the industry with fraudulent financial applications, driving up phishing and identity theft, especially against vulnerable individuals, and the cost of financial fraud continues to explode.
As one recent report revealed, banking fraud in the UK alone caused £417.4 million in losses across 21,392 reported cases over the past year, making it the third costliest fraud type. Combatting this explosion in financial crime requires a different approach: one that not only transforms identity checks through robust, multi-tiered tools but also draws on behavioural signals, transaction monitoring and cross-validation to highlight suspicious activity at any point in the customer lifecycle.
Critically, argues Dave Rossi, Managing Director, National Hunter, it demands a new mindset based on collaboration, information sharing and a culture that encourages people to raise concerns, call out suspicious activity and prioritise fraud detection at every stage of the customer journey.
Financial Fraud Explosion
Financial institutions are struggling to adopt the new mindset required to protect customers, reputation and the bottom line from financial fraud. The continued internal conflict between the need to add layers of verification and detection to deliver essential safeguards, and the perception that such measures will drive customer disengagement and attrition, is adding unacceptable risk in a new era of AI-enabled, wide-scale financial fraud.
Financial fraud is no longer opportunistic and small scale. From individuals trafficked to dedicated fraud centres in the Far East, to the systematic use of AI to build synthetic IDs at scale, to deepfake voice and video calls used successfully for spearphishing, financial fraud is now a global, organised crime.
The ease with which AI can be used to generate synthetic identities alone should prompt a radical overhaul of anti-fraud measures. According to Signicat, AI-driven identity fraud is up 2,100% since 2021 and is now outpacing many traditional forms of financial crime. Rather than relying on stolen passports and forged documents, fraudsters now use AI to manufacture personas, ID documents and accounts with digital footprints that appear legitimate but have been built to deceive. Adding defence measures, both technological and human, to the process may add some friction to the customer experience, but failing to protect the business or its customers will, without doubt, cost significantly more.
Synthetic IDs
Organisations need to understand the sheer scale of AI-enabled financial fraud. LexisNexis Risk Solutions estimates that around 2.8 million synthetic identities are in circulation in the UK, with hundreds of thousands more created annually. It also claims that 85% of synthetic IDs go undetected by standard models, creating a potential cost to the UK economy of £4.2 billion by 2027 unless companies adopt more stringent screening measures.
The use of AI at this scale enables criminal gangs to play the long game, with the behaviour of synthetic accounts mirroring real customers over months or years to build a credit history before cashing out and leaving the business and bank to handle the write-off. And this tactic is being used to target businesses in every industry. According to Experian, over a third (35%) of all UK businesses reported being targeted by AI-related fraud in the first quarter of 2025, an increase of more than 50% on the same period last year.
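Defending against this long game means watching for the classic 'bust-out' pattern: months of modest, credit-building activity followed by a sudden surge in utilisation and cash-like spend. The minimal Python sketch below illustrates one such heuristic; every field name and threshold is a hypothetical assumption and would need calibration against real portfolio data.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class MonthlySnapshot:
    """One month of account activity (hypothetical fields)."""
    credit_utilisation: float  # share of credit limit used, 0.0 to 1.0
    cash_like_spend: float     # ATM withdrawals, gift cards, transfers out

def looks_like_bust_out(history: list[MonthlySnapshot],
                        surge_ratio: float = 3.0,
                        min_quiet_months: int = 6) -> bool:
    """Flag the bust-out pattern: a long baseline of modest behaviour,
    then a sudden surge in both utilisation and cash-like spend.
    The surge ratio and history window are illustrative only."""
    if len(history) <= min_quiet_months:
        return False  # not enough history to establish a baseline
    baseline, latest = history[:-1], history[-1]
    avg_util = mean(s.credit_utilisation for s in baseline) or 0.01
    avg_cash = mean(s.cash_like_spend for s in baseline) or 1.0
    return (latest.credit_utilisation / avg_util >= surge_ratio
            and latest.cash_like_spend / avg_cash >= surge_ratio)
```

A rule this simple would never ship on its own; the point is that longitudinal behaviour, not the identity check at onboarding, is what exposes a synthetic account playing the long game.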
The use of synthetic IDs is just one way in which AI has changed the familiar patterns of financial fraud. The sophistication of deepfake technology is another, with fake voice and video building on chat-based social engineering, such as real-time scripted LinkedIn DMs and WhatsApp messages, to facilitate incredibly sophisticated spearphishing attacks. By mimicking the personas of high-value individuals, especially CEOs and CFOs, such attacks have led to devastating losses, including the UK-based fintech that lost £1.8 million in 2024 following an attack that combined spearphishing with generative AI to impersonate the company’s CFO.
Trust Issues
Organisations cannot afford the current levels of (over) trust. Indeed, the success of the majority of AI-enabled financial fraud can be tied to organisational culture. Synthetic IDs succeed when the focus is solely on verification, which checks identity, rather than on ongoing monitoring of behaviour and transactions, plus cross-validation, which together highlight intent. Spearphishing exploits a culture of uncertainty, succeeding in environments where individuals do not feel confident, or are not encouraged, to question the veracity of the CFO’s payment orders, for example.
Reliance on credential verification is inadequate in a world of FraudGPT. With diverse, sophisticated technologies now being deployed at scale, it is no longer acceptable to rely on traditional models of verification, such as document validation. Organisations are also losing trust in newer techniques, such as facial biometric authentication, because of the sophistication of AI deepfakes. And concerns are growing about the risks associated with proposed national eIDs: when a digital ID appears to be verified by government, there is a temptation to believe it without the additional, yet essential, scrutiny.
Organisations need to consider intention as well as identity: what are the behavioural signals that could indicate fraud? Which transactions are suspicious, and what additional insight can be surfaced through continual cross-validation of activity? Adding layers of verification and flagging potentially suspicious activity may initially annoy the odd genuine customer, but the reality is that AI-enabled fraud is devastating individuals, businesses and financial institutions alike. It is now vital to adopt a fraud-first culture, where individuals at every level of the organisation have both the tools and the understanding to spot suspicious activity and are encouraged to call out concerns, especially when they relate to senior management requests.
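To make the identity-plus-intent point concrete, the short Python sketch below combines an identity verification result with behavioural and cross-validation signals into a single review score. The signal names, weights and thresholds are purely illustrative assumptions, not a prescribed model.

```python
def fraud_review_score(identity_verified: bool,
                       device_mismatch: bool,
                       velocity_anomaly: bool,
                       cross_validation_conflicts: int) -> float:
    """Blend identity and intent signals into a 0-1 review score.
    Weights are illustrative placeholders, not calibrated values."""
    score = 0.0
    if not identity_verified:
        score += 0.4  # identity layer: documents or biometrics failed
    if device_mismatch:
        score += 0.2  # behavioural layer: unfamiliar device or location
    if velocity_anomaly:
        score += 0.2  # behavioural layer: unusual transaction velocity
    # cross-validation layer: conflicting data across independent sources
    score += min(cross_validation_conflicts, 2) * 0.1
    return min(score, 1.0)

# A verified identity does not zero out the risk: behavioural
# anomalies can still push the case to manual review.
if fraud_review_score(True, True, True, 1) >= 0.4:
    print("Escalate for manual review")
```

The design point is that passing the identity check never silences the other layers; behaviour and conflicting data can still surface intent that credentials alone would miss.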
Collaborative Model
Failure to shift from over-trust to low-trust will continue to play into the hands of criminal gangs, gangs that constantly share information about weak targets. Innovative anti-fraud organisations are leading the fightback through intelligence sharing, cross-validation and next-generation screening. Adopting robust verification and validation technologies, alongside a culture that encourages healthy suspicion and fosters cross-industry insight, is key to addressing this complex, evolving threat.
By proactively sharing the information surfaced through comprehensive verification as well as behavioural and device analytics, the industry can gain a rapid understanding of the fast-changing tactics being deployed by these criminal gangs and take the appropriate remedial action to protect customers, reputation and the bottom line.