A new government report highlights phishing attacks as the most common type of cyber crime in the UK, with this form of attack posing a significant risk in financial services. Richard LaTulip, a Field Chief Information Security Officer at Recorded Future, looks at how organisations can bolster their defences through sophisticated threat intelligence.
The UK government’s ‘Cyber security breaches survey 2025’ shows 85% of businesses experienced phishing attacks, up from 84% last year and 79% in 2023. Phishing is, according to the report, ‘by far the most common type of cyber crime in terms of prevalence’. It’s a growing risk for financial services organisations, and one that shows little sign of slowing as phishing attacks evolve.
Rise in phishing
Relentless risks of fraud and financial crime have seen organisations make cybersecurity a strategic priority. Financial services companies are investing in the latest tech and software to protect their operations against malicious activity. A by-product of this positive intent is that cyber threats adapt. Criminals struggle to crack robustly protected networks, and realise that obtaining genuine user credentials is an effective means of getting past extensive authentication checks. Threat actors also see humans as a weak link in security defences, with a belief that employees can be manipulated to share information. It’s a combination of these factors that’s fuelling the prevalence of phishing.

The phishing threat landscape is evolving, with attacks increasingly taking the form of ‘spearphishing’, which is much more targeted and appears far more plausible. Highly personalised attacks are directed at specific individuals and companies to deceive them into sharing trusted credentials and confidential information.
Spearphishing is carried out via channels such as email, SMS and other messaging platforms, and phone calls. Artificial Intelligence (AI) is being exploited by criminals to make these personalised attacks scalable and effective.
AI-powered phishing
Generative AI is often used by threat actors to quickly generate thousands of unique, native-language lures. Scam emails seem credible because the language appears authentic and less suspicious. For example, email copy may deliberately include typical spelling and grammatical errors, along with colloquial terms, so that the message appears to originate from a plausible, human source.
It’s also possible for AI to harvest and analyse data about the target of the attack, as well as the party the attacker is impersonating. This is where spearphishing becomes very personalised. An email received from a supposed senior colleague seems real because it impersonates a trusted source and contains what appear to be genuine and relevant references.
Criminals are also using the voice generation and voice-changing capabilities of generative AI to impersonate support services such as an IT helpdesk. The AI contacts an employee and tricks them into divulging confidential and sensitive information. It’s an evolution of the classic social engineering scam, taking advantage of an employee’s likely frustration with an IT problem and their willingness to fix it quickly. The AI voice seems genuine and builds trust.
A key step in preventing spearphishing attacks is to build awareness amongst employees – they need to know what types of risk they are facing if they are to prove an effective line of defence. Running simulated attacks can help employees understand the capabilities of AI and show how it is being exploited by criminals. It is also important to strengthen resilience through faster threat identification and sustained intelligence. Monitoring threat actors and spearphishing campaigns can enable financial services firms to stay ahead of potential attacks.
Impersonated brands
Phishing techniques are also evolving to spoof widely trusted and well-known brands. Genuine platforms such as Microsoft and DocuSign are used throughout financial services; users are familiar with them and accustomed to sharing confidential information through them. Employees interact with these types of platforms and services on an almost daily basis and, in most cases, won’t think twice about how they use them. Criminals know this and prey on it.
Brand impersonation gains user trust by operating in what the user assumes is a safe space to share passwords and credentials. These attacks are evolving with more sophisticated domain impersonations, including lookalike domains and homoglyph attacks that evade traditional email filters.
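To make the lookalike-domain threat concrete, the short Python sketch below flags domains that closely resemble trusted brands once common homoglyph substitutions are normalised away. It is a minimal illustration only: the allow-list, the confusable table and the similarity threshold are assumptions made for the example, and production email filters and threat intelligence platforms rely on far larger confusable mappings, domain registration feeds and reputation data.

```python
import unicodedata
from difflib import SequenceMatcher

# Hypothetical allow-list of brands the organisation trusts; a real deployment
# would source this from policy or a threat intelligence feed.
TRUSTED_DOMAINS = ["microsoft.com", "docusign.com"]

# A small sample of confusable characters seen in spoofed domains
# (real filters use much larger tables, e.g. Unicode TR39 confusables).
CONFUSABLES = str.maketrans({
    "0": "o", "1": "l", "3": "e", "5": "s",
    "а": "a", "е": "e", "о": "o", "р": "p",  # Cyrillic lookalikes
})

def normalise(domain: str) -> str:
    """Case-fold, apply Unicode compatibility normalisation and map confusables."""
    return unicodedata.normalize("NFKC", domain).lower().translate(CONFUSABLES)

def closest_trusted(candidate: str) -> tuple[str, float]:
    """Return the most similar trusted domain and the similarity ratio (0..1)."""
    norm = normalise(candidate)
    best = max(TRUSTED_DOMAINS, key=lambda d: SequenceMatcher(None, norm, d).ratio())
    return best, SequenceMatcher(None, norm, best).ratio()

if __name__ == "__main__":
    for domain in ["micros0ft.com", "docusiqn.com", "example.org"]:
        target, score = closest_trusted(domain)
        suspicious = score > 0.85 and domain.lower() not in TRUSTED_DOMAINS
        print(f"{domain:15} closest={target:15} similarity={score:.2f} "
              f"{'SUSPICIOUS' if suspicious else 'ok'}")
```

Running the sketch prints each candidate domain alongside its closest trusted match and a similarity score – roughly the kind of signal a mail filter or analyst would act on before a lure reaches an inbox.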
The threat of phishing is growing and evolving, and will continue to diversify and expand as financial services companies strengthen their defences against cyber crime. It’s crucial that these defences go beyond preventative measures to embrace intelligence and monitoring that accelerates the identification of phishing attacks and anticipates threats before they become a reality. Threat intelligence can reduce vulnerabilities and lessen the effectiveness of phishing, even when it’s extremely personalised.
About Recorded Future
Recorded Future is the world’s largest threat intelligence company. Recorded Future’s Intelligence Cloud provides end-to-end intelligence across adversaries, infrastructure, and targets. Indexing the internet across the open web, dark web, and technical sources, Recorded Future provides real-time visibility into an expanding attack surface and threat landscape, empowering clients to act with speed and confidence to reduce risk and securely drive business forward. Headquartered in Boston with offices and employees around the world, Recorded Future works with over 1,900 businesses and government organizations across more than 80 countries to provide real-time, unbiased and actionable intelligence.