Spokesperson: Naz Bozdemir, Lead Product Researcher, HackerOne
Financial institutions are facing a battle on two fronts: strict regulatory demands on one side, and increasingly sophisticated AI-powered attacks that are becoming ever more accessible on the other. Recent data from the Department for Science, Innovation and Technology (DSIT) and the Home Office reveals that 43% of UK businesses experienced cyber breaches in the past year, with financial services among the most frequently targeted sectors.
Mounting regulatory pressure on operational resilience, third-party risk management, and vulnerability disclosure is reshaping how institutions approach cybersecurity. When breaches occur, financial institutions face eye-watering costs, averaging $5.56m, with regulatory penalties stacked on top of customer data remediation and operational downtime expenses. Each unmitigated vulnerability represents both a security gap and a potential compliance event.
Thankfully, many financial services organisations are moving beyond checkbox compliance toward proactive, adversarial testing of their core systems.
At the same time, compliance frameworks are evolving too. For instance, DORA (Digital Operational Resilience Act), FCA operational resilience requirements, and PCI DSS guidelines now explicitly expect continuous security testing and third-party risk management.
Vulnerability disclosure and coordinated testing with security researchers are increasingly recognised as demonstrations of due diligence rather than optional security measures.

AI: The force multiplier amplifying cyber risks
While compliance frameworks provide necessary guardrails, they are struggling to keep pace with AI-enabled threats. In the past year alone, valid AI-related vulnerability reports surged by over 200%. This explosion reflects how rapidly AI integrations are expanding the attack surface within live financial workflows.
The threat manifests in multiple forms. Deepfake-driven business email compromise, synthetic identities, and API abuse are increasingly bypassing traditional controls. The recent case of a multinational firm losing $25 million when criminals used deepfake technology to impersonate the company’s CFO on a video conference shows just how sophisticated these attacks have become.
AI is not only creating new vulnerabilities but also making existing weaknesses exponentially more dangerous. Attackers are using AI to accelerate reconnaissance and identify subtle control failures that would previously have taken much longer to discover. When the AI agent social network Moltbook launched earlier this year, a misconfigured backend database exposed 1.5 million API authentication tokens, 35,000 email addresses, and private messages because basic access controls were not enforced.
The issue was not a novel AI flaw, but a familiar breakdown in authentication and object-level authorisation. In an AI-driven environment where agents act on behalf of users and connect to external systems, that kind of exposure can rapidly translate into impersonation, session hijacking and large-scale account compromise. Weaknesses such as broken access controls and Insecure Direct Object Reference (IDOR)-style exposures therefore become significantly more dangerous once embedded in automated decision-making and API-heavy workflows.
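To make the pattern concrete, the sketch below shows an IDOR-style flaw and its fix in miniature. It is illustrative only: the data, the in-memory store and the function names are hypothetical, standing in for a real API's record-lookup endpoint.

```python
# A minimal sketch of an IDOR-style flaw and its fix. All names here
# (MESSAGES, fetch_message_*) are hypothetical, invented for illustration.

MESSAGES = {
    101: {"owner": "alice", "body": "Quarterly payment approved"},
    102: {"owner": "bob", "body": "Rotate my API token"},
}

def fetch_message_vulnerable(message_id: int) -> dict:
    """IDOR: any authenticated caller can read any message by guessing IDs."""
    return MESSAGES[message_id]

def fetch_message_safe(current_user: str, message_id: int) -> dict:
    """Object-level authorisation: verify the record belongs to the caller."""
    record = MESSAGES.get(message_id)
    if record is None or record["owner"] != current_user:
        raise PermissionError("not found or not authorised")
    return record

# Logged in as "bob", an attacker can enumerate IDs against the
# vulnerable lookup but is rejected by the safe one:
print(fetch_message_vulnerable(101))   # leaks alice's message
try:
    fetch_message_safe("bob", 101)
except PermissionError as err:
    print(err)                         # not found or not authorised
```

The vulnerable version assumes authentication alone is enough and never checks object ownership, exactly the gap that becomes dangerous once autonomous agents can enumerate identifiers at machine speed.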
The culprit behind these risks is the speed at which financial services institutions are embedding AI into production environments. As the technology is woven into fraud detection, underwriting, customer service, and internal operations, it inherits the same security gaps affecting existing financial platforms, particularly around identity management and complex authorisation logic, including API authentication and object-level access controls.
The most critical failures are appearing in endpoints, access controls and permissions, where weaknesses in model training and infrastructure design can translate into direct operational risk. While financial services environments often report fewer individual vulnerabilities than other sectors, each validated issue is more likely to escalate into a real-world incident because these systems integrate with key processes such as payments, account management and automated decisioning.
Currently, 21% of reported issues are high or critical severity, clustering around authorisation bypass and privilege escalation in APIs. This suggests attackers are moving away from exploiting large volumes of low-impact vulnerabilities and focusing instead on fewer, higher-impact opportunities tied to key networks, identity systems and transaction flows.
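One common shape such a privilege-escalation bug takes is mass assignment, where an API handler trusts client-supplied fields. The sketch below is hypothetical: the User type, field names and update functions are invented for illustration, not drawn from any real platform.

```python
# A hypothetical sketch of API privilege escalation via mass assignment:
# the handler copies client-supplied fields, including "role", onto the user.

from dataclasses import dataclass

@dataclass
class User:
    username: str
    role: str = "customer"

def update_profile_vulnerable(user: User, payload: dict) -> User:
    """Copies every field blindly, so {"role": "admin"} escalates privileges."""
    for key, value in payload.items():
        setattr(user, key, value)
    return user

ALLOWED_FIELDS = {"username"}  # server-side allow-list of writable fields

def update_profile_safe(user: User, payload: dict) -> User:
    """Only allow-listed fields are writable; role changes are ignored."""
    for key, value in payload.items():
        if key in ALLOWED_FIELDS:
            setattr(user, key, value)
    return user

attacker = User("carol")
update_profile_vulnerable(attacker, {"role": "admin"})
print(attacker.role)     # "admin": privileges escalated

customer = User("dave")
update_profile_safe(customer, {"role": "admin"})
print(customer.role)     # "customer": escalation blocked
```

A server-side allow-list keeps security-sensitive attributes out of reach of the request body, which is precisely the kind of control that human-led adversarial testing tends to probe directly.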
Bridging the gap: From compliance to resilience
Today, AI should not be viewed as a separate security domain but as a significant force multiplier that amplifies existing structural risks within financial systems. The security challenge is no longer whether financial systems are under attack; cybercrime activity is ubiquitous across the entire industry. The critical question is how security investment should change to address new attacker tactics that carry significant potential for disruption.
In practice, this means rebalancing security investment away from volume-driven vulnerability reduction toward testing the controls that govern access to critical networks and data. Security teams should prioritise end-to-end visibility across APIs, delegated access models, onboarding processes and automated decision workflows.
While automation remains practical for detecting mature, deterministic vulnerability classes, it delivers diminishing returns against a rapidly changing threat landscape. Organisations should increasingly pair automation for scale with human-led testing, focusing effort on the controls and workflows most likely to drive material loss.
The financial benefits are compelling. FSI programmes adopting adversarial testing are delivering a 5x return on mitigation and $128m in immediate breach avoidance by closing gaps before adversaries can exploit them. This approach aligns security programmes with attacker behaviour and real loss scenarios, rather than relying on severity scores or compliance-driven metrics alone. For institutions planning security investment into 2026 and beyond, the challenge is to move from simply doing more testing to testing what matters most to the business and its risk exposure.
The look ahead
Financial services environments will continue to offer threat actors the most favourable paths to monetisation. This reality demands that the sector shift from reactive compliance to continuous adversarial testing, treating security researchers as an essential line of defence.
Resilience will depend on recognising that regulatory compliance and AI threat mitigation are not separate challenges but interconnected imperatives. Instead of viewing compliance frameworks as ceilings to reach, organisations need to focus on building truly robust security programmes as foundations – ones that anticipate and adapt to the AI-accelerated threats already reshaping the cyber risk landscape.


