By Vikas Krishan, Chief Digital Business Officer & Head of UK and EMEA for Altimetrik
Digital payments have become a constant for commerce across the UK and Europe. UK Finance reports that more than half of all UK adults now regularly use mobile contactless services such as Apple Pay or Google Pay – the first time mobile payment users have formed a majority of the population. But convenience and security exist in tension, and that tension is growing sharper.
The same technology that enables frictionless transactions can also open new routes of attack. AI-driven fraud, phishing attacks and identity theft now exploit weaknesses faster than traditional defences can respond. The companies that thrive will be those that embed cybersecurity directly into the digital payment journey as a design principle, treating it as a foundation of consumer trust and business continuity. The question facing financial services isn’t whether to prioritise security or user experience – it’s how to deliver both without compromise.
Trust breaks faster than systems
A single data breach can undo years of customer confidence. When payment systems fail to protect sensitive information, consumers don’t wait for explanations – they move their business elsewhere, often permanently.
Forty per cent of consumers have pulled their business from a company after learning it was not sufficiently protective of customer data, according to McKinsey research on digital trust. Once that trust is lost, transaction volume falls.
Regulatory frameworks such as PCI DSS v4.0, PSD2, and FCA guidance outline clear expectations around data protection. The Payment Card Industry Data Security Standard entered its mandatory phase in March 2025, requiring all businesses handling card data to meet updated compliance requirements. The upcoming PSD3 directive will further tighten rules around strong customer authentication and fraud liability.
But compliance alone won’t protect against the threats now emerging. Organised crime is using generative AI to create realistic phishing emails, automate credential stuffing, and simulate legitimate user behaviour at scale. These attacks evolve faster than any manual detection system can respond. A secure payment gateway and zero-trust architecture reduce attack surfaces and enable faster incident response, but they can’t anticipate what criminals will try next.
This is where static security models break down. Financial institutions need systems that learn and adapt in real-time – not just flag known patterns but identify anomalies that signal new forms of attack.
AI fraud moves faster than manual detection
Many financial institutions now use AI-powered fraud detection with anomaly monitoring and contextual risk scoring. Most of these systems rely on retrieval-augmented generation (RAG), which pulls context from internal databases before making decisions. RAG has proven useful, but it has limitations. It can omit relevant context, such as a customer’s recent travel patterns or account changes, leading to false negatives. It can also introduce conflicting information when external training data contradicts proprietary fraud intelligence held within an institution’s firewall.
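As a rough illustration of how missing retrieved context skews a contextual risk score, here is a minimal sketch in Python. The store, field names and weights are all hypothetical, not any institution's actual pipeline; the point is only that the same transaction scores differently depending on what context the retrieval step surfaces.

```python
# Hypothetical in-memory context store standing in for the internal
# databases a RAG-style pipeline would query before scoring a transaction.
CONTEXT_DB = {
    "cust-001": {"recent_travel": ["FR"], "recent_account_change": False},
    "cust-002": {"recent_travel": [], "recent_account_change": True},
}

def retrieve_context(customer_id):
    """Pull whatever internal context exists for this customer.

    Returns an empty dict when nothing is found -- the gap that can
    produce false results if the scorer treats 'no context' as 'no risk'.
    """
    return CONTEXT_DB.get(customer_id, {})

def risk_score(customer_id, amount, country):
    """Toy contextual risk score: higher means more suspicious."""
    ctx = retrieve_context(customer_id)
    score = 0.0
    if amount > 1000:
        score += 0.4
    # A foreign transaction is less suspicious if travel context explains it.
    if country != "GB" and country not in ctx.get("recent_travel", []):
        score += 0.4
    if ctx.get("recent_account_change", False):
        score += 0.3
    return round(score, 2)

# Same transaction, different context: travel history explains the first,
# while the second customer's missing context inflates the score.
print(risk_score("cust-001", 1500, "FR"))
print(risk_score("cust-003", 1500, "FR"))
```

The second call shows the failure mode described above in reverse: when retrieval omits context, the model either over-flags legitimate activity or, with differently tuned weights, under-flags fraud.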
To counter these retrieval errors, researchers have begun exploring a more advanced approach: Retrieval-Augmented Fine-Tuning (RAFT). Unlike RAG, RAFT combines retrieval-based context with selective model retraining, enabling fraud-detection systems to learn from internal, first-party data while maintaining confidentiality. The model improves accuracy without exposing sensitive customer information to third-party AI providers. It is essential that this data is fully anonymised before any retraining takes place.
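A simple form of that anonymisation step can be sketched as keyed pseudonymisation: direct identifiers are replaced with keyed hashes before records enter any fine-tuning set, so rows stay linkable for model training without exposing raw PII. The key, field names and record shape below are illustrative assumptions, not a prescribed scheme.

```python
import hashlib
import hmac

# Illustrative secret; in practice this would come from a key-management
# service and be rotated per training run.
SECRET_KEY = b"rotate-me-per-training-run"

def pseudonymise(record, sensitive_fields=("customer_id", "iban")):
    """Replace direct identifiers with keyed hashes before training.

    The keyed hash is deterministic, so the same customer maps to the
    same token across records (preserving linkability), but the raw
    identifier never leaves the institution's boundary.
    """
    out = dict(record)
    for field in sensitive_fields:
        if field in out:
            digest = hmac.new(SECRET_KEY, str(out[field]).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]
    return out

raw = {"customer_id": "cust-001", "iban": "GB29NWBK60161331926819",
       "amount": 1500, "label": "fraud"}
clean = pseudonymise(raw)
print(clean["amount"], clean["label"])             # behavioural features survive
print(clean["customer_id"] != raw["customer_id"])  # identifiers replaced
```

Note that pseudonymisation alone is not full anonymisation; quasi-identifiers such as amounts and timestamps can still re-identify individuals, which is why the article's caveat about thorough anonymisation matters.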
Financial services companies that handle vast internal documentation, including transaction histories, compliance records and fraud case files, benefit most from domain-specific AI models trained on their own data. General-purpose models trained across multiple industries cannot match the precision required to flag payment anomalies in real-time.
But even the most sophisticated AI cannot protect customers who willingly hand over credentials or authorise fraudulent payments. Technology alone won’t close the gap.
Technology cannot stop phishing alone
The customer remains the weakest link in most fraud chains. Banks that educate customers on spotting red flags, such as unexpected payment requests and unusual account activity, achieve measurable reductions in authorised push payment (APP) fraud. The most secure systems, though, unite strong technology with an informed customer base. This requires more than periodic warnings: it requires transparency about how data is protected, how transactions are verified and what steps customers should take when something looks wrong. When customers understand the process, they are more likely to report suspicious activity early – before damage occurs.
AI decisions demand clarity
This principle extends beyond customer communication. When AI makes decisions that affect transactions, those decisions must be auditable and explainable to regulators and customers. When a payment is flagged as fraudulent, the bank must be able to justify why. Trust in outcomes depends on trust in the process. CFOs and CIOs need transparency before approving automation. If fraud-detection models operate in the dark, leadership cannot assess risk or defend decisions during regulatory reviews.
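One way to make a flagging decision auditable is to emit a structured record alongside it, listing the weighted reasons that drove the outcome rather than just the final label. The reason codes, threshold and version tag below are hypothetical, a sketch of the shape such a record might take:

```python
from datetime import datetime, timezone

def flag_payment(txn, reasons, block_threshold=0.7):
    """Return an audit record explaining why a payment was flagged.

    `reasons` is a list of (code, weight) pairs; the decision and every
    contributing factor are captured so a reviewer or regulator can
    reconstruct the judgement later.
    """
    total = sum(weight for _, weight in reasons)
    return {
        "txn_id": txn["id"],
        "decision": "blocked" if total >= block_threshold else "review",
        "reasons": [{"code": code, "weight": w} for code, w in reasons],
        "decided_at": datetime.now(timezone.utc).isoformat(),
        "model_version": "fraud-rules-v1",  # illustrative version tag
    }

record = flag_payment(
    {"id": "txn-9001"},
    reasons=[("AMOUNT_ABOVE_PROFILE", 0.4), ("NEW_PAYEE", 0.35)],
)
print(record["decision"], [r["code"] for r in record["reasons"]])
```

Pinning a model version and timestamp to each decision is what lets leadership defend an individual outcome during a regulatory review, even after the model itself has been retrained.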
Operational resilience depends on ongoing monitoring, model retraining and keeping humans in the loop. Fraud patterns evolve daily, and models trained six months ago may miss the exploits criminals are using today. Financial institutions that treat AI as a static process will fall behind those that build continuous learning into their systems.
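The monitoring loop above can be reduced to a very simple trigger: track how many confirmed fraud cases the live model failed to flag, and escalate for retraining (with human review) once the miss rate crosses a threshold. The function name and threshold are assumptions for illustration only.

```python
def needs_retraining(confirmed_fraud, model_flags, threshold=0.1):
    """Check whether the live model's miss rate warrants a refresh.

    `confirmed_fraud` is the set of transaction IDs later confirmed as
    fraud; `model_flags` is the set the model actually flagged. Returns
    (retrain?, miss_rate).
    """
    missed = [case for case in confirmed_fraud if case not in model_flags]
    miss_rate = len(missed) / max(len(confirmed_fraud), 1)
    return miss_rate > threshold, round(miss_rate, 2)

# One confirmed case slipped past the model: a 25% miss rate,
# well above the illustrative 10% tolerance.
due, rate = needs_retraining(
    confirmed_fraud={"t1", "t2", "t3", "t4"},
    model_flags={"t1", "t2", "t3"},
)
print(due, rate)
```

In practice the trigger would feed a human-in-the-loop review rather than automatic retraining, which is the point of the paragraph above: the loop is continuous, but people stay in it.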
Security is an investment
Cybersecurity can no longer be considered a defensive cost. It is an investment in sustainable growth.
Financial services companies that move quickly to embed security, governance, and explainability into every transaction will protect their customer base. Laggards will lose market share to competitors who truly understand what is at stake.

