by Mark Pestridge, Executive Vice President & General Manager, Telehouse Europe
The UK financial services sector has a strong history of adopting technology early, from electronic trading to mobile banking and open APIs. AI is the latest step, improving fraud detection, risk modelling, customer service, and operational efficiency. Many firms are ready to move, yet progress is being held back by uncertainty over how AI will be governed.
New research from Telehouse highlights the scale of the problem. More than half (57%) of IT decision-makers at UK finance and banking organisations say uncertainty over future AI regulation is delaying investment decisions.
For a sector that competes internationally on trust, speed, and resilience, uncertainty quickly becomes a business risk. AI programmes are rarely small experiments. They influence how decisions are made, how customer interactions are managed, how models are monitored, and how systems are secured. Boards and risk committees want confidence that deployments will remain compliant over time, not just at the moment of launch.
Well over a year on from the EU AI Act, many UK-based firms with European operations are already building programmes to meet EU requirements. At the same time, they are having to track the UK approach as it develops, alongside emerging frameworks in the US and across APAC. This creates a compliance puzzle where a single AI use case can trigger different obligations depending on where data is processed, where a model is trained, where customers sit, and how outputs are used.
The result is friction across the whole delivery cycle. Projects slow down while teams wait for guidance. Procurement becomes cautious, particularly where third-party models are involved. Pilot phases stretch longer than planned, even for lower-risk use cases. Investment that could be directed into capability and security is diverted into legal interpretation, assurance and governance overhead.
The impact is already visible in workforce decisions. More than a third (37%) of IT leaders in the sector say their organisations have outsourced technology roles overseas, plan to do so, or have cut them altogether because of the cost and complexity of managing AI regulation. A further 24% are still considering cuts to balance budgets.
Offshoring is not new, but AI compliance is adding a fresh pressure point. If firms believe they can reduce exposure or simplify governance by moving work to other jurisdictions, they will consider it. That comes with a cost to domestic capability, and it weakens the UK’s ability to build long-term expertise in deploying AI safely within a regulated environment.
Internationally active banks face an uncomfortable operational choice. They can maintain separate AI governance models by region, with overlapping controls and reporting requirements, or they can standardise on the strictest regime and accept reduced flexibility. Multiple regimes increase complexity and create scope for inconsistency. A single strict standard can slow deployment, even where the application is relatively contained.
This is why clarity matters. It does not require copying another region’s legislation line by line. It does require a stable, risk-based framework with clear definitions, practical compliance pathways, and a credible timeline that allows firms to plan investment and delivery.
If the UK wants AI innovation in financial services to stay onshore, three priorities stand out. First, regulatory certainty must come with usable guidance, including how risk categories are defined, what documentation is expected, how accountability is assigned, and how third-party models and data are treated. Second, friction between regimes should be reduced through alignment where it makes sense, including consistent terminology and workable routes for recognising equivalent controls across borders. Third, skills investment needs to be treated as urgent, with AI-specific apprenticeship schemes and practical upskilling programmes that help firms hire locally rather than looking overseas.
Physical infrastructure is in reasonable shape, although planning and energy constraints remain real challenges as demand for AI-ready capacity grows. Data centres can support next-generation technology, but success depends on matching infrastructure with the right skills and a clear governance environment.
UK finance does not lack ambition for AI. What it needs is confidence to invest, deploy, and hire with clarity. Getting the rules and the talent pipeline right is how the sector keeps innovation, jobs and competitiveness here in the UK.