Julian Mulhare, Managing Director EMEA at Searce
Navigating the AI regulation tsunami: A 2025 perspective
The financial services industry stands at a crossroads. The transformative potential of AI is undeniable, promising streamlined operations, optimised capital allocation, innovative customer experiences and the augmentation of roles, not just their replacement. However, this technological revolution is coinciding with an unprecedented surge in regulatory scrutiny. That scrutiny is essential: it fosters the development and adoption of trustworthy AI while mitigating risks and safeguarding the safety and fundamental rights of individuals.
2025’s compliance crunch: A barrage of regulations
Finance leaders are bracing for a seismic shift. The regulatory landscape is becoming increasingly intricate, with a slew of new directives coming into force. Central to this is the EU AI Act, with its first components already in effect from February this year. This act aims to regulate AI systems based on their risk level, adding a layer of responsibility for firms to ensure the technology is used safely and ethically.

Complementing this is the Digital Operational Resilience Act (DORA), which has been fully effective since January 2025. DORA mandates that financial firms implement robust IT security measures. Firms have been quick to implement new systems, hire compliance experts, and conduct regular audits to ensure adherence, with penalties for non-compliance reaching up to 2% of annual turnover and daily penalty payments of up to 1% of average daily global turnover.
In addition to AI and cybersecurity regulations, the sharing of personal data is becoming increasingly regulated. The EU Data Governance Act (DGA), already in application, and the Data Act, which becomes applicable in September 2025, introduce stricter rules around how personal data can be shared, adding another layer of complexity.
Anti-Money Laundering (AML) regulations are also tightening. We’ve already seen large fines hit the headlines this year, with Revolut fined €3.5 million for deficiencies in its AML procedures. The regulator identified shortcomings in monitoring business relationships and transactions, leading to inadequate identification of suspicious activities. While no confirmed money laundering was found, the penalty underscores the importance of robust AML frameworks.
Alongside these developments, the Payment Services Directive 3 (PSD3), expected to be finalised across 2025 and 2026 ahead of enforcement, is tightening the regulatory net. Add to that the continued enforcement of the Markets in Financial Instruments Directive (MiFID) and the UK's Basel 3.1 implementation, now delayed until January 2027, which together bring evolving capital requirements and risk management standards.
The convergence of these regulations is creating a compliance crunch, pushing finance leaders to rethink their strategies and how to incorporate AI safely and effectively.
AI in decision-making: Navigating new regulatory territory
Even as these new guardrails are being put in place, AI is revolutionising decision-making, with financial institutions moving beyond basic automation to let AI inform core financial decisions. It’s making capital allocation leaner, risk assessment more accurate, and customer service more personalised. Expect increased use of AI in fraud detection and real-time transaction monitoring, with GenAI showing potential in market analysis, financial crime detection, and regulatory compliance.
With AI playing a larger role in risk management, it falls squarely under the purview of regulations like the EU AI Act. Financial institutions must ensure their AI systems are transparent, explainable, and unbiased. The use of AI in critical decisions necessitates robust governance frameworks, ethical considerations, and continuous monitoring to ensure compliance. While this is new territory for AI, the underlying principle is not: regulation has long been needed to ensure that the decisions humans make are transparent, explainable, and unbiased.
The hidden readiness gap: Data, tech, and talent
Many firms are discovering a hidden readiness gap as they attempt to scale their AI initiatives to remain competitive in this fast-moving space. Challenges around data quality, technical infrastructure, and talent shortages continue to present significant hurdles.
Effective AI deployment relies on high-quality, well-structured data. Many institutions are now investing heavily in data governance and modernisation initiatives such as metadata management and a data mesh framework to ensure this foundation is in place. However, a critical hurdle is the talent gap. Building and managing AI systems requires specialised skills, irrespective of whether the systems are purchased off the shelf or custom-developed in-house. As a result, firms are scrambling to attract and retain AI experts, data scientists, and compliance professionals.
As the landscape continues to evolve, bridging this talent gap, along with continuous adaptation and a proactive approach to regulatory compliance, will be essential for successful AI adoption.
To address this, organisations must adopt a proactive, strategic approach. They should invest in upskilling existing teams, equipping them with targeted training and certifications in AI, data science, and regulatory compliance. This strategy empowers firms to build a strong, homegrown talent pool capable of managing and optimising AI systems without the pressure of constantly hiring externally.
Additionally, leveraging strategic partnerships with AI and data experts gives firms access to specialised skills when needed. By collaborating with external partners, organisations can tap into cutting-edge solutions, best practices, and domain expertise that might otherwise be out of reach.
Rather than seeing the talent gap as a barrier, firms can view it as an opportunity for growth and innovation. With the right investments and approach, financial services institutions can transform their talent challenges into a competitive advantage, fostering a future-ready AI ecosystem and ensuring regulatory compliance in an increasingly complex landscape.
From compliance crunch to competitive advantage
AI promises to transform every aspect of financial services, from capital allocation to customer engagement. But make no mistake: 2025 isn’t just about AI’s promise; it’s about navigating a tsunami of regulation that demands immediate, strategic action. Failing to act now could not only stifle innovation but also expose your firm to significant penalties. This is more than a compliance exercise. It’s a fundamental challenge for finance leaders in fostering trustworthy AI, mitigating critical risks, and safeguarding individual rights.
To effectively navigate this complex landscape and ensure sustainable growth, firms must proactively address the widening talent and expertise gap. A multi-pronged approach that includes strategic upskilling initiatives for existing teams and targeted recruitment of AI and regulatory specialists will be critical. By partnering with experts, financial institutions can gain access to the deep domain knowledge and technical proficiency required to both build trustworthy AI systems and ensure robust compliance frameworks, thereby transforming regulatory challenges into opportunities for innovation and competitive advantage.