By Michael La Marca, Sarah Pearce, and Ashley Webber
Artificial intelligence technology (“AI”) has been used across the financial industry for years, but following significant technological advances in recent years, adoption is accelerating rapidly, with many new use cases being identified and implemented. AI is being used for a variety of purposes in the financial industry, such as fraud detection, KYC workflows, predicting customer risk and behaviour, and cybersecurity. In parallel, and in reaction to these technological advances, AI regulation and standards are increasing globally, with an evident divergence in approach between jurisdictions. For financial businesses operating across multiple jurisdictions, compliance can therefore prove particularly challenging. In addition to AI-specific regulations, businesses must also be aware of other legislation that may be triggered by the use of AI, such as privacy legislation when personal information is used, or sector-specific financial industry laws governing, for example, cybersecurity and operational resilience. Below is a brief overview of AI regulation in the EU, UK, and US, demonstrating the differing approaches being taken to regulate this growing area.
The EU
In August 2024, the EU Artificial Intelligence Act (“AI Act”) entered into force, introducing a risk-based legal framework with extraterritorial scope for AI systems that are categorized as: (1) prohibited AI systems, (2) high-risk AI systems, (3) AI systems subject to transparency requirements, and (4) general-purpose AI models. Broadly, the AI Act applies to organizations that develop and place AI systems on the EU market or put AI systems into service in the EU (“providers”) and to organizations that use AI systems in the EU (“deployers”). The obligations a business is subject to depend on the category into which its AI system falls and on whether the business is a provider or a deployer of that system. These obligations range from, for providers, data governance requirements (including bias mitigation), drafting and maintaining technical documentation, and record-keeping, logging, and traceability obligations, to, for deployers, assigning human oversight and meeting certain transparency expectations. The AI Act’s provisions are coming into force in phases that began in February 2025 and are due to end in August 2027; however, in November 2025, the European Commission proposed delaying certain of these phases. Because the scope of the AI Act is wide and the applicable obligations depend on the nature of the AI system(s) a business develops or uses, compliance requires an in-depth understanding of any of a business’s AI systems that have an EU nexus.
The US and UK
The landscape in the US and the UK differs fundamentally from that of the EU: currently, neither jurisdiction has a comprehensive legislative framework governing the use or development of AI systems, nor does either government appear to have a clear intention of introducing an expansive framework similar to the EU’s.
In the US, there is broad disagreement among regulators and industry stakeholders on how to effectively regulate AI, and it is not clear which regulators have enforcement authority in this area. At the federal level, the Trump administration has prioritized innovation over regulation and has revoked the prior administration’s efforts to advance AI regulation (i.e., Executive Order 14110). Efforts to regulate AI have therefore occurred primarily at the state level, including through state privacy laws. In May 2024, Colorado became the first state in the US to enact comprehensive AI legislation, imposing generally applicable legal obligations on “developers” and “deployers” of certain “high-risk” AI systems, with the aim of protecting Colorado residents from algorithmic discrimination. Other state laws that have been enacted are generally narrowly tailored to address either specific issues (e.g., AI-generated deepfakes or transparency around AI-powered chatbots) or specific entities (e.g., providers of large frontier models or publicly available generative AI services). With over 1,000 AI-related bills introduced by state legislatures in 2025, the AI regulatory landscape in the US remains highly fluid.
In the UK, a change of government in recent years has brought a change in approach to proposals for AI regulation. While the previous government sought to introduce a non-mandatory, “principles-based” framework, the current government stated its intention to introduce specific AI legislation in 2025 targeted at tackling material risks (although no draft has been published). Most recently, in October 2025, it announced the introduction of an AI Growth Lab in the UK, consisting of AI sandboxes in which companies and innovators can test new AI products in real-world conditions. This latest announcement arguably demonstrates that the UK’s focus is more on growing AI than on regulating it.
Michael La Marca is a partner in Hunton’s New York office with extensive experience advising clients on a diverse range of global privacy and cybersecurity issues, including cutting-edge technologies such as AI/machine learning, biometrics, and geolocation tracking.
Sarah Pearce is a partner in Hunton’s London office and has extensive experience advising global clients on data strategies and compliance programs, including international data transfers, marketing-related issues, and risk management associated with the collection and use of data.
Ashley Webber is an associate in Hunton’s London office and focuses her practice on all areas of UK and EU data protection, privacy, and cybersecurity law.


