From pilots to production: How financial services can govern agentic AI at scale

By Richard Harmon, Vice President, Global Head – Financial Services Industry, Red Hat

In the coming years, banking, insurance, and investment management will be defined by intelligent hyper-personalisation, autonomous operations, and interconnected digital ecosystems. At the heart of this future lies agentic AI: autonomous systems that can reason, execute complex workflows, and make decisions with minimal human intervention.

From sophisticated trading platforms to intelligent fraud detection and automated customer service journeys, the promise is immense. Yet, as shown in our recent report, there is continued emphasis on AI for resilience and security. The survey found that 34% of UK and European financial services decision makers expect AI in operational resilience and business continuity to have the greatest impact on their organisation over the next two to three years. This shift is reframing AI strategy, placing a resilient and governed platform at the centre of sustainable innovation.

The EU’s Digital Operational Resilience Act (DORA) and similar frameworks globally mandate a ‘minimum viable bank’: the uninterrupted delivery of critical operations through any disruption. This has profound implications for AI, especially as we move beyond static models to dynamic, agentic systems. A crucial requirement for systems such as an agentic trading platform, which potentially makes thousands of autonomous decisions per second, or a network of anti-money laundering agents analysing transactional patterns, is that they must not become a single point of failure.


The frontier of intelligent automation

Agentic AI represents the convergence of generative AI’s creative capabilities with workflow orchestration and robotic process automation. In operations, this translates to intelligent automation at a new scale. Leading institutions are deploying agents for discrete tasks today. One customer reported that AI-augmented customer service agents realised productivity gains of between 5x and 50x.

Why agentic AI demands a new rulebook

However, the autonomous, iterative, and interconnected nature of agentic AI introduces new and amplified risks that lie beyond the reach of traditional governance frameworks.

The core challenge is threefold. First, agents can exhibit emergent, unpredictable behaviours, where a single flawed decision can cascade into large-scale failure at a speed that renders human intervention futile. Second, the “black box” problem is profoundly magnified, as tracing the multi-step, branching reasoning of an agentic chain is vastly more complex than explaining a static model’s output, creating a critical explainability deficit.

The third and arguably greatest danger, though, may be systemic, arising not from any single agent failing but from their interactions, where agents could interact in unforeseen ways or even collude to produce destabilising outcomes. This necessitates a dedicated focus on “emergent governance,” a discipline concerned not with individual agents but with the connections and flows between them.

Current governance, focused on audit trails and static reports, is insufficient. We need agentic governance embedded within the architecture, guardrails and validation mechanisms inside each agent, and a system of “police” or “auditor” agents monitoring interactions across the ecosystem.
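To make this concrete, the two-layer idea can be sketched in a few lines of Python. This is a minimal, illustrative sketch, not any vendor's implementation: the `guardrail` check, the `Action` record, the `Auditor` class, and the numeric thresholds are all hypothetical names and values chosen for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """A single proposed agent action (hypothetical schema)."""
    agent_id: str
    kind: str       # e.g. "trade" or "refund"
    amount: float

def guardrail(action: Action, max_amount: float = 10_000.0) -> bool:
    """Per-agent guardrail: validate an action before it executes.
    Thresholds here are illustrative, not real policy."""
    return action.kind in {"trade", "refund"} and 0 < action.amount <= max_amount

@dataclass
class Auditor:
    """'Auditor' agent: watches the ecosystem-wide action stream for
    emergent patterns no single agent's guardrail can see."""
    log: list = field(default_factory=list)

    def observe(self, action: Action) -> list:
        self.log.append(action)
        alerts = []
        # Example systemic check: aggregate exposure across all agents
        # for this action kind, regardless of who issued each action.
        total = sum(a.amount for a in self.log if a.kind == action.kind)
        if total > 50_000.0:
            alerts.append(f"aggregate {action.kind} exposure exceeds limit")
        return alerts
```

The point of the sketch is the division of labour: the guardrail rejects individually bad actions inside each agent, while the auditor flags interactions that are only visible in aggregate, such as many agents independently building the same exposure.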

AI’s new mandate: active data sovereignty is the strategic differentiator

When it comes to sovereignty, the focus is also shifting from simple data residency, where data is stored, to active data sovereignty and governance. True sovereignty means controlling not just the location of data at rest, but also where and how it is processed. Can you ensure that a query on EU customer data is computed only within an EU jurisdiction? This requires a platform capable of enforcing granular policies across data, compute, and the AI models themselves. It also means avoiding dangerous concentration risk by ensuring portability across on-premises and multi-cloud environments, a key requirement under regulations like DORA.
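One way to picture "controlling where data is processed, not just stored" is a routing policy that refuses any compute region outside a data class's permitted jurisdictions. The sketch below is an assumption-laden illustration: the policy table, region names, and data classifications are invented for the example and do not correspond to any real platform's API.

```python
# Hypothetical policy: data classifications mapped to the jurisdictions
# in which they may be processed (all names illustrative).
RESIDENCY_POLICY = {
    "eu_customer_pii": {"eu-west-1", "eu-central-1"},
    "uk_customer_pii": {"uk-south-1"},
    "public_market_data": {"eu-west-1", "uk-south-1", "us-east-1"},
}

def allowed_regions(data_class: str) -> set:
    """Regions where this data class may be processed; empty if unknown."""
    return RESIDENCY_POLICY.get(data_class, set())

def route_query(data_class: str, candidate_regions: list) -> str:
    """Pick the first candidate compute region permitted for this data
    class; refuse outright rather than silently process out of
    jurisdiction."""
    for region in candidate_regions:
        if region in allowed_regions(data_class):
            return region
    raise PermissionError(f"no compliant region for {data_class}")
```

The design choice worth noting is the hard failure: when no compliant region is available, the query is rejected rather than routed to the cheapest or fastest infrastructure, which is the behaviour sovereignty rules effectively require.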

The platform imperative

This brings us to the core thesis: the breakneck pace of AI innovation, particularly in the agentic realm, will only be sustainable if built upon a foundation of operational resilience and integrated governance. Financial institutions cannot afford a “Wild West” of disparate AI tools, models, and agents scattered across siloed environments. Such fragmentation leads to unmanageable complexity, invisible systemic risks, and an inability to demonstrate control to regulators.

The true strategic differentiator, therefore, will be an open hybrid cloud platform that delivers unified control, providing the visibility to manage, observe, and govern all AI workloads, whether agentic or traditional, across any infrastructure.

Safety cannot be an afterthought; it must be embedded directly into the platform’s fabric, integrating governance, security, and compliance controls (capabilities akin to those offered by specialised guardrail technologies) from the ground up.

Finally, it must grant strategic optionality: freedom from vendor lock-in that empowers institutions to place and move data and AI workloads dynamically, based on evolving sovereignty requirements, cost, and performance needs.
