Finance Derivative

AI, Machine Identities and the Future of Financial Cybersecurity


Andy Parsons, Director of Financial Services and Insurance, CyberArk

Somewhere within your infrastructure sits a service account no one remembers creating. It holds elevated privileges, has not been rotated in years, and connects to systems that touch customer funds. Most financial institutions have hundreds of these accounts, yet few can say with confidence who owns them, what they access or whether they are still needed. With organisations now managing an average of 96 machine identities for every human employee, this is no longer an abstract risk but accumulated technical debt with a live fuse. The rapid rise of autonomous AI is only accelerating the countdown.

AI in the form of machine learning has long supported critical functions across financial services, from fraud detection and risk modelling to portfolio management. What has changed is the scale and autonomy with which these systems now operate. Today, AI interacts continuously with a dense network of applications, services and APIs, each carrying its own machine identity. This expanding layer of the financial technology stack has become a critical and often neglected part of core infrastructure, with 68% of businesses lacking identity security controls for AI. As organisations enter the new year, a sharper question emerges: can your CISO account for every machine identity in the business?

The expanding footprint of machine identities

For years, identity management in financial institutions was a people-centric discipline. The work revolved around workforce entitlements, segregation of duties, identifying toxic combinations and granting the appropriate access to engineers, auditors and compliance teams. Technology underpinned these operations, but the ecosystem itself stayed relatively stable and limited in scope.

This long-standing dynamic, however, has been entirely uprooted over the past few years following a surge in the number of machine identities. These identities support the breadth of operational tasks, including order and execution management systems, data analytics platforms, payment workflows, fraud engines, reporting pipelines and customer applications. Each of these machine identities relies on credentials that must be issued, governed and rotated throughout its lifecycle.
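The rotation requirement above lends itself to a simple automated check. The sketch below is illustrative only: the inventory records, field names and 90-day rotation window are all assumptions, and in practice the data would come from a secrets manager, certificate inventory or cloud IAM API rather than a hard-coded list.

```python
from datetime import date, timedelta

# Hypothetical inventory records: identity name, owning team, and the
# date the credential was last rotated. Real data would be pulled from
# a secrets manager or certificate inventory, not hard-coded.
INVENTORY = [
    {"name": "svc-payments-batch", "owner": "payments-team", "last_rotated": date(2025, 11, 2)},
    {"name": "svc-legacy-report", "owner": None, "last_rotated": date(2019, 3, 14)},
    {"name": "api-fraud-engine", "owner": "fraud-ops", "last_rotated": date(2025, 6, 30)},
]

def overdue_rotations(inventory, today, max_age_days=90):
    """Return the names of identities whose credentials are past the
    rotation window (an assumed 90-day policy)."""
    cutoff = today - timedelta(days=max_age_days)
    return [rec["name"] for rec in inventory if rec["last_rotated"] < cutoff]

stale = overdue_rotations(INVENTORY, today=date(2026, 1, 5))
print(stale)  # → ['svc-legacy-report', 'api-fraud-engine']
```

Even this crude report surfaces the pattern the article describes: a six-year-old service account with no owner sits alongside actively managed credentials, and only a lifecycle check makes it visible.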

Yet many production environments still contain legacy service accounts, old API keys and security certificates without clear ownership. These artefacts introduce avoidable risk and serve to expand the attack surface in an industry that can’t afford to have blind spots.

How AI accelerates capability and consequence


As machine identities proliferate, AI offers powerful ways to manage their growing complexity: analysing vast volumes of identity activity, detecting anomalies and automating routine credential tasks, all of which can significantly streamline security operations.
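To make the anomaly-detection point concrete, the toy example below flags days on which an identity's activity deviates sharply from its baseline. This is a deliberately crude stand-in, assuming a simple standard-deviation test over daily request counts; production systems would use far richer behavioural models.

```python
import statistics

def flag_anomalies(daily_counts, threshold=2.0):
    """Flag indices of days whose activity deviates more than
    `threshold` sample standard deviations from the mean.
    A minimal sketch of behavioural baselining, not a real detector."""
    mean = statistics.mean(daily_counts)
    stdev = statistics.stdev(daily_counts)
    return [i for i, count in enumerate(daily_counts)
            if abs(count - mean) > threshold * stdev]

# A service account that normally makes ~100 calls a day suddenly makes 950.
print(flag_anomalies([100, 98, 103, 101, 99, 950]))  # → [5]
```

The value of automating this at scale is exactly what the article argues: no human team can baseline 96 machine identities per employee by hand.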

These advantages, however, come with unavoidable trade-offs rooted in how AI systems operate. To perform effectively, AI requires broad and deep access to organisational data, increasing the potential blast radius of a breach.

At the same time, advanced persistent threat groups are already developing AI tools that can map organisational structures, identify valuable machine identities and orchestrate credential theft at a scale we’ve never faced. In fact, deep technical knowledge is no longer required: AI closes that gap for adversaries at relatively low effort. Countering this volume with AI on the defensive side can, paradoxically, introduce new forms of risk.

This inherent duality means businesses must strike a careful balance to capture AI’s benefits without amplifying its dangers. When identity-based attacks succeed, as seen in recent UK retail breaches that disrupted operations for weeks, the damage is only magnified if adversaries gain access to the high-privilege credentials AI systems depend on. Achieving this balance requires more than technical controls. It demands deliberate, disciplined AI governance.

Closing the governance gap as machine identities multiply

Most identity governance frameworks aren’t just outdated; if they don’t include machine identities, they are actively misleading boards about risk exposure. Today, as machine identities underpin nearly every transaction, service interaction and compliance process, frameworks must evolve to reflect modern financial operations.

In the age of AI, effective governance begins with four key areas.

Strengthening resilience for the AI era

Financial institutions that take a proactive approach to machine identity governance are better positioned for long-term resilience. This is not simply a cybersecurity issue, but an operational and continuity risk that directly affects service delivery, regulatory compliance and customer trust. Understanding this risk in practical terms is essential.

One way to do so is through a simple test. Ask how many machine identities were created in your environment last month, then ask who owns them. If the answer takes more than a day to produce, or comes back as an estimate, governance is not yet in place. Institutions that succeed in the AI era will not be defined by the sophistication of their models, but by their ability to answer fundamental questions about what is running inside their own environments.
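The test described above can be expressed as a one-page audit query. The sketch below is a hypothetical illustration: the inventory structure and field names are assumptions, and a real implementation would query a CMDB, cloud IAM API or secrets manager rather than an in-memory list.

```python
from datetime import date

# Hypothetical machine-identity inventory. In practice this would be
# assembled from a CMDB, cloud IAM APIs, or a secrets manager.
INVENTORY = [
    {"name": "svc-reconcile", "created": date(2025, 12, 3), "owner": "treasury-ops"},
    {"name": "api-risk-feed", "created": date(2025, 12, 19), "owner": None},
    {"name": "svc-archive", "created": date(2024, 7, 1), "owner": "data-eng"},
]

def monthly_audit(inventory, year, month):
    """Answer the two governance questions: how many identities were
    created in the given month, and which of them have no owner."""
    created = [rec for rec in inventory
               if rec["created"].year == year and rec["created"].month == month]
    unowned = [rec["name"] for rec in created if rec["owner"] is None]
    return len(created), unowned

count, unowned = monthly_audit(INVENTORY, 2025, 12)
print(count, unowned)  # → 2 ['api-risk-feed']
```

If producing this answer requires days of manual reconciliation across teams rather than a query against a maintained inventory, that is the governance gap the article describes.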
