How Financial Institutions Can Embrace AI without Losing Sight of Governance

By Laura Wenzel, Global Marketing and Insights Director, iManage

Across knowledge‑work organisations of many types, AI is making serious inroads: new global research shows AI is now used widely for natural‑language document search, classification, summarisation, and knowledge retrieval, amongst other activities.

Unfortunately, the same research shows that governance still lags: one‑third of these organisations globally have already experienced a policy‑impacting incident from unregulated or public AI tools, and nearly 30% have halted AI projects or delayed adoption because of security concerns.

Finance feels these dynamics more acutely than most industries, given that it manages highly sensitive client, transaction, and market data under strict regulatory regimes – and often within complex, long‑lived document estates.

Precisely because they operate in this type of environment – and have a long history of securely doing so – financial services firms actually display a greater degree of confidence in their ability to safely deploy AI than their peers in other verticals, according to the research.


While this confidence can help encourage new AI deployments, financial services firms need to make sure it doesn’t cause them to overlook foundational areas that may actually require their attention. Specifically, they need to make sure that their information architecture – the foundation for their AI activities – is in good shape if they want to successfully embrace AI without losing sight of proper governance.

Fixing the foundation

There’s good reason to think that there are a few cracks in the foundation of your typical finance firm. The research reveals that end users at financial services firms spend more than two hours a day searching for documents – statistically more than their peers in other verticals. That much time spent hunting for valuable information points to data fragmentation, information silos, and content sprawl.

These are the types of “foundational cracks” that AI will only amplify once it is brought into the picture. So, what steps should financial services firms take to shore things up and make sure they’re working from a solid base?

As a first step, financial services institutions should establish a centralised repository for their data, giving AI a single source of truth to draw upon.

Crucially, this repository needs to be managed and governed. This governance layer needs to go beyond access rights and permissions to include retention and disposition policies, to ensure the organisation is staying compliant with any regulations or client guidelines regarding data. Equally important, there should be a process that defines what “good” data looks like so that the system doesn’t become bogged down with redundant, obsolete, or trivial data.
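To make the retention and disposition point concrete, here is a minimal sketch of how such rules might be encoded and checked. The document classes and retention periods are purely illustrative assumptions – a real schedule would come from the firm’s records‑management policy and applicable regulations.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical retention periods (in years) per document class; illustrative only.
RETENTION_YEARS = {
    "client_record": 7,
    "transaction": 5,
    "marketing": 2,
}

@dataclass
class Document:
    doc_id: str
    doc_class: str
    created: date

def is_due_for_disposition(doc: Document, today: date) -> bool:
    """Return True when a document has passed its retention period."""
    years = RETENTION_YEARS.get(doc.doc_class)
    if years is None:
        # Unknown class: keep the document and flag it for human review;
        # never auto-delete what the policy doesn't cover.
        return False
    return today >= doc.created.replace(year=doc.created.year + years)

old = Document("D-1", "marketing", date(2020, 1, 15))
print(is_due_for_disposition(old, date(2025, 6, 1)))  # True
```

The key design choice is the default: anything the policy does not recognise is retained and escalated, rather than silently disposed of.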

As part of the governance lifecycle, there should be a workflow that continuously reviews AI model outputs for accuracy, transparency, and traceability. That means building ongoing quality control into daily operations – a structured process that ensures AI‑generated work consistently meets regulatory requirements and client expectations, rather than simply assuming everything stays within proper boundaries and hits the right marks.
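One way to build that quality control into daily operations is a simple review gate: every AI‑generated item must carry the metadata needed for traceability before it is released. The field names below are hypothetical assumptions, not a prescribed schema.

```python
# Metadata an AI-generated record must carry before release (illustrative names).
REQUIRED_FIELDS = {"model_version", "prompt_id", "source_documents", "reviewer"}

def review_output(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes the gate."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
    if not record.get("source_documents"):
        problems.append("no source documents cited - output is not traceable")
    if record.get("reviewer") is None:
        problems.append("no human reviewer recorded")
    return problems

record = {
    "model_version": "v1.3",
    "prompt_id": "P-42",
    "source_documents": ["doc-991"],
    "reviewer": "j.smith",
}
print(review_output(record))  # []
```

A gate like this can run automatically on every output, turning “continuous review” from an aspiration into a checkable step in the pipeline.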

More broadly, firms should publish an enterprise‑wide AI usage policy that makes safe behaviour the default by defining approved versus restricted tool categories, permitted use cases, and prohibited data classes. This type of usage policy should also specify when a human‑in‑the‑loop is required to check outputs, particularly for any high-risk workflows.
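An enterprise AI usage policy of this kind can itself be encoded so that requests are checked against it automatically. The sketch below is one possible encoding; all tool names, data classes, and workflow names are hypothetical.

```python
# Illustrative policy: approved vs restricted tools, prohibited data classes,
# and workflows that require a human in the loop. All names are hypothetical.
POLICY = {
    "approved_tools": {"internal-assistant", "doc-summariser"},
    "restricted_tools": {"public-chatbot"},
    "prohibited_data": {"client_pii", "mnpi"},  # mnpi = material non-public information
    "human_in_loop_workflows": {"client_advice", "regulatory_filing"},
}

def check_request(tool: str, data_classes: set[str], workflow: str) -> tuple[bool, str]:
    """Evaluate a proposed AI use against the policy; returns (allowed, reason)."""
    if tool in POLICY["restricted_tools"]:
        return False, f"tool '{tool}' is restricted"
    if tool not in POLICY["approved_tools"]:
        return False, f"tool '{tool}' is not on the approved list"
    blocked = data_classes & POLICY["prohibited_data"]
    if blocked:
        return False, f"prohibited data classes: {sorted(blocked)}"
    if workflow in POLICY["human_in_loop_workflows"]:
        return True, "allowed - human review of outputs required"
    return True, "allowed"

print(check_request("internal-assistant", {"market_data"}, "research"))
```

Note that the default is deny: a tool not explicitly approved is rejected, which is what makes safe behaviour the default rather than the exception.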

By putting guardrails around their AI and getting their foundational information architecture in order in this way, financial organisations can ensure the high-level governance and compliance that is necessary for safe AI adoption.

Creating a safe and secure path forward

“Governance” and “AI adoption” aren’t at odds with one another. Quite the contrary: governance is the strategic enabler that makes innovative AI deployments possible. But that governance is only possible when financial services firms take the time to fix the foundation that their AI rests upon. When they do so, they allow themselves to safely and securely embrace AI for a variety of use cases, obtain a competitive advantage, and – ultimately – drive better business outcomes.
