Finance Derivative

How financial institutions can balance LLM innovation with regulation


Dmitry Borodin, Head of Decision Analytics at Creditinfo

1. How can financial institutions face the challenge of ensuring LLM outputs are accurate, explainable, and auditable for regulatory and compliance purposes?

To meet regulatory expectations around accuracy, explainability, and auditability, financial institutions should adopt a layered validation and governance strategy. One of the best ways to do this is to benchmark and validate LLM outputs against traditional, well-understood models, as this provides a reference point for consistency and reliability.
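That benchmarking idea can be sketched simply: run the LLM and a traditional baseline model over the same cases and measure how often they agree. The function name, decision labels, and data below are illustrative assumptions, not a description of any specific institution's process.

```python
# Minimal sketch: benchmark an LLM-based decision process against a
# traditional, well-understood baseline model on the same labelled cases.
# All names and data here are hypothetical.

def agreement_rate(llm_decisions, baseline_decisions):
    """Share of cases where the LLM and the baseline model agree."""
    assert len(llm_decisions) == len(baseline_decisions)
    matches = sum(a == b for a, b in zip(llm_decisions, baseline_decisions))
    return matches / len(llm_decisions)

# Example: decisions ("approve"/"decline") from each model on five cases.
llm_out = ["approve", "decline", "approve", "approve", "decline"]
baseline_out = ["approve", "decline", "decline", "approve", "decline"]

rate = agreement_rate(llm_out, baseline_out)
print(f"Agreement with baseline model: {rate:.0%}")  # 4 of 5 cases agree
```

In practice the comparison would cover far more cases and richer metrics, but a simple agreement rate already gives the reference point for consistency the answer describes.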

As part of this, they should keep detailed audit logs that capture prompts, outputs, and decision rationale. Regular testing in sandboxed environments is also imperative to uncover anomalies and refine performance before wide-scale deployment.
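The audit-log practice above can be sketched as a structured, timestamped record per LLM interaction. The field names and file name are illustrative assumptions; a real deployment would add access controls, retention policies, and tamper-evident storage.

```python
# Minimal sketch of the audit-log idea: capture prompt, output, and
# decision rationale for every LLM call. Field and file names are
# illustrative assumptions, not a prescribed schema.
import json
from datetime import datetime, timezone

def log_llm_call(prompt, output, rationale, log_file="llm_audit.jsonl"):
    """Append one structured, timestamped record per LLM interaction."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
        "rationale": rationale,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

entry = log_llm_call(
    prompt="Summarise applicant risk factors",
    output="Two late payments in the last 12 months",
    rationale="Used to support, not replace, analyst review",
)
```

Storing one JSON line per call keeps the log both machine-searchable for regulators and easy to replay in sandboxed testing.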

2. How can financial institutions balance the transformative power of LLMs with the regulatory and compliance requirements that govern financial services?

Financial institutions should take a phased, risk-based approach when implementing LLMs, aligning innovation with regulatory expectations.


It’s important to start with low-risk, high-value use cases within controlled environments. This allows financial institutions to understand their behaviour, refine applications, and ensure LLM outputs meet operational and compliance standards. In parallel, during the early stages of testing and implementation, it’s essential to establish robust governance frameworks so people have clear guidelines to follow.

Moreover, institutions must proceed with caution and treat LLMs as intelligent assistants, not autonomous agents. Consistent human oversight is essential for high-risk decisions, especially in financial services, where trust and clarity are critical to brand loyalty and user confidence.

3. Beyond customer service and administrative tasks, where do you think LLMs can add the most value in finance?

Beyond customer service and document processing, much of LLMs' potential lies in enhancing internal risk and operational functions. For example, they can accelerate the creation of model documentation, assist in validation workflows and flag anomalies in transaction patterns – enabling faster escalation and resolution by humans.
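As a stand-in for the anomaly-flagging workflow described above, a simple statistical rule illustrates the "flag for human escalation" pattern: mark transactions that deviate sharply from an account's recent behaviour. The threshold and statistic are arbitrary assumptions for illustration, not a production rule, and say nothing about how any specific LLM-assisted pipeline works.

```python
# Illustrative sketch only: flag transaction amounts that deviate sharply
# from the series mean, so a human reviewer can escalate and resolve them.
# Threshold and statistic are arbitrary assumptions.

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of amounts more than `threshold` standard
    deviations away from the mean of the series."""
    n = len(amounts)
    mean = sum(amounts) / n
    variance = sum((x - mean) ** 2 for x in amounts) / n
    std = variance ** 0.5
    if std == 0:
        return []
    return [i for i, x in enumerate(amounts) if abs(x - mean) / std > threshold]

transactions = [120, 95, 110, 105, 98, 5000, 102]
print(flag_anomalies(transactions, threshold=2.0))  # [5] — the 5000 outlier
```

The key design point matches the answer: the system only surfaces candidates; escalation and resolution stay with humans.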

Another key area is internal knowledge management. LLMs can help navigate complex internal systems through natural language queries, dramatically improving information retrieval. They can also help institutions to prepare audits by summarising model behaviour, flagging inconsistencies, and surfacing compliance gaps.

These behind-the-scenes applications can drive efficiency and free humans to focus on work that drives value for customers.

4. How do you see the evolution of human-AI collaboration in finance?

Human-AI collaboration in finance is shifting. In recent years, there has been a race to adopt AI for everything from onboarding to customer support. However, it has recently become apparent that AI on its own can't support high-stakes interactions with the same level of judgement, empathy, or contextual understanding as a human. Recent examples, like Klarna's rollback of its end-to-end AI-driven customer service model due to declining quality, highlight the limitations of fully autonomous systems in client-facing roles.

Therefore, financial institutions should focus on building systems where AI works behind the scenes to enhance, not replace, the work of humans. Ultimately, advisors and bankers should use AI to gain insights, automate analysis and streamline workflows, but humans should always be at the centre of complex decision making and client relationships.

5. Looking to the future, what kind of infrastructure and talent do you think will be required for financial institutions to successfully use and implement LLMs at scale?

In the future, institutions will need to invest in both robust infrastructure and specialised talent to successfully leverage LLMs at scale. From an infrastructure perspective, scalable compute environments are necessary to meet the demands of LLM training, while secure data pipelines are essential to ensure models are fuelled by well-governed, compliant data.

But infrastructure alone is not enough. The success of LLMs will depend on the team using them. Therefore, institutions need to have the right people in the right roles. Machine learning engineers, risk experts, domain-savvy model validators, and legal and compliance teams that specialise in AI regulation will all be crucial to LLM deployment. With the right tech and people in place, LLMs can scale safely without adding unnecessary risk.
