Are Financial Services Firms Ready to Start Using Generative AI?

By Alex Smith, Global Product Lead – Knowledge, Search, AI, iManage


With the emergence of generative AI in the past year, companies across a variety of sectors are suddenly faced with the question of whether to jump in with both feet and find ways to leverage this new technology in their operations, or steer clear of it altogether.

The financial services space is no different. But asking whether financial services firms should be using generative AI is a bit like putting the cart before the horse. What they really should be asking is whether they’re ready to start using generative AI – and how they can tell.

A diversity of data

Data is the lifeblood of any AI. The large language models (LLMs) that underlie generative AI need to be trained on massive amounts of it in order to answer questions or generate content when a user enters a prompt.

In one way, the financial services space is well-positioned to supply this data. The industry has done a significant amount of work over the years to embrace standardised protocols like the Open Banking Standard that make the data in its systems more open and accessible. Firms also have a good number of structured data sets as a starting point; ISDA (https://www.isda.org/about-isda/) Master Agreements, for example, produce highly structured data.

On the other hand, at the core of many of these firms – particularly in the legal and regulatory departments – there is a lot of paper, containing mostly unstructured data.

Every loan, derivative, and commercial mortgage is still negotiated by law firms on behalf of legal and business teams, and those negotiations produce documentation around the financial instruments involved. Within those documents and contracts lies a host of potential risk, security, and compliance issues.

The question then becomes: Are financial services firms able to effectively look into those documents and contracts, truly understand them, and ascertain where potential risk lies? A previous wave of AI tried to crack this problem and had limited uptake. Is there room for generative AI to make a bigger dent – and if so, will it succeed where other AI has failed?

What does “good” look like?

If financial services firms hope to deploy generative AI to gain a better understanding of what lies within their documents and contracts, they’ll need to take a careful and considered approach. The problem isn’t as simple as using a ChatGPT-style interface to ask, “What risk lies in my North American collateralised debt obligations?”

In order to function properly and generate valuable answers, an LLM needs to be trained on the proper data. The first order of business, then, is identifying what the trusted data sets within the organisation are, as well as where those sets live. To the extent that an organisation keeps important files and documents in a centralised location, like a document management system (DMS), that trusted data will be much easier to find.
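
To make that concrete, a firm’s first pass might be nothing more than pulling document metadata out of the DMS for the relevant practice areas. The sketch below assumes a hypothetical REST endpoint, query parameters, and response shape; any real DMS will expose its own API.

```python
# Minimal sketch: enumerate candidate documents from a DMS.
# The endpoint, parameters, and field names are hypothetical stand-ins
# for whatever API your document management system actually exposes.
import requests

DMS_URL = "https://dms.example.internal/api/documents"  # hypothetical endpoint

response = requests.get(
    DMS_URL,
    params={"practice_area": "derivatives", "doc_type": "agreement"},
    headers={"Authorization": "Bearer <access token>"},
    timeout=30,
)
response.raise_for_status()
candidates = response.json()  # assumed: a list of document metadata records

print(f"{len(candidates)} candidate documents to assess for trustworthiness")
```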

Training the LLM on all of the documentation within the DMS, however, might overload the model with contradictory or non-germane information. That’s because the DMS will likely contain multiple versions of a negotiated contract, including initial drafts that might not have taken an optimal approach, as well as the opposing side’s drafts.

Far better to train the LLM on a small subset of the data within the DMS. This might be the final approved version of a document – but then again, the final version both sides sign their names to is usually a “middle ground” in which some of the stronger opening positions have already been sanded down, so a separate “best practices” template might provide a better starting point.

For this reason, it’s important that financial services firms have some sort of internal knowledge curation team that is in charge of determining what “good” looks like for any particular financial agreement – and then making those knowledge assets or resources available to the wider team, as well as to any LLMs that need to be trained.
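
In practice, that curation could be as simple as a metadata flag the knowledge team sets, plus a filter that everything feeding the model must pass. A minimal sketch, with assumed field names (“curated”, “status”, “doc_type”) rather than any particular product’s schema:

```python
# Hypothetical curation filter. The metadata fields below are assumptions
# about how a knowledge team might tag its records; substitute your own.

def is_knowledge_asset(doc: dict) -> bool:
    """Keep only documents the curation team has marked as 'good'."""
    if not doc.get("curated"):            # flag set by the knowledge team
        return False
    if doc.get("status") == "draft":      # early and counterparty drafts: out
        return False
    return doc.get("doc_type") in {"best_practice_template", "final_executed"}

documents = [
    {"name": "ISDA draft v1", "curated": False, "status": "draft",
     "doc_type": "agreement"},
    {"name": "CDS best-practice template", "curated": True,
     "status": "approved", "doc_type": "best_practice_template"},
]
training_corpus = [d for d in documents if is_knowledge_asset(d)]
print([d["name"] for d in training_corpus])  # ['CDS best-practice template']
```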

Security still matters

Identifying the assets that can train the generative AI model is just part of the equation, however. Security remains paramount and needs to be properly addressed. After all, financial services firms routinely handle highly confidential deals done on behalf of their clients, and they are held to a similar standard of compliance, privacy, and security as other custodians of privileged information, such as law firms or accountancy firms.

At the same time, if some materials are “locked down” and visible only to certain people in the firm, different people within a financial services organisation might get different answers when they ask generative AI a question like, “What does an ideal Credit Default Swap look like?”

For this reason, firms might want to consider adopting a slightly different security posture for the knowledge assets and best-practice content used to train the LLMs of any generative AI they put to use.
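
One way to square those two concerns is to place curated knowledge assets in their own firm-wide access tier, while client matter documents keep their usual need-to-know controls, and to apply that filter before any content reaches the model. The sketch below illustrates the idea; the permission model and field names are assumptions, not a description of any specific system.

```python
from dataclasses import dataclass

@dataclass
class User:
    user_id: str

def retrievable_docs(user: User, docs: list[dict]) -> list[dict]:
    """Filter the corpus per user before anything reaches the model.

    Client matter documents keep their normal need-to-know ACLs, while
    curated knowledge assets sit in a firm-wide tier.
    """
    visible = []
    for doc in docs:
        if doc.get("tier") == "knowledge_asset":
            visible.append(doc)                     # firm-wide by design
        elif user.user_id in doc.get("acl", []):
            visible.append(doc)                     # standard matter security
    return visible
```

With a filter like this applied up front, everyone draws “ideal agreement” answers from the same curated tier, while confidential matter content stays restricted to the people who are meant to see it.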

Same as it ever was?

Financial services firms still face the same challenge they faced during the last wave of AI interest: the ability to look into their documents, really understand what they contain, and pinpoint where the potential risk lies. Will generative AI be the toolkit that helps them finally crack the problem? It’s still early days, but if firms take the steps to make sure they’re actually ready to deploy the tool before they leap in and start using it, they’ll have a much better chance of success.
