AI in banking needs to be ‘explainable’

Richard Shearer, CEO of Tintra PLC

 

In the world of banking, AI is capable of making decisions free from the errors and prejudices of human workers – but we need to be able to understand and trust those decisions.

This growing recognition of the importance of ‘Explainable AI’ (XAI) isn’t unique to the world of banking: it reflects a principle that animates discussion of AI as a whole.

IT and communications network firm Cisco has recently articulated a need for “ethical, responsible, and explainable AI” to avoid a future built on un-inclusive and flawed insights.

It’s easy to envisage this kind of future unfolding: in early February, Google’s DeepMind revealed that its AlphaCode system can write computer programs at a competitive level – and if we can’t spot flaws and errors at this stage, a snowball effect of automated, sophisticated, but misguided AI could start to dictate all manner of decisions, with worrying consequences.

In some industries, these consequences could be life-or-death. Algorithmic interventions in healthcare, for example, or the AI-based decisions made by driverless cars need to be completely trustworthy – which means we need to be able to understand how such systems arrive at their decisions.


Though banking-related AI may not capture the imagination as vividly as a driverless car turned rogue by its own artificial intelligence, the consequences of opaque, black-box approaches are no less concerning – especially in the world of anti-money laundering (AML), in which biased and faulty decision-making could easily go unnoticed, given the prejudices that already govern that practice.

As such, when AI is used to make finance and banking-related decisions that can have ramifications for individuals, organisations, or even entire markets, its processes need to be transparent.

 

Explaining ‘explainable’ AI

To understand the significance of XAI, it’s important to define our terms.

According to IBM, XAI is “a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms.”

Such methods are increasingly necessary as AI capabilities continue to advance.

Those outside the sphere of this technology might assume that the data scientists and engineers who design and create these algorithms can understand how their AI makes its decisions, but this isn’t necessarily the case.

After all, AI is – as a rule – employed to perform and exhibit complex behaviours and operations; outperforming humans is therefore a sought-after goal on the one hand, and an insidious risk on the other – hence the need for interpretable, explainable AI.
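To make this concrete, the short sketch below shows one simple route to interpretability: a linear risk model whose learned weights can be read directly. It is purely illustrative – the feature names, synthetic data, and scikit-learn model are assumptions made for the sake of the example, not a description of any particular bank’s system.

```python
# Minimal sketch of an "interpretable by design" risk model.
# All feature names and data are hypothetical and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

feature_names = ["txn_amount_zscore", "cross_border", "new_counterparty", "velocity_24h"]

# Synthetic transactions and synthetic "flagged" labels, for illustration only
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 1).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient shows how strongly a feature pushes the risk score up or down,
# so a reviewer can see why the model leans the way it does.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>20}: {coef:+.3f}")
```

In a model like this, an analyst can see at a glance which features drive the risk score; the trade-off is that more powerful black-box models need dedicated explanation methods layered on top of them.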

There are many business cases to be made for the development of XAI: the Royal Society points out that interpretability in AI systems helps ensure that regulatory standards are maintained, system vulnerabilities are assessed, and policy requirements are met.

However, the more urgent thread running throughout discussions of XAI is the ethical dimension of understanding AI decisions.

The Royal Society points out that achieving interpretability safeguards systems against bias; PwC names “ethics” as a key advantage of XAI; and Cisco points to the need for ethical and responsible AI in order to address the “inherent biases” that can – if left unchecked – inform insights that we might be tempted to act upon uncritically.

This risk is especially urgent in the world of banking, and in AML in particular.

 

Bias – eliminated or enhanced?

Western AML processes still involve a great deal of human involvement – and, crucially, human decision making.

This leaves the field vulnerable to a range of prejudices and biases against people and organisations based in emerging markets.

On the face of it, these biases would appear to be rooted in risk-averse behaviours and calculations – but, in practice, the result is an unsophisticated and sweeping set of punitive hurdles that unfairly inconvenience entire emerging regions.

Obviously, this set of circumstances seems to be begging for AI-based interventions in which prejudiced and flawed human workers are replaced with the speed, efficiency, and neutral coolness of calculation that we tend to associate with artificial intelligence.

However, while we believe this approach to diversity and fairness is absolutely the future of AML processes, it’s equally clear that AI isn’t intrinsically less biased than a human – and, if we ask an algorithm to engage with formidable amounts of data and forge subtle connections to determine the AML risk of a given actor or transaction, we need to be able to trust and verify its decisions.

That, in a nutshell, is why explainable AI is so necessary in AML: we need to ensure that AI resolves, rather than repeats the issues that currently characterise KYC/AML practices.
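In practice, ‘trust and verify’ means being able to explain individual decisions, not just the model as a whole. The sketch below assumes the same kind of simple linear risk model as the earlier illustration: each feature’s contribution to a single transaction’s score is just its weight multiplied by its value, so an analyst can audit – and, if necessary, challenge – any individual flag. All names and numbers are hypothetical.

```python
# Minimal sketch of a per-decision explanation for a linear risk model.
# Hypothetical weights and transaction values, purely illustrative.
import numpy as np

feature_names = ["txn_amount_zscore", "cross_border", "new_counterparty", "velocity_24h"]
coefficients  = np.array([1.8, 0.4, 0.2, 0.9])   # hypothetical learned weights
intercept     = -1.2

transaction = np.array([2.1, 1.0, 0.0, 1.6])     # one hypothetical transaction

# For a linear model, each feature's contribution is weight * value,
# and the contributions add up to the final score (in log-odds).
contributions = coefficients * transaction
score = intercept + contributions.sum()

print(f"risk score (log-odds): {score:+.2f}")
for name, contrib in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
    print(f"  {name:>20}: {contrib:+.2f}")
```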

In response to the ethical and societal impact that can be caused by AI systems, several initiatives have arisen which aim to guide and support companies in the development of trustworthy AI.

One prominent initiative is the European Union’s Ethics Guidelines for Trustworthy AI, which put forward seven key requirements that an AI system must meet to be deemed ‘trustworthy’. One of these, entitled ‘diversity, non-discrimination and fairness’, states that ‘unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination’.

As we have previously highlighted, AI systems are, in Verbeek’s terms, ‘behaviour-guiding technologies’. They do not simply reflect society, but actively alter it through the decisions they make. The design and training of AI systems capable of highly complex decision-making must therefore be undertaken with great responsibility – and to do so, we must be thorough and considered in our approach throughout the process, from design and training to deployment and monitoring.

 

Transparency and trust

The specific method used to achieve explainable AI in AML isn’t as important as the drive to ensure that we don’t place all our eggs in a potentially inscrutable basket: any AI we use to eliminate prejudice needs to have trust, confidence, and transparency placed at the heart of its calculations.

If we don’t put these qualities first, the ‘black box’ of incomprehensible algorithms may well continue to put a ‘black mark’ by the names of innocent organisations whose only crime is to exist in what humans and AI falsely perceive to be the ‘wrong place.’
