Terence Tse, Professor of Finance at Hult International Business School, and Sardor Karimov, Business Development Manager at Nexus Frontier Tech.
Artificial intelligence (AI) is widely viewed as a technological innovation that will leave no economic sector untouched and is thought to be revolutionising most business operations. The excitement is understandable, as the technology can offer a competitive edge, increase efficiency, generate cost reductions, and enable the creation of new products and services. However, AI is a complex technology, and many companies remain anxious about some of the challenges of its adoption, such as data bias and discrimination, data integrity, and job cuts, to mention just a few. Moreover, the deployment of AI can affect many business functions and disrupt existing operational and IT processes. Often, companies' natural response is to stay cautious about purchasing AI solutions. The risk perceptions in the minds of those making these buy decisions therefore matter a great deal. In this article, we would like to share some of our observations of such risks from our work in the financial services industry.
Who is responsible if the technology does not work?
Companies, particularly large ones, tend to have their own in-house innovation teams charged with assessing new technologies and making recommendations about their viability and fit with the business. There are many aspects these teams will consider when assessing an AI solution. Perhaps contrary to conventional wisdom, financial services companies do not really need to be concerned with various oft-mentioned risks such as those related to bias and discrimination. Why? Because they tend to use AI solutions to automate non-customer-facing activities such as document processing. Instead, they are more likely to focus on technological issues. One consequence is that finding an advocate for the technology who will take full responsibility for the purchase can be difficult.
So much of the successful adoption of an AI solution goes beyond the pure technical merits of the solution and requires its integration within existing IT and operational processes: will the AI model work once live data is fed into the system? Will the IT infrastructure be able to handle the AI solution? Will the in-house engineering and data science teams be able to manage and maintain the technology in the long run? There is always the risk that the AI technology does not perform as intended or expected. The uncertainties around the technology's success within the company are so high that in-house teams may be unwilling to bear the full weight of sponsoring the technology, thereby risking their own jobs should its deployment fail for reasons outside their control.
Are you sure the technology would work as intended?
Business leaders are the ones who ultimately make capital expenditure decisions. In many organisations, especially those under public scrutiny or in the public sector, getting the technology right the first time is essential. Failure to get the technology to deliver what was promised can be very damaging to business leaders, who can be seen as incompetent or, worse, as having misused public funds, potentially leading to reputational or even legal damage.
With a solution as technically complex as AI, the leadership's decision depends heavily on the information provided by their technical teams, particularly data scientists. These teams must be knowledgeable about the new solution and able to communicate its benefits and challenges clearly to non-technically versed business leaders. Vendors have an essential role to play here in ensuring that the data scientists understand the technology fully and have detailed information about its business case.
As far as purchasing AI is concerned, issues such as the talent shortage for implementing and supporting the technology, budget considerations, integration with the existing IT infrastructure, and a viable business case are likely to be top of mind. However, there remain many longer-term uncertainties for the data and leadership teams: whether the technology vendor will still exist in five years' time to provide the necessary technical support, how scalable the proposed solution is given the state and nature of the existing IT system, who is going to maintain the onboarded technology going forward, and what that maintenance involves. Looking from this vantage point, unless the decision-makers can somehow be fully confident and comfortable with the technology about to be introduced, there is always the temptation to drop the AI project altogether. In short, no gain, no pain.
Past research and studies have repeatedly warned us about the various risks related to AI technologies. Yet in a business context, it is often not the technology itself but the uncertainties surrounding its adoption that matter. The perceived risks created by these uncertainties can be very real – real enough to discourage a company's uptake of AI. Finding ways to mitigate these risks should be a priority for any business wishing to use AI to create value for itself.