Four AI Predictions for 2023: From the Ashes of the Great Correction, Practical AI Will Rise

Scott Zoldi, Chief Analytics Officer, FICO

There’s no way to sugarcoat it: 2022 was a rough year for businesses. The stock market tanked, interest rates spiked and many thousands of people lost their jobs. Tech companies are retrenching, their once limitless horizons now tempered by hiring freezes and fleeing investors. Specific to artificial intelligence (AI), companies are reconsidering moonshot projects like self-driving cars; Ford and VW exited their joint robotaxi venture, and Alphabet, with its Waymo subsidiary, is under particularly intense heat from activist investor TCI Fund Management. In a letter to Alphabet management, TCI provided a brief glimpse of the obvious: “Unfortunately, enthusiasm for self-driving cars has collapsed and competitors have exited the market.” Ouch.

Welcome to the Great Correction. But what might feel like an unmitigated flameout is actually a correction back to normalcy, nowhere more evident than in more realistic approaches to AI. I’m calling this new pragmatism Practical AI, and I predict this technology will rise in 2023 like a phoenix from the ashes of years of AI irrational exuberance.

Four Predictions for Practical AI

Under the umbrella of practicality, companies will strategically rethink how they use AI, an attitudinal shift that will filter down to implementation, AI and machine learning model management, and governance. Here are my predictions for Practical AI in 2023:

  1. Novelty applications will be out, practical applications will be in: Generative AI has been a big buzzword lately, with slick image generation capabilities grabbing headlines. But the reality is, Generative AI isn’t a new technology; my data science organization at FICO has been using it for several years in a practical way, to generate synthetic data and to do scenario testing as part of a robust AI model development process.

Here’s an example of why we need to focus more on practical uses of Generative AI: Open banking represents a huge revolution in credit evaluation, particularly for the underserved. However, as this new financial channel takes off, the corpus of data needed to build real-time, customer-aware analytics is still lacking. Generative AI can be applied practically to produce realistic, relevant transaction data for developing real-time credit risk decisioning models. This could greatly benefit buy now pay later (BNPL) lenders, which are now exposed to high default rates due to inadequate analytics, jeopardizing open banking’s potential to better serve the underbanked in credit evaluation.
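To make this concrete, here is a minimal sketch of what generating synthetic transaction data for model development can look like. The field names, distributions and default rate are illustrative assumptions for the example, not FICO's actual generative process (which would typically use a learned generative model rather than hand-picked distributions):

```python
import random
from datetime import datetime, timedelta

def synth_transactions(n, seed=42):
    """Generate hypothetical synthetic transaction records for model
    development. All fields and distributions are illustrative assumptions."""
    rng = random.Random(seed)  # seeded for reproducible development datasets
    merchants = ["grocery", "electronics", "apparel", "travel"]
    start = datetime(2023, 1, 1)
    records = []
    for i in range(n):
        records.append({
            "txn_id": i,
            "timestamp": (start + timedelta(
                minutes=rng.randint(0, 60 * 24 * 30))).isoformat(),
            "merchant_category": rng.choice(merchants),
            # log-normal amounts skew toward small purchases, as real
            # transaction data tends to
            "amount": round(rng.lognormvariate(3.0, 1.0), 2),
            # rare positive label, so scenario testing sees realistic imbalance
            "defaulted": rng.random() < 0.05,
        })
    return records

sample = synth_transactions(1000)
```

A generator like this lets a team stress-test a decisioning model against rare scenarios (spending spikes, drifted merchant mixes) before any real open-banking data is available at scale.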


  2. AI and machine learning development processes will become productionalized: Practical AI is incompatible with the modus operandi that many data science teams fall into:
  • Build bespoke AI models, experimenting with new algorithms to maximize performance on-sample
  • Spend inadequate time focused on whether these bespoke models will generalize out of sample
  • Put the bespoke model into production without knowing the consequences with certainty
  • Be faced with clawing back the model, or worse, letting it run with unforeseen and/or unmonitored consequences.
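The generalization problem in the list above can be caught with an automated gate before deployment. Here is a minimal sketch: a model proceeds toward production only when its holdout performance stays close to its training performance. The thresholds and the boolean health signal are illustrative assumptions:

```python
def generalization_gate(train_score: float, holdout_score: float,
                        max_gap: float = 0.05, min_holdout: float = 0.70) -> bool:
    """Return True only if the model may proceed toward production.

    Blocks models whose out-of-sample performance falls too far below
    on-sample performance (overfitting) or below an absolute floor.
    Threshold values here are illustrative, not prescriptive.
    """
    gap = train_score - holdout_score
    return gap <= max_gap and holdout_score >= min_holdout

# A model that memorizes the training sample fails the gate:
overfit_ok = generalization_gate(0.99, 0.71)   # gap of 0.28 is too large
# A stable model passes:
stable_ok = generalization_gate(0.78, 0.75)
```

Encoding the gate as code, rather than leaving it to reviewer judgment, is exactly what "productionalizing the development process" means in practice.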

To achieve production-quality AI, the development processes themselves will need to be stable, reliable and productionalized. This comes back to model development governance, frameworks for which will increasingly be provided and facilitated by new AI and machine learning platforms now entering the market. These platforms will set standards, provide tools and define application programming interfaces (APIs) for properly productionalized analytic models, and deliver built-in capabilities to monitor and support them.

AI governance is a major focus of my work, and I predict that in 2023 we will see AI platforms and tools increasingly become the norm for facilitating in-house Responsible AI development and deployments, providing the necessary standards and monitoring. As a corollary, the Kaggle approach to model development — extracting the highest predictive power from a model, at all costs — will similarly give way to a new Practical AI sensibility coupled with business focus: what’s the best 95% solution? The reality is, 95% is likely sufficient for most AI applications, and in many ways preferred when we put model performance into a larger context of:

  • Model interpretability
  • Ethical AI
  • Environmental, social and corporate governance (ESG) considerations
  • Simplicity of monitoring
  • Ease of meeting regulatory requirements
  • Time to market
  • Excessive cost and risk in complex AI applications.


  3. Proper model package definition will improve the operational benefits of AI: Productionalizing AI includes directly codifying, during the model creation process, how and what to monitor in the model once it’s deployed. Setting an expectation that no model is properly built until the complete monitoring process is specified will produce many downstream benefits, not the least of which is smoother AI operations:
  • AI platforms will consume these enhanced model packages and reduce model management struggles. We will see improvement in model monitoring, bias detection, interpretability and real-time model issue reporting.
  • Interpretability provided by these model packages will yield machine learning models that are transparent and defensible.
  • Rank distillation methods will ensure that model score distribution and behavior detection are similar from model update to model update. This will allow updates to be integrated more smoothly into the existing rules and strategies of the larger AI system.
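A model package of this kind can be as simple as a data structure that refuses to deploy until its monitoring contract is filled in. The sketch below is a hypothetical illustration; the field names and the PSI (population stability index) threshold are assumptions, not a real platform's schema:

```python
from dataclasses import dataclass

@dataclass
class ModelPackage:
    """Hypothetical model package bundling a model with its monitoring
    contract, so no model ships until monitoring is specified."""
    model_id: str
    version: str
    monitored_features: list  # features whose drift will be tracked
    score_bins: list          # expected score distribution, for rank stability
    alert_thresholds: dict    # e.g. {"psi": 0.25} for drift alerts

    def is_deployable(self) -> bool:
        # Incomplete monitoring specification blocks deployment
        return bool(self.monitored_features and self.alert_thresholds)

pkg = ModelPackage("credit_risk", "2.1",
                   ["utilization", "txn_velocity"],
                   [0, 300, 600, 850],
                   {"psi": 0.25})
```

An AI platform consuming such packages can wire up drift detection, bias checks and score-distribution monitoring automatically, because everything it needs is declared at build time rather than reverse-engineered after deployment.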


  4. There will be a handful of enterprise-class AI cloud services: Clearly, not every company that wants to safely deploy AI has the resources to do so. The software and tools required can simply be too complex or too costly to pull together in piece-parts. As a result, only about a quarter of companies have AI systems in widespread production. To address this gigantic market opportunity, I predict that 2023 will see the emergence of a handful of enterprise-class AI cloud services.

Just as Amazon Web Services, Google Cloud and Microsoft Azure are the “Big Three” of cloud computing services, a few top AI cloud service providers will emerge to offer end-to-end AI and machine learning development, deployment, and monitoring capabilities. Readily accessible via API connectivity, these professional AI software offerings will allow companies to develop, execute and monitor their models, while also demonstrating proper AI governance. These same cloud AI platforms could also recommend when to drop down to a simpler model (Humble AI) to maintain trust in decision integrity.
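The Humble AI idea of dropping down to a simpler model can be sketched as a routing rule: when a drift metric on the primary model's inputs exceeds a limit, decisions fall back to a simpler, well-understood model. The function name, the use of PSI as the health signal and the 0.25 limit are all illustrative assumptions:

```python
def humble_decision(primary_score: float, drift_psi: float,
                    fallback_score: float, psi_limit: float = 0.25) -> tuple:
    """Route to the simpler fallback model when population stability
    index (PSI) indicates the primary model's inputs have drifted.

    Returns (score, source) so downstream systems can log which model
    actually made the decision. All names/thresholds are illustrative.
    """
    if drift_psi > psi_limit:
        return fallback_score, "fallback"
    return primary_score, "primary"

# Drifted inputs: trust the simpler model
score, source = humble_decision(0.91, drift_psi=0.40, fallback_score=0.62)
```

Returning the decision source alongside the score keeps the fallback auditable, which matters when the platform also has to demonstrate governance.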

Surely there will also be specialist AI cloud service providers focused on industry domains, including regulatory profiles, providing companies with easy on-ramps to Responsible AI deployments at scale. These AI platforms will provide incredible, industry-specific leverage to accelerate speed-to-market, safely and responsibly.

Where Practical AI Lives: The Corpus AI

Over the past five years or so I’ve been evangelizing the need for Responsible AI practices, which guide us in properly using data science tools to build AI decisioning systems that are explainable, ethical and auditable. These principles are at the heart of an organization’s metaphorical analytic body. But they are not enough. This analytic body, which I call the Corpus AI, is where Responsible AI and Practical AI must be supported by the equivalents of a biological circulatory system, skeletal system, connective tissue and more.
