By Adam Pettman, Head of AI & Innovation at 2i
Imagine a bank’s computer system silently rejecting mortgage applications from an entire neighbourhood without explanation or approving a million-pound loan for a customer with multiple defaults. These aren’t hypothetical scenarios; they represent what happens when sophisticated technology processes poor-quality data. In financial services, this doesn’t just slow progress—it magnifies errors and introduces systemic risk throughout operations.
The adoption gap: investment vs retail banking
Investment banks have long embraced smart technology, using it to spot market patterns invisible to human traders. Their approach is straightforward: if it makes money, it works—even if humans can’t fully understand why.
Retail banking operates under entirely different constraints. In this highly regulated environment, automated decisions face relentless scrutiny. A single flawed lending judgment might trigger regulatory penalties or unfairly exclude qualified applicants. While financial startups forge ahead, traditional banks remain stuck because of scattered data, regulatory concerns, and outdated systems designed before the smartphone existed.
As research from the Turing Institute confirms, this “garbage in, garbage out” data problem threatens to undermine technological progress across the sector. Without fixing fundamental data quality issues, banks risk creating sophisticated mistake multipliers rather than value generators.
The three data pillars you cannot ignore
Speaking the same language
When customer information exists across disconnected systems in different formats, automation becomes an expensive guessing game. Consider what happens when a system mistakes monthly interest payments for annual salary—it doesn’t just make a simple error; it potentially approves unsustainable loans or flags legitimate transactions as suspicious.

Banks that establish a single source of truth using consistent data standards don’t merely improve accuracy; they also significantly reduce compliance risk by ensuring their reports withstand regulatory examination.
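To make “speaking the same language” concrete, here is a minimal sketch of a canonical-schema check in Python. The field names, units and plausible ranges are illustrative assumptions, not a real bank’s data dictionary; the point is that a monthly figure loaded into an annual field surfaces as an out-of-range value before any automated decision relies on it.

```python
# Minimal sketch of a canonical-schema check. Field names, units and
# thresholds are illustrative assumptions, not a real bank schema.

CANONICAL_SCHEMA = {
    # field: (expected unit, plausible range for a retail customer)
    "annual_salary_gbp": ("GBP/year", (1_000, 10_000_000)),
    "monthly_interest_gbp": ("GBP/month", (0, 50_000)),
}

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality issues for one customer record."""
    issues = []
    for field, (unit, (low, high)) in CANONICAL_SCHEMA.items():
        if field not in record:
            issues.append(f"missing field: {field}")
            continue
        value = record[field]
        if not isinstance(value, (int, float)):
            issues.append(f"{field}: expected a number, got {type(value).__name__}")
        elif not low <= value <= high:
            # A monthly figure landing in an annual field (or vice versa)
            # usually shows up as an out-of-range value here.
            issues.append(f"{field}: {value} outside plausible range {low}-{high} ({unit})")
    return issues

# Example: a monthly interest figure mistakenly loaded as annual salary.
print(validate_record({"annual_salary_gbp": 350, "monthly_interest_gbp": 350}))
```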
Practice without the risk
Privacy regulations create a significant challenge: banks need vast amounts of customer data for testing but face severe penalties for misusing that information. Synthetic data—artificial information that mirrors real patterns without exposing sensitive details—offers a compelling solution.
Think of synthetic data as your sandbox before trying the real thing. If you’re nervous about using your customer data, why not practise with a modelled version of it first? This approach enables banks to safely test new systems without endangering real customer information.
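As a rough illustration of the concept, the sketch below uses one deliberately simple synthetic-data technique: fit a Gaussian to the numeric columns of a customer table and sample artificial rows that preserve its means and correlations. The columns and figures are invented for the example, and production synthetic-data tooling applies far stronger privacy safeguards than this.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a real customer table (income, credit score, loan amount);
# every number here is invented for the illustration.
real = rng.multivariate_normal(
    mean=[35_000, 620, 12_000],
    cov=[[9e7, 3e4, 4e7],
         [3e4, 2.5e3, 2e4],
         [4e7, 2e4, 6e7]],
    size=5_000,
)

# "Fit" step: estimate the joint distribution from the real data.
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)

# "Generate" step: draw brand-new rows with the same statistical shape,
# containing no actual customer's record.
synthetic = rng.multivariate_normal(mu, sigma, size=5_000)

print("real means:     ", real.mean(axis=0).round(0))
print("synthetic means:", synthetic.mean(axis=0).round(0))
```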
Avoiding built-in blind spots
Automated systems in financial services can unintentionally perpetuate existing biases. Systems trained on past lending decisions don’t simply learn patterns; they absorb historical prejudices, potentially denying services to certain groups at higher rates.
Consider an investment bank whose customer base primarily consists of older wealthy men. Creating tools exclusively from this data might seem logical: after all, these are the current customers. But it bakes the existing demographic skew into every model, which will then perform worst for precisely the customers the bank hopes to attract next. Forward-thinking institutions are using more diverse data and implementing transparent decision processes.
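One practical starting point is a disparate-impact spot-check on historical decisions, as in the sketch below. The group labels are hypothetical and the 0.8 “four-fifths” threshold is a widely used rule of thumb rather than a regulatory standard; a real audit would add proper statistical testing and segment by each protected characteristic in turn.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok  # True counts as 1, False as 0
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group approval rate to the highest."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical lending history: group A approved 80%, group B 55%.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 55 + [("B", False)] * 45)

ratio, rates = disparate_impact(history)
print(rates)                         # {'A': 0.8, 'B': 0.55}
print(f"impact ratio: {ratio:.2f}")  # 0.69, below the 0.8 rule of thumb
```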
From theory to practice: what you can do now
While many traditional banks remain hesitant to embrace new technology, practical steps can begin today without major risk:
- Start with practice data
Instead of risking customer information on unproven concepts, use artificially generated data to test your approach first. JP Morgan research shows banks are already investing significantly in data cleansing with this goal in mind.
- Focus on human-computer teamwork
While automation offers powerful capabilities, human expertise remains essential for high-stakes decisions. Begin with scenarios where technology supports rather than replaces human judgment.
- Look for warning signs to help customers
Imagine spotting subtle shifts in a customer’s financial behaviour: switching to lower-cost groceries, making credit enquiries, showing income changes. Rather than waiting for missed payments, you could proactively offer temporary mortgage adjustments, preventing defaults and preserving the relationship. A simple rule-based version of this idea is sketched after this list.
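As a rough sketch, a first version of this monitoring can be a handful of transparent rules. The signal names and thresholds below are assumptions invented for the example, not a production risk model; the point is that explainable flags can trigger a human conversation rather than an automated decision.

```python
def early_warning_flags(customer: dict) -> list[str]:
    """Return human-readable warning signs for one customer snapshot.

    All field names and thresholds are illustrative assumptions.
    """
    flags = []
    if customer.get("grocery_spend_change_pct", 0) <= -20:
        flags.append("sharp shift to lower-cost groceries")
    if customer.get("credit_enquiries_90d", 0) >= 3:
        flags.append("multiple recent credit enquiries")
    if customer.get("income_change_pct", 0) <= -15:
        flags.append("notable drop in income")
    return flags

at_risk = {
    "grocery_spend_change_pct": -35,
    "credit_enquiries_90d": 4,
    "income_change_pct": -20,
}

flags = early_warning_flags(at_risk)
if flags:
    print("Offer proactive support before any payment is missed:")
    for flag in flags:
        print(" -", flag)
```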
Beyond the risk narrative
Historically, justifying investment in data quality has been challenging. Data teams often operated from small back offices, focusing primarily on regulatory compliance. But as technology becomes a revenue driver rather than just a cost-control mechanism, data quality is finally getting the investment attention it deserves.
For financial leaders, the path forward is clear: invest in data quality now or watch competitors capture the market with more responsive, personalised offerings. Those that transform their data foundations won’t merely deploy better technology; they’ll reshape how they serve customers and manage risk. By harnessing synthetic data to trial AI in parallel, organisations can get AI-ready while getting their data ready.
The technology revolution in finance isn’t about complex algorithms; it’s about creating a reliable data foundation that keeps you in control while confidently delivering innovations that benefit your institution and your customers.