By Manuel Sanchez, Information Security and Compliance Specialist, iManage
As artificial intelligence continues to reshape industries, financial institutions must ask themselves whether they are truly AI-ready. New regulatory frameworks like the EU Artificial Intelligence Act make clear that the burden of compliance falls not only on AI vendors but also on the organisations deploying these technologies.
Financial services organisations looking to roll out AI must address requirements ranging from transparency to risk management and accountability. The multifaceted nature of this mandate makes effective data governance non-negotiable for financial institutions looking to avoid any compliance missteps as they adopt AI – but how best to go about it?
Centralisation helps create the foundation
Financial organisations must begin by centralising their data into a single, secure system, such as a document management system (DMS). This step provides control over the data that fuels AI models. Furthermore, integrated security policies within the DMS can ensure that confidential transactions or client records remain inaccessible to the AI model unless explicitly permitted.
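To make this concrete, here is a minimal sketch of how such a gate might work in principle: a document is released to an AI pipeline only if its sensitivity label is low-risk or it carries an explicit opt-in flag. The labels, flag, and document structure are all hypothetical, an illustration of the pattern rather than any particular DMS’s API.

```python
from dataclasses import dataclass

# Hypothetical sensitivity labels; a real DMS exposes its own metadata scheme.
ALLOWED_FOR_AI = {"public", "internal"}

@dataclass
class Document:
    doc_id: str
    sensitivity: str               # e.g. "public", "internal", "confidential"
    ai_processing_permitted: bool  # explicit per-document opt-in

def eligible_for_ai(doc: Document) -> bool:
    """Release a document to the AI pipeline only if its label is low-risk
    or it has been explicitly cleared for AI processing."""
    return doc.sensitivity in ALLOWED_FOR_AI or doc.ai_processing_permitted

docs = [
    Document("D-001", "internal", False),
    Document("D-002", "confidential", False),  # blocked by default
    Document("D-003", "confidential", True),   # explicitly permitted
]
print([d.doc_id for d in docs if eligible_for_ai(d)])  # ['D-001', 'D-003']
```

The important design choice is that confidential material is excluded by default and only ever included through an explicit, auditable permission.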
Additionally, financial entities must ensure that their data retention policies are clearly defined and adhered to. Do they have a process in place for retaining transaction records, client documentation, or communications for the mandated durations? What protocols exist for archiving or deleting data that is no longer relevant? Demonstrating these practices through auditable processes is essential not only for the EU AI Act but for regulations like GDPR or sector-specific mandates such as MiFID II.
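As a simple illustration of how such a policy might be encoded, the sketch below maps record types to retention periods and derives whether a given record should still be retained. The durations are placeholders only; the binding periods come from the applicable regulation and jurisdiction.

```python
from datetime import date, timedelta

# Placeholder retention periods; actual durations depend on the applicable
# regulation and jurisdiction (e.g. MiFID II record-keeping rules).
RETENTION_YEARS = {
    "transaction_record": 5,
    "client_communication": 5,
    "kyc_documentation": 7,
}

def retention_action(record_type: str, created: date, today: date) -> str:
    """Return 'retain' while inside the mandated window, 'archive_or_delete' after."""
    years = RETENTION_YEARS.get(record_type)
    if years is None:
        return "review"  # unknown types go to manual review, never silent deletion
    expiry = created + timedelta(days=365 * years)  # years approximated for illustration
    return "retain" if today < expiry else "archive_or_delete"

print(retention_action("transaction_record", date(2018, 3, 1), date(2025, 1, 1)))  # archive_or_delete
print(retention_action("kyc_documentation", date(2023, 6, 15), date(2025, 1, 1)))  # retain
```

Logging each decision this function makes, alongside who acted on it, is what turns a written policy into the auditable process regulators expect to see.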
This level of control is especially critical in financial services, where sensitive financial data and personally identifiable information (PII) must be protected from misuse. By maintaining valid, relevant, and legitimate datasets, financial institutions can prevent outdated or non-compliant information from being inadvertently fed to AI systems. This approach has the dual benefit of supporting the AI Act’s mandate for high-quality datasets while also building trust in the outputs of AI-powered tools.
Transparency aids ethical AI
The EU AI Act also places a significant emphasis on transparency, requiring organisations to document how their AI systems function, including the sources of training data. For financial services firms, this transparency is particularly important when AI is used for client risk profiling, fraud detection, or algorithmic trading – all areas where a regulator might want to take a closer look “under the hood” to see exactly how or why AI reached the particular decision that it did.
Having a DMS as the central repository ensures that institutions can precisely point to the datasets used to train their AI models. It eliminates ambiguity around what information the model is drawing upon to “teach” itself and inform its decision-making, providing the transparency that is required for the deployment of ethical AI.
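One lightweight way to make that traceability concrete, sketched below with assumed names, is to write an auditable lineage entry for every model version, listing the dataset identifiers it was trained on together with a tamper-evident fingerprint of that list.

```python
import hashlib
import json
from datetime import datetime, timezone

def lineage_entry(model_version: str, dataset_ids: list[str]) -> dict:
    """Build an auditable record linking a model version to its training datasets.
    The SHA-256 fingerprint makes later tampering with the list detectable."""
    fingerprint = hashlib.sha256(",".join(sorted(dataset_ids)).encode()).hexdigest()
    return {
        "model_version": model_version,
        "trained_on": sorted(dataset_ids),
        "dataset_fingerprint": fingerprint,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical model and dataset identifiers for illustration.
entry = lineage_entry("risk-profiler-2.1", ["DMS-CLIENT-2024Q4", "DMS-TXN-2024Q4"])
print(json.dumps(entry, indent=2))
```

When a regulator asks why a model reached a given decision, records like these are the starting point for the answer.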
Prevention and planning bolster risk management and accountability
The EU AI Act requires AI systems to be secure, resilient, and resistant to adversarial attacks, necessitating both preventative and remedial strategies from financial institutions.
From a preventative standpoint, a zero-trust approach is essential for mitigating the risks associated with AI in financial services. Under zero-trust, no user, device, or application is trusted by default: every request for sensitive client data, internal financial reports, or regulatory filings is authenticated and authorised on its own merits, rather than by virtue of network location or a prior session.
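A minimal sketch of that principle, using hypothetical users, resources, and entitlements, might look like this: every request must carry a verified identity and match an explicit entitlement, and anything not expressly permitted is denied.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    resource: str       # e.g. "client_data", "regulatory_filings"
    purpose: str        # declared reason for access
    mfa_verified: bool  # identity verification succeeded for this session

# Hypothetical entitlements; in practice these come from an IAM or policy engine.
ENTITLEMENTS = {
    ("analyst-42", "client_data"): {"risk_review"},
    ("analyst-42", "internal_reports"): {"reporting"},
}

def authorise(req: AccessRequest) -> bool:
    """Zero-trust check: verify every request on its own merits and
    deny anything that is not explicitly permitted."""
    if not req.mfa_verified:
        return False
    allowed = ENTITLEMENTS.get((req.user_id, req.resource), set())
    return req.purpose in allowed

print(authorise(AccessRequest("analyst-42", "client_data", "risk_review", True)))   # True
print(authorise(AccessRequest("analyst-42", "regulatory_filings", "audit", True)))  # False
```

The default-deny at the end is the essence of the model: the absence of an entitlement means no access, regardless of where the request originated.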
Additionally, ongoing user awareness training should be implemented to combat pervasive threats like phishing and social engineering, which can compromise even robust systems. For financial firms, this training is vital, given the high stakes of data breaches involving client assets or transactional records.
Despite the best preventative measures, breaches can occur. Financial organisations must have a clear, actionable playbook for handling such incidents. This includes escalation procedures, defined roles and responsibilities, and adherence to the reporting timelines mandated by regulations like DORA (the Digital Operational Resilience Act). Such a playbook ensures that institutions can respond swiftly to minimise client impact and regulatory penalties.
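To illustrate how a playbook might operationalise those timelines, the sketch below derives a due date for each reporting stage from the moment an incident is classified as major. The windows shown are illustrative placeholders; the binding deadlines are those set out in DORA and its technical standards.

```python
from datetime import datetime, timedelta

# Illustrative reporting windows only; consult DORA and its technical
# standards for the binding deadlines for each report type.
REPORTING_WINDOWS = {
    "initial_notification": timedelta(hours=24),
    "intermediate_report": timedelta(hours=72),
    "final_report": timedelta(days=30),
}

def reporting_deadlines(classified_at: datetime) -> dict:
    """Derive each report's due time from the moment of classification."""
    return {stage: classified_at + window for stage, window in REPORTING_WINDOWS.items()}

for stage, due in reporting_deadlines(datetime(2025, 1, 10, 9, 0)).items():
    print(f"{stage}: due by {due:%Y-%m-%d %H:%M}")
```

Encoding deadlines this way means the escalation clock starts automatically the moment an incident is classified, rather than depending on someone remembering the rules under pressure.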
However, having a playbook is not enough. Financial firms must regularly review and update their incident management protocols to account for emerging threats, regulatory changes, or operational shifts. Regular reviews by key stakeholders – such as compliance officers, IT leaders, and risk managers – ensure that response strategies remain effective and aligned with current “on the ground” realities.
Strong data governance isn’t optional for AI – it’s essential
By implementing strong governance frameworks, financial firms can enhance the quality, security, and accountability of their data, ensuring compliance with the EU AI Act, DORA, and other relevant regulations.
This structured approach to data governance allows financial services organisations to unlock the transformative potential of AI while fully upholding compliance standards. In a regulatory landscape that is only becoming trickier to navigate, that makes data governance more important than ever for financial services organisations seeking to forge a safe path ahead.