By Jorge Monteiro, CEO of Ethiack
Artificial intelligence has become one of the most powerful forces shaping the financial sector. It is transforming everything from customer experience to fraud detection, algorithmic trading and operational efficiency. But the same technology that’s helping financial institutions innovate is also enabling a new wave of cyber-attacks that are faster, more scalable and far harder to predict.
We are entering a world where AI systems are both attacking and defending financial networks, and the organisations that understand this shift earliest will be the ones best placed to protect their customers, operations and reputations.
The rise of autonomous AI
For years, cybercriminals relied on persistence, trial and error and relatively unsophisticated tools. Today, they are using AI to automate tasks that previously required expertise: scanning IT systems for vulnerabilities, crafting convincing phishing messages, generating malicious code and even adjusting their strategies in real time based on how a network reacts.
This new class of AI-driven attack behaves less like a static piece of malware and more like an adversary that is constantly learning. It can mimic user behaviour, bypass authentication challenges, and hunt for weak spots across vast financial ecosystems.
The financial sector is particularly exposed to this threat. Banks, fintechs and payment providers hold some of the world’s most sensitive data. They operate sprawling digital infrastructures, rely on hundreds of third-party integrations, and move enormous volumes of money every second. In this environment, a single weakness can quickly cascade into a systemic problem.
Traditional defences can’t keep up with adaptive threats
Most cybersecurity programmes in the financial sector were designed around the principle of known threats. Firewalls, endpoint tools and compliance-driven audits are excellent at catching what they recognise, but they struggle against attacks that can adapt and learn.
For example, an AI-powered intrusion won’t necessarily repeat the same pattern twice. It may probe a payment system in a thousand different ways until it finds a misconfiguration, or it may exploit a supply-chain integration that the institution didn’t even know existed.
Against that backdrop, reactive security simply isn’t enough. By the time an anomaly alert is raised, the damage may already be done: customer information exposed, transactions disrupted, or critical services forced offline.
Financial institutions need to move from passive defence to proactive discovery: testing their own systems with the same speed and sophistication as the attackers now operating against them.
Ethical hackbots: AI testing the systems AI is trying to break
At Ethiack, we believe the only sustainable way to match AI-driven threats is through the ethical use of AI in defence.
That’s why we’re building autonomous AI systems that behave like attackers, but operate within strict guardrails. These agents continuously test applications, APIs and digital infrastructure, learning from each result and adapting their strategy. When they detect something unusual, human ethical hackers step in to validate and escalate the findings.
This approach transforms cybersecurity from a point-in-time exercise into something continuous, verifiable and transparent. Instead of waiting for an annual penetration test or reacting to a breach, institutions can constantly expose and fix weaknesses before criminals exploit them.
In early trials, we implemented three layers of safety: clear instructions, strict rule filters and an independent verifier, ensuring that every action remained ethical and reversible. Only once those systems proved reliable did we allow autonomous testing within well-defined boundaries.
The result is a system that acts with the speed of a machine and the oversight and ethics of a skilled human.
AI and financial regulations: a turning point
This shift toward continuous, automated testing aligns closely with the direction of financial regulation.
The Digital Operational Resilience Act (DORA) in the EU, for example, requires financial entities to test their digital resilience regularly, including through threat-led penetration testing. Similarly, NIS2 calls for the ongoing monitoring and validation of critical systems.
Both frameworks reflect the same reality: that financial security can no longer rely on static controls. Regulators expect institutions to demonstrate real-time assurance, not just tick compliance boxes once or twice a year.
A practical model for financial resilience: SEE. TEST. ACT.
At Ethiack, we describe the new model for digital resilience with three simple words:
SEE: Gain full visibility of your digital ecosystem. Financial infrastructures evolve every day, with new third-party integrations, product launches and software updates. You can’t protect what you can’t see.
TEST: Move from passive monitoring to active, continuous testing. Ethical hackbots, paired with human experts, simulate real attacks to uncover vulnerabilities before criminals do.
ACT: Insights only matter if they drive change. Feed validated findings into patching cycles, governance frameworks, and supplier management so resilience improves over time.
This loop transforms cybersecurity from a compliance obligation into an ongoing operational discipline.
The new reality: speed is the battlefield
AI has permanently changed the tempo of financial cyber risk. Attacks that once took weeks now unfold in minutes. Fraud attempts that used to require human planning can now be run at scale by machines.
But AI has given defenders a new set of tools as well. Used ethically, transparently and under human control, it can help financial institutions stay ahead of attackers by finding weaknesses early, closing them fast, and building trust with customers and regulators.
Because in the end, cybersecurity in financial services isn’t just about technology. It’s about protecting the systems people rely on every day, and using intelligence, both human and artificial, to do it well.