Fighting AI with AI: How Ethical ‘Hackbots’ Are Reinventing Cybersecurity

By André Baptista, Co-founder of the ethical hacking platform Ethiack

Artificial Intelligence has jumped from a theoretical concept to reality, and nowhere is this shift more stark than in cybersecurity. What once felt like science fiction is now an everyday concern, with cybercriminals already weaponising AI to probe cyber defences, plan attacks, and evade detection.

According to a recent assessment by the UK’s National Cyber Security Centre (NCSC), every kind of cyber threat actor, both state-backed and independent, is now making use of AI. While governments and advanced criminal groups were early adopters, AI is now also enabling less experienced actors to run sophisticated attacks at scale.

Little wonder, then, that ransomware-as-a-service attacks, where criminals pay to use ready-made and increasingly AI-assisted attack tooling, are on the rise. It’s a worrying trend that speaks to the wider commoditisation of AI-enabled cybercrime.

But AI is not just part of the problem — it can also be a vital part of the solution.

In fact, one of the most effective ways to defend against AI-driven cyber threats is to fight back using AI itself. Specifically, this means deploying AI-powered ‘hackbots’, ethical systems designed to scan, test and strengthen cyber defences before criminals ever get the chance to exploit them.

One of the core principles of any strong cybersecurity strategy is constant testing. In the past, this was chiefly the domain of ‘ethical hackers’, highly skilled professionals who conduct what’s known as penetration testing. This involves simulating real attacks to identify weak spots in a company’s digital systems, known collectively as the ‘attack surface’.

Ethical hackers then help fix those vulnerabilities before they can be exploited by malicious actors.

This process is still essential. But what has changed, thanks to AI, is the speed and scale at which it can now be carried out.

AI-driven hackbots are automated systems that can continuously scan an organisation’s attack surface. Unlike traditional software tools, they don’t just follow set instructions. Instead, they learn from the systems they encounter, adapt their behaviour, and apply what they’ve learnt to find vulnerabilities in new and more intelligent ways.

Backed by Large Language Models, hackbots can draw on an extensive knowledge base to detect known vulnerabilities, strange behaviours, and potentially serious gaps in security.

What’s more, they can carry out repetitive, time-consuming tasks much faster than a human ever could. But their value doesn’t stop at speed. Because they learn and adapt, hackbots can also discover flaws that a traditional scan might miss—making them a powerful early-warning system against emerging threats.
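To make this more concrete, here is a minimal, purely illustrative sketch of the kind of cycle such a system might run. The function names and checks are invented for the example (they are not Ethiack’s implementation or any specific product’s API); the point is simply the pattern of continuous discovery, safe probing, LLM-assisted triage, and escalation to humans.

```python
# Illustrative sketch only: a simplified "hackbot" cycle.
# discover_assets, probe and llm_triage are hypothetical placeholders.

import time


def discover_assets():
    # In practice: enumerate the domains, IPs, APIs and cloud services
    # that make up the organisation's attack surface.
    return ["app.example.com", "api.example.com"]


def probe(asset):
    # Placeholder for safe, non-destructive checks and fingerprinting.
    return {"asset": asset, "findings": ["outdated TLS configuration"]}


def llm_triage(finding):
    # Placeholder for an LLM call that classifies severity and
    # suggests a remediation step.
    return {"finding": finding, "severity": "medium", "action": "notify team"}


def hackbot_cycle(interval_seconds=3600):
    while True:
        for asset in discover_assets():
            for finding in probe(asset)["findings"]:
                triaged = llm_triage(finding)
                if triaged["severity"] in ("high", "critical"):
                    # Escalate to the human security team for judgment.
                    print(f"ALERT on {asset}: {finding}")
        # Re-scan continuously, not once or twice a year.
        time.sleep(interval_seconds)
```

The value here lies less in any single check than in running the loop around the clock and handing the judgment calls to people.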

Crucially, this doesn’t replace the role of human ethical hackers. It actually enhances it.

Hackbots can act as digital research assistants, tirelessly monitoring systems and alerting human teams to anything unusual. This allows cybersecurity professionals to focus on strategic thinking, creative problem-solving, and high-stakes decision-making.

At Ethiack, we’ve been developing and testing AI-powered hackbots as part of our ethical hacking platform. Our experience shows two things very clearly. First, AI-based tools give defenders a real advantage when fighting back against AI-enabled attackers. Second, they work best when paired with human oversight.

The future of cybersecurity lies in this kind of partnership — where machines do the heavy lifting, and humans provide the judgment, strategy and ethical decision-making.

Think of it less as man versus machine and more as AI serving as a tool for humans. With the right harmony between human insight and AI precision, we can stay one step ahead of the threats and turn this new technology into our strongest line of defence.

André Baptista is Co-founder of the ethical hacking platform Ethiack. A Visiting Professor at the University of Porto, he is a two-time winner of HackerOne Live-Hacking Events.
