Employee and AI Partnership Democratises Security

Usman Choudhary, General Manager of Business Security Division, VIPRE Security Group

A technology-led approach to cybersecurity has a lot to offer, but the view that technology can do everything without human intervention is misplaced. Democratising security – i.e., making it the responsibility of all, not just a few – has become essential. What do we mean by this?

A simple email threat scenario

Consider this scenario. An employee receives an email, purportedly from a bank, asking the recipient to log in via the embedded link to their account to view an important financial disclosure. How can the organisation determine – with certainty – whether it's a genuine email or a credential-harvesting phishing attack? Let's evaluate the roles that technology, the employee, and the security department play in this situation.

Technology helps detect anomalies: Is the email source a common phishing domain? Does the link lead to a well-known malicious website? While the target site “looks like” the bank in question, is the weblink indeed for that bank? Likewise, are there any other contextual and heuristic clues in the headers or body of the email that indicate phishing?

This technology-reliant hygiene is essential. But if the cybercriminal is masterful and meticulous in custom-crafting phishing emails – and they often are – none of these warnings will activate.
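The kinds of automated checks described above can be sketched in a few lines. This is a simplified illustration, not a production filter: the blocklists and urgency phrases are hypothetical placeholders, whereas real products draw on continuously updated threat-intelligence feeds and far richer heuristics.

```python
from urllib.parse import urlparse

# Hypothetical, static lists for illustration only -- real security
# products use continuously updated threat-intelligence feeds.
KNOWN_PHISHING_DOMAINS = {"secure-bank-login.example", "bank-verify.example"}
URGENCY_PHRASES = ("verify your account", "act immediately", "account suspended")

def basic_phishing_checks(sender: str, body: str, links: list[str]) -> list[str]:
    """Return a list of warnings raised by simple heuristic checks."""
    warnings = []
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    if sender_domain in KNOWN_PHISHING_DOMAINS:
        warnings.append(f"sender domain {sender_domain} is on a phishing blocklist")
    for link in links:
        host = (urlparse(link).hostname or "").lower()
        if host in KNOWN_PHISHING_DOMAINS:
            warnings.append(f"link host {host} is a known malicious site")
        # Heuristic: links in legitimate bank mail usually share the
        # sender's domain (or a subdomain of it).
        if host != sender_domain and not host.endswith("." + sender_domain):
            warnings.append(f"link host {host} does not match sender domain {sender_domain}")
    lowered = body.lower()
    for phrase in URGENCY_PHRASES:
        if phrase in lowered:
            warnings.append(f"body contains urgency phrase: {phrase!r}")
    return warnings
```

A carefully crafted phishing email – a fresh domain, restrained wording, a plausible-looking link – raises none of these warnings, which is precisely the gap the rest of this article addresses.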

Therefore, the email recipient's vigilance becomes critical: Do they actually have an account at that bank? Has the bank ever sent them similar emails (sender name, logos, design, etc.) in the past? Were they expecting this kind of communication? Yet even with such alertness, employees may still struggle to determine the authenticity of the email, because email clients typically hide important details from users – (controversially) in the name of "convenience".

This triggers action on the part of the organisation's IT security team. It must investigate the email: Does the source email address (often hidden from casual users) look legitimate? Does it match the domain of the URL? Does the target of the link appear to be legitimate? Have other users within the organisation received similar emails? And so forth.
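That last question – have colleagues received similar emails? – lends itself to a simple illustration. The sketch below groups user-reported emails by sender and subject to surface a possible campaign; the report format and threshold are assumptions for the example, not a description of any particular product.

```python
from collections import defaultdict

def find_campaigns(reported_emails, min_recipients=2):
    """Group reported emails by (sender, subject) and flag groups seen
    by multiple users -- a hint of a phishing campaign, not proof.

    reported_emails: iterable of (recipient, sender, subject) tuples.
    Returns a dict mapping (sender, subject) to the sorted recipients.
    """
    groups = defaultdict(set)
    for recipient, sender, subject in reported_emails:
        # Normalise so trivial variations don't split a campaign.
        groups[(sender.lower(), subject.strip().lower())].add(recipient)
    return {key: sorted(rcpts) for key, rcpts in groups.items()
            if len(rcpts) >= min_recipients}
```

A real investigation would also compare message bodies, headers, and link targets, but even this coarse grouping shows why cross-user visibility belongs to the security team rather than any individual recipient.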

As illustrated, a variety of elements need to come together to ensure the successful evaluation of risk stemming from a single email. This process isn't easy to pull off in a systematic and timely manner. The technology, employees, and IT support need to communicate and work in tandem, ideally with the same level of urgency and speed.

Given the incessant onslaught of suspicious emails, this rarely happens. Many factors can cause an email to slip through the net, not least security fatigue on the part of employees.

AI and cybersecurity

AI can help a great deal. Most immediately, it can improve the automatic detection of fraudulent emails. Machine learning and similar techniques have long been used for this purpose, and are already well embedded in security solutions.

More interestingly, employees can use the built-in AI tool to ask questions in the context of suspicious emails and receive responses, in the language of their choice. In the phishing scenario above, the employee could ask the AI tool: "Is this email genuinely from Bank X?" The AI tool looks at the domains used in the email and determines whether the correspondence is truly from the recipient's Bank X.
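The core of that answer – do the email's links belong to the bank the message claims to be from? – can be sketched as follows. The institution-to-domain mapping here is a hypothetical stand-in; an actual assistant would consult verified, up-to-date records and weigh many more signals before answering.

```python
from urllib.parse import urlparse

# Hypothetical mapping of institutions to their legitimate domains --
# illustrative only, not a real registry.
LEGITIMATE_DOMAINS = {"Bank X": {"bankx.example"}}

def is_from_institution(institution: str, link_urls: list[str]) -> bool:
    """Answer 'is this email genuinely from <institution>?' by checking
    that every embedded link resolves to a known-legitimate domain."""
    allowed = LEGITIMATE_DOMAINS.get(institution, set())
    for url in link_urls:
        host = (urlparse(url).hostname or "").lower()
        # Accept exact matches and subdomains of a legitimate domain.
        if host not in allowed and not any(host.endswith("." + d) for d in allowed):
            return False
    return bool(link_urls)
```

The value of the AI tool lies less in this check itself than in making it conversational: the employee gets an immediate, plain-language verdict without needing to inspect hidden headers or raw URLs.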

Employees can interact (via text or voice) with the embedded AI application at any time and from anywhere, be it from the individual’s desktop in their home office or from their smartphone in a coffee shop.

In effect, the AI tool provides routine, but critical security support to employees, freeing up the IT security team to concentrate on the more strategic and serious aspects of cybersecurity. In large corporations and financial institutions, handling support queries takes up a fair amount of time for IT security departments.

Democratising security with AI

This is the democratisation of security. This employee–AI partnership reduces the burden on the security team, which delegates the handling of routine, low-risk queries to AI, with high-risk and previously unseen issues automatically escalated to the department.

This approach serves as an important educational and ongoing training tool for employees too. They see what elements and factors the AI uses to evaluate risk, intuitively learning to spot threats in a nuanced manner. No matter how advanced the technology gets, human involvement will always remain critical to thwarting cybersecurity attacks.

Democratising security using AI substantially reduces overall risk to the business. This approach lowers the barrier and friction employees currently experience in getting answers to their security suspicions – be that a phishing email, a potential breach, or anything similar. With embedded AI security tools and the immediate assistance they offer, employees are more likely to question whether something is a risk – as opposed to ignoring their misgivings because acting on them hinders their daily activity.

In an environment where criminals are rapidly advancing the sophistication of their threats, leveraging AI to democratise security presents the best solution yet.

