BOT ATTACKS IN THE FINANCE SECTOR: FRAUDSTERS ARE USING AI TOO

By: John Briar, COO and co-founder, BotRx


The use of Artificial Intelligence (AI) and automated processes in the finance industry is growing. From using AI-enabled chatbots to communicate with customers to deploying Robotic Process Automation to eliminate tedious tasks in payroll and accounts receivable, financial organisations are making the most of this cutting-edge technology. Indeed, a report by McKinsey found that current technologies can fully automate 42% of finance activities and mostly automate a further 19%. As automated technology continues to advance, that figure is only likely to rise.

The problem is that cybercriminals are also using AI. With AI tools at their fingertips, fraudsters are developing and deploying sophisticated automated attacks, namely in the form of malicious bots. These bad bots masquerade as legitimate users to conduct malicious activities against financial organisations, such as stealing Personally Identifiable Information for illicit activities like fraudulent credit card applications and account takeover. This trend has only increased during the coronavirus pandemic, as cyber adversaries look to take advantage of the disruption caused by the outbreak. Indeed, financial fraud increased 33% during lockdown, according to Experian.


AI-enabled fraudsters are on the loose

Fraudsters are becoming increasingly reliant on automated bots, with credential stuffing one of their favourite tricks. Credential stuffing takes advantage of the fact that people tend to have poor cyber hygiene and reuse the same usernames and passwords across their different online accounts. Cybercriminals launch automated bots that replay these stolen credential pairs in repeated login attempts against user accounts on hundreds of different websites.
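
To make the pattern concrete, the sketch below shows one way a defender might spot it: a credential-stuffing bot typically produces a stream of login attempts from one source that cycles through many different usernames with almost no successes. This is a minimal illustration in Python, assuming a hypothetical event log of (source IP, username, outcome) tuples and arbitrary thresholds, not a production detector.

```python
from collections import defaultdict

# Invented event log: (source_ip, username, login_succeeded)
events = [
    ("203.0.113.7", "alice", False),
    ("203.0.113.7", "bob", False),
    ("203.0.113.7", "carol", False),
    ("198.51.100.2", "alice", True),
]

def flag_credential_stuffing(events, min_distinct_users=3, max_success_rate=0.1):
    """Flag source IPs that try many distinct usernames with almost no
    successes: the fingerprint of a bot replaying breached credential lists."""
    stats = defaultdict(lambda: {"users": set(), "attempts": 0, "successes": 0})
    for ip, user, ok in events:
        stats[ip]["users"].add(user)
        stats[ip]["attempts"] += 1
        stats[ip]["successes"] += ok
    return [
        ip for ip, s in stats.items()
        if len(s["users"]) >= min_distinct_users
        and s["successes"] / s["attempts"] <= max_success_rate
    ]

print(flag_credential_stuffing(events))  # ['203.0.113.7']
```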

After sifting through millions, sometimes billions, of login credentials and finding a match for a specific website, the fraudsters normally sell these verified credential pairs to other cybercriminals, who launch follow-on attacks. Once inside an account, these criminals can commit a variety of fraudulent activities.

Account takeover fraud is a common endgame for bad actors, and it almost always begins with credential stuffing. Once inside an individual’s account, fraudsters can conduct unauthorised activity and, depending on the attack, even change login and personal information. KPMG found a 57% increase in UK financial account takeover cases last year, and account takeovers have even made the news: in Marriott’s March 2020 data breach, the login credentials of two Marriott employees were used to access guest information, affecting over five million guest accounts.


It’s time to fight back

Financial institutions must look to better protect themselves and their customers from these automated bot attacks. There are numerous solutions out there, though organisations must weigh the strengths and weaknesses of each. The biggest challenge for financial organisations is combating the dynamic nature of automated bot attacks: fraudsters change their bots so regularly that it is difficult to predict attack behaviours or recognise signatures.

Indeed, the hardest part of stopping bot attacks is that bots can easily outmanoeuvre static network infrastructure, and most current solutions are static by design. Firewalls and Intrusion Prevention Systems, for example, are ineffectual because they cannot detect changing attack patterns. Web Application Firewalls, meanwhile, struggle to pick up attacks that mimic normal user behaviour, which is exactly what these automated bots do. Threat intelligence, which gathers information on new threats only after an attack has happened, isn’t bulletproof either, as it allows early attacks to go undetected.

AI and Machine Learning (ML) based solutions are a better match for automated bot attacks, as they play fraudsters at their own game. However, even the most sophisticated AI and ML solutions can be outsmarted by fraudsters who take the time to gather intelligence and plan a future attack. And because AI systems rely on the information they are fed, they require manual intervention to classify whether the anomalies identified in traffic patterns are genuine threats or false positives.
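
As a simple illustration of why that manual step is unavoidable, the sketch below scores per-minute login counts against a baseline and pushes statistical outliers to a review queue. The traffic figures and the z-score threshold are invented for the example; the point is that the model can only say "unusual", and an analyst must still decide whether a spike is a bot burst or a benign surge.

```python
import statistics

# Invented baseline: login attempts per minute during normal traffic.
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
incoming = {"09:00": 13, "09:01": 95, "09:02": 14}  # invented live counts

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

review_queue = []
for minute, count in incoming.items():
    z = (count - mean) / stdev  # how far this minute sits from normal
    if abs(z) > 3:  # statistical outlier, but not proof of an attack
        # The model can only flag "unusual"; an analyst still has to
        # classify it as a bot burst or a benign spike (e.g. a promotion).
        review_queue.append((minute, count, round(z, 1)))

print(review_queue)  # [('09:01', 95, 48.4)]
```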

Then there are newer solutions like Moving Target Defense (MTD), which has recently surfaced as malicious bots’ new foe. A term coined by the US Department of Homeland Security, MTD is unique because it takes a proactive approach to stopping malicious bot attacks, unlike traditional detect-block solutions. It works by making the attributes of a financial institution’s network dynamic rather than static, obfuscating the attack surface. This shrinks fraudsters’ window of opportunity, making it extremely difficult for them to infiltrate a network, and allows financial organisations to take back control of their IT infrastructure by staying on the front foot.
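
As a simplified sketch of the dynamic-attribute idea (an illustration of the general MTD concept, not any particular vendor’s product), the example below rotates a login endpoint’s URL path on a fixed schedule using an HMAC shared between the web front end and the application tier. A bot that scripted yesterday’s path gets a 404 today, while legitimate pages always link to the current path. The secret, rotation period, and path format are all invented for the example.

```python
import hashlib
import hmac
import time

SECRET = b"rotate-me"    # shared by the front end and the app tier (invented)
ROTATION_PERIOD = 300    # seconds each path stays valid (invented)

def current_login_path(period=ROTATION_PERIOD, secret=SECRET):
    """Derive a login URL path that changes every `period` seconds.

    Both tiers compute the same path from the shared secret and the
    current time window, so no coordination message is ever sent, yet
    an attacker who hard-coded an old path misses the target.
    """
    window = int(time.time() // period)
    digest = hmac.new(secret, str(window).encode(), hashlib.sha256).hexdigest()
    return f"/login-{digest[:12]}"

print(current_login_path())  # e.g. /login-3f9a1c0b72de
```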


A proactive approach  

Continuing to rely on detect-block methods simply isn’t sufficient to stop malicious bot attacks. While each of the above defence methods has its merits, financial organisations shouldn’t rely on any one of them alone, as the growing number of automated attacks will always look to exploit static infrastructure and other weaknesses.

It shouldn’t be surprising that, as financial institutions increase their use of automated processes, cybercriminals are doing the same. Financial organisations must therefore look to new solutions that redefine the balance of power between defenders and attackers. MTD is a promising part of that equation, enabling organisations to protect their networks and their customers for the long term.
