As AI fuels fraud attempts, businesses must be equipped with the right tools

Ahmed Fessi, Chief Transformation & Information Officer at Medius

Despite the undeniable benefits businesses can reap from investing in and leveraging innovative artificial intelligence tools in 2025, much of the conversation around the technology has shifted to the growing threat of AI-enabled fraud. In fact, Deloitte’s Center for Financial Services predicts that generative AI could drive fraud losses in the United States to $40 billion by 2027, up from $12.3 billion in 2023.

AI-enabled attacks are increasing in frequency and becoming harder to distinguish from authentic communications. A case of corporate fraud is difficult for a business internally, but organisations often suffer reputational damage as well. And at a time when many countries are facing economic crises and a rising cost of doing business, financial loss triggered by fraud could even lead to the collapse of a firm. As AI develops, fraudsters are turning it into a powerful and dangerous tool wielded in their favour, leaving businesses vulnerable.

In the face of this threat, businesses are facing increasingly sophisticated fraud attempts using deepfakes, with Arup’s $25 million loss the most notorious deepfake-enabled incident of 2024. The scheme was carried out through a seemingly internal video call: an Arup finance employee joined what he believed to be a call with the CFO and other executives, but all of them were in fact deepfaked impersonations. Other high-profile attempts include fraudsters impersonating WPP’s CEO using a voice clone and YouTube footage in a virtual meeting, and even a fake WhatsApp account created to lure funds from senior executives.

Despite growing awareness of AI-enabled fraud and deepfakes, Medius research surveying finance professionals found that, when asked whether and what technology their business uses to protect itself against deepfakes, only 5% knew what that technology was. It is critical that businesses are equipped with the right tools to prevent, identify and respond to fraud attempts – including deepfake fraud – or they risk potentially devastating financial losses. As fraudsters strengthen their attacks with AI, businesses must be prepared to harness AI innovation for their own protection.

AI has also helped fraudsters broaden their reach by making it easier to produce the code needed for mass attacks that are more sophisticated and reach a wider range of potential victims. The emergence of tools like ‘Fraud GPT’ on the dark web, for example, has been linked to a 135% increase in phishing emails, enabling fraudsters to craft deceptive messages at scale and making it harder for businesses to distinguish genuine communications from fake ones. Meanwhile, software companies specialising in voice cloning and natural-sounding speech generation have made deepfaking technology easier to access.

One AI-enhanced scam that is rapidly becoming more common is the Business Email Compromise (BEC) impersonation scam. In these attacks, fraudsters use AI to send targeted, accurate emails with convincing details, luring employees into handing over confidential business information or funds. 86% of UK adults are concerned that AI will give fraudsters new ways to scam people, and with fraudulent emails becoming harder to identify, the number of victims will only rise.

Examples of successful fraud attacks, as well as failed attempts, showcase the need for strong processes to ensure these scams do not succeed. Because deepfakes are highly convincing, employees may drop their guard and be less vigilant when reviewing potentially fraudulent invoices or granting approvals. And as AI continues to advance, the technology’s sophistication makes it difficult for both untrained employees and those familiar with it to reliably distinguish deepfakes from genuine communications.

One technique firms can implement to defend against deepfakes is multi-factor authentication. Requiring verification steps beyond a single password to access payment systems – such as one-time codes sent to mobile devices or authenticator applications – gives employees an additional layer of security. If a CFO or senior executive requesting money cannot authenticate themselves, employees can spot the red flags, decreasing the likelihood of the deepfake attempt succeeding. The same principle can be applied to payment requests themselves, requiring additional verification from other employees before a request can be approved.
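To make the multi-approver idea concrete, here is a minimal sketch of how a payment workflow might enforce that no request is released on a single person’s say-so. All names (PaymentRequest, approve, can_release) and the two-approver threshold are illustrative assumptions, not a description of any specific payment or AP product.

```python
# Minimal sketch of a dual-approval check for payment requests.
# Names and thresholds are illustrative, not any vendor's actual API.
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    requester: str                      # employee who raised the request
    amount: float
    approvals: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # The requester can never approve their own request.
        if approver == self.requester:
            raise ValueError("Requester cannot self-approve")
        self.approvals.add(approver)

    def can_release(self, required_approvers: int = 2) -> bool:
        # Funds move only once enough independent approvals exist.
        return len(self.approvals) >= required_approvers

# Usage: a request backed only by its requester never releases funds.
req = PaymentRequest(requester="alice", amount=25_000)
req.approve("bob")
req.approve("carol")
print(req.can_release())  # True only after two independent approvers sign off
```

Even if a deepfaked “CFO” convinces one employee, the request still stalls until a second, independent approver verifies it through a separate channel.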

Implementing robust financial processes and Accounts Payable (AP) automation is another foundational defense for businesses seeking protection from sophisticated scams. With these strategies in place, no single employee holds the ‘key’ to financial control; responsibility is spread across the finance team, fortifying the business against the ever-changing fraud landscape.
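As a rough illustration of what automated AP checks can look like, the sketch below routes suspicious invoices to human review instead of straight-through payment. The field names, thresholds and checks are hypothetical assumptions for illustration only, not Medius’ actual product logic.

```python
# Minimal sketch of automated AP invoice checks that hold suspicious
# invoices for manual review. Field names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Invoice:
    supplier_id: str
    bank_account: str
    amount: float

@dataclass
class SupplierRecord:
    supplier_id: str
    bank_account: str    # bank details held on file for the supplier

def review_flags(invoice: Invoice, on_file: SupplierRecord,
                 manual_review_threshold: float = 10_000.0) -> list[str]:
    """Return reasons the invoice needs manual review; empty means clear."""
    flags = []
    if invoice.bank_account != on_file.bank_account:
        flags.append("bank details differ from supplier record")
    if invoice.amount >= manual_review_threshold:
        flags.append("amount exceeds straight-through-processing threshold")
    return flags

# Usage: an invoice with changed bank details is held for a second pair of eyes.
inv = Invoice(supplier_id="S-101", bank_account="GB00NEWACCT", amount=4_500.0)
rec = SupplierRecord(supplier_id="S-101", bank_account="GB00OLDACCT")
print(review_flags(inv, rec))  # ['bank details differ from supplier record']
```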

When assessing the role of AI in financial operations, organisations must be able to recognise how the technology can remedy the pains of financial fraud. Businesses, technology providers and regulators need to make AI their strength if they are to combat fraudsters, and prioritising AI-enhanced digital solutions will be imperative for businesses seeking to shield themselves from the rising tide of AI-enabled fraud. Deepfakes are just one growing example of the techniques AI-enhanced attackers use to infiltrate companies and cause extreme losses. But with automated AP and foundational processes in place, firms can prepare for the impact of deepfakes, restrict fraudulent transactions from being made and limit devastating financial losses.
