Forget deepfakes, shallowfakes are the real threat to the insurance industry

By Martin Rehak, CEO & Founder at Resistant AI

 

To believe or not to believe—the dilemma facing insurers dealing with increased digital document fraud

Fraud continues to be a serious threat to the insurance industry, rising by 73% in 2021 according to Kingsley Napley. In the face of the unprecedented challenges of the pandemic, insurers have continued to try to thwart insurance fraudsters in order to protect honest customers.

Contributing to this rise are so-called “deepfakes”—sophisticated forgeries of still images, video or audio recordings made with the aid of artificial intelligence (AI). But while deepfakes have become increasingly prevalent in fraudulent insurance claims, the industry is now seeing more of what are called “shallowfakes”.

Once a social media novelty, deepfake and shallowfake fraud has emerged as a formidable threat to the insurance industry, which already suffers over US$80 billion in fraud annually in the US alone.

The difference between deepfakes and shallowfakes is that while deepfakes require AI to create them, shallowfakes can be created using basic photo editing software, such as Photoshop. The term “shallow” might imply that they are less threatening than their deepfake counterparts. But because they do not require deep AI/machine learning methods, shallowfakes can be made and deployed more easily and quickly—for that reason, they present a more immediate fraud risk to insurers.

Insurance fraud can range from a person providing false information to an insurance company in order to obtain cover on more favourable terms, to faking motor vehicle, commercial, household or other personal insurance claims.

In these and other fraud scenarios, shallowfakes can include:

  • False proof of identity or address – including photo ID documents such as driving licences, passports, national insurance cards, utility bills and bank statements
  • Fake supporting evidence – any evidence required to support a claim or transaction, such as invoices for services, contracts and agreements, no claims discount certificates, or expert reports

Of course, the problem of altered digital media is not entirely new to the insurance industry. Photo editors began to proliferate many years ago and, in fact, altered photos that falsely inflate claims have been a leading concern among insurers in tackling fraud.

What is new is the scale of the problem: it’s not uncommon to find the same document being reused tens or even hundreds of times with just the name, account and address altered, effectively creating as many fake identities from a single template. This was the case with a single Canadian passport that was resubmitted over 2,500 times in the space of 20 days — with one day alone clocking in over 400 submissions, each with subtle changes to the name, address, and even the hairstyle in the portrait photo to avoid detection.
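The template-reuse pattern described above is detectable in principle because the fraudster changes only a few variable fields while the rest of the document stays identical. A minimal sketch of that idea, with entirely hypothetical field names and a simple hash-based fingerprint (real systems use far more robust visual and structural matching):

```python
import hashlib
import re
from collections import defaultdict

# Hypothetical sketch: fingerprint a document's *template* by masking out
# the fields fraudsters typically swap (name, address, account number),
# then count how many distinct identities share the same template.

VARIABLE_FIELDS = ("name", "address", "account")

def template_fingerprint(doc: dict) -> str:
    """Hash the document text with its variable fields masked out."""
    text = doc["text"]
    for field in VARIABLE_FIELDS:
        value = doc.get(field)
        if value:
            text = text.replace(value, f"<{field.upper()}>")
    # Normalise whitespace and case so trivial edits don't change the hash.
    text = re.sub(r"\s+", " ", text).strip().lower()
    return hashlib.sha256(text.encode()).hexdigest()

def flag_reused_templates(docs, threshold=3):
    """Return fingerprints submitted under `threshold` or more distinct names."""
    names_by_fp = defaultdict(set)
    for doc in docs:
        names_by_fp[template_fingerprint(doc)].add(doc["name"])
    return {fp for fp, names in names_by_fp.items() if len(names) >= threshold}
```

In practice the extracted text would come from an OCR or document-understanding pipeline, and the fingerprint would need to tolerate scanning noise, but the counting logic is the same: one template, many identities, one alert.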

 

Self-service automation

While there have been some moves to reduce shallowfake fraud, the pace of touchless automation—in the form of self-service transactions and straight-through processing (STP)—has been fast and furious. Undoubtedly, the global pandemic has aided the transition to self-service since it was a natural fit for claims reporting during lockdowns.

At the same time, this has increased dependency on customer-supplied photos for settling claims—an excellent opportunity for shallowfakes as the risk of fraud from altered, manipulated or synthetic photos significantly increases.

The past couple of years have shown that touchless claims (and underwriting) transactions are here to stay, and the way digital media can be compromised has become more elaborate. As a result, proactively taking steps to implement automated fraud prevention technology to tackle shallowfakes is quickly becoming an important consideration for protecting insurers’ business.

 

Using AI to detect shallowfakes

While shallowfakes don’t require AI to create them, AI can significantly increase the chances of detecting them. The use of AI solutions—combined with human instinct, attention to detail, and the expertise to check the validity of what is being processed—can prove a powerful combination for detecting fraudulent documentation.

Having AI-powered detection built into a claims process is one way of stopping fraud and increasing accurate claim handling. Without any ability to check, for example, the authenticity of photos, damages might be exaggerated, and insurers will ultimately pay for losses that are either entirely false or inflated.

The pace of claims automation is far exceeding the pace of automated fraud prevention, which is opening new risks as well as new opportunities. Some insurance companies may be willing to risk fraud vulnerabilities in return for cost savings elsewhere and an improved customer experience. That is a fine balance that they need to strike.

In light of increased shallowfake activity, it has become increasingly necessary for insurance companies to pay closer attention to the documentation submitted with claims, where fraudsters may use shallowfakes to claim large sums of money they are not entitled to.

Document scrutiny can be significantly enhanced with AI-based “document forensics” to find fraud that the human eye can’t see in insurance claims, and verify the authenticity of digital documents.
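One of the simplest forensic signals such tooling looks for is the metadata trail a document carries. A PDF bank statement, for instance, records the software that produced or last modified it, and a photo-editor trace on a document that should have come straight from a bank is a red flag. A heuristic sketch of that single signal (the editor list is illustrative, not exhaustive, and real document forensics goes far beyond metadata, down to pixel- and font-level analysis):

```python
import re

# Heuristic sketch: scan a PDF's raw bytes for metadata traces
# (/Producer, /Creator, XMP history) left by common editing tools.
# A hit doesn't prove fraud; it routes the document to manual review.

EDITOR_TRACES = [b"Photoshop", b"GIMP", b"Ilovepdf", b"PDFescape", b"Canva"]

def editing_traces(pdf_bytes: bytes) -> list:
    """Return the names of any known editing tools found in the file."""
    found = []
    for trace in EDITOR_TRACES:
        if re.search(re.escape(trace), pdf_bytes, re.IGNORECASE):
            found.append(trace.decode())
    return found

def needs_review(pdf_bytes: bytes) -> bool:
    """Flag the document for manual review if any editor trace is present."""
    return bool(editing_traces(pdf_bytes))
```

A determined fraudster can of course strip or forge metadata, which is exactly why production systems layer many such weak signals rather than relying on any one of them.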

Matt Gilham, Head of Enterprise Fraud at Esure, recently commented: “Insurance organisations are accelerating their adoption of digital technologies to better service customers and claims. As digitalisation and speed of processing increases, vulnerabilities are created that, if left unchecked, can be exploited by tech savvy fraudsters. The problem with shallowfakes stems from the ease with which digital documents and images can be manipulated using readily available tools. The subtlety of shallowfake alterations makes them increasingly difficult, and often impossible, to track visually. As the manipulation of digital documents becomes more prominent, AI automation technology is a vital aid in the identification of, and defence against, shallowfakes. This enables faster and more efficient insurance processing, while also stepping up defences against fraudulent abuse.”

 

A direct threat

By their very nature, shallowfakes are a direct threat to the accuracy of information relating to any individual in the existing digital environment. However, the threat that they pose will only increase as our interactions with the metaverse increase, given that there will be more opportunities for their use.

The cost of inaction to the insurance industry may be high. In all likelihood, few if any insurance firms have yet addressed the growing threat posed by shallowfakes. Yet it should be a high priority: without immediate action to mitigate their impact, shallowfakes could become a threat that is hard to stop.

 

 
