Compliance and AI: Why Data Makes the Difference

Authors:

Alia Mahmud, Regulatory Affairs practice lead, ComplyAdvantage

Jim Anning, Chief Data Officer, ComplyAdvantage

 

“Forget artificial intelligence – in the brave new world of big data, it’s artificial idiocy we should be looking out for.” —Dr. Tom Chatfield, author of How to Think

When it comes to compliance, there is a tacit acknowledgment that fraudsters and criminals will always be one step ahead – that bad actors will always look at technological advancement as a new, improved way to separate people from their hard-earned cash. Banks, particularly established institutions with legacy systems, are constantly scrambling to catch up and adapt to meet regulatory requirements and protect their customers to the best of their ability.

This is why compliance officers worldwide are looking at artificial intelligence (AI) as possibly the most significant advancement since internet search to help them do more than simply meet their legal obligations; they’re hoping it can help them keep up with the criminals.

But is it all hype? Is AI the answer to all the woes faced by analysts on a daily basis?

As we’ve seen in the weeks since the roll-out of ChatGPT, the technology is impressive, but it isn’t always the perfect fit for every problem.

AI excels at helping compliance analysts complete routine tasks faster and more efficiently. For example, AI can vastly reduce the time it takes to onboard a new customer through entity recognition – understanding the difference between a person and a corporation, for instance. Or, if an analyst is screening adverse media for specific crimes, natural language processing can determine whether the person in question is the perpetrator or the victim of the crime described.
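To make the adverse-media example concrete, here is a deliberately simplified, rule-based sketch of role classification – deciding whether a named person appears as perpetrator or victim in a news sentence. The cue phrases and function name are illustrative assumptions; a production screening system would use a trained NLP model rather than keyword rules.

```python
# Toy illustration of adverse-media role classification.
# Real systems use trained NLP models; these cue phrases are
# hypothetical examples, not an actual screening rule set.

PERPETRATOR_CUES = ("charged with", "convicted of", "accused of", "arrested for")
VICTIM_CUES = ("victim of", "defrauded by", "targeted by", "scammed by")

def classify_role(sentence: str, name: str) -> str:
    """Return 'perpetrator', 'victim', or 'unknown' for `name` in `sentence`."""
    text = sentence.lower()
    subject = name.lower()
    if subject not in text:
        return "unknown"
    # Look at what the sentence says about the person after naming them.
    after_name = text.split(subject, 1)[1]
    if any(cue in after_name for cue in PERPETRATOR_CUES):
        return "perpetrator"
    if any(cue in after_name for cue in VICTIM_CUES):
        return "victim"
    return "unknown"
```

The point of even this toy version is the one the paragraph makes: a naive name match would flag both the accused and the defrauded customer identically, while role-aware screening separates them and cuts false positives.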

In Germany, recent regulations to protect against deepfakes will open the door for AI avatars to be used as operators in the onboarding process, directing new customers to do or say certain things to demonstrate proof of life.

AI can also be critical to streamlining an organization’s response to new regulations by understanding which products are affected, helping a company accurately and efficiently direct staffing and resources, and reducing the cost of necessary updates and new releases.

But in all the excitement surrounding AI’s potential, the role of data and data scientists has gotten lost.

To truly understand how new technology can benefit your company, you need to know what the technology is good at – and what it is not – to see if it is appropriate and, if so, how it can be optimally integrated into your organization.

This is why investment in AI should be considered as a package that includes investment in good data – the foundation and the fuel for any tech solution. The data must come from reliable sources and, particularly for time-critical issues like sanctions enforcement, it should be updated in near real-time to train AI effectively. In short, you need to know that the data used to train AI models is sound, because any bias or error in that data will be perpetuated in the models’ decisions.

And that is why a team of dedicated data scientists – continually building, rebuilding, training, testing, and nurturing the solution – is the final third of the equation.

By applying scientific rigor to the process of training the models, the team can improve the solution’s overall performance, reduce false positives and, ultimately, meet the company’s objectives.

This brings us full circle to the original point: just as regulated organizations like banks and fintechs are excited about the potential of AI, fraudsters and criminals are, too. Deepfakes. Fabricated news. Zombie IDs.

Just as AI is creating new solutions to regulatory problems, it is also creating new problems that will require more than technology to solve. Ironically, it will also require the power of human imagination and understanding to navigate the growing sea of data and the ever-evolving nature of threats.
