APP scams are part of an extensive and growing fraud playbook. They’re carefully choreographed stories of manipulation; quiet, calculated moves that slip past conventional defences and stay hidden until the damage is done. Michael Morris, Product Director at Cleafy, explores some of the most pressing questions about APP fraud, from collaborative detection to the influence of AI.
APP fraud is often seen as a human problem, but where does technology play a role in prevention or earlier intervention?
The biggest signs banks look for are unusual behaviour or unusual contacts. It’s less about the transaction itself and more about the individual involved. To catch scams early, you must look deeper into the person’s activity, not just the payments.
Pressure tactics are another key sign. Say you’re buying a house – there’s naturally pressure to get the deposit paid on time, which is expected. But if you see pressure that doesn’t align with the actual purchase or situation, that’s an early warning. That’s when you start focusing on the user’s behaviour rather than just the transaction itself – what’s driving these pressure points? Why is there urgency?
Unusual spending patterns are also important; for example, if a person who usually pays by card suddenly switches to bank transfers, gift cards, or cryptocurrency, that out-of-character shift is a considerable red flag. Changes to contact details or odd communication patterns also add to the user profile and raise suspicion.
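An out-of-character payment-method check like the one described above could be sketched as follows. This is a minimal illustration, not Cleafy's implementation: the method categories, thresholds, and history length are all assumed values for the example.

```python
from collections import Counter

# Illustrative high-risk channels; categories are an assumption for this sketch.
HIGH_RISK_METHODS = {"bank_transfer", "gift_card", "crypto"}

def is_out_of_character(history: list[str], new_method: str,
                        min_history: int = 20,
                        rarity_threshold: float = 0.05) -> bool:
    """Flag a payment method the user has rarely or never used before.

    `history` is the user's past payment methods. The thresholds are
    illustrative, not tuned production values.
    """
    if len(history) < min_history:
        return False  # too little history to judge what is "in character"
    share = Counter(history)[new_method] / len(history)
    return share < rarity_threshold and new_method in HIGH_RISK_METHODS

# A card-first user suddenly paying by bank transfer trips the flag:
history = ["card"] * 30
print(is_out_of_character(history, "bank_transfer"))  # True
```

In practice such a rule would be one signal among many, feeding a richer behavioural profile rather than blocking payments on its own.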
On the technology side, remote software access is a huge factor now. It used to be associated mostly with unauthorised transactions, like malware draining accounts without consent. But now fraudsters often ask for explicit permission, convincing victims that they need access to manage investments or provide help. They walk the victim through every step on their device, which is powerful social manipulation.
These behaviours offer strong signals for any industry to question if this is normal for that user. Additionally, environmental and anonymity factors matter: Are IP addresses changing frequently? Is there access from different locations or devices? Are network settings being altered? Fraud detection is not one-size-fits-all. It requires looking at a combination of signals to catch early warning signs.
At Cleafy, we are focused on the user journey’s complete visibility: contact detail changes, spending behaviour shifts, different types of interactions, and environmental anomalies like device changes. The key is correlating all those data points to build a rich context. That context helps decide how to respond and tailor the approach based on what’s happening in each specific case. This approach helps to prevent instead of react.
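The correlation idea described above can be sketched as a simple weighted scoring scheme. The signal names, weights, and action thresholds below are assumptions invented for illustration; a real system would learn and tune them per portfolio rather than hard-code them.

```python
# Illustrative signal weights; names and values are assumptions for this sketch.
SIGNAL_WEIGHTS = {
    "contact_details_changed": 2,
    "payment_method_shift": 3,
    "remote_access_tool_active": 5,
    "new_device": 2,
    "ip_churn": 2,
    "unusual_urgency": 3,
}

def risk_context(signals: set[str]) -> tuple[int, str]:
    """Correlate independent signals into one score and a tiered response."""
    score = sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)
    if score >= 8:
        action = "hold_payment_and_contact_customer"
    elif score >= 4:
        action = "step_up_verification"
    else:
        action = "allow"
    return score, action

# Remote access plus a new device plus IP churn escalates straight to a hold:
score, action = risk_context({"remote_access_tool_active", "new_device", "ip_churn"})
print(score, action)  # 9 hold_payment_and_contact_customer
```

The point the example makes is the one in the text: no single signal decides the outcome; it is the combination that builds the context and tailors the response.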
How important is collaboration between fraud and cybersecurity teams in detecting and stopping scams and fraud?
Working with cybersecurity is essential, as it offers a holistic view. The cyber team sees how attacks start, often spotting early signals through device and network data before they become obvious in fraud monitoring. This shared intelligence gives you a complete picture of where attacks originate in your ecosystem.
Fraud teams tend to focus on understanding individual victims and transactional activity, but security can offer a secondary review that speeds things up. For example, it can spot the same remote access pattern across multiple people and enable rapid intervention before further victims lose funds.
Working closely with security teams is essential because they offer an entirely different perspective. What might look like transaction volumes to fraud teams is underpinned by granular data from security, from the moment you open the app, even before a payment is made or confirmed. That’s where fraud and cybersecurity overlap, and it is critical.
What type of APP scams do you suspect we will see in the future with advances in AI?
There are two paths of evolution that AI will enable for threat actors.
The first is scale. Social engineering, particularly vishing, is a human-driven endeavour, constrained by the hours in the day, the languages spoken and the number of people employed in such activities. Generative AI will significantly reduce these barriers and allow threat actors to scale social engineering attacks across borders, 24/7. Expect deepfakes to become more sophisticated.
Second, AI will also lower the barrier to entry for combining technology with social engineering; we’ll see a return to malware-driven attacks as app stores open up, new devices become mainstream and new device capabilities become attack vectors.
Learn more about Cleafy and gather further insights into APP scams here.