One text, one click, £695 gone. Can we ever stay ahead of online fraud?

By Adrian Podkaminer, Head of Security at G2A.COM

It can start with a WhatsApp message that reads “Hey Dad, I’ve lost my phone, can you help?” In a moment of panic or trust, someone clicks, sends money, or hands over login details. Just like that, another social media scam claims a victim.

Fraud isn’t just something that happens to other people anymore; it’s becoming personal. Criminals no longer rely on clumsy phishing emails riddled with typos. Today’s scams are emotional, convincing, and powered by data and AI. This change demands a sharper response from every industry touched by digital payments.

The social media sweet spot for scammers

Social media is where people share life updates, connect with family, and often let their guard down. That vulnerability is fertile ground for scammers.

One fast-growing tactic involves impersonating loved ones. AI deepfakes now let criminals clone voices and videos to sound and look exactly like someone you know, and AI can draft messages in the writing style of the person being impersonated to make the deception convincing. Scammers might pose as a child or partner in distress, pressuring someone to send money or share login details. In other cases, they impersonate bank staff, claiming a large withdrawal has been made and urging the victim to act fast.

The result? A surge in losses, especially around high-spend seasons like Christmas. In the UK alone, festive fraud topped £11 million last year.

Fighting fire with fire

While fraudsters weaponise AI to scale their attacks, defenders can deploy it to detect anomalies faster.

Used well, AI is one of the strongest weapons in the fight against online fraud. Real-time data analysis lets platforms detect anomalies, such as logins from new locations or sudden, high-value transactions, before damage is done. Better still, machine learning systems continuously evolve, learning from past attacks to spot emerging threats faster.

Unlike traditional fraud detection methods, which rely on rules and human review, AI picks up subtle patterns that others might miss. These systems can also act instantly, blocking transactions or locking accounts when red flags are raised.
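To make the contrast concrete, even the traditional, rule-based approach described above can be sketched in a few lines. This is a minimal illustrative example, not any platform's actual system; the `AccountProfile` fields, thresholds, and flag names are all assumptions chosen for the sketch:

```python
from dataclasses import dataclass


@dataclass
class AccountProfile:
    """Hypothetical baseline behaviour for one account."""
    known_locations: set   # locations this user has logged in from before
    avg_amount: float      # typical transaction size for this user


def risk_flags(profile: AccountProfile, location: str, amount: float,
               high_value_multiplier: float = 5.0) -> list[str]:
    """Return the red flags a simple rules engine would raise for one event."""
    flags = []
    # Rule 1: login from a location the account has never used before.
    if location not in profile.known_locations:
        flags.append("login from new location")
    # Rule 2: transaction far above this user's normal spend.
    if amount > high_value_multiplier * profile.avg_amount:
        flags.append("unusually high-value transaction")
    return flags


profile = AccountProfile(known_locations={"London"}, avg_amount=40.0)
print(risk_flags(profile, "Lagos", 695.0))
```

A machine-learning system replaces these hand-written thresholds with patterns learned from past fraud, which is why it can catch signals no analyst thought to encode as a rule.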

That speed matters, because by the time a human fraud analyst spots the issue, the scammer has already cashed out.

Multi-factor authentication isn’t optional anymore

Even with smart tech, fraudsters still slip through, especially when users rely on outdated security habits. That’s why we still see cases like the ransomware attack that forced 158-year-old transport firm KNP into administration after hackers exploited a single employee’s weak password, or the Co-op breach, which exposed details of 6.5 million members. These are all reminders that human error and social engineering remain powerful entry points.

One of the simplest yet most effective defences is enabling multi-factor authentication (MFA). Think of it as a digital deadbolt. Even if someone gets your password, they can’t access your account without a second form of verification, whether that’s a one-time code, a hardware key, or a biometric check such as a fingerprint.

Yet adoption is still patchy. Perceived complexity or ‘hassle’ discourages uptake, despite the minimal effort required. Many platforms offer MFA, but users opt out or delay setup, and in some cases organisations don’t enforce it robustly enough. That has to change. As phishing tactics grow more sophisticated, MFA is no longer a nice-to-have; it’s a must.

Why user education still matters

The biggest vulnerability in any system is still human. While technology can screen out many threats, users still need to recognise when they’re being manipulated.

Education is crucial here and fortunately, the signs of fraud are often consistent: messages that demand urgency, unfamiliar links, odd phrasing, or payment requests to unknown contacts. Once people are trained to spot these red flags, they’re far less likely to fall for them.

The responsibility doesn’t lie with users alone, though. Platforms and payment providers must prioritise transparency and awareness. That means regular fraud updates, online safety campaigns, and encouraging people to report suspicious activity without fear of blame.

The bottom line

Fraud isn’t a future threat; it’s happening right now, evolving with every new tech advancement. Whatever the industry, it’s no longer enough to be reactive. You need to get ahead of fraud with layered security, smarter tools, and people who are informed, alert, and part of the solution.

If fraud is getting smarter, then so must we. Staying ahead means ensuring the next ‘Hey Dad’ text is met with scepticism, not panic.
