According to the Financial Times, major UK mobile networks including BT EE, Vodafone, Three, and Virgin Media O2 have jointly committed to upgrade their networks to block “number spoofing” within the next year. The initiative is part of a new telecoms charter agreed with the government and will use AI to identify and block suspicious calls and texts before scams happen. Criminals defrauded UK consumers out of £629 million in just the first half of 2025, and fraud now accounts for over 40% of all reported crime in the country. The technology will also include advanced call tracing to help police track down scammers. Data shows that 96% of mobile users decide whether to answer a call based on the number displayed. Lord Hanson, minister for fraud, said the government intends to make the UK “the hardest place in the world for scammers to operate.”
How spoofing actually works
Here’s the thing about number spoofing: criminals, typically in overseas call centers, make it appear they’re calling from a legitimate UK number you’d recognize, like your bank or your local police station. They’re exploiting the fact that the UK’s telephone network is outdated. An upgrade to 5G-era network technology would prevent much of this, because the network could flag that a call actually originates from abroad. But we’re stuck with legacy systems that let scammers present themselves as anyone they want. And since three-quarters of consumers say they’d be unlikely to pick up an unknown international number, spoofing gives fraudsters that crucial foot in the door.
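To make that concrete, here is a minimal illustrative sketch in Python. It is not any operator’s real implementation, and the numbers, domains and gateway names are invented for the example; it only shows that on legacy interconnects the displayed caller ID is just a text field the originating system fills in, plus the kind of crude consistency check a modernised network could apply to catch an obvious mismatch.

```python
# Illustrative only: the presented caller ID is simply a field the
# originating system sets; nothing on a legacy interconnect verifies it.
# All numbers, hosts and gateway names below are made up.

def build_sip_invite(presented_number: str, target_number: str, origin_host: str) -> str:
    """Build a bare-bones SIP INVITE. The From header is whatever the
    caller's equipment chooses to put there."""
    return (
        f"INVITE sip:{target_number}@example-uk-carrier.net SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {origin_host}\r\n"
        f"From: <sip:{presented_number}@example-uk-carrier.net>\r\n"  # spoofed display number
        f"To: <sip:{target_number}@example-uk-carrier.net>\r\n"
        "Call-ID: abc123@example\r\n"
        "CSeq: 1 INVITE\r\n\r\n"
    )

# A crude screening rule: a UK-presented number arriving over an
# international gateway is suspicious, because a genuine UK bank or
# police call would normally enter the network domestically.
INTERNATIONAL_GATEWAYS = {"gw-intl-1.example.net", "gw-intl-2.example.net"}

def looks_spoofed(presented_number: str, entry_gateway: str) -> bool:
    is_uk_number = presented_number.startswith("+44")
    arrived_from_abroad = entry_gateway in INTERNATIONAL_GATEWAYS
    return is_uk_number and arrived_from_abroad

if __name__ == "__main__":
    print(build_sip_invite("+442071234567", "+447700900123", "203.0.113.7"))
    print("flag as suspicious:", looks_spoofed("+442071234567", "gw-intl-1.example.net"))
```

The point of the sketch is only that the displayed number is attacker-controlled text; the fix the charter promises is for the network itself to verify or block it before the phone ever rings.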
Why this is happening now
Look, Ofcom has been pushing mobile operators to fix this for three years. So why the sudden urgency? The numbers are just staggering. Telecoms-enabled fraud accounts for 17% of all cases but a whopping 29% of total losses because these tend to be higher-value impersonation scams. When someone thinks they’re talking to their bank or the police, they’re more likely to transfer large sums. Meanwhile, UK Finance reports that banks prevented £682 million in fraud using their own AI systems in the same period. The mobile networks are basically playing catch-up.
The one-year delay problem
Nick Stapleton from BBC’s Scam Interceptors raises a valid point: why will it take up to a year to implement? “If they were really serious, they could pull the trigger on this tomorrow,” he told the Financial Times. And he’s not wrong. A lot of money will be lost in that time, and while new reimbursement rules mean most victims get their money back, the psychological harm and trauma remain. So what’s the holdup? Probably the usual: coordinating between multiple competing companies, upgrading infrastructure, testing systems. But when you consider that 96% of people judge calls by the number displayed, every day of delay means more successful scams.
AI as both problem and solution
There’s an interesting twist here: UK Finance notes that AI is increasingly being used to create convincing “deepfake” videos for high-value investment and romance scams. So we’re in an arms race where AI both creates more sophisticated scams and helps prevent them. The mobile networks’ approach of using AI to scan for suspicious patterns in real time makes sense, but it’s reactive. The real game-changer is preventing the spoofing in the first place. Once that’s implemented, scammers will have to work much harder to seem legitimate. Whether they’ll find new ways around these protections is the billion-pound question.
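For a sense of what “scanning for suspicious patterns in real time” can mean in practice, here is a small, purely illustrative scoring sketch in Python. It is not how the operators’ systems work (those details aren’t in the FT report), and the signals, thresholds and weights are invented; it just shows the kind of call metadata, such as burst dialling, very short call durations, and a presented number that doesn’t match the call’s origin, that any such system might weigh.

```python
from dataclasses import dataclass

# Purely illustrative heuristic scoring of call metadata -- not any
# operator's real model. All thresholds and weights are invented.

@dataclass
class CallMetadata:
    presented_number: str            # the caller ID shown to the recipient
    origin_is_international: bool    # did the call enter via an international gateway?
    calls_last_hour: int             # volume from this number across the network
    avg_duration_seconds: float
    distinct_recipients_last_hour: int

def suspicion_score(call: CallMetadata) -> float:
    """Combine a few weak signals into a 0-1 score."""
    score = 0.0
    # A UK-presented number arriving from abroad is the classic spoofing tell.
    if call.presented_number.startswith("+44") and call.origin_is_international:
        score += 0.5
    # Scam campaigns dial in high-volume bursts to many different people.
    if call.calls_last_hour > 100 and call.distinct_recipients_last_hour > 80:
        score += 0.3
    # Very short average durations suggest hang-up-on-answer robocalling.
    if call.avg_duration_seconds < 10:
        score += 0.2
    return min(score, 1.0)

def should_block(call: CallMetadata, threshold: float = 0.7) -> bool:
    return suspicion_score(call) >= threshold

if __name__ == "__main__":
    campaign = CallMetadata("+442079460000", True, 250, 6.0, 240)
    print(suspicion_score(campaign), should_block(campaign))  # 1.0 True
```

A production system would presumably learn these weights from labelled traffic rather than hard-code them, which is where the “AI” part comes in; the sketch is only meant to show why such screening is reactive compared with blocking the spoofed number outright.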
