Voice-AI Industry Discovers Old School Telecoms Fraud

The Instant-Callback trap, fraud, legal landmines, and how not to get burned.
Summary: Attackers are stuffing naive AI providers' public web-forms with plausible details to trigger outbound Voice-AI callbacks to UK mobile-looking numbers. The bot then “talks” to convincing recordings for minutes at a time, with multiple submissions maxing its concurrency and burning cash. It’s classic IRSF/AIT (international revenue-share fraud / artificially-inflated traffic) with a twist of naive Voice AI bots as willing facilitators. Beyond the money, if you let unverified user input cause calls, you can wander into PECR/TCPA legal problems (unsolicited and automated marketing rules), Ofcom “persistent misuse” territory (silent/abandoned calls), and even get your system weaponised to harass third parties.
The safe rule of thumb: don’t ever call any number taken from user input unless you’ve verified both the user’s identity and the number.
What seems to have happened (and why)
A slew of recent posts on LinkedIn and elsewhere describe real attacks exploiting public forms that feed an outbound agent. In one, the agent auto-dialled a range of +44 7418 534… numbers and connected to clean recorded audio. The AI kept “conversing”, concurrency hit 20 simultaneous calls, and the bill climbed at roughly $10 per minute until the operator pulled the plug. Others in the same community reported $450, $600, and even ~$18k losses.
Thread after thread on LinkedIn about the fraud has hundreds of comments, mostly from "Voice AI Experts", many of whom are mystified about why this is happening to them and what possible motive the attackers could have for disrupting their naive "killer feature" business model. They call for upstream providers to do something, while recommending partial mitigations like CAPTCHAs, form honeypot fields, budget cut-offs and prefix/country gating. One genuine surprise to me from the reports and ongoing debate is how wide the Voice AI industry now is, and perhaps how little background some of the new, energetic players have in the dull old telecoms world!
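To make two of those partial mitigations concrete, here is a minimal Python sketch of a hidden honeypot field check plus crude prefix gating. The field names and the blocked-prefix list are illustrative assumptions, and neither measure is sufficient on its own.

```python
# Minimal sketch of two partial mitigations mentioned above: a hidden
# honeypot field and crude prefix gating. Field names and the prefix
# list are illustrative assumptions, not a complete defence.

BLOCKED_PREFIXES = ("+447418534",)  # example: the surcharged sub-range reported in the attacks

def form_submission_looks_suspicious(form: dict) -> bool:
    # Real users never see the hidden "website" field; bots often fill it in.
    if form.get("website"):
        return True
    number = form.get("phone", "").replace(" ", "")
    # Reject destinations in ranges you have no business calling back.
    if number.startswith(BLOCKED_PREFIXES):
        return True
    return False
```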
A Voice AI influencer even made a YouTube video, “My AI Got Attacked… Don’t Make This $18,000 Mistake". The title is clickbait: he didn't lose $18,000, but he was happy to talk about how he got burned and what he felt the industry could do about it. There is some good information in there, but despite his cybersecurity credentials it doesn't really help with understanding why this happened, or with the other problems that placing automated outbound calls to unverified users can cause a business. (YouTube)
Why it works: AI agents don’t get bored, and a simple looped recording can keep them engaged. If you take arbitrary user input and use it to drive possibly many parallel calls, you’ve built a money-printing machine for whoever controls the destination, provided there is some way for them to tap into a revenue share somewhere in the telecoms chain.
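A back-of-the-envelope calculation shows how fast the meter runs, using the figures reported in the threads (20 concurrent calls at a surcharged rate in the region of $0.44/min; both numbers come from the reports above rather than any particular provider's price list):

```python
# Back-of-the-envelope burn rate, using the figures reported in the threads:
# 20 concurrent calls at roughly $0.44/min to a surcharged UK mobile range.
concurrent_calls = 20
price_per_minute = 0.44  # USD, illustrative surcharged rate

burn_per_minute = concurrent_calls * price_per_minute        # ~$8.80/min, in line with the ~$10/min reported
burn_per_hour = burn_per_minute * 60

print(f"${burn_per_minute:.2f} per minute, ${burn_per_hour:,.0f} per hour")
# -> $8.80 per minute, $528 per hour
```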
But (+44) 07 is just mobile, right?
The prefix used in multiple reported attacks, 07418 534..., sits within the UK 07 mobile services ranges but is classed as a "Non-standard 07" numbering range. In the UK, mobile numbers occupy a special 07 numbering space, and the caller generally pays more to call them to compensate the mobile operator for the network cost of carrying the call. This differs from the US model, where historically the mobile subscriber pays for inbound calls and the caller pays the normal rate. The distinction has been blurred in both countries by inclusive plans, under which both kinds of call are free within a consumer's minutes allowance.
In the UK, the mobile operator charges the telco delivering the call to it a termination (or settlement) fee for every minute of call delivered. Historically this was quite a high number, but wholesale mobile termination in the UK is now price-capped by Ofcom (currently 0.487p/min). That’s the UK interconnect ceiling, not what your app pays internationally. (www.ofcom.org.uk)
Interestingly, the range including 07418 534 is allocated by Ofcom to a North American company, which raises the possibility of revenue-sharing arrangements that may not exist for traditional in-country mobile calls. Perhaps this is the source of the fraud incentive, but only the company terminating the calls would really know what the flow is and how the fraud is being monetised.
At the retail layer, some CPaaS providers mark specific mobile sub-ranges like this as “Mobile – Surcharged” with much higher per-minute prices (e.g., public Twilio pages show $0.4400/min for “UK Mobile – From Surcharged Zone 1”, far above standard UK mobile at $0.0305/min). The flow of those retail fees into interconnect fees is probably what the attacker has found a way of tapping into. (Twilio)
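One practical takeaway is to gate on price rather than just prefix: look up what the destination would cost before dialling and refuse anything above a threshold. A hedged sketch follows; the rate table is an illustrative stand-in for your provider's published price list, with figures echoing the public Twilio examples quoted above.

```python
# Sketch of a price-based pre-dial gate. The rate table is an illustrative
# stand-in for your provider's published price list; the figures echo the
# public Twilio examples quoted above.
RATES_USD_PER_MIN = {
    "+4474185": 0.4400,   # "UK Mobile - From Surcharged Zone 1" style pricing
    "+447":     0.0305,   # standard UK mobile
}
MAX_ACCEPTABLE_RATE = 0.05  # refuse anything pricier than ordinary mobile

def rate_for(number: str) -> float:
    # Longest-prefix match against the rate table.
    for prefix in sorted(RATES_USD_PER_MIN, key=len, reverse=True):
        if number.startswith(prefix):
            return RATES_USD_PER_MIN[prefix]
    return float("inf")  # unknown destination: treat as too expensive to call

def allowed_to_dial(number: str) -> bool:
    return rate_for(number) <= MAX_ACCEPTABLE_RATE

# Example: allowed_to_dial("+447418534000") -> False (surcharged range)
#          allowed_to_dial("+447700900123") -> True  (standard UK mobile)
```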
Who’s getting hurt
- AI builders, agencies, and vendors wiring public “speed-to-lead” forms to instant callbacks. Several in the threads report the same attack.
- Service providers and app builders with shared trunks or accounts: one vulnerable demo from an otherwise low-volume consultant can create a cross-tenant blast radius when the shared account hits its concurrency and spending limits (a per-tenant cap sketch follows this list).
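A minimal sketch of per-tenant guard rails on a shared account; the Tenant class and the limits are placeholders rather than recommended values.

```python
# Sketch of per-tenant guard rails on a shared trunk/account, so one
# compromised demo form cannot exhaust concurrency or budget for everyone.
# The limits and the Tenant class are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Tenant:
    max_concurrent: int = 2          # per-tenant cap, well below the account-wide limit
    daily_budget_usd: float = 25.0   # per-tenant daily spend ceiling
    active_calls: int = 0
    spent_today_usd: float = 0.0

    def may_place_call(self, estimated_cost_usd: float) -> bool:
        # Both checks must pass before this tenant is allowed to dial.
        return (self.active_calls < self.max_concurrent
                and self.spent_today_usd + estimated_cost_usd <= self.daily_budget_usd)
```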
The legal landmines of dialing unverified numbers
Even if you never face fraud losses because you filter or rate-limit destination numbers, letting unverified user input trigger live or automated calls creates compliance risk. Regulators don’t accept “the AI did it” as a defence; if your system initiated the call, you are responsible.
United Kingdom (PECR, Ofcom, TPS/CTPS, Communications Act)
- Automated calls with recorded messages (PECR Reg. 19): you must not make automated marketing calls unless the person has specifically consented to that type of call. General or “live-call” consent isn’t enough. You must also identify yourself and provide contact details. (Information Commissioner's Office)
- Live marketing calls: consent may not always be required, but you must not call numbers on TPS/CTPS unless you have specific permission overriding their opt-out. (Information Commissioner's Office)
- Silent/abandoned calls & “persistent misuse”: Ofcom can act (and fine up to £2m) where calling patterns cause harm, including repeated silent or abandoned calls, which are easy to create if your bot drops calls quickly or your dial strategy misfires. (www.ofcom.org.uk)
- Communications Act 2003, s.127: it’s an offence to persistently use a public network to cause annoyance, inconvenience or needless anxiety, which is relevant if an attacker manipulates your system to harass targets. (CPS guidance addresses s.127(2)(c).) (Legislation.gov.uk)
Implication: If your form lets adversaries make your system place unsolicited or automated calls, or generate harassing calling patterns, you risk ICO/Ofcom enforcement and criminal exposure regardless of the attacker’s role.
United States (TCPA/FCC) & EU (ePrivacy)
- U.S. TCPA: The FCC has clarified that AI-generated voices in robocalls are illegal under TCPA’s artificial/prerecorded voice rules; enforcement and AG actions are active. Prior express (often written) consent is the baseline for automated/recorded marketing calls. (Federal Communications Commission)
- EU ePrivacy Directive (Article 13): Automated calling systems for direct marketing require prior consent EU-wide (member-state implementations vary for live calls, but the automated rule is consistent). (EUR-Lex)
Bottom line on compliance: a public form that can cause your app to dial a number the user inputs is not just a fraud vector that can be used to attack you and your telecoms supply chain; it’s a business threat and a compliance hazard across multiple jurisdictions.
Final word
Treat the PSTN like payments. If a public form can spend your money by placing calls, it deserves the same paranoia you reserve for charging cards: verify first, gate destinations, cap everything, and cut off in real time on anomalies. Or, more bluntly: don’t make outbound calls to unverified numbers. Ever.
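To close, a minimal sketch of what “verify first” can look like in code: an out-of-band one-time code has to come back before a number ever reaches the dialler. The send_otp and queue_outbound_call helpers are placeholders for your own delivery channel and callback logic, not any particular provider's API.

```python
# Minimal sketch of "verify first, then dial". Nothing here is a specific
# provider's API; send_otp and queue_outbound_call are placeholders for
# whatever verification channel and dialling logic you already have.
import secrets

_pending: dict[str, str] = {}  # number -> one-time code (use a real store in production)

def send_otp(number: str, code: str) -> None:
    # Placeholder: deliver the code out of band (SMS, email, etc.).
    print(f"(would send code {code} to {number})")

def queue_outbound_call(number: str) -> None:
    # Placeholder: hand the verified number to your existing callback logic.
    print(f"(would queue callback to {number})")

def start_verification(number: str) -> None:
    # Generate a six-digit code and send it to the submitted number.
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[number] = code
    send_otp(number, code)

def confirm_and_queue_callback(number: str, submitted_code: str) -> bool:
    # Only a number whose owner proved they hold it ever reaches the dialler.
    if _pending.get(number) == submitted_code:
        del _pending[number]
        queue_outbound_call(number)
        return True
    return False
```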