Technology

Meta expands scam detection across Facebook, WhatsApp and Messenger

New warnings rely on behavioural signals and optional AI chat review; enforcement numbers rise as classification becomes the product

Image Credits: Meta
Aisha Malik, techcrunch.com

Meta is rolling out new scam-detection features across Facebook, WhatsApp and Messenger, including warnings for suspicious friend requests, alerts for risky WhatsApp device-linking attempts, and expanded “advanced scam detection” in Messenger, according to TechCrunch. The company says it removed more than 159 million scam ads last year and took down 10.9 million Facebook and Instagram accounts tied to criminal scam centres.

The new tools are framed as user protection, but they also formalise a shift that has been underway for years: trust and risk decisions move from the user’s judgement to the platform’s classification systems. On Facebook, an incoming friend request can now trigger an “are you sure?” prompt based on signals like few mutual friends or a location mismatch. On WhatsApp, the focus is on account takeovers via device linking—scammers trick users into entering a code or scanning a QR code that silently attaches the attacker’s device to the victim’s account. On Messenger, Meta’s system flags scam-like patterns in chats with new contacts and can ask users whether they want to share recent messages for an AI review.
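The friend-request check can be pictured as a simple signal-scoring pass. The sketch below is a toy illustration under stated assumptions: the signal names, weights, and threshold are invented for this example and are not Meta's actual classifier.

```python
# Hypothetical sketch of a friend-request risk check.
# Signal names, weights, and the threshold are illustrative
# assumptions, not Meta's actual system.

def friend_request_risk(mutual_friends: int,
                        same_region: bool,
                        account_age_days: int) -> float:
    """Combine a few behavioural signals into a rough risk score in [0, 1]."""
    score = 0.0
    if mutual_friends == 0:
        score += 0.5
    elif mutual_friends < 3:
        score += 0.3
    if not same_region:              # location mismatch with the recipient
        score += 0.3
    if account_age_days < 30:        # newly created accounts are riskier
        score += 0.2
    return min(score, 1.0)

def should_warn(score: float, threshold: float = 0.6) -> bool:
    """Show the 'are you sure?' prompt only above a risk threshold."""
    return score >= threshold

# A week-old account with no mutual friends and a location mismatch
# clears the threshold and would trigger the prompt.
print(should_warn(friend_request_risk(0, False, 7)))   # True
```

The point of the sketch is that the user never sees the score, only the prompt: the judgement call has already been made by the classifier before the "are you sure?" dialog appears.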

That last step illustrates the core trade-off. Scam detection at scale requires telemetry: relationship graphs, behavioural patterns, device signals, and sometimes message content or message excerpts. Meta says scammers often “avoid detection” and do not immediately use accounts maliciously, which is another way of saying the system must infer intent before harm is obvious. The cost of being late is measured in fraud losses and headlines; the cost of being early is measured in false positives—warnings that spook legitimate conversations, and enforcement actions that hit real users.
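That asymmetry can be made concrete with a toy expected-cost calculation. The cost figures below are made-up assumptions, purely to show why a platform that weighs fraud losses heavily will tolerate many false-positive warnings:

```python
# Toy expected-cost model for the warn/don't-warn decision.
# Cost figures are invented for illustration only.

def warn_threshold(cost_false_positive: float,
                   cost_false_negative: float) -> float:
    """Scam probability above which warning minimises expected cost.

    Warn when p * C_fn > (1 - p) * C_fp,
    i.e. when p > C_fp / (C_fp + C_fn).
    """
    return cost_false_positive / (cost_false_positive + cost_false_negative)

# If a missed scam (C_fn) is assumed 100x costlier than a spurious
# warning (C_fp), it pays to warn at roughly 1% estimated scam
# probability -- which is exactly why warnings feel trigger-happy.
print(warn_threshold(1.0, 100.0))   # ~0.0099
```

Under these assumed costs, the optimal system warns on almost anything faintly suspicious, which matches the friction users experience in practice.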

Meta’s incentives are not symmetrical. The company bears reputational and regulatory risk when scams spread on its platforms, but it does not bear the full cost when an account is wrongly flagged or a legitimate business conversation is interrupted. For users, the harm of a false negative can be catastrophic (money stolen, identity compromised), while the harm of a false positive is often diffuse (friction, blocked outreach, lost time). Platforms tend to optimise for the metrics they can defend publicly: removals, pre-report takedown rates, and “we warned you” prompts.

Meta is not publishing the country list for the Messenger rollout, and it is not detailing precisely which signals trigger prompts or how long shared chat snippets are retained. What it is publishing are large enforcement numbers—159 million ads, 10.9 million accounts—paired with new prompts that nudge users toward reporting and blocking.

The update arrives as a set of pop-ups and warnings. The underlying change is that more everyday conversations and account actions are being scored in real time by Meta’s systems before the user decides what to do next.