North America

Meta explores facial recognition in smart glasses

Abuse charities warn consumer wearables enable stalking at scale; surveillance arrives as a lifestyle feature


Domestic abuse charities say wearable technology such as smart glasses is increasingly being used by perpetrators to stalk and harass survivors (AFP via Getty Images)
Meta CEO Mark Zuckerberg wearing his company’s smart glasses (REUTERS)

Meta is again flirting with turning consumer gadgets into portable surveillance infrastructure—this time by adding AI-powered facial recognition to its Ray-Ban-style smart glasses.

The Independent reports that domestic abuse charities Refuge and Women’s Aid are warning that real-time identification features would be a “direct and serious” risk to survivors, because stalking and tracking are already common tactics and wearable devices are increasingly being weaponized. Refuge says referrals to its tech-facilitated abuse team rose 62% in 2025, to 829 cases.

The immediate fear is obvious: a jealous ex with glasses that quietly identify a face and pull up a name or profile. But the more interesting story is the normalization pipeline. According to The New York Times (as cited by The Independent), Meta is considering a “name tag” style feature that could identify someone linked to a Meta account or who has a public profile on Facebook or Instagram. Sources told the Times the tool would not allow users to “look up anyone and everyone they encountered”—a promise that is always true right up until the product team ships “improvements,” the growth team ships “discoverability,” and law enforcement requests a “lawful access” channel.

Meta says it is “still thinking through options” and will take a “thoughtful” approach, per The Independent. The company also points to existing safety indicators on its glasses—like a visible light when recording. But facial recognition doesn’t need to record video to be invasive; it needs only to compute a face embedding and match it against a database. A tiny LED is a charmingly analog solution to a digital problem.
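To make the point concrete: the identification step above can be sketched in a few lines. This is a minimal, hypothetical illustration of embedding matching, not Meta’s actual pipeline; the tiny 4-dimensional vectors and the names in the database are invented for the example (real systems typically use embeddings of 128 or more dimensions produced by a neural network).

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_face(embedding, database, threshold=0.6):
    """Return the best-matching identity, or None if nothing clears the threshold.

    `database` maps identity -> reference embedding. No image or video is
    stored or transmitted; only these numeric vectors are compared.
    """
    best_id, best_score = None, threshold
    for identity, ref in database.items():
        score = cosine(embedding, ref)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Toy 4-dim embeddings, purely illustrative.
db = {
    "alice": [1.0, 0.0, 0.0, 0.0],
    "bob":   [0.0, 1.0, 0.0, 0.0],
}
probe = [0.9, 0.1, 0.0, 0.0]  # a face embedding computed from a camera frame
print(match_face(probe, db))  # "alice"
```

Note what is absent from this sketch: any stored photograph. The privacy harm lives in the vectors and the database, which is why a recording light tells you nothing about whether identification is happening.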

What’s being built is not merely a feature; it’s a private, always-on identification layer for public space. Every bar, subway platform, workplace lobby, and first date becomes a data environment where the default assumption is that strangers can be indexed. “Safety by design” rhetoric—well-intentioned as it is—often functions as the social lubricant that gets mass adoption over the line. Once adoption is achieved, the business model (and the inevitable data retention, model training, and partner integrations) tends to follow.

The objection isn’t that some people will abuse the tool—they will. It’s that the tool itself is a power shift: from anonymous movement in public to continuous, privatized identification. And unlike state surveillance, this one arrives as a lifestyle accessory, bundled with social media accounts people already can’t quit.

If Meta wants to prove it isn’t building consumer-grade “enterprise stalking,” it could start by committing—publicly and technically—to on-device recognition only, no searchable face database, no cloud logging, no cross-app identity enrichment, and no API for third parties. Until then, “we’re thinking about it” is just Silicon Valley’s way of saying: we’ll ship it and see what regulators complain about afterward.