Media

Instagram alerts parents about teens’ self-harm searches

Meta expands supervision tools across the US, UK, Australia and Canada; notifications replace blocked searches while recommendation incentives stay untouched


Instagram will begin alerting parents when their teenagers repeatedly search for suicide- or self-harm-related terms, rolling the feature out next week in the US, UK, Australia and Canada. The Standard reports the notifications will go to parents who use Instagram’s optional supervision tools, delivered by email, text, WhatsApp or in-app messages.

The change shifts the product’s safety posture from blocking and redirecting searches—Instagram’s current approach—to reporting a user’s behaviour to a parent. Meta frames this as “erring on the side of caution”, acknowledging alerts may be triggered when there is “no cause for concern”. In practice, that means the system is designed to tolerate false positives, because the cost of a missed warning is reputational and legal while the cost of unnecessary alerts is largely borne by families.

The company is explicit that the feature sits on top of its existing “teen accounts” setup: under-16s need parental permission to change certain settings, and parents can add monitoring only with the teenager’s agreement. That architecture matters because it defines what Meta has built: not a hard barrier, but a configurable dashboard. It is easier to document and defend in front of regulators than it is to redesign recommendation loops that amplify risky content.

Charities are already warning about the downstream effects. The Molly Rose Foundation criticised the move as a “clumsy announcement” that could leave parents “panicked and ill-prepared” for what follows, arguing the burden should be on reducing exposure to harmful material rather than “passing the buck to parents”, according to the Standard. A notification is not treatment, and it is not context: it is a prompt that can land at work, at night, or in the middle of a family conflict.

The policy also creates a new data trail. For an alert to be sent, Instagram must classify searches as self-harm-related, decide what counts as “repeated”, and tie that behaviour to a supervised account with parent contact channels (email addresses, phone numbers, WhatsApp identifiers, or in-app links). Even if the content of searches is not shared verbatim, the act of flagging and notifying turns a private query into a recorded event and a message.
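None of the mechanics are public, but the shape of the pipeline that paragraph describes can be sketched. The Python below is a purely hypothetical illustration: the keyword set stands in for Meta’s undisclosed classifier, and the repeat threshold, time window, and parent-channel field are invented for the sketch, not taken from Instagram.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical stand-ins: Meta has not published how searches are
# classified, what counts as "repeated", or how alerts are routed.
SELF_HARM_TERMS = {"self harm", "suicide"}  # toy substitute for a classifier
REPEAT_THRESHOLD = 3                         # assumed, not disclosed
WINDOW = timedelta(days=7)                   # assumed observation window

@dataclass
class SupervisedAccount:
    teen_id: str
    parent_channel: str  # e.g. email, SMS, WhatsApp, in-app link
    flagged_searches: list[datetime] = field(default_factory=list)

def record_search(account: SupervisedAccount, query: str, now: datetime) -> str | None:
    """Classify a search, log it if flagged, and decide whether to alert.

    Returns a message for the parent channel, or None. Even when no
    alert fires, the flagged timestamp is retained: the private query
    has already become a recorded event.
    """
    if not any(term in query.lower() for term in SELF_HARM_TERMS):
        return None
    account.flagged_searches.append(now)
    recent = [t for t in account.flagged_searches if now - t <= WINDOW]
    if len(recent) >= REPEAT_THRESHOLD:
        return f"alert -> {account.parent_channel}: repeated flagged searches"
    return None

if __name__ == "__main__":
    acct = SupervisedAccount(teen_id="t1", parent_channel="parent@example.com")
    t0 = datetime(2025, 1, 1)
    for day in range(3):
        msg = record_search(acct, "self harm help", t0 + timedelta(days=day))
        print(msg or "logged, no alert")
```

Even this toy version makes the trade-off visible: naive matching will flag benign queries, which is the “erring on the side of caution” Meta describes, and the flagged timestamps persist whether or not an alert ever fires.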

The rollout comes as governments push harder on youth social media use. Australia has already banned social media for under-16s, and the UK government has said it is considering similar restrictions, the Standard notes, while several EU countries explore access limits. Meta executives have faced court scrutiny in the US over allegations the company targeted younger users.

Instagram’s new alerts begin rolling out next week. They will be optional for parents to receive, but they will not be optional for teens to trigger.