Instagram to alert parents if teens search for self-harm and suicide content

Meta is implementing a controversial new safety feature on Instagram that will notify parents when their teenagers repeatedly search for suicide or self-harm related content. This marks the first time the social media giant will proactively alert guardians about their child’s search behavior rather than simply blocking access to harmful material.

The parental notification system will initially roll out to families enrolled in Instagram’s Teen Accounts program in the UK, US, Australia, and Canada starting next week, with a global rollout to follow. According to Meta’s official blog post, the alerts will be accompanied by expert resources designed to help parents navigate difficult conversations with their children.

However, the initiative has drawn sharp criticism from suicide prevention organizations. The Molly Rose Foundation, established in memory of 14-year-old Molly Russell who took her own life after viewing harmful content on Instagram, warned the approach “could do more harm than good.” Chief executive Andy Burrows expressed concern that “these flimsy notifications will leave parents panicked and ill-prepared” for sensitive discussions.

The foundation cited prior research indicating Instagram still “actively” recommends harmful content about depression and suicide to vulnerable young users. Multiple child safety advocates argue Meta should focus on addressing systemic platform risks rather than transferring responsibility to parents.

Meta acknowledges the system may occasionally generate false alerts but will “err on the side of caution” based on analysis of user search patterns. The company also plans to extend similar monitoring to interactions with Instagram’s AI chatbot as children increasingly turn to artificial intelligence for support.

This development comes amid growing global scrutiny of social media companies’ child protection measures. Australia recently banned social media for users under 16, while Spain, France, and the UK are considering similar legislation. Meta executives recently appeared in US courts to defend the company against allegations that it deliberately targeted younger users.

Sameer Hinduja of the Cyberbullying Research Center noted that while the alerts will understandably alarm parents, the critical factor is “the quality and usefulness of the resources parents immediately receive to guide them through what to do next.”