Instagram is introducing a new feature that will notify parents if their teenagers repeatedly search for suicide or self-harm-related terms on the platform. The update, announced in a blog post, states the feature will roll out in the United States, the United Kingdom, Australia, and Canada within the next week, with additional regions expected later in the year.
Parental Alerts and Trigger Conditions
The feature will apply to parents who have enrolled in Instagram’s parental supervision tools. According to the company, alerts will be triggered when a teen conducts multiple searches within a short period for terms associated with suicide or self-harm, including phrases like ‘suicide’ or ‘self-harm,’ as well as other language that may indicate potential risk.
Notifications will be sent to parents via email, text message, WhatsApp, and in-app alerts, depending on the available contact details. When opened, the notification will display a full-screen message explaining that the teen has repeatedly searched for related terms. Parents will also be provided with expert-backed resources to help guide sensitive conversations.
Instagram said it developed the feature after analysing search behaviour and consulting experts from its Suicide and Self-Harm Advisory Group. The alert threshold requires multiple searches within a short timeframe, the company added, so that one-off searches do not generate unnecessary notifications that could reduce the tool’s effectiveness.
Legal and Public Scrutiny Over Teen Well-Being
The move comes amid growing legal and public scrutiny over teen well-being online. Meta Platforms, Instagram’s parent company, has faced several lawsuits in the United States alleging that it failed to protect children adequately and designed features that contribute to addiction and psychological harm.
Company executives, including Instagram head Adam Mosseri, have been questioned about safety measures and the balance between privacy and child protection. Internal Meta research presented in a separate case before the Los Angeles County Superior Court suggested that parental supervision tools had a limited impact on compulsive social media use among children. The research also indicated that children experiencing stressful life events were more likely to struggle with regulating their usage.
Alongside the initial rollout, Instagram plans to expand the feature to notify parents if a teen attempts to engage the app’s artificial intelligence tools in conversations related to suicide or self-harm.
Broader Context of Social Media Safety Reforms
The latest update builds on earlier safety reforms introduced by Meta. In September 2024, the company rolled out Teen Accounts on Instagram, automatically placing users under 18 into private accounts with stricter controls. The changes limited who could message or tag teens, reduced exposure to sensitive content across Explore and Reels, introduced stronger anti-bullying filters, added daily time reminders, and activated overnight ‘sleep mode’ to mute notifications.
Other social media companies are facing similar scrutiny. TikTok, owned by ByteDance, is confronting lawsuits from multiple U.S. states alleging that its algorithm is designed to maximise engagement among children for advertising revenue. YouTube has also faced criticism over how its recommendation systems affect young users.
Instagram said it will continue monitoring feedback and refining the new alert system, emphasising that the goal is to support families in a way that is both effective and respectful of teens’ privacy.