Meta is introducing stronger safety controls on its AI chatbots to protect teenagers from harmful content. Under the update, users under 18 will no longer be able to discuss topics such as self-harm, suicide or eating disorders with the AI and will instead be directed to professional help, the BBC reported.
The move follows reports that Meta’s systems had engaged in inappropriate conversations with minors, which drew scrutiny from US lawmakers.
Child safety groups welcomed the changes but criticised Meta for acting too late, arguing that companies must test for such risks before rolling out powerful tools.
Meta is also tackling the misuse of its AI to create sexualised chatbots that mimic celebrities, including some impersonating underage public figures. The company said it permits likeness-based content but bans explicit or exploitative creations, particularly those involving children, according to the BBC.