Meta says it is changing how it trains its AI chatbots to prioritize teen safety, a company spokesperson told TechCrunch, following an investigative report on the company's lack of AI safeguards for minors. The company says its chatbots will now be trained not to engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations. Meta describes these changes as interim measures, with more robust, longer-term safety protections for minors to come.
Meta spokesperson Stephanie Otway acknowledged that the company's chatbots could previously talk with teens about all of these topics in ways the company had deemed appropriate. Meta now recognizes that this was a mistake.
What Are Meta’s New AI Rules?
“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” Otway commented. “As we continue to improve our systems, we are implementing additional safeguards as a precaution, including training our AIs to avoid discussions with teens on these subjects, instead guiding them to professional resources, and currently restricting teen access to a limited selection of AI characters. These updates are currently being rolled out, and we will keep adjusting our methods to ensure teens have safe and age-appropriate experiences with AI.”
Beyond the training changes, the company will also limit teens' access to certain AI characters that could hold inappropriate conversations. Some of the user-made AI characters on Instagram and Facebook include sexualized personas such as “Step Mom” and “Russian Girl.” For now, teenage users will only have access to AI characters that promote education and creativity, Otway said.
The policy changes were announced just two weeks after a Reuters investigation surfaced an internal Meta policy document that appeared to permit the company's chatbots to engage in sexual conversations with underage users. One passage of the document listed as an acceptable response to a minor: “Your youthful form is a work of art.” The document also included examples of how the AI should respond to requests for violent imagery or sexual imagery of public figures.
Meta says the document was inconsistent with its broader policies and has since been changed, but the report has fueled ongoing concern over potential child-safety risks. After the report was published, Senator Josh Hawley (R-MO) launched an official inquiry into the company's AI policies, and a coalition of 44 state attorneys general wrote to a group of AI companies, including Meta, emphasizing the importance of child safety and citing the Reuters report directly. “We are uniformly revolted by this apparent disregard for children’s emotional well-being,” the letter states, “and alarmed that AI Assistants are engaging in conduct that appears to be prohibited by our respective criminal laws.”
Otway declined to say how many minors use Meta's AI chatbots, and would not say whether the company expects these decisions to shrink its AI user base.