Meta CEO Mark Zuckerberg rejected safety measures for AI chatbots that his own staff warned could engage in sexual conversations with minors, according to internal documents filed in a New Mexico lawsuit. The communications, obtained through legal discovery and made public Monday, show Zuckerberg pushed for "less restrictive" policies and blocked parental controls despite concerns from Meta's child safety team, Reuters reported.
Internal messages from March 2024 reveal employees stating they "pushed hard for parental controls to turn GenAI off" but were overruled by leadership citing Zuckerberg's decision. Ravi Sinha, Meta's head of child safety policy, wrote in January 2024 that creating romantic AI companions for minors to interact with was not "advisable or defensible." Meta's global safety head Antigone Davis agreed, warning that the approach "sexualizes minors."
Meta defends chatbot policy amid mounting evidence
The documents paint a contradictory picture. While Zuckerberg reportedly wanted to prevent "explicit" conversations with younger teens, a February 2024 meeting summary shows he believed Meta should be "less restrictive than proposed" and wanted to "allow adults to engage in racier conversation on topics like sex." He also rejected parental controls that would have let families disable the AI feature entirely.
Nick Clegg, Meta's former head of global policy, questioned the approach in internal emails, asking if the company really wanted these products "known for" sexual interactions with teens, warning of "inevitable societal backlash."
Company suspends teen access after year of controversy
Those concerns proved prescient. A Wall Street Journal investigation in April 2025 found Meta's chatbots included sexualized underage characters and engaged in graphic sexual roleplay. Reuters later reported that Meta's official guidelines stated it was "acceptable to engage a child in conversations that are romantic or sensual."
Meta suspended teen access to AI chatbots only last week, more than a year after the initial safety warnings. The company says it is developing age-appropriate versions with parental controls—the same safeguards Zuckerberg allegedly rejected earlier. Meta spokesman Andy Stone dismissed the lawsuit as "cherry-picking documents," though the case proceeds to trial next month.