Meta Knew AI Chatbots Could Engage in Inappropriate Interactions Before Launch

Internal documents reveal Meta leadership dismissed safety warnings about AI companions engaging in explicit romantic interactions, including those with minors. The company proceeded with the launch despite objections from its own safety teams, according to court filings unsealed Monday as part of a lawsuit by New Mexico Attorney General Raúl Torrez.

Safety Concerns Overlooked

Communications between Meta safety executives, including Ravi Sinha (head of child safety policy) and Antigone Davis (global safety head), show that both raised concerns about chatbots being exploited for sexually explicit interactions, particularly with underage users. The two agreed on the need for safeguards, but the documents suggest CEO Mark Zuckerberg rejected recommendations for parental controls, including an option to disable generative AI features, before the platform launched AI companions.

The decision is particularly striking given that Meta already faces multiple lawsuits over its products’ impact on minors, including a case headed toward a potential jury trial over the allegedly addictive design of Facebook and Instagram. Competitors such as YouTube, TikTok, and Snapchat face similar legal scrutiny.

Predator Marketplaces

The unsealed communications emerged during discovery in Torrez’s case against Meta, filed in 2023, which alleges the company allowed its platforms to become “marketplaces for predators.” Separate court filings from a multidistrict lawsuit in California also revealed that Meta executives were aware of “millions” of adults contacting minors on its sites. Meta has responded by arguing that the filings rely on selectively chosen documents and by emphasizing its past safety improvements.

“The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens,” said a Meta spokesperson.

Temporary Fixes and Further Scrutiny

Meta temporarily paused teen access to its chatbots in August after Reuters reported that internal AI rules permitted “sensual” or “romantic” conversations. The company later revised its guidelines to prohibit content related to child sexual abuse and romantic role-play involving minors, then locked down the chatbots again last week while it develops a version with parental controls.

Torrez has also taken legal action against Snapchat, accusing the platform of enabling sextortion and grooming while marketing itself as safe for young users. These lawsuits underscore the growing legal pressure on social media platforms to prioritize child safety, especially as generative AI tools become more prevalent.

The revelations from these court documents highlight a pattern of prioritizing product launches over robust safety measures, raising questions about Meta’s commitment to protecting minors in the age of AI-powered social platforms.