Character.AI Restricts Teen Access to Chats, Launches “Stories” Feature


Character.AI, facing mounting legal pressure over alleged harm to teen mental health, is shifting its approach to underage users. The platform will ban open-ended text chats for users under 18, directing them instead toward a new, more controlled experience called “Stories.” The move follows multiple lawsuits alleging that the AI’s interactions contributed to mental distress, including one case linked to a teenager’s suicide.

New Restrictions and the “Stories” Format

Starting November 25th, teens will no longer have access to the platform’s open-ended chat feature. Character.AI is framing this as a temporary measure while it develops age-verification tools. In the meantime, the replacement is “Stories,” a choose-your-own-adventure format in which AI characters guide users through predefined narratives.

The company pitches “Stories” as an enhancement for younger users, emphasizing its structured format compared with freeform chat. Users select characters and genres, then either write their own story premise or let the AI generate one. The experience includes AI-generated images, with plans to expand into more immersive “multimodal elements.”

Why This Matters

This shift reflects growing concerns about the impact of AI interactions on vulnerable users. Unsupervised, open-ended chats with AI can expose teens to harmful content or reinforce unhealthy behaviors, leading to psychological distress. The lawsuits against Character.AI highlight the real-world consequences of these risks.

The move also signals a broader trend: tech companies are under increasing scrutiny to regulate AI interactions, particularly regarding minors. While “Stories” provides a more controlled environment, it also raises questions about censorship, user autonomy, and whether this is a genuine solution or a PR-driven response to legal threats.

Character.AI’s decision is a direct response to legal and public pressure, demonstrating the growing accountability of AI platforms for the well-being of their users. The long-term implications of this shift remain to be seen, but it sets a clear precedent: AI companies must address the risks associated with their products, especially when dealing with underage audiences.