OpenAI is rolling out age-prediction technology across ChatGPT, aiming to restrict access to sensitive content for users under 18. For accounts where age hasn't been explicitly stated, a model will analyze behavioral signals, including how long the account has existed and its activity patterns, to estimate the user's age.
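OpenAI has not published how the prediction works, so the following is a minimal sketch under stated assumptions: age estimation as a scoring function over account signals, with a conservative default toward the restricted experience for borderline cases (a plausible design choice, not one the article confirms). Every feature, weight, and threshold here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Hypothetical features; OpenAI has not disclosed its actual inputs.
    account_age_days: int
    late_night_ratio: float        # share of sessions between midnight and 5am
    stated_age: int | None = None  # self-reported age, if the user gave one

def is_likely_minor(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Return True when the account should get the under-18 experience."""
    if s.stated_age is not None:
        return s.stated_age < 18
    # Toy linear score standing in for a trained classifier.
    score = 0.0
    if s.account_age_days < 90:
        score += 0.3
    score += 0.5 * s.late_night_ratio
    # A deliberately low threshold errs toward restriction when unsure.
    return score >= threshold

print(is_likely_minor(AccountSignals(account_age_days=30, late_night_ratio=0.6)))
# -> True: a new account with heavy late-night use trips the guardrails
```

A production system would be a trained model over far richer signals; the sketch only shows the decision shape, in which borderline accounts land on the restricted side rather than the permissive one.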
The system is designed to reinforce guardrails around potentially harmful content, such as graphic violence, self-harm depictions, and extreme beauty standards. OpenAI cites mounting scrutiny and lawsuits over teen deaths linked to AI chatbot interactions as driving factors behind the changes. The move follows a broader trend of stricter age verification online: Roblox recently mandated age checks, and Australia has enacted a law banning social media for children under 16.
Users incorrectly flagged as underage can prove their age through third-party verification via Persona, which requires a live selfie and a government ID. How well ChatGPT's behavioral age prediction works remains unclear; the facial recognition and age estimation technologies behind the verification fallback, however, have demonstrably improved, with government evaluations showing top algorithms exceeding 99.5% accuracy in identity verification and 95% in age estimation.
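Even strong accuracy leaves a large absolute number of misflagged users, which is why the verification fallback matters. A back-of-the-envelope calculation, with an invented adult-user count and "95% accuracy" simplified to a flat per-user error rate:

```python
# Illustrative arithmetic only: the population figure is invented, and real
# evaluations report error in finer-grained terms than one accuracy number.
adult_users = 10_000_000
accuracy = 0.95

misflagged = adult_users * (1 - accuracy)
print(f"{misflagged:,.0f} adults misflagged as minors")
# -> 500,000 adults misflagged as minors
```

At that scale, the selfie-and-ID fallback becomes routine friction for a sizeable population, which feeds directly into the privacy concerns below.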
However, experts like Kristine Gloria of Young Futures argue that technology alone isn't enough: true safety requires transparency, accountability, and digital literacy, not just technical fixes. The broader goal should be to build platforms where youth wellbeing is foundational, not an afterthought.
The shift toward age verification and biometric scanning is becoming increasingly common as companies face legal and ethical pressures to protect young users. While these measures may improve safety, they also raise questions about privacy and the balance between protection and access.