ChatGPT Age Prediction System Now Rolling Out to Identify Teen Users

OpenAI first announced plans to introduce an automated age-prediction system for ChatGPT in September last year. The goal was to help the AI chatbot distinguish between teens and adults.

After testing it in selected countries over the past few months, the company has now started rolling it out on ChatGPT consumer plans.

The system identifies users under 18 and automatically redirects them to a more age-appropriate ChatGPT experience. According to OpenAI, the age-prediction model examines several factors to estimate whether someone is underage.

It looks at how long an account has existed, typical times of day when someone is active, usage patterns over time, and the user’s stated age.

This combination of signals helps the system make educated guesses about user age without requiring identity verification. The approach tries to balance safety with privacy: OpenAI doesn’t want to collect sensitive identity documents from every user, so it built a system that infers age from behavior patterns instead.

Younger users typically have different usage patterns than adults. They might use ChatGPT during school hours, have accounts created more recently, or interact with the chatbot in ways that suggest their age group. The system analyzes these patterns to make predictions.
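OpenAI hasn’t published how the prediction model actually works. As a purely illustrative sketch, though, a signal-based age check might combine weighted behavioral features along these lines. Every name, weight, and threshold below is an assumption for illustration, not OpenAI’s implementation:

```python
from dataclasses import dataclass

# Hypothetical sketch only: OpenAI has not disclosed its model.
# All signal names, weights, and thresholds here are illustrative assumptions.

@dataclass
class AccountSignals:
    account_age_days: int        # how long the account has existed
    school_hours_ratio: float    # share of activity during typical school hours (0-1)
    stated_age: int | None       # age the user entered at signup, if any

def likely_under_18(signals: AccountSignals, threshold: float = 0.5) -> bool:
    """Combine weighted behavioral signals into a rough under-18 score."""
    score = 0.0
    if signals.stated_age is not None and signals.stated_age < 18:
        score += 0.6                             # a self-reported minor age weighs heavily
    if signals.account_age_days < 180:
        score += 0.15                            # newer accounts skew younger
    score += 0.25 * signals.school_hours_ratio   # heavy weekday-daytime use skews younger
    return score >= threshold

# Example: a recently created account, active mostly during school hours, no stated age
print(likely_under_18(AccountSignals(account_age_days=60, school_hours_ratio=0.8, stated_age=None)))
```

The point of a scheme like this is that no single signal decides the outcome; the model weighs several weak indicators together, which is also why misclassifications are possible.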

If the system determines a user is likely under 18, it automatically switches them to a version of ChatGPT with additional safety measures and content restrictions that limit exposure to sensitive content.

OpenAI says this age-appropriate experience restricts access to graphic or violent content, risky viral challenges, sexual or romantic roleplay, depictions of self-harm, and content that encourages extreme beauty standards, unhealthy dieting, or body shaming.
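OpenAI hasn’t described how these restrictions are enforced internally. As a rough illustration only, a category-based filter for under-18 accounts could look something like the sketch below; the category names echo OpenAI’s list, but the structure and logic are entirely assumed:

```python
# Illustrative only: the category names mirror OpenAI's description, but this
# structure and enforcement logic are assumptions, not OpenAI's implementation.

RESTRICTED_FOR_MINORS = {
    "graphic_or_violent_content",
    "risky_viral_challenges",
    "sexual_or_romantic_roleplay",
    "self_harm_depictions",
    "extreme_beauty_standards",
    "unhealthy_dieting",
    "body_shaming",
}

def is_allowed(content_categories: set[str], is_minor: bool) -> bool:
    """Block a response for under-18 accounts if it touches any restricted category."""
    if not is_minor:
        return True
    return not (content_categories & RESTRICTED_FOR_MINORS)

# Example: a response tagged with a dieting category is blocked for a minor
print(is_allowed({"unhealthy_dieting"}, is_minor=True))   # False
print(is_allowed({"unhealthy_dieting"}, is_minor=False))  # True
```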

These restrictions aim to protect younger users from content that could be harmful to their development or well-being. The categories OpenAI chose reflect common concerns parents and child safety advocates have raised about AI chatbots.

Graphic violence and sexual content are obvious exclusions for underage users. The restrictions on viral challenges address situations in which teens might ask ChatGPT to help them participate in dangerous online trends.

Self-harm content restrictions prevent the chatbot from providing information that could enable or normalize harmful behaviors.

The inclusion of extreme beauty standards, unhealthy dieting, and body shaming shows OpenAI is thinking beyond just explicit content.

These topics can significantly impact teenage mental health and self-image. Preventing ChatGPT from engaging with or reinforcing these ideas adds another layer of protection.

The age-appropriate version still lets younger users access ChatGPT for homework help, creative projects, learning, and general questions. The restrictions focus on blocking potentially harmful content while maintaining the chatbot’s usefulness for educational and constructive purposes.

OpenAI will likely adjust these restrictions over time based on feedback and real-world results.

When the ChatGPT Age Prediction System Gets It Wrong

OpenAI admits the system isn’t perfect and may incorrectly flag adults as teens. The company plans to keep refining the model to improve accuracy over time.

Users who are incorrectly identified as underage can verify their age and regain full access through an identity verification service called Persona. To check if your account has restricted access, go to Settings > Account.

Beyond offering younger users a safer ChatGPT experience, the age-prediction model will let OpenAI introduce new features for older users. This includes the previously announced adult mode.

OpenAI originally planned to launch adult mode late last year, but delayed it to Q1 2026 to improve the age-prediction model. With the system running now, the company could announce the feature soon.

Adult mode would presumably allow mature content and discussions that wouldn’t be appropriate for younger users. This could include frank conversations about complex topics, nuanced discussions of sensitive subjects, or content with mature themes. The exact features haven’t been detailed yet.

The delay makes sense from a safety perspective. OpenAI needed to get age prediction working reliably before launching features designed exclusively for adults. Rolling out adult content without proper age verification would create serious risks and potential liability.

If you get flagged incorrectly, the verification process through Persona should restore full access to your account. You’ll need to provide identification to prove your age. This adds a step, but it protects the integrity of the age-restricted system.