ChatGPT’s newly launched age-prediction model is rolling out globally, but early signs suggest it may be too aggressive in flagging users under 18 and automatically applying “teen mode” content restrictions.

The idea itself is easy to understand. OpenAI wants to create a safer, more age-appropriate experience for younger users—particularly as ChatGPT expands its presence in schools, family environments, and creative projects aimed at teens. With an adult-focused mode expected to arrive soon, the company believes its AI models can infer a user’s likely age based on behavior and context, then tailor content accordingly.
In practice, however, the system appears to be casting its net too wide. A growing number of adult users—some of them long-time, paying subscribers—have reported being incorrectly placed into teen mode. Once flagged, they find themselves blocked from discussing more mature topics, often without a clear explanation. The issue has been surfacing since OpenAI began testing the feature several months ago, yet it hasn’t slowed the broader rollout.
OpenAI has shared few technical details about how the system works. According to the company, age estimation relies on a mix of behavioral signals, account history, usage patterns, and occasional language analysis. When the model is unsure, it defaults to caution. That means newer accounts, late-night users, or people asking questions associated with teenage interests may be misclassified—even if they’ve been subscribed to ChatGPT Pro for years.
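To make the failure mode concrete, here is a deliberately simplified, hypothetical sketch of how a classifier that “defaults to caution” can sweep up adults. The signal names, weights, and threshold below are illustrative assumptions for the sake of the example, not OpenAI’s published logic.

```python
# Illustrative sketch only: OpenAI has not published its age-prediction logic.
# Signal names, weights, and the threshold below are hypothetical, chosen to
# show how a classifier that "defaults to caution" can over-restrict adults.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_days: int           # newer accounts carry less history
    late_night_usage_ratio: float   # share of sessions between midnight and 5 a.m.
    teen_topic_score: float         # 0..1, language associated with teenage interests

def estimate_is_minor(signals: AccountSignals, confidence_threshold: float = 0.7) -> bool:
    """Return True (apply teen mode) unless the model is confident the user is an adult."""
    # Toy scoring: each weak signal nudges the "likely minor" score upward.
    score = 0.0
    if signals.account_age_days < 30:
        score += 0.3
    score += 0.3 * signals.late_night_usage_ratio
    score += 0.4 * signals.teen_topic_score

    adult_confidence = 1.0 - score
    # Defaulting to caution: if confidence in adulthood falls below the
    # threshold, restrict the account, even for a long-time paying subscriber.
    return adult_confidence < confidence_threshold

# A years-old account that chats late at night about teen-coded topics still gets flagged:
print(estimate_is_minor(AccountSignals(account_age_days=400,
                                       late_night_usage_ratio=0.6,
                                       teen_topic_score=0.5)))  # True -> teen mode applied
```

The point of the sketch is simply that when “unsure” means “restrict,” a handful of weak, circumstantial signals is enough to misclassify an adult.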
AI age verification and its limits
At first glance, this looks like a familiar case of good intentions meeting blunt execution. OpenAI clearly wants to minimize risk for younger users, but the experience for those incorrectly flagged has left many frustrated.
The company says fixing the issue is straightforward. Users can verify their age through a tool in the Settings menu. This process uses a third-party service called Persona, which may ask for an official ID or a short selfie video. While OpenAI insists the verification is optional and anonymized, for some users the problem isn’t the inconvenience—it’s the principle.
Being asked to prove adulthood to a chatbot raises concerns about privacy, data collection, and future policy direction. Some users fear this is a stepping stone toward mandatory identity verification, while others worry—despite OpenAI’s assurances—that submitted materials could be used to train models down the line.
Reaction online has been mixed but vocal. “Great way to force people to upload selfies,” one Reddit user wrote. Another said, “If OpenAI asks me for a selfie, I’ll cancel my subscription and delete my account.” Others acknowledged the intent but asked for a less invasive solution.
OpenAI maintains that it never sees the ID or images itself. Persona simply confirms whether an account belongs to an adult and returns a yes-or-no result. The company also says all verification data is deleted once the check is complete and is used solely to correct the misclassification.
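For illustration, the flow OpenAI describes would look something like the hypothetical sketch below, in which the platform receives only a boolean and the documents never leave the verification provider. The function and field names are invented for the example; Persona’s real API and OpenAI’s internal handling are not public.

```python
# Hypothetical sketch of a yes/no verification flow; names are illustrative only.
def handle_verification_result(account: dict, is_adult: bool) -> None:
    """Apply the boolean result returned by the third-party verifier.

    The platform only ever sees True/False; the ID or selfie video stays
    with the verification provider and, per OpenAI, is deleted afterwards.
    """
    if is_adult:
        account["teen_mode"] = False  # lift the misapplied restriction
    # No documents are stored on the platform side; only the flag is updated.

account = {"id": "user_123", "teen_mode": True}
handle_verification_result(account, is_adult=True)
print(account)  # {'id': 'user_123', 'teen_mode': False}
```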
Still, the situation highlights a growing tension between personalized AI experiences and automated safety systems that risk alienating users. OpenAI’s explanation—that it can infer age from behavioral signals—may not reassure everyone, especially when the consequences feel personal.
Other platforms like YouTube and Instagram have faced similar backlash over age estimation tools, often from adults incorrectly flagged as minors. But with ChatGPT now embedded in classrooms, home offices, and even therapy sessions, being quietly shifted into a restricted “teen” version can feel especially jarring.
OpenAI says it will continue refining the model and improving the verification process based on feedback. Until then, the adult user looking for wine pairing advice—only to be told they’re too young—may simply walk away. After all, no one enjoys being mistaken for a child, especially by an AI.






