OpenAI plans to roll out age detection for ChatGPT in Europe, aiming to protect minors while raising new questions about privacy, accuracy, and adult content access.
As OpenAI introduces age estimation on ChatGPT, experts weigh its impact on child safety, user privacy, and the platform’s future business model. Image: CH
Tech Desk — January 25, 2026:
OpenAI’s decision to introduce age detection for ChatGPT reflects a growing reality for global technology platforms: scale brings scrutiny. With hundreds of millions of users spanning all age groups, the company is moving to proactively address concerns about child safety—while also positioning itself for the next phase of growth.
The new feature, described as “age prediction,” will estimate whether a ChatGPT account likely belongs to a minor. Users identified as under 18 will automatically face restrictions on content involving violence, sexual material, or mature themes. OpenAI says the goal is to reduce the risk of harmful exposure for younger users and to comply with increasingly strict child-protection regulations, particularly in Europe, where the rollout will begin.
Yet the system’s probabilistic nature introduces unavoidable trade-offs. OpenAI has acknowledged that age detection will not be perfectly accurate. Adult users mistakenly flagged as minors will be asked to verify their age using a selfie-based identity service known as Persona. While this safeguard helps correct errors, it also introduces friction into what has traditionally been a low-barrier, anonymous experience—raising questions about user privacy and data handling.
Beyond safety, the initiative signals a strategic shift. OpenAI has confirmed plans for a distinct “Adult Mode” on ChatGPT, potentially launching in early 2026. This suggests that age detection is not only about limiting content for minors, but also about unlocking new possibilities for adults. Clear age segmentation could allow OpenAI to explore content, features, or services that were previously constrained by the need to cater to a mixed-age audience.
The stakes are high. ChatGPT reportedly serves around 800 million weekly active users, making age-based content control a massive technical and operational challenge. Even small error rates could affect millions of accounts, and user tolerance for misclassification will be limited. Analysts note that the feature’s long-term success will depend on how seamlessly age checks are integrated into the user experience without eroding trust or convenience.
Commercial considerations are also in play. OpenAI has begun showing advertisements to users in the United States and was projected to surpass $20 billion in annual revenue by the end of 2025. More reliable age segmentation could reassure advertisers, improve compliance with ad regulations concerning minors, and support higher-value ad targeting, making age detection as much a business tool as a safety measure.
In the short term, most analysts view the initiative as a positive step toward protecting minors online. In the longer term, however, it opens broader debates about surveillance, consent, and the evolving boundaries of AI platforms. Whether OpenAI’s age detection becomes a model for responsible AI—or a flashpoint for privacy concerns—will depend on how transparently and thoughtfully it is implemented.
