Adults largely support policies that protect children’s privacy and provide age-appropriate online experiences, according to a new report from Common Sense Media.
The new research finds that 6 in 10 adults want age verification for social media and gaming platforms, and more than 50% want age verification for artificial intelligence services and chatbots.
In recent years, there’s been a larger push at the state level to regulate tech companies, like OpenAI, Meta, and Google, and require them to create barriers around how children engage online. For example, in September 2024, California passed a law that makes it illegal for minors’ social media accounts to include “addictive feeds” unless parents have given consent. In December 2025, New York enacted a law requiring social media platforms to display a mental health warning label for users.
On the federal level, the Federal Trade Commission is looking into how chatbots are designed to interact with users, specifically if companies are adhering to the Children’s Online Privacy Protection Act, which requires parental consent before collecting personal information on children under 13.
Despite these regulatory efforts, 29% of respondents in the Common Sense Media survey said they are most concerned that age verification systems are easy for children to bypass.
“They [adults] are concerned about the ability of tech companies to protect children’s privacy and to implement realistic privacy policies that children are not able to circumvent,” said Supreet Mann, director of research at Common Sense Media and a lead researcher on the report.
In a conversation with Education Week, Mann discussed the report’s findings, the importance of age restrictions, and what tech privacy laws mean for educators and tech companies.
This interview has been edited for length and clarity.
What type of content do the majority of adults want restrictions on?
We asked respondents which online services should require age verification and what children should be protected from online. In both cases, adult or pornographic websites and gambling services rose to the top, and AI was also pretty high up. Adults want to protect children from content they view as typically adult-directed.
These [types of websites] are not meant for children, and children shouldn’t be on them in the first place.
How can apps or sites be designed with kids’ privacy protections in mind?
What we’re really hoping is that [the report] will help promote conversation among policymakers and tech companies to find ways to build age-assurance processes. It’s not a one-way street; it’s not just about tech companies. Without oversight and direction from policymakers, tech companies are not highly incentivized to engage and build age-assurance policies.
We’re not saying that tech companies shouldn’t have [these sites or apps], we’re saying that they simply should not be spaces for young people to be on.
Where do you think schools and educators fit into this conversation?
It comes back to a larger digital literacy space. It’s as important as ever for educators to incorporate digital literacy curricula into the classroom. It’s important for them to talk about safe spaces online and what to do when [students] encounter content that makes them uncomfortable. Part of this is also recognizing that this is not content we really want our kids to be engaging with.
We recently ... highlighted just how dangerous AI companions can be—we’re talking about [chatbots like] Character.ai.
Finding a way to bring that content and some of that research into the classroom is really important for giving kids a way to verbalize and talk about the things that they’re experiencing online.
How will restrictions affect students who rely on social media for community or to get information?
There’s a lot of research around kids’ use of social media, both positives and negatives. Sometimes, some of the content that they’re exposed to, whether intentionally or not, can be really problematic and challenging. This comes down to a bigger digital literacy question: How are kids understanding this online space?
But I also certainly think that tech companies have a role here in knowing who their audience is and how to filter and limit certain content for certain audiences. There’s a line to walk between allowing kids to continue to access these spaces when needed and limiting some of the content that they’re seeing, so they’re not exposed to suicidal ideation content or eating disorder content.
Why is there a bigger focus on age assurance now?
The Australian social media ban [in December 2025, the country banned children under 16 from accessing social media platforms like Facebook, Instagram, Threads, Reddit, Snapchat, TikTok, Twitch, X, and YouTube] and similar legislation that has been proposed and debated in different places have pushed a lot of this to the forefront. Some of these spaces are not intended for kids, and we need to actively protect kids from them.
Is there anything else you want to mention?
We did ask respondents to indicate what their biggest concern about age verification systems is. Over a third said their biggest concern was privacy and data security. They want to see systems in place that are still secure and privacy protective, that are not going to sell their kids’ data.
But we also asked about distrust in the organizations involved. While about 1 in 10 did say they had some distrust of those organizations, that still wasn’t as big a concern as privacy and data security.
There is a way for tech companies to really work together with policymakers and with parents and educators to build systems that do all of these things: systems that protect kids, that are not too complicated for parents to understand and navigate, but that are complicated enough that children can’t circumvent them.