Privacy & Security Q&A

‘Not Meant for Children’: Adults Favor Age Restrictions on Social Media, AI

By Jennifer Vilcarino — February 09, 2026
A cellphone sits on a desk at Ferris High School’s World Language Night on Dec. 3, 2025 in Spokane, WA.

Adults largely support policies that protect children’s privacy and provide age-appropriate online experiences, according to a new report from Common Sense Media.

The new research finds that 6 in 10 adults want age verification for social media and gaming platforms, and more than 50% want age verification for artificial intelligence services and chatbots.

In recent years, there’s been a growing push at the state level to regulate tech companies like OpenAI, Meta, and Google and to require them to create barriers around how children engage online. For example, in September 2024, California passed a law that makes it illegal for minors’ social media accounts to include “addictive feeds” unless parents have given consent. New York, in December 2025, enacted a law that requires social media platforms to display a mental health warning label to users.

On the federal level, the Federal Trade Commission is looking into how chatbots are designed to interact with users, specifically whether companies are adhering to the Children’s Online Privacy Protection Act, which requires parental consent before collecting personal information from children under 13.

Despite these restrictions, 29% of respondents in the Common Sense Media survey said they are most concerned that age verification systems are easy for children to bypass.

“They [adults] are concerned about the ability of tech companies to protect children’s privacy and to implement realistic privacy policies that children are not able to circumvent,” said Supreet Mann, director of research at Common Sense Media and a lead researcher on the report.

In a conversation with Education Week, Mann discussed the report’s findings, the importance of age restrictions, and what tech privacy laws mean for educators and tech companies.

This interview has been edited for length and clarity.

What type of content do the majority of adults want restrictions on?

We asked respondents which online services should require age verification and what children should be protected from online. In both cases, adult or pornographic websites and gambling services rose to the top, and AI also ranked high. Adults want to protect children from content that they view as typically adult-directed.

These [types of websites] are not meant for children, and children shouldn’t be on them in the first place.

How can apps or sites be designed with kids’ privacy protections in mind?

What we’re really hoping is that [the report] will help promote conversation among policymakers and tech companies about ways to build age-assurance processes. It’s not a one-way street; it’s not just about tech companies. Without oversight and without direction from policymakers, tech companies are not highly incentivized to engage and build age-assurance policies.

We’re not saying that tech companies shouldn’t have [these sites or apps]; we’re saying that they simply should not be spaces for young people to be on.

Where do you think schools and educators fit into this conversation?

It comes back to a larger digital literacy space. It’s as important as ever for educators to incorporate digital literacy curriculum into the classroom. It’s important for them to talk about safe spaces online, what to do when [students] encounter content that makes them uncomfortable. Part of this is also recognizing that this content is not content we really want our kids to be engaging with.

We recently ... highlighted just how dangerous AI companions can be—we’re talking about [chatbots like] Character.ai.


Finding a way to bring that content and some of that research into the classroom is really important for giving kids a way to verbalize and talk about the things that they’re experiencing online.

How will restrictions affect students who rely on social media for community or to get information?

There’s a lot of research around kids’ use of social media, both positives and negatives. Sometimes, some of the content that they’re exposed to, whether intentionally or not, can be really problematic and challenging. This comes down to a bigger digital literacy question: How are kids understanding this online space?

But I also certainly think that tech companies have a role here in knowing who their audience is and how to filter and limit certain content for certain audiences. There’s a line to walk: allowing kids to continue to access these spaces when needed, while also limiting some of the content that they’re seeing—so they’re not seeing suicidal ideation content or eating disorder content.

Why is there a bigger focus on age assurance now?

The Australian social media ban [in December 2025, the country banned children under 16 from accessing social media platforms like Facebook, Instagram, Threads, Reddit, Snapchat, TikTok, Twitch, X, and YouTube] and similar types of legislation that have been proposed and talked about in different places have pushed a lot of this to the forefront. Some of these spaces are not intended for kids, and we need to actively protect kids from them.

Is there anything else you want to mention?

We did ask respondents to indicate what their biggest concern about age verification systems is. Over a third said their biggest concern was privacy and data security. They want to see systems in place that are still secure and privacy protective, that are not going to sell their kids’ data.

But we also asked about distrust in the organizations involved. While about 1 in 10 did say that they had some distrust of those organizations, that still wasn’t as big a concern as privacy and data security.

There is a way for tech companies to really work together with policymakers and with parents and educators to build systems that do all of these things—that are able to protect kids, that are not too complicated for parents to understand and navigate, but are complicated enough that children can’t circumvent them.

