
ChatGPT Will Soon Have Parental Controls. How Schools Can Help Parents Use Them

By Arianna Prothero — September 12, 2025
[Image: A computer screen in English teacher Casey Cuny's classroom shows ChatGPT during class at Valencia High School in Santa Clarita, Calif., on Aug. 27, 2025.]

Come October, OpenAI will roll out parental controls for its popular generative AI tool, ChatGPT. Experts say that could be a first step toward helping schools curtail some of the harmful ways students use the chatbot.

As it is, there’s been much handwringing over students using generative AI-powered chatbots to do their school assignments for them. Teens are also increasingly relying on chatbots for companionship and mental health advice, and in some high-profile cases this has led to tragic results.

Schools are uniquely positioned to teach students how to safely use AI-powered technologies, experts say, emphasizing that those lessons will complement parental controls. Schools can also help keep families abreast of their options for making tech safer for their children.

The problem is, parental controls for all kinds of technologies are often confusing and difficult to set up, said Robbie Torney, the senior director for AI programs at Common Sense Media. That’s where schools can play a role.

“Family coordinators in schools have often been in the position of helping to train parents on how to set up parental controls,” he said. “Those have been popular workshops in schools: this is how you set up parent controls on Instagram, or this is how you set up device time management on your kid’s iPhone or Android.”

While OpenAI’s plan to create parental controls is a step in the right direction, Torney said, the onus can’t be entirely on parents to keep children safe when using these technologies.

A tragic incident prompted OpenAI to roll out parental controls

OpenAI committed to rolling out parental controls in the aftermath of a California teen’s suicide. The parents of 16-year-old Adam Raine allege in a lawsuit against OpenAI that its chatbot discouraged their son, who was depressed, from seeking help, even going so far as to advise him on details of his planned suicide. The parents only learned of their son’s use of ChatGPT after his death.

OpenAI’s forthcoming parental controls will include options for parents to link their accounts with their children’s and receive notifications if the system detects that their child is “in a moment of acute crisis,” among other features, according to a Sept. 2 blog post announcing the plan.

This follows the company’s launch this summer of ChatGPT’s study mode feature, which is designed to guide users through the process of finding the right answer to a question rather than simply supplying one.

Children must be at least 13 to create a ChatGPT account, and users younger than 18 must obtain parental consent before opening one.

However, popular safeguards in the tech industry like age restrictions and parental consent generally operate on the honor system and are easy for children to bypass.

“Many young people are already using AI,” OpenAI said in the blog post. “They are among the first ‘AI natives,’ growing up with these tools as part of daily life, much like earlier generations did with the internet or smartphones. That creates real opportunities for support, learning, and creativity, but it also means families and teens may need support in setting healthy guidelines that fit a teen’s unique stage of development.”

How effective OpenAI’s parental controls prove to be will depend largely on details that have not yet been publicly released, said Torney. Parental controls have become fairly standard in the tech industry, with these features available on social media, smartphones, and some AI chatbots, he said.

Google and Microsoft also offer parental controls for AI chatbots

Some companies—such as Google and Microsoft—offer parental controls for chatbots through linked accounts within a family.

For instance, parents can turn off their kids’ access to Google’s Gemini chatbot through their own accounts. Teens also automatically get a different version of the chatbot than adults, based on the birthday they give when they sign up.

However, parents have few options to monitor their kids’ conversations on Google’s Gemini or receive notifications of concerning behavior, according to a risk assessment report by Common Sense Media.

Similarly, Microsoft allows parents to block their kids from accessing the company’s chatbot, Copilot, and set screen time limits through their personal accounts.

But other chatbots, such as the Meta AI chatbot, which is available automatically on Instagram, WhatsApp, and Facebook, don’t have any parental controls to monitor or block children’s use.

The parental controls that do exist are often not user-friendly, said Yvonne Johnson, the president of the National PTA. “We have heard from parents that parental controls are too complicated to use,” she said. “Also, through our research, less than 3 in 10 parents reported using parental controls and monitoring software.”

The National PTA surveyed 1,415 parents of K-12 students last year. The survey found that when parents don’t know what to do, most turn to their kids’ schools for help, Johnson said. About seven in 10 parents said they are most likely to seek guidance from their children’s schools, teachers, and counselors on how to keep their kids safe on internet-connected platforms.

For that reason, the National PTA supports local chapters in holding events and information sessions at schools where volunteers and school staff help parents learn how to navigate parental controls on various platforms and answer questions about safe tech use for families.

“We have to have education for our families so they understand,” Johnson said. “Just like professional development.”

Teens are turning to AI chatbots for companionship and advice

While AI-powered education technologies used in K-12 schools are supposed to have additional safeguards to meet academic and data privacy requirements, Torney said, many students still rely on less regulated generative AI tools.

This matters for schools because teens are turning to AI companions and chatbots for social interaction and advice on harmful and sensitive topics. These technologies often provide information that can hurt students’ mental health and, ultimately, their readiness to learn.

About three-quarters of teens responding over the summer to a Common Sense Media survey said they have used an AI companion like Character.AI or Replika, and more than half said they use one regularly. Teens said they used the technology for social interaction and, to a lesser degree, for mental health advice or emotional support. About a third of teens who have used an AI companion said talking to a chatbot was as satisfying as talking to a real person.

A separate analysis released this summer by the Center for Countering Digital Hate looked at how ChatGPT responded to problematic queries from teen users. The researchers posed as 13-year-olds discussing eating disorders, substance use, and self-harm. They found that ChatGPT responded with harmful advice or information about half the time, including a suicide note, instructions on hiding alcohol intoxication at school, and a plan for a restrictive diet.

While ChatGPT also recommended crisis lines and mental health support, those safeguards were easy to bypass or ignore, the report said.

“We’re focused on getting these kinds of scenarios right: we are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately, pointing people to evidence-based resources when needed, and continuing to improve model behavior over time—all guided by research, real-world use, and mental health experts,” an OpenAI spokesperson told Education Week when the Center for Countering Digital Hate report was released.

What do kids need to know to navigate a world full of AI chatbots?

Schools should teach students how AI works, and when it is, and isn’t, safe and appropriate to use an AI tool, Torney said. For example, it’s risky to have personal, mental health conversations with a chatbot, because chatbots can appear to be caring companions offering helpful advice when the advice is actually harmful.

Chatbots are designed to please and validate users, often mirroring their feelings, Torney said. Understanding that reality is an important part of AI literacy, he added.

“If you’re not recognizing that you’re getting weird outputs, and that it’s not challenging you, those are the places where it can start to get really dangerous,” he said. “Those are the places that real people who care about you can step in and say, ‘hey, that is not true,’ or ‘I’m worried about you.’ And the models in our testing are just not doing that consistently.”
