A bipartisan bill introduced in the U.S. Senate Oct. 28 seeks to forbid companies from providing minors with access to artificial intelligence chatbot companions such as Character.ai and Replika.
The legislation—sponsored by Sen. Josh Hawley, R-Mo., and co-sponsored by Democrats and Republicans, including Sen. Richard Blumenthal, D-Conn.—appears to leave room for schools to continue using AI chatbots developed specifically for learning, such as Khan Academy’s Khanmigo.
But, if enacted, it may complicate chatbots’ potential use in career or mental health counseling for students, experts say. And it appears to apply to more general large language models that students often use for class assignments, such as ChatGPT and Gemini.
Meanwhile, a second bill, also introduced this week by Sen. Bill Cassidy, R-La., the chairman of the Senate Health, Education, Labor, and Pensions Committee, is aimed at helping safeguard student data privacy when using AI tools.
The two bills likely signal the beginning of a flurry of legislation aimed at bolstering the safety of AI tools as the technology becomes integral to a broad swath of sectors, including K-12 education, experts say.
“This is just the opening salvo,” said Amelia Vance, the president of the Public Interest Privacy Center, a nonprofit organization focused on protecting student data privacy. “There’s a struggle to figure out: How do we make this safe? How do we hold these companies accountable? And how on earth does this fit into education?”
The legislation has not yet been considered by a Senate committee. But it may have already had an impact. A day after the bill was introduced, Character.ai announced that it would voluntarily ban minors from its platform.
Lawmakers accuse companies of putting profit ahead of children
The legislation introduced by Hawley and Blumenthal banning companies from providing AI companions to minors reflects growing concerns over how this technology can be misused by adolescents. The families of at least two teens have sued tech companies after chatbots allegedly played a role in their children’s deaths by suicide.
The parents of both children spoke at the press conference at which Hawley, Blumenthal, and others introduced the chatbot legislation.
The lawmakers framed their bill as an effort to rein in large technology companies, which they said have put their own revenue over children’s wellbeing.
“The pursuit of profits by Silicon Valley should not consume and destroy America’s children,” Hawley said, without naming any specific businesses. “The new AI revolution that we have been promised will only be good for the American people if it actually protects America’s children.”
“Big tech is using our children as guinea pigs in a high-tech, high-stakes experiment to make their industry more profitable,” added Blumenthal, who also did not single out a particular company.
Under the legislation, tech companies offering AI companions, meaning chatbots designed to build human-like relationships with their users, would have to conduct "reasonable age verification" of users, beyond simply asking them to provide a birthdate.
It also calls for AI chatbots to clearly disclose to users that they are not human and hold no professional credentials, including in areas such as mental health counseling. Companies that knowingly provide minors with companion bots that solicit or produce sexual content would face criminal liability.
Importantly, the legislation would not apply to chatbots that are part of a broader software application or that are engineered only to respond to questions on a limited range of subjects. That provision was intended to allow for the continued use of “well-made, safe chatbots that could be appropriate and useful in an educational setting,” a Senate aide explained.
It is less clear, however, whether the legislation would apply to chatbots that offer students services such as career counseling or mental health support, Vance said.
It’s meaningful that lawmakers from both parties want to completely prohibit children from using a particular technology, Vance added.
“This is not a parental consent. It’s a ban, which is very different from what we’ve done for pretty much every other technology,” Vance said. “They don’t consider it safe and they don’t think the companies are going to fix it.”
Common Sense Media, a research and advocacy group focused on youth and technology, has not endorsed the bill yet, but applauded the lawmakers for introducing “what is really the first bill in Congress to prioritize user safety for AI products, including safety for kids,” said Danny Weiss, the organization’s chief advocacy officer.
The measure was introduced just months after Common Sense reported that about 1 in 3 teens who use AI companions find their time with the technology more satisfying than time with real-life friends.
The legislation also comes as the Federal Trade Commission is investigating potential problems with chatbots that are designed to simulate human emotions and communicate with users like a friend or confidant. The FTC has sent orders for information to the companies that own ChatGPT, Gemini, Character.ai, Snapchat, Instagram, WhatsApp, and Grok.
Bill would put ed-tech companies that violate student privacy on a federal list
Meanwhile, Cassidy’s bill would use a mix of carrots and sticks to safeguard student data privacy, including in AI tools. For instance, it would call for a new federal “Golden Seal of Excellence in Student Data Privacy” to be awarded to schools and districts that implement strong parental consent systems for ed-tech tools.
It would also allow parents to see elements of the contracts that school districts sign with tech companies before the tools are put to use in classrooms. And it would prohibit the use of student photos to train facial recognition AI tools without parental consent.
The legislation also seeks to create a federal list of education technology vendors that are not in compliance with student data privacy requirements. Companies that run afoul of those requirements could remain on the list for up to five years. The bill would also bolster research into how AI can improve teaching and learning, and make clear that districts can use federal funds to help teachers better understand AI.
Tammy Wincup, the CEO of Securly, a digital safety platform, said she was pleased with the Cassidy legislation as a "first step" toward grappling with the safety implications of AI's use in education.
She cautioned that protecting students in the AI era is a complex challenge.
While districts can block a problematic website, “AI is totally different,” Wincup said. “AI is like water or air. It is going to be built into all tools now, so understanding how [students are] using it, for safety and wellness reasons, but also for teaching and learning, is our first step.”