Artificial Intelligence

Real-Time Data Shows Exactly How Students Use AI on School Technology

By Alyson Klein — March 09, 2026 4 min read

Roughly one in five student interactions with generative artificial intelligence on school technology involved cheating, self-harm, bullying, and other problematic behaviors, according to data collected and analyzed by Securly, a company offering internet filtering and other safety services.

What’s more, Securly identified roughly 1 in 50 student-AI interactions as red flags that students might be involved in violence, cyberbullying, or self-harm.

Securly’s analysis looked at nearly 1.2 million interactions in more than 1,300 districts from Dec. 1, 2025, to Feb. 20, 2026.

Educators should take heart that most of the time, students use AI appropriately, said Tammy Wincup, the CEO of Securly, whose competitors include GoGuardian and Lightspeed Systems.

“When a district actually sets some guardrails and policies around their AI usage in schools, 80% of the conversations happening are within the district’s policies,” Wincup said. “That’s the good news on the learning side of the house.”

Why the usage data is so ‘fascinating’

The analysis offers an early window into how students actually use generative AI tools. Most other research on student usage of AI comes from surveys, which rely on student self-reporting.

Securly’s data shows “what are students really doing when they’re writing text into generative AI,” said Jeremy Roschelle, the co-executive director of learning science research for Digital Promise, a nonprofit organization that works on equity and technology issues in schools.

“That’s why it’s fascinating,” he said.

In November, Securly began allowing district officials to set parameters around students’ AI use, similar to the way they ask the company to filter out particular types of websites.

If districts opt to use this feature, large language models will “deflect” any student query that falls outside the district’s policy.

For instance, if a student tries to use AI to complete an assignment, large language models may instead point to information on the general topic but won’t supply an exact answer. Or if a student asks about dosing for a particular medication, the tool will tell them to ask a trusted adult for help.

Nearly all the deflected student queries—95%—were from students trying to get AI tools to complete their schoolwork for them.

That percentage didn’t surprise Wincup. She expects that when districts allow students to use large language models on school networks and devices, kids will “experiment with understanding the guardrails” placed around the tools and try to get around those guardrails.

Another 2% of the interactions identified as inappropriate related to games. A little less than 1% dealt with sexual content and a similar percentage concerned firearms or hunting. Gambling, drugs, and hate (such as racism and antisemitism) each comprised roughly 0.5% of flagged interactions.

Though only 2% of interactions were identified as potentially unsafe, that represents more than 24,000 queries overall. And some of the questions students asked AI were troubling.
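That figure can be sanity-checked directly from the numbers reported above (a back-of-the-envelope calculation; both inputs are approximations from the article, not exact counts):

```python
# Sanity-check the scale of flagged interactions using the article's figures.
# Both inputs are approximate: "nearly 1.2 million interactions" and a
# "roughly 1 in 50" (2%) potentially-unsafe rate.
total_interactions = 1_200_000
unsafe_rate = 0.02

unsafe_queries = total_interactions * unsafe_rate
print(f"Potentially unsafe queries: ~{unsafe_queries:,.0f}")  # ~24,000
```

Two percent of nearly 1.2 million lands at about 24,000, matching the “more than 24,000 queries” reported.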

For instance, one student directed a large language model to help draft an email to their mother explaining they had suicidal thoughts.

Another student conducted a quick series of internet searches, including “What’s the main nerve in the forearm?” and “What nerve near the wrist carries blood?” Then the student switched to an AI tool, asking it how to commit suicide. (In both cases, Securly “unmasked” the student’s identity and alerted district officials to the safety issues.)

Students used ChatGPT more often than large language models created for K-12 schools

Overall, Securly detected a higher percentage of potentially unsafe AI interactions—2%—than potentially unsafe student internet searches, 0.4%.

It’s too early to pinpoint an exact explanation for that discrepancy, Wincup said. She noted that Securly has had many years to hone its system for recognizing when a student’s internet searches may be a sign of danger, while its work with AI interactions is brand new.

Roschelle, meanwhile, is curious about what, exactly, students asked AI in the 80% of interactions that were deemed appropriate for school.

How did their prompts and AI’s responses help—or hinder—their understanding of an assignment, an issue, or the world around them, he wondered.

“What we want to do is make sure [AI] is not just appropriate, but is actually valuable for student learning,” Roschelle said.

The analysis also revealed which large language models students use most often.

ChatGPT is by far the most popular, accounting for 42% of interactions. Securly’s AI Chat made up 28%. Google’s Gemini comprised 21%. And other ed-tech tools that embed AI features—including MagicSchool, SchoolAI and BriskTeaching—comprised 9%. (That data isn’t nationally representative because only districts that use Securly have access to Securly AI. But Wincup believes “big tech” large language models are probably most popular in all districts.)
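Applying those shares to the roughly 1.2 million analyzed interactions gives a rough sense of per-tool volume (an illustrative estimate only, since the underlying total is approximate and, as noted, the sample skews toward Securly districts):

```python
# Rough per-tool interaction counts, applying the reported market shares
# to the approximate 1.2 million interactions Securly analyzed.
total = 1_200_000
shares = {
    "ChatGPT": 0.42,
    "Securly AI Chat": 0.28,
    "Google Gemini": 0.21,
    "Other ed-tech tools (MagicSchool, SchoolAI, BriskTeaching, etc.)": 0.09,
}

# The reported shares should account for all interactions.
assert abs(sum(shares.values()) - 1.0) < 1e-9

for tool, share in shares.items():
    print(f"{tool}: ~{total * share:,.0f} interactions")
```

The four reported shares sum to exactly 100%, so ChatGPT’s 42% alone corresponds to roughly half a million student interactions over the three-month window.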

AI puts education technology leaders in a new position, Wincup said.

“They’re no longer just buying things and setting things up like this,” she said. This is a moment “where they have to have visibility in order to help their district make not just great tech decisions but make great teaching and learning decisions.”
