Some principals and superintendents are reporting a new challenge created by artificial intelligence: parents using chatbots, like ChatGPT and Gemini, to write complaints about school and district policies.
The problem is that complaints written with AI can have a very legalistic tone and indiscriminately reference a litany of potential legal issues. That forces school and district staff (and their lawyers) to sink hours of time into responses that might normally take only a few minutes, school and district leaders told Education Week.
Handling complaints about school policies and leadership decisions is part of the job for school and district administrators. But those who have experienced this new trend are worried that time-consuming, AI-generated complaints will soon be the norm.
Kenny Rodrequez, the superintendent of Grandview C-4 School District near Kansas City, Mo., has seen an increase in the past year in what he suspects are complaints from parents that were written by AI. Principals, school board members, and other district-level administrators in Grandview C-4 have also received these types of complaints, Rodrequez said.
In the past, an email might have simply expressed frustration with a disciplinary action or a bad grade; now, it is a catalog of serious legal allegations, said Rodrequez.
“A lot of times, it is going to be just a kitchen-sink approach. It’s going to have anything and everything that AI determined that the district could have possibly violated,” Rodrequez said, ranging from accusations of civil rights violations to IDEA infractions. AI “doesn’t know the specifics [of the situation] because that wasn’t put into the system.”
Those are typically the first clues for Rodrequez that he might be dealing with a complaint written by a chatbot. He’ll take several of the same steps he would with any other complaint, like reviewing it and checking if anyone else in the district has already had contact with the parent. But these complaints can be very long and detailed, and responding to every issue raised can take a while.
Rodrequez said he’s now more likely to ask an attorney to review this type of complaint, to make sure the language is very carefully crafted. And he’s heard that this practice of parents using AI to write complaints is happening across his state.
Even after crafting a careful reply, Rodrequez said he sometimes receives another AI-generated email in response. That’s typically when he’s sure the complaint was written by AI.
“It just gives you another long diatribe of stuff that really has nothing necessarily even to do with what I just said. I’m not saying that’s a gotcha, but that’s another way that we’re now saying, we know this is AI,” he said.
Complaints that once might have taken 15 minutes to review and respond to can now take three to four hours of staff time, not counting the lawyers' time, said Rodrequez. And then there are the legal fees. His district hasn't had to hire more staff to handle the increased workload of responding to complaints, but Rodrequez said that could become a possibility in the future.
How to notice the hallmarks of AI-generated emails
Shortly after announcing a new cellphone policy for DeWitt Clinton High School in New York City, principal Pierre Orbe received an email in May from someone claiming to be a parent. The fact that the parent was not happy about the new policy wasn’t what surprised Orbe—it was how litigious the email sounded.
Orbe uses generative AI tools a lot in his job, and the email struck him as having some hallmarks of AI-composed text: “AI will come in, it’ll start to really bullet things out. It’ll give you these strong argumental points.”
Orbe estimates it took him about two weeks, and consultations with multiple people in the district, to formulate a response that touched on every issue raised. Ultimately, his response wasn't different than it would have been to a normal complaint, he said, but the circle of people involved was much bigger in order to address the legal concerns.
For his response, Orbe actually used a generative AI tool, asking it to help soften the tone of his email.
“I give the input to just say, let’s curve some of the edges on this one and just make sure that the parent knows we are taken aback by some of these claims, but we do take them seriously.”
The parent never responded to Orbe's emails requesting a meeting with him and a member of the district's administration. A second, similar complaint came in June from someone else claiming to be a parent of a student in his school.
Orbe said that he was not able to confirm whether the people, or person, who sent him complaints last spring were really parents. They might have been students or community members. Their names didn't match any on record with the school, and they never responded to Orbe's emails.
“It’s certainly new,” he said. “We’re just used to families coming with very specific concerns. Like ‘I’m not sure who to speak to about my child’s grades.’ Simple things to that nature. ‘Who do I bring this note to for my child’s illness?’”
Orbe tries to give parents ample opportunity to raise problems and questions directly with school leadership. He and his assistant principal host a weekly check-in for parents, with translators available, where Orbe can share important information and families can come with questions or concerns. He’s hoping that will help keep AI-generated complaints to a minimum.
Concerns rising that more AI-crafted complaints are coming
Katie Law, the principal of Arapaho Charter High School in Wyoming, has also dealt with this challenge. The school had an ongoing issue with a parent when Law received what she now believes was an AI-written complaint from that parent. Based on the tone, her first thought was that the parent had obtained a lawyer.
In addition to the extra work and the money spent on lawyer fees, Law said the experience was disheartening.
“What it does do is put in the back of your mind, well, now at least one person has figured this out, how to use AI to make their complaints gain more footing,” and more are likely to follow, she said.
Even if a complaint is likely AI-generated, school administrators can never be sure, said Andrew A. Manna, a lawyer who represents K-12 schools and a partner with Church Church Hittle + Antrim in Noblesville, Ind.
“I have seen communications citing legal authority that are lengthy narratives obviously written by AI,” he said. School administrators still must read and investigate all issues raised and apply the appropriate school policies, he said.
“Even though the AI-generated complaint might be hard to manage, we are still obligated to process it like any other communication,” Manna said.
If staff members are starting to get bogged down with these kinds of complaints, school districts can explore technical options for sending automatic responses that tell senders when a live staff member will follow up, said Mellissa Braham, the associate director of the National School Public Relations Association. Depending on the contents of the email—such as whether it contains a public records request—schools may be required to respond within a certain time frame, she said.
Encourage face-to-face meetings as a solution to the problem
Schools should not discourage parents from using generative-AI tools to communicate with teachers, principals, or district leadership, Braham said.
“It will just create more anger and frustration,” she said. “Instead, I would focus on helping people understand how they can communicate with us, in what ways, and setting expectations for how we will respond to them.”
So how can schools honor their duty to respond to parent and community concerns without getting stuck in an endless cycle of calling lawyers to respond to AI-generated complaints? Rodrequez recommends a low-tech solution: inviting the parent or community member to meet face to face. Not only is that often more efficient, but in-person meetings also help build better relationships.
“Most of the time, [parents] are frustrated for whatever reason,” he said. “And if we can meet with them, I always feel like we can have a better opportunity to alleviate those frustrations.”