Last December, early on a Sunday morning, Amanda Lafrenais tweeted about her cats.
“I would die for you,” the 31-year-old comic book artist from Clute, Texas, wrote.
Cat #1: pwease feed me thank you
Me: ha, cute.
Cat #2: IF YOU DON’T FEED ME RIGHT NOW I’M CALLING THE POLICE I CAN’T BELIEVE YOU’D TREAT ME LIKE THIS I AM DISAPPOINTED I DIDN’T RAISE YOU TO BE LIKE THIS
Me: I would die for you.
— Amanda Lafrenais (@AmandaLafrenais) December 16, 2018
To human eyes, the post seems innocuous.
But in an age of heightened fear about mass school shootings, it tripped invisible alarms.
The local Brazosport Independent School District had recently hired a company called Social Sentinel to monitor public posts from all users, including adults, on Facebook, Twitter, and other social media platforms. The company’s algorithms flagged Lafrenais’s tweet as a potential threat. Automated alerts were sent to the district’s superintendent, chief of police, director of student services, and director of guidance. All told, nearly 140 such alerts were delivered to Brazosport officials during the first eight months of this school year, according to documents obtained by Education Week.
Among the other “threats” flagged by Social Sentinel:
Tweets about the movie “Shooter,” the “shooting clinic” put on by the Stephen F. Austin State University women’s basketball team, and someone apparently pleased their credit score was “shooting up.”
A common Facebook quiz, posted by the manager of a local vape shop.
A tweet from the executive director of a libertarian think tank, who wrote that a Democratic U.S. senator “endorses murder” because of her support for abortion rights.
And a post by one of the Brazosport district’s own elementary schools, alerting parents that it would be conducting a lockdown drill that morning.
“Please note that it is only a drill,” the school’s post read. “Thank you for your understanding. We will post in the comment section when the drill is over.”
Such is the new reality for America’s schools, which are hastily erecting a massive digital surveillance infrastructure, often with little regard for either its effectiveness or its impact on civil liberties.
Social media monitoring companies track the posts of everyone in the areas surrounding schools, including adults. Other companies scan the private digital content of millions of students using district-issued computers and accounts. Those services are complemented with tip-reporting apps, facial-recognition software, and other new technology systems.
Florida offers a glimpse of where it all may head: Lawmakers there are pushing for a state database that would combine individuals’ educational, criminal justice, and social-service records with their social media data, then share it all with law enforcement.
Across the country, the results of such efforts are already far-reaching.
The new technologies have yielded just a few anecdotal reports of thwarted school violence, the details of which are often difficult to pin down. But they’ve also shone a huge new spotlight on the problems of suicide and self-harm among the nation’s children. And they’ve created a vast new legal and ethical gray area, which harried school administrators are mostly left to navigate on their own.
“It’s similar to post-9/11,” said Rachel Levinson-Waldman, a lawyer with the liberty and national security program at the Brennan Center for Justice at the New York University law school. “There is an understandable instinct to do whatever you can to stop the next horrible thing from happening. But the solution doesn’t solve the problem, and it creates new issues of its own.”
Monitoring Students’ Online Lives
Why the growing push to monitor students’ online lives?
Consider the trail of digital footprints left by Nikolas Cruz, the disturbed teenager accused of killing 17 people and injuring 17 others at Marjory Stoneman Douglas High School in Parkland, Fla., in February 2018.
Before the shooting rampage, Cruz took to Instagram to post pictures of weapons and write that “I wanna f---ing kill people.” He searched the internet using phrases like “is killing people easy” and “good songs to play while killing people.” Cruz used his phone to record videos of himself planning the massacre. And he allegedly used school computers to look up instructions on how to build a nail bomb.
“If you’re responsible for the safety and security of a school, you have to pay attention to the places where harm is being foreshadowed,” said Gary Margolis, the CEO of Social Sentinel, which claims “thousands” of K-12 schools in 30 states are using its service.
Margolis said it’s unfair to focus on the false positives that may slip through a company’s monitoring system. Any harms pale in comparison to the benefits of what is caught. He pointed to a recent incident in which Social Sentinel flagged a college student who threatened on Twitter to shoot his professor for scheduling an early morning exam. (The student, who said he intended no harm, was arrested.)
Margolis also noted that school shootings remain statistically rare, emphasizing instead Social Sentinel’s work around more prevalent issues of suicide and self-harm.
But it’s high-profile mass tragedies such as Columbine, Sandy Hook, and Parkland that are driving the national conversation and a lot of decision making around school safety and security. And technology companies are clearly taking note.
After the Columbine attack 20 years ago, for example, there was a dramatic increase in the percentage of schools using security cameras to monitor their buildings, federal data show.
More recently, the trend has shifted toward vacuuming up digital data and scanning it for possible warning signs.
The embrace of such tools by parents and K-12 administrators alike has led to a fresh boom in the school safety technology market, with a handful of established companies and a growing crop of startups now competing to offer ever-more comprehensive surveillance capabilities.
In April, for example, a company called Securly was at the annual conference of the Consortium for School Networking, pitching K-12 school technology officials on its rapidly expanding suite of services.
Part of the appeal of the new digital surveillance technologies deployed by schools is their relatively low sticker price.
In Michigan, for example, the 17,000-student Grand Rapids district this school year is paying Gaggle a little less than $71,000 to monitor its network traffic and alert staff members to troubling content.
Texas’s 12,300-student Brazosport Independent School District, meanwhile, is paying $18,500 per year to Social Sentinel for its social media monitoring services. That cost of about $1.50 per student appears to be broadly typical of what the company charges.
The low fees belie the value of the service Social Sentinel offers, Margolis said.
When Securly launched in 2013, its lone offering was a web filter to block students’ access to obscene and harmful content. The federal Children’s Internet Protection Act requires most schools to use such tools.
A year later, though, Securly also began offering “sentiment analysis” of students’ social media posts, looking for signs they might be victims of cyberbullying or self-harm.
In 2016, the company expanded that analysis to students’ school email accounts, monitoring all messages sent over district networks. It also created an “emotionally intelligent” app that sends parents weekly reports and automated push notifications detailing their children’s internet searches and browsing histories, according to a presentation delivered at the conference.
Then, in 2017, Securly also began monitoring all that information for potential signs of violence and attacks. It added a tip line, plus a layer of 24-hour human review of flagged threats schools can opt into.
“Kids cry out for help at all times,” said Mike Jolley, Securly’s director of K-12 safety. “You don’t ever shut off caring for your students.”
That kind of language is now pervasive throughout the industry, said Amelia Vance, the director of education privacy at the Future of Privacy Forum, a Washington think tank.
Vance said it’s meant to deliver a clear message to schools:
“You’re safer if you have us watching everything.”
‘Privacy Went Out the Window’
In her 2019 book The Age of Surveillance Capitalism, scholar and activist Shoshana Zuboff described the new engine driving America’s economy: the ability to translate people’s online behavior into digital data that can be used to make predictions about what they’ll do next.
That model allowed companies like Google and Facebook to quickly become multibillion-dollar behemoths, before the broader societal implications of their business models could be fully considered.
Something similar is now happening in the K-12 security market.
A Bloomington, Ill.-based company called Gaggle offers a window into what the trend looks like in practice.
Every day, Gaggle monitors the digital content created by nearly 5 million U.S. K-12 students. That includes all their files, messages, and class assignments created and stored using school-issued devices and accounts.
The company’s machine-learning algorithms automatically scan all that information, looking for keywords and other clues that might indicate something bad is about to happen. Human employees at Gaggle review the most serious alerts before deciding whether to notify school district officials responsible for some combination of safety, technology, and student services. Typically, those administrators then decide on a case-by-case basis whether to inform principals or other building-level staff members.
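The pipeline described above — automated keyword flagging, severity tiers, and a human-review queue — can be illustrated with a toy sketch. To be clear, this is not Gaggle’s actual system: the keyword lists, severity scores, and threshold below are invented for illustration only.

```python
# Illustrative sketch of a keyword-flagging pipeline with severity tiers
# and a human-review queue. All categories, terms, and thresholds are
# hypothetical; real systems use machine-learning models, not bare lists.

KEYWORDS = {
    "violence": {"kill", "shoot", "bomb"},
    "self_harm": {"hurt myself", "end my life", "hate myself"},
    "profanity": {"damn"},
}

# Higher-severity categories get escalated to human reviewers before any
# district official is notified; minor hits are merely logged.
SEVERITY = {"violence": 3, "self_harm": 3, "profanity": 1}
REVIEW_THRESHOLD = 2


def scan(text):
    """Return (category, severity) pairs for every keyword match in text."""
    lowered = text.lower()
    hits = []
    for category, terms in KEYWORDS.items():
        if any(term in lowered for term in terms):
            hits.append((category, SEVERITY[category]))
    return hits


def triage(text):
    """Route a document: 'review' if any hit meets the severity threshold,
    'log' for minor hits only, 'ignore' if nothing matched."""
    hits = scan(text)
    if not hits:
        return "ignore"
    if max(severity for _, severity in hits) >= REVIEW_THRESHOLD:
        return "review"
    return "log"
```

Even this crude sketch hints at why false positives abound: a substring match on “shoot” cannot distinguish a threat from a basketball shooting clinic, which is why the vendors layer human review on top of the automated pass.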
While schools are typically quiet about their monitoring of public social media posts, they generally disclose to students and parents when digital content created on district-issued devices and accounts will be monitored. Such surveillance is typically done in accordance with schools’ responsible-use policies, which students and parents must agree to in order to use districts’ devices, networks, and accounts.
Hypothetically, students and families can opt out of using that technology. But doing so would make participating in the educational life of most schools exceedingly difficult.
It’s just the way the world works now, said Gaggle CEO Jeff Patterson.
“Privacy went out the window in the last five years,” he said. “We’re a part of that. For the good of society, for protecting kids.”
Earlier this year, the company released a report detailing its results between June and December of 2018. The report said Gaggle had successfully flagged 5,100 incidents that “required immediate attention for imminent and serious issues.” Of those, 577 reportedly involved imminent threats of someone planning an attack or violence against others.
Documents obtained from Gaggle’s K-12 clients, along with interviews of administrators in those districts, illuminate the messy realities behind those numbers.
Take the 17,000-student Grand Rapids school district in Michigan.
A public relations consultant for Gaggle referred Education Week to the district, suggesting the company had helped prevent planned violence against a school there.
Indeed, last December, local news outlets were abuzz with reports of a thwarted school shooting involving a 15-year-old student.
In an interview, Larry Johnson, the Grand Rapids district’s director of safety, described the incident. Threatening messages were initially posted on Snapchat, he said. The student involved then used the district’s network to send emails about those posts to friends. Gaggle flagged the emails, leading the company to alert district officials, who in turn called the Grand Rapids police.
The student was arrested before the next school day. The teen was later expelled.
But when asked if there had been a credible plan to attack the school, Johnson demurred.
The student “took it as a joke,” he said. “We have a criminal justice system in place that gets the opportunity to determine what is serious.”
Now, put the incident in context.
The shooting threat/joke was just one of nearly 3,000 incidents in Grand Rapids schools flagged by Gaggle between August and February of this school year, according to a dashboard summary provided by the district.
More than 2,500 of those were minor violations, mostly involving profanity.
And files obtained from the district via a public-records request offer a granular look at the details behind hundreds of incidents caught by Gaggle’s system:
- More than three dozen Grand Rapids students were flagged for potential suicide or self-harm, usually for storing files or sending messages including words such as “hate myself,” “hurt myself,” and “end my life.”
- More than two dozen students were flagged for storing or sending offensive or pornographic images or videos.
- Students were flagged for possible violence towards others for storing files containing the words “abused me” and “raped.”
And among those flagged for possible profanity and hate speech:
- At least a dozen students who stored or sent files containing the word “gay.”
- A student who stored a file named “biology project” with the word “shit” in it.
- A student who stored a file named “Poetry Portfolio” with the word “pussy” in it.
- A student who stored a file named “Odyssey Essay” with the word “bastard” in it.
How does a district balance the benefits, costs, and burdens of reviewing and following up on such a torrent of alerts, especially when they range from alarming to ambiguous to ridiculous?
Johnson, the Grand Rapids safety director, acknowledged the challenge.
The system can be a real time-suck. And he’s concerned about students’ rights.
But any such downsides pale in comparison to getting thanks from parents grateful that the technology alerted them that their child was contemplating suicide, he said.
“I think it’s a necessary evil,” Johnson said.
‘Big Brother Is Watching’
That’s exactly the mindset that Chad Marlow wants to challenge.
“Does it make sense to say we are going to hurt millions of students in an effort to prevent one child from being harmed?” said Marlow, the senior advocacy and policy counsel for the American Civil Liberties Union.
A March blog post outlines what the ACLU considers to be the real threats related to school surveillance: chilling students’ intellectual freedom and free-speech rights. Undermining their reasonable expectations of privacy. Traumatizing children with false accusations. And systematically desensitizing a generation of kids to pervasive surveillance.
The experiences of other K-12 Gaggle clients help illuminate such concerns.
Evergreen Public Schools in Washington state, for example, started using the company’s service this school year. Between September and mid-March, the system flagged more than 9,000 incidents in the 26,000-student district.
The overwhelming majority—84 percent—were for minor violations, such as profanity.
A handful helped the district prevent fights and get help for kids thinking of hurting themselves, said Shane Gardner, the district’s director of safety and security. None could reasonably be considered to have prevented violence against a school.
“We haven’t ever unraveled an incident where it was, ‘Boy, good thing we caught this kid, because he had a gun in his guitar case,’” Gardner said.
Dozens of other alerts, however, have left Evergreen officials scrambling to figure out on the fly how to best respond to a wide range of situations they hadn’t anticipated.
What are the implications, for example, when a teen is flagged multiple times for “inappropriate” language in a college admissions essay that describes his difficult upbringing? What about when students are flagged for offensive language in plays or journal entries they’ve written as class assignments?
Evergreen eventually decided to turn off Gaggle’s filters for profanity and hate speech, Gardner said.
Then there are the alerts generated by vague messages between friends. How is a school district supposed to respond when one student writes to another, “Tomorrow it will all be over”?
In that case, Gardner said, the Evergreen district sent local police to a family’s home in the middle of the night to conduct a welfare check. It ended up being a “breakup situation” that wasn’t serious.
And perhaps most troubling, what are the legal and ethical considerations for schools when students plug their personal devices into district-issued computers, leading Gaggle’s filters to automatically suck up and scan their private photos and videos?
That’s happened numerous times in Evergreen schools, Gardner said.
One student was flagged for having photos of himself taking bong hits. Other students were flagged for personal photos showing fights and nude images that could be considered child pornography. Evergreen school administrators responded by notifying parents, police, and the National Center for Missing and Exploited Children.
Marlow of the ACLU described such situations as outrageous.
There’s a constitutional amendment barring the government from policing speech, he noted. There’s a reason it comes first in the Bill of Rights.
What about the students in a culturally conservative community who are questioning their sexuality, he asked, or the Trump supporters in a liberal community who are exploring their political beliefs? Is their freedom to research new identities and ideas compromised when principals and parents are alerted to everything they type and search?
In addition, Marlow asked, how do schools and companies know they’re not making things worse? If students know that administrators and parents are going to be alerted when they discuss self-harm or suicide with friends, for example, might that actually deter them from seeking help?
And schools should never monitor private digital content, Marlow said. Period.
“It should not be incumbent on students and families to figure out when they’re being placed at risk and adjust for it,” he said. “There are automatic adverse consequences when there is state surveillance.”
That’s not exactly the message the Evergreen district is delivering to its students and community, though.
“Every time we talk to kids, we remind them that Big Brother is watching,” said Gardner, the district safety director.
Accepting Constant Surveillance?
Many students seem well aware of that new reality.
Sometimes students with a concern simply email themselves, with the expectation that algorithms will flag the message for adults, said Jessica Mays, an instructional technology specialist for Texas’s Temple Independent School District, another Gaggle client.
One student “opened a Google Doc, wrote down concerns about a boy in class acting strange, then typed every bad word they could think of,” Mays said. At the end of the note, the student apologized for the foul language, but wrote that they wanted to make sure the message tripped alarms.
For proponents, it’s evidence that students appreciate having new ways of seeking help.
But for Levinson-Waldman, the lawyer for the Brennan Center for Justice, it raises a bigger question.
“Are we training children from a young age to accept constant surveillance?” she asked.
So far, at least, that’s a conversation that K-12 officials don’t seem to be having.
Determined not to become the next Columbine or Parkland or Sandy Hook, schools are eagerly searching out new technologies. Companies feed those fears, then respond by offering new services. The systems are then deployed with minimal forethought or oversight.
One more example.
Earlier this year, Social Sentinel flagged for the Brazosport school district in Texas another social media post. A recent graduate of the district tweeted a photo of herself pointing a shotgun at the sky, along with the message “I shot my first flyer, and it was my first time shooting a gun,” followed by smiley-face and clapping-hands emojis.
Brazosport school officials, who did not respond to multiple requests for comment, do not appear to have taken any action in response to the post.
Margolis, the Social Sentinel CEO, said he failed to see the possible harm in such a miss.
“Why would it have a chilling effect if the superintendent of the school might see something that slips through the system about someone went hunting?” he asked. “There’s no threat.”
Last month, however, the ACLU filed a lawsuit over a similar situation that turned out differently. A New Jersey school district suspended two high school students for using Snapchat to share pictures of legally owned guns used during a weekend at a private shooting range.
The case is still making its way through the courts.
Will these powerful new surveillance tools become entrenched in schools, before any kind of carefully considered consensus can be reached?
That’s where recent history and current trends seem to point.
Even Patterson, the Gaggle CEO, acknowledged feeling conflicted about the dynamic.
He prays to never see another school shooting. He wants to believe Gaggle will benefit students whose cries for help are too often ignored. He hopes that can be done in ways that still allow kids to make normal childhood mistakes, without suffering life-altering consequences.
But the demands of the market could work against those wishes.
Five years ago, Patterson said, Gaggle would never have considered adding a social-media monitoring service. It was too invasive.
Now, he sees it as inevitable.
“I know I would have rebelled against some of my own products,” Patterson said. “But the world has changed.”
Research assistance provided by Librarian Maya Riser-Kositsky.
A version of this article appeared in the June 05, 2019 edition of Education Week as Schools Deploy Massive Digital Surveillance