Facial recognition technology is on the rise in schools across the country, but a new report from the federal government suggests its accuracy can vary widely depending on race and gender.
African-Americans and Asian-Americans can be between 10 and 100 times more likely to be misidentified by the technology than white people, and women are more likely to be falsely identified than men, says the National Institute of Standards and Technology report, which tested nearly 200 systems with photos of more than 8 million people.
The report cautions that not all facial recognition systems are created equal: “Different algorithms perform differently.”
School officials in several states have turned to facial recognition tools to help minimize the number of shootings and security breaches in schools. Many districts have adopted the systems as an automated way to track whether the people entering and leaving the building belong there.
The report focused on the technology’s increasing presence as a law enforcement tool, but the findings have implications for schools as well, according to Sarah St. Vincent, a human rights attorney and surveillance and digital rights expert who serves as director of Cornell Tech’s Computer Security Clinic.
“People of color fought and suffered for the right to be able to get into schools and other buildings the same way that white people do,” St. Vincent said. “If facial recognition is a barrier to that, that’s a problem.”
By adopting facial recognition as a security tool to curb gun violence and other potential problems, St. Vincent argues, schools risk restricting access for non-white students, parents, and teachers.
“It’s really chilling that so clearly the more layers of oppression you are probably experiencing, the less accurate facial recognition will be for you,” St. Vincent said.
The New York State Education Department earlier this year halted a planned pilot of facial recognition software in schools after civil rights advocates raised privacy concerns.
St. Vincent says she hasn’t seen research indicating that facial recognition is a more accurate security system than a human presence might be. “There’s this perception that facial recognition is objective or less flawed,” she said. “We view tech as having this aura of magic or perfection that it actually should not have.”
Efforts are underway to develop more sophisticated artificial intelligence in facial recognition technologies that will not perpetuate racial and gender biases. In the meantime, here are some questions St. Vincent recommends district leaders ask as they explore the use of facial recognition technologies:
- Has the manufacturer commissioned a third-party test of the system, or has it simply tested the product itself?
- How many false positives (identifying someone as a match when they are not) and false negatives (failing to identify someone who is a match) did the manufacturer find in its research?
- How accurate is the system for children? For women and girls? For people of color?
- What is the risk or harm schools are trying to prevent by installing this system, and why would this system be the best way to address that risk?
- How should schools evaluate the effectiveness of a facial recognition system?
- Are there other options that are less intrusive and less likely to misidentify people of color?
- How is data collected by the system stored, and what are the plans for securely deleting old or unnecessary data?
- Is there a risk that students, teachers, staff, parents, or others will associate with one another less freely on school grounds if a school uses facial recognition technologies?
- Will use of these technologies exacerbate the existing problem of students of color facing disproportionately high rates of disciplinary action?
- Have school officials given members of the community opportunities for input on whether or not to use facial recognition systems?
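For district leaders weighing vendor accuracy claims, the false-positive and false-negative counts mentioned in the questions above translate into rates. Here is a minimal sketch of that arithmetic; the function name and all numbers are illustrative examples, not figures from the report:

```python
# Hypothetical sketch: turning raw match counts into the error rates
# discussed above. All names and numbers here are illustrative.

def error_rates(false_pos, false_neg, true_neg, true_pos):
    """Return (false positive rate, false negative rate)."""
    # False positive rate: share of non-matching faces wrongly accepted.
    fpr = false_pos / (false_pos + true_neg)
    # False negative rate: share of matching faces wrongly rejected.
    fnr = false_neg / (false_neg + true_pos)
    return fpr, fnr

# Example: 5 false alarms out of 1,000 non-matching faces, and
# 20 missed matches out of 200 matching faces.
fpr, fnr = error_rates(false_pos=5, false_neg=20, true_neg=995, true_pos=180)
print(f"False positive rate: {fpr:.1%}, false negative rate: {fnr:.1%}")
# → False positive rate: 0.5%, false negative rate: 10.0%
```

Even a seemingly small false-positive rate can matter at scale: at 0.5 percent, a school screening 1,000 entries a day would flag roughly five innocent people daily, which is why the report's finding that these rates vary by race and gender is significant.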
A version of this news article first appeared in the Digital Education blog.