From criminal sentencing to credit scores, algorithms and artificial intelligence increasingly make high-stakes decisions that have big implications for people's freedom, privacy, and access to opportunity.
Despite the almost-blind faith we often put in such “artificial agents,” it’s no secret that they are often biased, according to a new report from the RAND Corporation.
And more than ever, RAND researchers Osonde Osoba and Bill Welser said in an interview with Education Week, it’s important to raise awareness about the role that algorithms play and to push for a public accounting of their impact—particularly in areas that involve the public interest, including the field of K-12 education.
“For the longest time, any time questions of bias came up, hardcore researchers in artificial intelligence and algorithms dismissed them because they were not ‘engineering concerns,’” Osoba said. “That was OK for commercial toys, but the moment the switch was made to applying algorithms to public policy systems, the issue of bias no longer became a triviality.”
The new RAND report, titled “An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence,” does not focus on education. Instead, the authors lay out examples such as the algorithmic bias in criminal sentencing (as documented in a series by nonprofit news organization ProPublica) and the problems with Tay, a chatbot developed by Microsoft that was supposed to learn the art of conversation by interacting with Twitter users—and quickly began spewing racist and vulgar hate speech.
Artificial agents can process the immense streams of data now running through society in ways that humans can’t, making them a necessary tool for modern society, the RAND researchers write. But too often, they say, the public ascribes objectivity and neutrality to algorithms and artificial intelligence, even though most function as a “black box,” and some have been shown to result in different outcomes for different groups of people.
Where does such bias come from?
The individual humans who program the artificial agents, who may hold biases they are not even aware of; a pool of computer and data scientists that is far less diverse than the populations their products eventually affect; and biases in the data used to train the artificial agents to “learn” by finding patterns, RAND concluded.
All those issues are found in abundance in the ed-tech field.
Just this month, Education Week took a look at the growing field of “curriculum playlists”—educational software programs that rely on algorithms to choose what types of instructional content and learning experiences students have each day in the classroom. We’ve also looked at algorithm-driven tools for providing career and college guidance and for hiring teachers.
What if such tools are biased against students of color, or students with special needs? How would educators, parents, and students even know?
Such questions are both realistic and important for the field to be asking, Osoba and Welser said.
On the K-12 side of the equation, “Educators need to not cede complete control to the computer,” Welser said. That means being aware of which products used in the classroom, school, or district rely on algorithms and artificial intelligence to make decisions; understanding what decisions they are making; and paying attention to how different groups of students are experiencing the products.
But what about algorithm-driven products that are now in use in hundreds of districts across the country? How can the public know if there is some kind of systematic bias at work in how students are being assigned classroom lessons, or given advice about their post-secondary plans?
Generally speaking, Osoba said, the keys are for developers to make their algorithms more open and transparent on the front end, then to conduct tests for “disparate impact” on the back end.
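Neither the researchers nor the report prescribes a specific statistical test, but one widely used heuristic for that kind of back-end check is the “four-fifths rule” from U.S. employment-discrimination guidance, which compares each group’s rate of favorable outcomes against the best-off group’s rate. The sketch below is a hypothetical illustration of that idea in Python; the function name, the sample data, and the 80 percent threshold are assumptions made for the example, not anything drawn from the RAND report or the products mentioned above.

```python
# Minimal sketch of a "four-fifths rule" disparate-impact check.
# Flags any group whose rate of favorable outcomes falls below 80 percent
# of the highest group's rate. Data and threshold are illustrative only.
from collections import defaultdict

def disparate_impact_ratios(records, threshold=0.8):
    """records: iterable of (group, got_favorable_outcome) pairs.
    Returns each group's outcome rate, its ratio to the best-off group,
    and whether that ratio falls below the threshold."""
    favorable = defaultdict(int)
    total = defaultdict(int)
    for group, outcome in records:
        total[group] += 1
        favorable[group] += 1 if outcome else 0

    rates = {g: favorable[g] / total[g] for g in total}
    best_rate = max(rates.values())
    return {
        g: {
            "rate": round(rate, 3),
            "ratio_to_best": round(rate / best_rate, 3),
            "flagged": rate / best_rate < threshold,
        }
        for g, rate in rates.items()
    }

# Hypothetical example: which students an adaptive "playlist" product routed
# to advanced lessons, broken out by two demographic groups.
sample = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40
    + [("group_b", True)] * 35 + [("group_b", False)] * 65
)
print(disparate_impact_ratios(sample))
```

In this made-up example, group_b’s rate of favorable outcomes is about 58 percent of group_a’s, so the check would flag it for closer review; a real audit would also account for sample size and statistical significance.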
It’s too early to try to regulate the field or mandate such testing across the board, Welser said.
But now is the time for the conversation to begin happening in earnest, he said.
“There should be a shared idea about when you do disparate impact studies. Is it when 300 children use your product? Or 100,000 children?” Welser said.
“People in the ed-tech space haven’t yet really put their heads around this.”
CORRECTION: An earlier version of this story incorrectly spelled the last name of Bill Welser of the RAND Corporation.