Education Opinion

Will Little Sally Go to Yale or to Jail? There’s an Algorithm For That

By Marilyn Rhames — March 19, 2014 6 min read

When I was earning my master's in teaching, one of my professors told my class that third-grade reading test scores helped government officials project prison populations ten years out.

I’ve never been keen on conspiracy theories, and I tend to believe the best in people, so I shrugged and assumed it wasn’t true.

Fast forward 10 years. I'm sitting in my school's professional development meeting last week, learning the basics of the NWEA MAP test, a computerized adaptive assessment my students take three times a year. We were analyzing students' past test scores when one of my instructional leaders picked Little Sally's (not her real name) scores as an example.

“She’s only in the third grade, but based on winter test results she’ll score just a 16 on the ACT,” the administrator said before a room of teachers. “A 16 isn’t high enough to get into Yale.”

The message that followed went something like: We have to increase the rigor of our instruction to work backwards to put her on an Ivy League trajectory.

Okay. I embrace the Common Core State Standards, and I'm all for making kids college and career ready. But I began to struggle with the implications of this test on multiple levels.

First, we were talking about a 9-year-old privileged white girl, and I wondered whether, if she were a poor Black girl, we would have been so alarmed that her decade-away eligibility for acceptance into an elite university was at risk.

Second, I recalled that class in grad school when my professor warned me that third-grade test scores would be used to build more prisons. If expensive, sophisticated, super-secretive computerized algorithms can use MAP test data to predict who could test into Yale, then why couldn't those same algorithms label a child as a prime candidate for jail?

I felt myself shifting in my seat at this point. The temperature in the room seemed to rise a few degrees. I like to think of myself as a reformer, a progressive thinker, and one who supports the notion of reasonable teacher accountability in student academic achievement. I have never been anti-testing or anti-standards, because I've worked at schools that didn't do right by the children.

But good intentions don’t always produce good results.

So I raised my hand and politely questioned the accuracy of the MAP test. Sometimes, I said, the results from fall to winter and winter to spring can dip or spike drastically, so how can we trust the test?

My school leader said the test was tried and true, passing out an NWEA one-pager that read:

“The study’s results are based on grade level (K-11) samples of at least 20,000 students per grade. These samples were randomly drawn from a test records pool of 5.1 million students, from over 13,000 schools in more than 2,700 school districts in 50 states. Rigorous post-stratification procedures were then used to maximize the degree to which both status and growth norms are representative of the U.S. school-age population.”

How could I argue against such stringent research conditions?

Then the middle school social studies teacher asked a question that helped me place the root source of my distrust. She asked, “How do we know the data will not be used for evil?”

No one in the room could answer her question. People just chuckled and brushed it off, just like I did when I was in teaching school.

I’m older and a little wiser now: that teacher’s question is no laughing matter.

Just last month, The Verge broke the story of the Chicago Police Department's new "Heat List." A sociologist (coincidentally from Yale University) built an expensive, sophisticated, super-secretive algorithm that consumes a massive amount of data to compile a list of about 480 of the potentially most dangerous people in Chicago. These people are purported to be the most likely to kill or be killed in street violence, CPD Superintendent Garry McCarthy said in a recent ABC News interview.

Police commanders pay unannounced home visits to the individuals on this list to proactively encourage them not to kill anybody.

The official name of this tactic is Custom Notification, but I'm told the cops call it "Hug a Thug."

The Verge article revealed that some people on the list, like 22-year-old Robert McDaniel, had no felony criminal record or even a gun possession conviction. In fact, the Chicago Sun-Times reported that none of the 50 people who had received their police "hug" had ever been convicted of a felony. Many were shocked and wondered why their names were added to this watch list.

Some people are calling it Orwellian, citing the Big Brother surveillance state of George Orwell's 1984.

Some people are calling this high-tech racial profiling.

“Are people ending up on this list simply because they live in a crappy part of town and know people who have been troublemakers?” asked Hanni Fakhoury, an Electronic Frontier Foundation staff attorney, to The Verge’s Matt Stroud. “How many people of color are on this heat list? Is the list all black kids? Is this list all kids from Chicago’s South Side?”

The prophetic implications of the student MAP test are eerily similar to this “Heat List.” One is predicting who’s going to Yale while the other is predicting who’s going to jail. And because the algorithms are proprietary, the public may never know if the MAP data is used to inform the Heat List projections or vice versa.

But wait, that's a crazy conspiracy theory, right? I apologize.

I’m not opposed to testing student achievement growth. I don’t think the MAP test is intrinsically evil. In fact, I see great potential in teachers having an outside snapshot on how their students are developing over three intervals of time.

It’s the lack of transparency around these algorithms that gives me pause. It’s the arrogance that proclaims that this test can accurately predict the future college success of a 9-year-old student. Secrecy and pride are just a bad combination.

I didn't like that these algorithms claimed to quantify the precise value my teaching added to a student's academic growth. (Remember the public disclosure of "value added" results that drove one Los Angeles teacher to suicide?) The MAP test results now make up 60 percent of the Chicago public school district's new accountability metric for schools.

But I’m even more concerned that the weight of these algorithms could add tremendous pressure on little third graders who just want to read the next Diary of a Wimpy Kid book, not worry about what college they may or may not test into.

Besides, even if all the little third graders in the United States miraculously showed monumental academic growth this year, tests like the MAP are norm referenced, which means there would always be a bottom 50 percent who would, on the surface, appear to be slackers.
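That arithmetic can be sketched in a few lines. This is a minimal illustration with made-up scores, not NWEA's actual norming procedure (which compares students against a national sample rather than their own cohort); the `percentile_ranks` helper and the score values are hypothetical. The point is simply that a percentile rank is relative, so lifting every score by the same amount leaves every rank unchanged.

```python
# Illustrative sketch: uniform growth cannot change relative standing.
def percentile_ranks(scores):
    """Percent of the cohort scoring strictly below each student."""
    n = len(scores)
    return [sum(s < x for s in scores) / n * 100 for x in scores]

fall = [180, 190, 200, 210, 220, 230]   # hypothetical fall scores
winter = [s + 15 for s in fall]         # every student grows 15 points

# Ranks are identical before and after the growth: half the cohort
# still sits below the median, no matter how much everyone improved.
print(percentile_ranks(fall))
print(percentile_ranks(winter))
```

Under a norm-referenced lens, in other words, "bottom half" is a property of ranking itself, not of how much any child actually learned.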

So where does a forward-minded teacher like me go from here? A teacher who wants her students to be college and career ready but worries that many could be unjustly stigmatized by the same data? A teacher who sees great usefulness in growth data like this, but also sees tremendous danger?

Growing up Black in a racially isolated, low-income urban community, the odds were not in my favor. I struggled with reading in elementary school, and to add to my problems I was a poor test-taker. Fortunately for me, there weren’t any computerized algorithms to legitimate the idea that I wasn’t fit for college when I was in the third grade. Many of my teachers knew I was smart and told me that I could go.

I didn’t go to jail, but I didn’t go to Yale—I went to Columbia University instead.

Yes, poor literacy skills in early education have links to incarceration rates later in life. Algorithm projections or not, I will continue to strive to help my students to beat the odds. That’s what good teachers do.

Go Lions!

The opinions expressed in Charting My Own Course are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.