I was having lunch with a professor at my local university recently, and talk turned to international comparisons of student achievement—the education policy scholar’s mother lode for data analysis and prognostication these days. Just this week, in fact, Education Week reported that “top policy groups are pushing states toward international benchmarking as a way to better prepare students for a competitive global economy.”
I mentioned a column by Gerald Bracey in the February issue of Phi Delta Kappan, which made light of the latest Programme for International Student Assessment (PISA) data. The professor snorted. “We can’t be buying into Gerald Bracey’s little feel-good stories,” she said. “We have to look critically at the enormous problems we have, and address these shortfalls in the American educational system.”
While I’m perfectly willing to admit that things are not at all rosy in way too many American schools, I have never been entirely sure of the point of international comparisons. We can’t even get a handle on the real differences between successful, thriving schools and miserably failing schools in our own country. And I don’t think that making our standards, textbooks, and tests uniform across 50 states would lead to consistency in outcomes. This is an enormous and vastly diverse nation. Discrepancies are caused by variables we probably haven’t even identified yet. Still, there’s this national compulsion to compare our numbers to the numbers “over there,” and moan about the results.
I’ve been a Gerald Bracey fan for years. Although he’s a pretty cerebral guy (I admit his detailed deconstruction of statistical models sometimes leaves me, well, behind), he seems to be having a good time poking at the very grim world of education policy making. I enjoy his blog on the Huffington Post, and his Rotten Apples awards, in which he takes on educational sacred cows: questioning the importance of NAEP tests; skewering U.S. News and World Report’s ranking mania; exposing shady dealings in the Reading First program; and, my personal favorite, noting that PISA data on reading could not be released last year after the Organization for Economic Cooperation and Development determined that the booklets for U.S. test-takers had been misprinted and the data were spoiled.
Bracey frequently makes the point that the United States remains an economic superpower, despite a couple of decades of what, at first glance, appear to be lackluster, middle-of-the-pack performances in the international academic horse race. He routinely debunks the myth that higher student test scores are causally connected to whether the U.S. dominates world economic markets. If they were, how is it that the World Economic Forum recently ranked the United States first among 131 nations in “global competitiveness”?
This finding doesn’t mean that we can relax, but it does suggest that we might start taking a more nuanced look at what international test data really tells us.
In the Kappan article, Bracey uses PIRLS international reading data to do a “thought experiment.” Of the 39 nations tested, the Russians took the top slot with a score of 565. American kids clocked in at 540, above the median international score of 500. When the U.S. scores are disaggregated by ethnicity, the nuance begins to emerge. Asian-American students get a 567; white students, 560; Hispanic students, 518; black students, 503; and American Indians, 468. Asian-American kids ranked first, and white American kids came in third in international comparisons. Disaggregating by economic status is even more revealing: In U.S. schools where fewer than 10% of the students live in poverty, the score was 573. Well-off American kids are kicking their global competition to the curb, it seems, at least in 4th grade literacy.
Not that any of this matters to the critics, who choose to see our very diverse public school system as magically homogenous when it suits their arguments. Bracey’s comment here is perfect: “One thing these rankings make clear is that anyone who makes statements about ‘American schools’ is speaking about an institution that doesn’t exist.”
Except, of course, when it comes to other policy analysts’ own “thought experiments.” Everyone’s got a point to make, and people will use data in ways that support their perspectives. What I don’t understand is this blanket urge to tie all national economic problems to schools, or the desire to paint all public schools as abject failures. Some schools may be unsuccessful, but society is also failing whole groups of children, and at a prodigious rate.
I live in southeastern Michigan, ground zero for the struggling auto industry. I understand the pressing need for innovation and technical expertise, for a well-trained workforce, and for connections between schools and the human capital we want them to produce. I understand that it is folly to sweep overwhelming economic problems under the rug. This isn’t about feel-good public relations for schools; it’s about figuring out everywhere it hurts. It’s about providing better solutions rather than the “let’s compare more data” approach.