Yesterday, the triennial PISA results were announced, prompting a paroxysm of pontificating. Hands were wrung, familiar talking points were rehashed, and PISA Overlord Andreas Schleicher once again took the results as his cue to lecture American educators and policymakers on the wonders of common standards and the perniciousness of school choice. (Not that Schleicher has ever seemed an especially strategic operator; I’m curious whether the cheerleading of this international bureaucrat will really help the cause of the Common Core.) Anyway, the funny thing is that all this gnashing of teeth is, quite literally, for nothing. There are at least seven reasons I don’t give a fig about the PISA results. What are they?
One, international test score comparisons suffer from the same banal problems that bedevil simple NCLB-style comparisons. PISA results say nothing about the value schools are adding; they merely provide simple cross-sectional snapshots of achievement.
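For a concrete (and entirely hypothetical) illustration of the snapshot-versus-growth distinction, here’s a minimal sketch; the schools and scores below are invented purely for the example:

```python
# Hypothetical illustration: snapshot rankings vs. value-added.
# All names and numbers are invented for the example.

schools = {
    "Leafy Suburb High": {"entering": 540, "exiting": 560},
    "Urban Comprehensive": {"entering": 420, "exiting": 490},
}

for name, s in schools.items():
    growth = s["exiting"] - s["entering"]
    print(f"{name}: snapshot = {s['exiting']}, value added = {growth}")

# Snapshot ranking: Leafy Suburb High "wins" (560 > 490).
# Value-added ranking: Urban Comprehensive wins (+70 vs. +20).
# PISA reports only the snapshot column.
```

Two very different stories from the same data, and PISA can only tell the first one.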
Two, Shanghai was tops in the rankings--which ought to prompt a whole lot of questions. Brookings’s Tom Loveless, as usual, has been out there waving the red flags. Nearly two months ago, Loveless explained: “Shanghai’s population of 23-24 million people makes it about 1.7 percent of China’s estimated 1.35 billion people... About 84 percent of Shanghai high school graduates go to college, compared to 24 percent nationally... And Shanghai’s parents invest heavily in their children’s education outside of school... At the high school level, the total expenses for tutoring and weekend activities in Shanghai exceed what the average Chinese worker makes in a year. Further, Shanghai does not allow the children of migrants to attend its high schools.” Comparing U.S. performance to that of Shanghai isn’t apples and oranges; it’s applesauce and Agent Orange. By the way, it’s worth perusing Loveless’s broader critique of international comparisons in his 2012 Brown Center report.
Three, one of the amusing touches of schadenfreude this time around was Finland’s ratings plunge--from its perennial first-place perch to 12th in math, fifth in science, and sixth in reading. Now, don’t get me wrong. I dig Finland. But I’ve been exhausted by the fad-chasers peddling the line that Finland had cracked the code of educational excellence. Truth is, I never believed Finland had cracked any such code, or that its successes could be readily imitated elsewhere, and I’m skeptical that its schools fell apart between 2009 and 2012. All of which means, again, I just don’t find these results that useful.
Four, there are questions about the stability and validity of the results. Heck, let’s even set aside how much confidence we can have that the tests are being administered even-handedly and with fidelity in dozens of different nations. The OECD has itself acknowledged that “large variation in single (country) ranking positions is likely” in the PISA results. And statistician Svend Kreiner of the University of Copenhagen has argued that the model used to calculate the PISA rankings is inappropriate, with country scores fluctuating significantly depending on which test questions are used in the analysis. In the 2006 reading rankings, for instance, he writes that, depending on how scores were compiled, Canada could have finished anywhere from second to 25th and Japan anywhere from eighth to 40th.
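To see the mechanism Kreiner is describing, here’s a toy simulation (the countries, items, and numbers are all synthetic; this is a sketch of the statistical point, not a reanalysis of PISA data). When countries have item-specific strengths and weaknesses, the ranking you get depends on which items you happen to score:

```python
# Toy simulation of Kreiner's critique: rankings shift with item choice.
# All data are synthetic; this illustrates the mechanism only.
import random

random.seed(1)
countries = [f"Country_{c}" for c in "ABCDEFGH"]
n_items = 40

# Each country has an overall ability plus item-specific quirks,
# violating the assumption that every item ranks countries identically.
ability = {c: random.gauss(500, 20) for c in countries}
quirk = {c: [random.gauss(0, 30) for _ in range(n_items)] for c in countries}

def rank_on(items):
    scores = {c: ability[c] + sum(quirk[c][i] for i in items) / len(items)
              for c in countries}
    return sorted(countries, key=scores.get, reverse=True)

# Recompute the "league table" on five random halves of the item pool.
for trial in range(5):
    subset = random.sample(range(n_items), n_items // 2)
    print(rank_on(subset))
```

Run it and the league table reshuffles from subset to subset--the same kind of instability Kreiner flags.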
Five, the whole thing provides a depressing occasion for the usual suspects to shill their usual wares. Common Core boosters took the scores as vindication for common standards. NEA president Dennis Van Roekel said it’s all about poverty. Arne Duncan touted the need to embrace Obama administration reforms. Yawn.
Six, we know these folks are doing nothing more than rehashing talking points because there’s no earthly way to tell what explains the performance of one nation or another. The problem is what we call an “overspecified” model: far more candidate explanatory variables than observations. The PISA universe includes only 65 “economies” (nations, states, cities, and such), while there are tens of thousands of ways in which these places vary. They have different lifestyles, cultures, economies, political regimes, religious traditions, health care systems, diets, norms, school calendars, school facilities, educational resources, teaching populations, and so forth. Imagining that one can tell which one or two variables are responsible for how well fifteen-year-olds read or do math reflects a breathtaking hubris.
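If you want to see just how hopeless this is, here’s a short sketch (synthetic data throughout, and the variable counts are chosen purely for illustration): with more candidate “explanations” than countries, even pure noise fits the scores perfectly.

```python
# Sketch: with more predictors than observations, pure noise
# "explains" country scores perfectly. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_predictors = 65, 200  # predictors >> observations

scores = rng.normal(500, 50, size=n_countries)    # fake PISA-style scores
X = rng.normal(size=(n_countries, n_predictors))  # random noise "policies"

# Minimum-norm least squares handles the underdetermined case.
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
residual = scores - X @ beta
print(f"max residual: {np.abs(residual).max():.2e}")  # ~0: a "perfect" fit

# A perfect in-sample fit from random noise: zero evidence that any
# particular variable actually drives achievement.
```

With 65 data points and thousands of plausible variables, a “perfect” explanation is always available and never informative.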
Seven, using PISA results to judge school quality poses the exact same problem as using NCLB-style tests to conclude that schools in a bucolic, leafy suburb are “better” than those in a chaotic city rife with broken families. There’s a lot of stuff going on, and only the foolhardy would insist that any differences are necessarily due to educational strategies rather than non-school factors.
You can keep the PISA scores. I find the whole thing a triennial exercise in kabuki theater. I suppose there’s no great harm done, but that seems an awfully low bar given all the hullabaloo.