Note: Jonathan Plucker, the Raymond Neag Professor of Education at the University of Connecticut, is guest posting this week. This post was co-authored with Leslie Rutkowski and David Rutkowski, both assistant professors at Indiana University and co-authors of the Handbook of International Assessment (Chapman & Hall, 2014). Plucker can be found on Twitter at @JonathanPlucker, Leslie Rutkowski at @lrutkowski, and David Rutkowski can be reached via email at drutkows@indiana.edu.
Here’s a headline you won’t read in a U.S. newspaper: “U.S. teens’ contemporary art knowledge remains stable in international comparisons.” The headline may feel familiar: international tests typically show the U.S. making few gains, with consistent, middle-of-the-pack performance. So it’s not middling U.S. performance that is out of the ordinary. Rather, it’s the subject, content that rarely figures into popular conversations about international test results (and the health of our educational system), at least as those conversations play out in headlines. What we measure becomes what matters, as study after study shows us (and as Jonathan discussed yesterday).
For this reason, international organizations have a large and growing say in what we teach and how we teach our children. However, that influence no longer stops at the national level. In trickle-down form, the OECD’s Programme for International Student Assessment, or PISA, is playing an increasingly important role in the way individual U.S. schools are ranked and compared. The PISA-based OECD Test for Schools, administered in the U.S. by CTB/McGraw-Hill, promises to place individual schools in league tables alongside entire countries (e.g., better than Kazakhstan in reading, worse than England in math, no different from Germany in science).
It’s an interesting prospect, to be sure. But as PISA spills over into local educational decision making, it is reasonable to question the motives and consequences of an international economic organization calling the shots in local educational systems. In an article in the most recent issue of Phi Delta Kappan, we suggest that schools may want to think carefully before participating in such assessments.
Before participating in any localized version of an international assessment, schools should understand what PISA, or any international assessment, can and can’t tell them. For example, although the test may be adequate for comparing educational systems overall, it does not specifically focus on what schools have been tasked by their state departments of education to teach. Rather, PISA measures what the OECD believes 15-year-olds should know and be able to do to operate successfully in a modern, global economy; it does not tell schools what 15-year-olds in their districts should know and be able to do to work successfully in their state or local economies. For this reason, educators and policymakers should note that PISA, TIMSS, PIRLS, and other high-quality international assessments are good measures across many different settings but probably not great measures for any particular country, district, or school.
And this brings us back around to weighing the consequences of local school participation in an international assessment. Of special interest is the idea that the OECD is creating a market for a particular kind of knowledge, a market over which it has sole control. David puts this thesis forward in a recent paper, arguing that the school-level test creates an opportunity for the OECD to directly influence schools in the U.S., where, historically, PISA results have been second- (or third-) page news. This sort of access to local schools allows the OECD to promote its free-market agenda and to steer local curricula toward the content this organization values, which is not necessarily the material that is critical for successful participation in the U.S. economy.
Our thoughts on this issue come from an abundance of caution rather than testing paranoia. Two of us (Leslie and David) have worked at the IEA on international assessments, and all three of us regularly use international assessment data in our work. So we’re not coming at this as Luddites or card-carrying members of the anti-testing movement. But chewing up more student and teacher time with yet another test, when legitimate questions exist about the value of the information, feels like a poor decision. It may feel sexy to say, “Our students outperformed the French and Norwegians in science this year!” But is that science content likely to lead to success in your local, state, and national economies? Maybe, but maybe not, and educators and policymakers should closely examine any local version of an international assessment before administering it to their students.
--Jonathan Plucker, Leslie Rutkowski, and David Rutkowski