Kevin Carey’s dismissal of “test score inflation” provides an ideal opportunity to talk about the book I finished this weekend, Measuring Up: What Educational Testing Really Tells Us, by Dan Koretz, a psychometrician at the Harvard Graduate School of Education – hardly an opponent of testing.
Koretz calls “test score inflation,” in which gains on tests used for accountability dramatically outpace gains on low-stakes tests, the “dirty secret of high-stakes testing.” If you compare NAEP trends with state score trends, you’ll see that state scores have increased significantly more than NAEP scores since NCLB was adopted.
To understand why test score inflation is a serious problem, you have to understand the sampling principle of testing. Koretz provides the following example: Suppose we want to evaluate students’ vocabulary. A typical high school student knows 11,000 root words, but a test can only include a sample of these words – maybe 40. If we design our test well, we can still learn something about the breadth of each student’s vocabulary. But we don’t really care if the student knows the 40 words on the test; rather, we care about the larger domain from which these words are sampled.
Now imagine that for weeks before our test, I drilled students incessantly on those 40 words. Voilà! They perform exceptionally well on the test. Yes, their vocabularies have increased by 40 words. Maybe these are 40 really important words – the so-called “test worth teaching to.” But proficiency in the domain my test is intended to measure has not expanded by the same amount. I’ve seen this over and over again: administrators and teachers figure out which concepts are consistently on the test and which aren’t, and they alter their instruction accordingly. The trouble is that when we administer a slightly different test, one drawing on a broader range of concepts from the domain we care about, it turns out the kids haven’t mastered them.
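The logic here is easy to check with a back-of-the-envelope simulation. The sketch below is not from Koretz’s book; every number in it (a domain of 11,000 words, a student who knows 6,000 of them, two 40-item test forms) is made up purely for illustration. Drilling the student on the exact items of Form A sends that score to the ceiling, while an alternative Form B drawn from the same domain – and true domain mastery – barely move.

```python
import random

random.seed(0)

DOMAIN_SIZE = 11_000   # root words in the domain (illustrative figure)
TRUE_KNOWN = 6_000     # words this hypothetical student actually knows
TEST_LENGTH = 40       # items sampled onto any one test form

domain = range(DOMAIN_SIZE)
known = set(random.sample(domain, TRUE_KNOWN))

def score(form, knowledge):
    """Fraction of sampled items the student answers correctly."""
    return sum(item in knowledge for item in form) / len(form)

# Form A: the predictable high-stakes test. Form B: an audit test from the same domain.
form_a = random.sample(domain, TEST_LENGTH)
form_b = random.sample(domain, TEST_LENGTH)

print(f"Form A before drilling: {score(form_a, known):.0%}")

# "Teach to the test": drill the student on exactly the 40 items that appear on Form A.
drilled = known | set(form_a)

print(f"Form A after drilling:  {score(form_a, drilled):.0%}")      # inflated to the ceiling
print(f"Form B after drilling:  {score(form_b, drilled):.0%}")      # barely moves
print(f"True domain mastery:    {len(drilled) / DOMAIN_SIZE:.0%}")  # barely moves
```

Adding at most 40 words to a vocabulary of thousands changes true mastery by a fraction of a percentage point, which is exactly the gap between the inflated Form A score and everything else.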
Carey explains that this is just a standards-mismatch problem – i.e., state test standards are not the same as those used on national tests. Koretz takes Carey’s critique head-on in this passage:
“Alignment is a lynchpin of policy in this era of standards-based testing. Tests should be aligned with standards, and instruction should be aligned with both....And alignment is seen by many as insurance against score inflation. For example, a principal of a local school that is well known for the high scores achieved by its largely poor and minority students gave a presentation to the Harvard Graduate School of Education a few years ago. At one point, she angrily denounced critics who worry about ‘teaching to the test.’ We had no reason to be concerned about teaching to the test in her school, she asserted, because the state’s test measures important knowledge and skills. Therefore, if her faculty teaches to the test, students will learn important things.
This is nonsense, and I have a hunch about what I would find if I were allowed to administer an alternative test to her students. Alignment is just reallocation by another name. Certainly it is better to focus instruction on material that someone deems valuable, rather than frittering time away on unimportant things. But that is not enough. Whether alignment inflates scores depends also on the importance of the material that is deemphasized. And research has shown that standards-based tests are not immune to this problem. These tests too are limited samples from larger domains, and therefore focusing too narrowly on the content of the specific test can inflate scores.” (p. 253-254)
We only care about test scores if they translate into general improvements in children’s academic skills – improvements that produce meaningful gains in their life chances. If these score gains don’t carry over to other tests measuring similar skills – basic reading and math competencies – what are the chances they will help kids succeed in the workplace or in college? And that is a very good reason to worry about test score inflation.
Spoiler alert: NY state test scores are out next week, if not sooner. What should we make of NYC’s flat NAEP scores alongside state test improvements so large they’re unbelievable? Kind of makes you wonder.