To the Editor:
Your front-page story (“National Clout of DIBELS Test Draws Scrutiny,” Sept. 28, 2005) gives the impression that the argument is whether the right reading test to give young children is the Dynamic Indicators of Basic Early Literacy Skills or some other skills assessment, such as the Phonological Awareness Literacy Screening tests. Missing, except for a brief quote from P. David Pearson, is a discussion of what I think is the real problem.
If reading researchers Frank Smith and Kenneth Goodman are right, and I think they are, the “skills” children need to pass DIBELS and similar tests are the result of reading. The use of DIBELS and its cousins encourages test preparation in the form of skills training, which is a confusion of cause and effect.
In other words, practicing reading nonsense words quickly, in preparation for the DIBELS test, will not contribute very much to helping children learn to read. But the experience of reading comprehensible and interesting texts will result in the ability to read, and will also develop the capacity to read nonsense words quickly. Good readers can easily read the boxed list of nonsense words presented with the story, whether or not they have had extensive skills training.
The correlation between DIBELS scores and subsequent reading-test performance is spurious. Both are the result of the experience of real reading.
Los Angeles, Calif.
To the Editor:
Thanks for your article on DIBELS. There are a couple of issues that bear further treatment.
One is that the whole test can be downloaded by anybody, even a computer-smart kid. So abuses are possible, and, as I hear from teachers, quite common. The stakes are high for all concerned to raise DIBELS scores. And that’s not hard to do with the test accessible on the Internet.
A second concern is the misuse of the statistical terms “validity” and “reliability.” The promoters of DIBELS threw those terms around freely in their quotes, yet the test producers have no data that meet the statistical criteria for using them.
And perhaps the key point is that the very fact that the test is “quick and easy” means that life decisions about millions of kids are being made on the basis of inadequate, minimal information about performance on bits and pieces of nonreading tasks. Add to that the lack of consistency in how the tests are scored: everything happens fast, so how benevolent the tester is greatly affects the scores kids achieve. The tester scores on the fly, during the minute each subtest takes, while also having to watch a stopwatch. So the test lacks “inter-rater reliability,” one preferred use of the term “reliability.”
DIBELS is so flawed and weak a test that, without the coercion applied by the No Child Left Behind Act’s enforcers in Washington, it would never pass review by competent reviewers, at any level, for the uses being made of it.
Language, Reading, and Culture
University of Arizona
A version of this article appeared in the October 12, 2005 edition of Education Week as Reading Experts Question Efficacy of DIBELS Test