How Semantic Magic Creates 100 Million More Standardized Tests Than Actually Are Given
Are U.S. students overtested? Just how much standardized student testing is there? How much of the average student's career is spent in activities related to standardized testing? How much of the average teacher's school year is spent in preparing for or monitoring the administration of standardized student tests?
To people outside the field, these questions may seem tedious and the topic mundane. Within the field of education, however, this anxiety-producing subject spawns tense arguments. If one believes that test-taking and test-preparation time have no intrinsic value, as some standardized-testing critics do, it matters a great deal how much time testing and test preparation take up. The more activities one can categorize as "test preparation" and the more time spent in test-related activities one can count, the greater the waste of instructional time can be claimed in criticizing standardized testing. Some strong language has been employed to convince us that our students are overtested.
The number of standardized student tests and the amount of student time spent taking them each year should be pretty dry facts, easy to come by. One could, after all, spend less than a day and make a few telephone calls to test-development companies and the offices of the Council of Chief State School Officers and get a rough estimate.
The test-development companies themselves made an estimate in 1990 of between 30 million and 40 million standardized student tests administered per year, each one encompassing about three to four hours of student time. With about 40 million elementary and secondary school students in the United States, their estimate translated to about one test per student per year, averaging less than one day's time per student per year. Other surveys done in the early and mid-1980s by the Northwest Regional Labs and the precursor of the Center for Research on Evaluation, Standards, and Student Testing showed about the same level of standardized school testing then.
During the flurry of activity over national testing proposals in the early 1990s, Congress asked the U.S. General Accounting Office to make estimates. The GAO surveyed all states and a national sample of 500 school districts. The answer: about 36 million tests given districtwide, or 3.4 hours per student annually on average. Only one-fourth of these tests were given for high (student) stakes. When all other standardized academic tests not given districtwide were factored in, such as college-entrance exams, Chapter 1 tests, state advanced subject-area exams (in New York and California), and so on, the estimate grew to around 42 million tests and four hours of time annually per student (fair disclosure: I was privileged to work on that project). Again, given a number of elementary and secondary school students in the United States of about the same magnitude, that total number of tests translated to an average of one test per student per year.
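The per-student averages cited from the GAO survey can be checked with simple division. As a minimal sketch, the enrollment figure of roughly 41 million is an assumption consistent with the "about 40 million" students mentioned above; the test counts are the GAO's own:

```python
# Back-of-the-envelope check of the GAO figures cited above.
# Enrollment (~41 million) is an assumed round number consistent with
# the article's "about 40 million"; the test counts are from the GAO survey.

students = 41_000_000            # approx. U.S. K-12 enrollment, early 1990s
districtwide_tests = 36_000_000  # GAO: tests given districtwide
all_tests = 42_000_000           # adding college-entrance, Chapter 1, state exams

tests_per_student = all_tests / students
print(f"{tests_per_student:.2f} tests per student per year")
```

However one rounds the enrollment figure, the quotient comes out at roughly one test per student per year, matching the GAO's conclusion.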
The GAO study also attempted to estimate the degree to which state mandates added to, rather than replaced, pre-existing district-level tests. A systematic sample of school districts revealed that half of them dropped pre-existing district tests when their states mandated new tests. What at first glance seemed like additional testing was really replacement testing. Indeed, in some cases, new state tests administered at a few grade levels replaced old district tests that had been administered at more grade levels.
There is one study done by steadfast critics of standardized testing, however, that stands apart, claiming that there are far, far more standardized tests administered every year than the other studies imply. This study declares that U.S. students "are subjected to too much standardized testing" and that standardized testing "devours" teaching time and "looms ominously" in students' lives. This critical study also stands out for invoking the use of some rather unusual arithmetic to count standardized tests.
I have picked out one of the more concise passages to illustrate how the authors of the study count tests. This particular passage refers only to college-entrance exams, those administered by the College Entrance Examination Board--the SAT--and the American College Testing program--the ACT:
"[W]e contacted the College Board and ACT directly and were informed that 1,980,000 SATs and 1,000,000 ACTs were given in 1986-87. We thus have relatively firm figures on the number of such college-admissions tests given. But there are several ways of counting the number of separately scorable subtests in these testing programs. The SAT has two subtests, the SAT-Verbal and the SAT-Math. Moreover, two subscores are reported for the SAT-Verbal, namely reading and vocabulary. Also, almost all students who take the SAT take the Test of Standard Written English, the TSWE. Thus, in calculating the number of separately scorable tests involved in SAT testing, we have offered the alternative perspectives of two subtests (the SAT and TSWE) as a basis for a low estimate, and five (Math, Verbal, Reading, Vocabulary, Verbal Total, and TSWE) as a basis for a high estimate. Similarly, the ACT assessment has four subtests, but since a composite score is also calculated, we have used four and five as bases for high and low estimates. The results ... indicate that between nearly 4 million and 10 million SAT subtests and 4 million to 5 million ACT subtests are administered annually. ... Altogether, then, we estimate that in 1986-87, 13 million to 22 million college-admissions 'tests' were administered."
The quote goes on to sum the total of all standardized student tests, not just college-entrance exams, but the quotation marks around the word "tests" disappear and, et voilà, all parts of tests become whole tests:
"In sum then ... we estimate that between 143 million and 395 million tests are administered annually to the nation's population of roughly 44 million elementary and secondary school students, equivalent to between three and nine standardized tests for each student enrolled in elementary and secondary schools."
You are forgiven if you found this excerpt confusing. At the beginning of it, a test is called a test. In the middle, the reader is told that tests have parts. Those separate parts are counted up and, in the next paragraph, the parts are called tests. After this semantic magic is complete, the authors feel confident in telling the public that there are from three to nine times as many standardized student tests administered annually as there actually are.
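The study's parts-as-wholes arithmetic can be reproduced in a few lines. The administration counts and the low/high subtest multipliers below are all taken from the quoted passage itself; nothing is assumed beyond them:

```python
# Reproducing the "parts-as-wholes" arithmetic from the quoted passage.
# Administration counts and multipliers are the study's own figures.

sat_given = 1_980_000   # actual SAT administrations, 1986-87
act_given = 1_000_000   # actual ACT administrations, 1986-87

# The study multiplies each administration by its count of scorable parts.
sat_low, sat_high = sat_given * 2, sat_given * 5   # "nearly 4 million and 10 million"
act_low, act_high = act_given * 4, act_given * 5   # "4 million to 5 million"

print(f"SAT 'tests': {sat_low:,} to {sat_high:,}")
print(f"ACT 'tests': {act_low:,} to {act_high:,}")
```

Roughly 3 million actual administrations thus inflate to as many as 15 million "tests" before any other exam is even counted.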
Using this same arithmetic, baseball games are really nine to 10 baseball games, since each inning is a discrete part, with a separate score. Or, maybe baseball games are really 18 to 20 baseball games, since each half-inning is a discrete part, with a separate score.
You can see the semantic difficulties we would have if all parts of things became in language the things of which they are parts. What, then, would one call the wholes to distinguish them from the parts?
Another oddity of this study is the unnecessary use of "estimates"; unnecessary because two telephone calls to the SAT and ACT offices can provide exact counts of the numbers of SATs and ACTs administered in any given year. There is no need for estimation.
Yet another strange aspect of the study is its peculiar interpretation of a "lower-bound estimate." At the beginning of the excerpt, an ACT exam is referred to in the singular, the way most people refer to it, and the total annual number of ACTs administered is declared to be 1 million. After the authors do their parts-as-wholes counting, they end up with an "estimate" for the annual number of ACTs of from 4 million to 5 million. Four million is their "lower-bound estimate" for a number of tests they had just stated, as fact, to be only 1 million.
The manner in which the study's authors count up other standardized student tests besides college-entrance exams is similar, only the estimation process is even mushier. At least with college-entrance tests they start their "estimation" with a known number of tests. With state and local district tests, they use another report's rough guess for how much state testing exists, and three telephone calls to three U.S. school districts form the base of their estimate for the number of district tests. Then, they do their turning-parts-into-wholes routine.
Oh, and I almost forgot. They count state tests twice: once as statewide tests, and then again as districtwide tests.
Are the test bashers being misleading with their parts-as-wholes, double counting, and other creative arithmetic methods? Of course. Is it effective? In certain circles, yes.
Last summer, public television's "Merrow Report" broadcast a show entitled "Testing, Testing, Testing" in which it was asserted to be a fact that U.S. students face from 140 million to 400 million standardized tests each year, which amounts to from three to nine standardized tests per year per student.
Last fall, USA Today ran a front-page story entitled "Are Kids 'Tested to Death'? Educators Ask Whether Effort Is Paying Off." The story claimed that U.S. students face "somewhere between 140 million to 400 million standardized tests each year, the kind that make you fill in the bubble with a No. 2 pencil."
It is more than a bit ironic that the same critics who claim that standardized tests lack validity present research results that lack validity. In another part of the testing critics' study, for example, it was declared that "mandatory testing consumes some 20 million [pupil] school days." This large number was presented as support for their assertion of "too much standardized testing." But, if one simply divides the 20 million school days by the 44 million students taking the tests, one arrives at an average amount of testing time of less than half a day per year for each U.S. student. The authors neglected to perform this calculation.
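The calculation the authors neglected to perform takes one line; both figures in it are the study's own, as quoted above:

```python
# The division the study's authors "neglected to perform".
# Both inputs are the study's own figures, quoted in the article.

pupil_days = 20_000_000   # "mandatory testing consumes some 20 million [pupil] school days"
students = 44_000_000     # the study's own enrollment figure

days_per_student = pupil_days / students
print(f"{days_per_student:.2f} school days of testing per student per year")
```

The quotient, about 0.45, confirms the article's point: the study's own numbers imply less than half a day of mandatory testing per student per year.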
I wrote an article a few years ago that described how test bashers do economic analyses of testing programs. Called "Test Basher Benefit-Cost Analysis," it is still available on the World Wide Web. Some colleagues from a noneducation discipline who read it told me that it reaffirmed their belief that much education research is of poor quality. I disagreed. Test-basher research is not meant to be good research; it's meant to be good propaganda. That, it might be.
Perhaps the worst consequence of the duped "Merrow Report" and USA Today stories is to fortify those who might wish to attack state and local testing directors who are just doing the job that the vast majority of Americans--and USA Today--have told them they want them to do: maintain and enforce common, high standards. After all, common standards beget standardized tests. State and local testing directors are the most excruciatingly fair-minded people I have ever met. They don't deserve to endure the hassles brought their way by jaundiced researchers who prefer obfuscation to clarity, and ideology to accuracy.
Vol. 17, Issue 26, Pages 32, 34. Published in Print: March 11, 1998, as "Test-Basher Arithmetic."