What Are We Doing To Our Schools?
‘Accountability’ may be a politically correct slogan, but it's a flawed reform strategy.
Accountability has become the politically correct slogan for educational reform. It would be risky for any candidate (either for a superintendency or the presidency) to question the wisdom of proposals to distribute rewards and sanctions to school systems based on students' test scores. Yet these campaign proposals would have significant negative consequences.
First, the average test score of a state or school district is more closely linked to poverty than to anything else: Typically, the higher the proportion of low-income children, the lower the test score. Moreover, states with high poverty rates, on average, spend substantially less on education than do wealthier states. An accountability system based on test scores, therefore, risks taking resources from poor jurisdictions and giving them to the rich. If that were to happen, lower-income children would be the losers.
Moreover, the rewards and sanctions would be based on flawed measures of performance. Standardized-test scores tell us little about the strengths or weaknesses of schools. In addition to poverty, the scores tell us mostly about which students take or are "excused" from taking the tests, test familiarity, and cramming for the test. For example, school districts that place special education or language-minority students in separate programs, and therefore exclude them from the tests, artificially raise their average scores in comparison with jurisdictions that mainstream these students. Further, a district with a high dropout rate will have inflated test scores because only the higher-achieving students remain in school to take the test. Under those circumstances, the district simply is not serving lower-achieving students, yet its scores give the impression that it is a superior district. In contrast, a district that retains a high proportion of students in school is at a disadvantage in the test comparisons.
The proposals also would establish counterproductive incentives by placing states and school districts under pressure to give higher priority to raising test scores than to the best interests of students. Districts would have a strong incentive to exclude potentially low-achieving students from taking the test by assigning them to special programs. Moreover, extensive evidence shows high retention and dropout rates in the grade immediately preceding the test-administration year, a fact that artificially inflates test scores. In a recent report, for example, Marguerite Clarke and colleagues at Boston University present data from Texas' highly publicized testing program suggesting that many students are being retained in 9th grade, the grade before the Texas Assessment of Academic Skills is administered. Thus, anticipated test results, as well as the results themselves, appear to work together to increase grade retention and decrease high school graduation rates.
These problems would occur regardless of the sophistication or uniformity of the tests used to measure performance. For example, the National Assessment of Educational Progress tests may give the illusion of objectivity, but no sampling design can assure representativeness when jurisdictions have dramatically different rates of student dropouts, grade retention, assignment to special programs, and exclusions from the test. Indeed, the objectivity of NAEP is already at risk as states increasingly view the test as a potential source of pride or embarrassment and, therefore, have incentives to teach to the test or exclude low-achieving students from taking it.
If, instead, each jurisdiction were to choose its own test, the problems would be compounded. A school district's apparent success or failure would be idiosyncratic, depending on the difficulty of the test, its familiarity to teachers and students, and the criterion of success. If, for example, the criterion were test-score gains, the incentive would be to give a new test, so that initial scores would be low and subsequent gains would be high as the test becomes more familiar—a common effect when an incoming superintendent institutes a new test. We would then read about how individual schools had made, apparently overnight, miraculous gains in test scores.
Perhaps most disturbing, the emphasis on test-based accountability would seriously weaken academic standards in public schools, because it would turn the education program into a "cram course" designed to raise test scores. It also would ensure that incidents of cheating continued and perhaps increased. Private schools apparently have already recognized the deleterious effects of high-stakes testing programs; they do not participate in them.
There is another risk: If the focus on standardized testing, and the rewards and sanctions associated with it, has adverse effects on the teaching environment, it will become increasingly difficult to attract and retain highly qualified teachers and principals. Recent news reports suggest that the problem is already serious. Shortages can be expected to increase, particularly in low-income schools, where both current shortages and testing pressures are most severe.
The bottom line is that the quality of a school's education program cannot be accurately measured by a standardized test, let alone compared with that of another school or jurisdiction. The federal accountability proposals would almost certainly lead to misleading results and counterproductive practices.
Iris C. Rotberg is a research professor of education policy at George Washington University in Washington. She formerly was a program director at the National Science Foundation and a senior social scientist at the RAND Corp.
Vol. 20, Issue 9, Pages 44, 46