Education Opinion

NAEP Is for Assessment, Not Accountability

By Arlene Zielke — November 07, 1990 5 min read

Thomas C. Boysen’s thoughtful diagnosis of the ills of our current “patchwork” of student testing is unfortunately weakened by his recommended treatment: the expansion of NAEP (Commentary, Oct. 10, 1990). His logic is akin to spraying water on a chemical fire; the mix is wrong, and will fuel, rather than extinguish, an already deteriorating problem.

Mr. Boysen is right when he supports the need for an assessment program that tests students on what they are taught; decries the use of “inferior indicators” as a gauge of educational effectiveness; warns about the lack of state and federal regulations relating to standardized tests; and longs for “clear” achievement measures and standards that allow us to “give good feedback to students and teachers.”

But he attributes ecumenical properties to a very temporal instrument. Expanding the National Assessment of Educational Progress will only exacerbate, not relieve, the problem of “patchwork” testing. Because the current NAEP cannot address all of the above objectives, expanding NAEP will not mean that all of the other tests Mr. Boysen eschews disappear. NAEP will become just another addition to the long list of tests that confuse many parents as well, apparently, as Mr. Boysen.

NAEP is designed to serve a national purpose, not to compare states or school districts, or to serve as a diagnostic tool that can inform us about instructional improvement or individual student progress. As a representative national model, NAEP assesses only a small part of educational progress. It does little to inform us about the “quality” indicators of instruction, such as teacher competency, parental involvement, instructional leadership, adequate resources, and community support. Nor does it inform us about student creativity, higher-level thinking skills, or student thought processes.

Furthermore, the test does not give any insights to a state that has received a low NAEP ranking. A high-ranking state, on the other hand, might be deluded into complacency because of its NAEP ranking. The focus of the test is always on the right answer, rather than on how a student derived an answer. How can this kind of instrument possibly “elevate what goes on in the classroom”?

Superintendent Boysen seems not to be fazed by curriculum driven by testmakers. But the National PTA is. While local decisionmaking about the curriculum is frequently cumbersome and inefficient, we also know that the top-down effect of a national test with high-stakes repercussions for both students and teachers is debilitating. At a time when school-site management, teacher and parent empowerment, and decentralization are being recognized as essential to the next phase of school improvement, expanding NAEP establishes a paradigm diametrically opposed to local decisionmaking. How can a single test adequately take into consideration the various needs, goals, income levels, and other factors relating to more than 120,000 school buildings in more than 14,000 school districts?

The National PTA agrees with the American Academy for Education that “when test results become the arbiter of future choices, a subtle shift occurs in which fallible and partial indicators of academic achievement are transformed into the major goals of schooling.” In other words, when the test becomes the curriculum, major decisions about programs and goals will be made by the test-makers, not by the parents and educators. The National PTA is not willing to give over the community’s right to participate in testing policy to a small group of unaccountable psychometricians.

When the current NAEP legislation was passed (P.L. 100-297), it authorized a five-year trial expansion program consisting of a 1990 trial (in 8th-grade mathematics) and another in 1992 (in 4th-grade reading and 4th- and 8th-grade mathematics). The Congress also authorized a five-year evaluation of this activity, presumably so that subsequent decisions about the future direction of NAEP development would be based on systematic, analytical, and valid information, a virtue upon which all testing decisions and policies should be based.

The National PTA, while opposed to a mandated testing program, was generally supportive of an honest and thoughtful pilot process in the interest of improving assessment. However, in a December 1989 action, the 24-member National Assessment Governing Board chose a course that would subvert the intention of the Congress by recommending “to remove the prohibition against the use of NAEP test and data reporting below the state level” before the Congressional statutory ink was even dry.

This N.A.G.B. action constituted a serious breach of the confidence the National PTA placed in the policymaking process of piloting and evaluation en route to determining the most effective course for national assessment.

The experience convinced us that the politics of testing policy, rather than the wisdom of a logical process of test development, will inevitably prevail. We are not willing to permit 24 people, who are unelected and who hold independent, quasi-legislative powers accountable neither to the U.S. Secretary of Education nor to 40 million elementary and secondary public-school students, to become the national school board of the country.

The National PTA is not opposed to a fair system of accountability. But Mr. Boysen makes a number of quantum intellectual leaps in his Commentary that our organization believes are illegitimate and serve to confuse the policy debate about testing. First, accountability is not synonymous with NAEP; second, expanding NAEP will not drive all of the other tests out of the system; and third, NAEP does not serve as an instrument of school improvement, but as an assessment of only the most basic of student skills.

Sometime in the future, NAEP may be on the way to providing more appropriate testing that will effectively reduce reliance on multiple-choice questions and standardization, decrease preoccupation with who is on the bottom and who is on the top, and provide more meaningful and authentic test results. Portfolios, writing samples, and more multi-faceted testing instruments clearly move in that direction. What we don’t need, however, is a system that provides the illusion or pretense of accountability.

Finally, if there are school districts that use a “patchwork” of unrelated and meaningless tests and are frustrated by this predicament, they should be reminded that someone made the decision to create that problem. Commercial tests don’t just appear in school buildings and classrooms; they are purchased by school boards and administrators, who are also responsible for developing responsible testing policy.

The National PTA is not unsympathetic to the problems encountered in the testing maze, but the focus should be on the kind of school improvement Mr. Boysen ultimately advocates. And NAEP won’t get us there.

A version of this article appeared in the November 07, 1990 edition of Education Week