On Making NAEP a National 'Blueprint' For Education Policy
The National Assessment of Educational Progress is rapidly becoming a national blueprint for education policy. This change is being made intentionally and with considerable skill. When it is complete, NAEP is likely to have a substantial role not only in measuring education, but also in determining its direction. Handled properly, the transformation of NAEP has the potential to make the assessment the most powerful and beneficial policy tool in American education.
A Congressionally mandated research project, NAEP has been collecting data on the academic performance of young Americans in reading, writing, mathematics, and other subjects since 1969. It is the largest single source of testing data in the field of education, and the only regularly conducted national survey of student performance in grades K-12. The program is funded by the U.S. Education Department and is currently conducted under contract to the Educational Testing Service.
Originally, NAEP was designed to supply national benchmarks of student achievement, without delving into the politically sensitive question of how well youngsters learned in one state compared with another. Because of fears that the assessment would lead to a nationally mandated curriculum, NAEP was also designed to have a minimal effect on the content of academic courses.
In the last few years, however, as school reformers have called for greater accountability in education, there has been a desire at both the federal and state levels to develop measures that could provide state-by-state comparisons of student achievement. Such measures would probably influence the content of the curriculum.
NAEP is considered a good vehicle for developing these new measures. A blue-ribbon task force of the Education Department is currently studying ways to strengthen the national assessment. Already, a number of Southern states are field-testing a program that uses NAEP to provide national, regional, and state-by-state comparisons of student performance.
This is not to say that NAEP has not been a beneficial program to date. By providing consistent longitudinal data on what students know and can do, it has enabled us to track trends in academic performance. The research benefits of this information are not inconsequential. Research, however, has had only indirect effects on educational policy. As a direct policy tool, NAEP has been sadly lacking.
This is because NAEP was developed as an "inert" measure--one that is designed neither to change nor to be changed by that which it measures. In the case of NAEP, this inert quality was originally essential. In the political climate in which NAEP was created, a more "reactive" measure--one that has the potential to change or be changed by local or state educational practices--would not have survived.
Three key factors differentiate reactive from inert assessments: information, consequences, and the ability to change. In other words, for an assessment to be reactive, those affected must know what is being assessed, must care about the assessment, and must be able to change their performance on the assessment. For an assessment to remain inert, one or more of these conditions must be absent.
Until the last few years, the students and teachers selected for NAEP assessments were given very little information on what would be assessed. After the assessment, participants were not given information that specifically identified their deficiencies. In fact, the very concept of deficiency was de-emphasized.
Nor were there any consequences for students, teachers, schools, school systems, or states based on their performance on NAEP. Any change that might have been prompted by the test results was diffused by the lack of any meaningful connection between the results and the individuals or organizations that produced them.
Finally, the participants could not easily improve their performance on the basis of their NAEP results. The relatively global test objectives, the item-sampling procedures, and the lack of specific feedback on instructionally sensitive components made it extremely difficult for schools to intentionally improve their scores. In addition, the emphasis on sociological variables (race, sex, TV watching) directed attention away from the use of NAEP in the evaluation of educational policy.
But an examination of recent changes in NAEP policy and procedures indicates that the new NAEP is well on its way to becoming a reactive assessment program. There has been a massive effort to see that those assessed (schools, school systems, and states) know about NAEP. Detailed information on the objectives assessed and the results obtained is being packaged for maximum public consumption. NAEP results are now written clearly and directly and are being provided to more policymakers more rapidly than ever before.
There have also been efforts to make the educational establishment care more about NAEP results. The assessment is moving toward a greater focus on school organization (testing by grade as well as age) and on curriculum content (testing U.S. history rather than social studies). Pilot efforts now under way that enable states to use NAEP to compare their students' performance with that of schoolchildren nationally and in other states significantly increase the degree to which those assessed care about the results.
The new practice of providing information on variables that are under the control of the educational organization has enhanced the possibility that NAEP will be a catalyst for change. School and teacher questionnaires have been added to those previously used to collect background information on individual students. The shift from a focus on the number of students who responded correctly to an individual item to a scaled index of performance has made analysis of the relationship between performance and other variables easier and more accurate.
These changes are likely to result in a NAEP assessment program that will have a direct impact on education in the United States. NAEP is no longer simply measuring educational progress; it is now attempting to improve it.
Of course, there have been some negative reactions to the new NAEP. Some believe that assessment programs are inappropriate vehicles for educational reform. They argue that assessment should follow, not lead, the curriculum. They fear that allowing NAEP to be used in this way will establish a de facto national curriculum that may be different from that espoused by a particular state or school system. Even if the assessed content turns out to match the curriculum already in place, the very existence of a national test might usurp the prerogative of individual states and school systems to determine their own curricula.
A second concern is that as NAEP changes education, it will itself be changed. Educators wishing to improve their students' performance on the test will want a voice in determining its content. The choice of content will no longer be the prerogative of independent groups of educators. The use of the assessment's results could pass out of NAEP's control. NAEP might also be unable to maintain a role as an innovator in the technology of assessment. There could be great pressure on NAEP to do what is safe, well understood, and traditional.
While these are clear dangers, the process by which NAEP develops the objectives for its assessments--through the consensus judgment of learning-area committees--is specifically designed to prevent the control of NAEP by special-interest groups. The assessment's broad-based approach to item sampling is actually less likely to be restrictive of the curriculum than either current textbooks or the commercially available achievement tests.
In fact, the new NAEP assessment strategies could have a powerful influence in improving student learning without some of the dangers inherent in a "fixed form" testing program. NAEP would not necessarily need to identify individual students or expose successive groups taught by the same teacher to the same test items (a sure way to spuriously raise scores). Thus, it could provide policymakers with far more accurate information on student performance than is now available.
Moreover, the ability to connect NAEP results with specific states, school systems, and educational settings would permit analysis of the costs and benefits of various policy alternatives. This capability could prove to be the most powerful education-policy tool yet available.
Properly handled, then, NAEP has the potential not only to assess educational progress, but also to assist it. Yes, a reactive assessment, like most powerful tools, carries the potential for great harm as well as great benefit. But the needs of education today require extraordinary efforts to identify and implement effective policies. The challenge is to recognize and to develop the positive potential of the National Assessment of Educational Progress as a national blueprint for education policy.
Vol. 6, Issue 9, Page 22. Published in Print: November 5, 1986, as On Making NAEP a National 'Blueprint' For Education Policy.