National Standards and NAEP

By Walt Gardner — March 31, 2010

When a panel of educators released a set of proposed national standards on March 9 to replace the crazy quilt of locally written standards, the blueprint was rightly hailed as a long overdue step to ensure that students are prepared for college or a career. But before uncorking the champagne, reformers need to follow through with a plan that provides appropriate evidence about student learning.

What comes to mind immediately is the use of the National Assessment of Educational Progress. Often referred to as the nation’s report card, NAEP would seem to be the ideal instrument for this task. After all, shouldn’t national standards call for national assessment? But this is one instance in which you can’t judge a (test) book by its cover.

To understand the reasons, it’s necessary to rewind the tape. When U.S. Commissioner of Education Francis Keppel proposed the creation of NAEP in the early 1960s, he merely wanted to provide a general picture of the country’s educational health. (The first assessment was administered in 1969.) His intent was strictly descriptive.

But things got complicated in 1988, when Congress amended the law to allow state-by-state comparisons and created the National Assessment Governing Board to decide what students of a stipulated age should know. In one fell swoop, the law became both prescriptive and descriptive.

However, because NAEP covers a broad range of knowledge and skills rather than focusing on any specific curriculum, it’s impossible to know with any certainty whether the test measures what students have been taught in class through effective instruction or what they have brought to class from their backgrounds.

This distinction is crucial if valid inferences are to be drawn about teacher effectiveness regarding the new national standards. In “Instruction That Measures Up” (ASCD, 2009), W. James Popham, the country’s foremost authority on assessment, has written about the need for instructionally sensitive tests.

Popham writes that a test must “accurately reflect the quality of instruction specifically provided to ... students for the purpose of ... mastering the content being assessed.” In this regard, it’s altogether possible that NAEP scores will not confirm scores on whatever national test is finally designed and adopted.

That outcome will no doubt further confuse taxpayers who are frustrated over the slow pace of school progress. Yet it shouldn’t, if they remember that not all tests are created equal. Each test has to be designed and administered with the same care that pharmaceutical companies bring to manufacturing drugs and that doctors bring to prescribing them.

The most recent reminder of this caveat was the release of NAEP reading scores (“Stagnant National Reading Scores Lag Behind Math,” The New York Times, March 24, 2010). The results showed little or no progress in reading proficiency, continuing a 17-year trend. One explanation is that Reading First, the $1-billion-a-year reading initiative, helped students read words rather than helping them comprehend what they read. But NAEP was designed to measure comprehension.

That’s one good reason why taxpayers need to become more knowledgeable about interpreting test results.

The opinions expressed in Walt Gardner’s Reality Check are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.