Participation Rates in State-Level NAEP Eyed
A report released this month by a panel of leading scholars raises a knotty question for the National Assessment of Educational Progress's state-level assessment program: To what degree should schools initially targeted to take part in the assessment be required to participate?
The panel, which was made up of 18 members of the National Academy of Education, concluded that in some states too few of the targeted schools are taking part--a situation that could skew the results of the assessment. The report recommends raising the requirements for initial participation in the program.
But federal testing officials, while agreeing that the researchers had flagged a potential problem, said setting those requirements too high could discourage states from joining.
"It's a tradeoff,'' said Gary W. Phillips, the associate commissioner for the education-assessment division of the National Center for Education Statistics, which oversees NAEP. "Do you want to get better information from fewer states?''
'Questions and Doubts'
The NAEP program has for 20 years tested national samples of students. It was expanded in 1988 to permit, on a trial basis, state-by-state comparisons of student achievement.
Those assessments were conducted in 1990 in mathematics and in 1992 in 4th-grade reading and 4th- and 8th-grade math. This year, 41 states plan to participate in the program.
The academy's report concludes that the state-level assessments have essentially worked and should be conducted yearly. (See Education Week, Jan. 12, 1994.)
But it also signals a potential problem with the rates at which schools in each state participate in the program.
According to NAEP policy, statisticians target schools for the assessment on the basis of demographic factors. But school officials have the right to decline, frequently out of concern over excessive testing of their students. When first-round schools decline to participate, schools with similar characteristics may be substituted.
States are asked to recruit at least 70 percent of the targeted schools for the assessment.
But under a change in policy made last year, results for states that fail to reach that goal will now be reported separately from those of other states.
In its report, the academy panel notes that participation rates have become increasingly varied among states and that there were more substitutions in 1992 than in 1990.
Moreover, states with low initial-participation rates also had higher scores on the assessments, suggesting that those results may be inflated.
"There's nothing we had that's conclusive,'' said Robert Linn, an education professor at the University of Colorado at Boulder and a co-author of the study. "It's just that it raises questions and doubts.''
To erase any doubts, the report urges raising the minimum-participation requirement to 85 percent.
Some state-testing directors last week said they would support that change.
"I would go even higher,'' said William Brown, the director of testing for the North Carolina education department, where 95 percent or more of targeted schools participate. "If you have a sample with only 3,000 kids in the whole state and if only 70 or 75 percent of schools participate, your scores could be affected by that.''
Too High a Hurdle?
But officials of the N.C.E.S. and the National Assessment Governing Board, which sets NAEP policy, note that other states would have trouble meeting a higher hurdle.
In the 1992 state-level assessment for 8th-grade mathematics, for example, 15 states had initial-participation rates of 85 percent or lower. The lowest was Maine, where 62 percent of targeted schools took part.
This year, two large states--Illinois and Ohio--have already withdrawn after failing to meet the 70 percent goal.
Further complicating matters, this year the assessments will for the first time include private schools, which have had an even harder time meeting the 70 percent goal.
A study by Westat Inc., a Rockville, Md., research firm, found no major differences in the demographic characteristics of the schools that participated and those that declined. But analysts say more research is needed to determine whether the substitute schools differed from the decliners and whether state scores were affected.
"The question is, do you want it to be statistically pure or do you want to get a report out,'' said Lawrence Feinberg, the governing board's assistant director.
Vol. 13, Issue 17