Proposal To Adjust NAEP Scores for Diversity Mulled

Federal officials are exploring ways to adjust scores from the "nation's report card'' to reflect differences among states in school resources and in the ethnic, economic, and other characteristics of their student populations.

Officials considering the idea say it would make state-by-state comparisons on National Assessment of Educational Progress tests fairer to some states. But critics charge that the changes being discussed would implicitly concede that poor children cannot be expected to do as well in school as their more affluent peers.

If officials from the National Center for Education Statistics decide that the methods currently being studied could provide valuable new insights into how schools are doing, the government could begin reporting how a socioeconomically diverse state such as California might fare on the assessment if its unique demographic characteristics were taken into account.

Adjusting the scores has both advantages and disadvantages, federal statisticians believe.

"The major pro is it gives you an opportunity to look at performance differences with statistically adjusted populations,'' said Gary W. Phillips, the director of the N.C.E.S.

States that have blamed their students' low performance on socioeconomic factors would have to look for other explanations for continued poor performance if those factors were taken into account, Mr. Phillips suggested. On the other hand, other low-scoring states with large disadvantaged populations might see their students' scores rise under such adjustments.

"The major con,'' Mr. Phillips added, "is these are not like real populations.''

But others, including a number of state testing directors, see more fundamental problems with the idea, arguing that it suggests that states are not all being held to the same high educational standards.

"It's probably the worst idea I've encountered in 10 years of closely watching NAEP,'' said Chester E. Finn Jr., a member of the National Assessment Governing Board, which was scheduled to hear a report late last week on the proposal. "Once you start fiddling with the numbers you can ... show anything you like, and then you begin to lose public confidence.''

Leveling the Field

The Congressionally mandated NAEP program has tested national samples of students in certain academic subjects for 20 years. The program became more prominent, however, after it was expanded in 1988 to permit state-by-state comparisons of student achievement.

Some states complained that the results cast them in an unfair light. They pointed out that the program simply scores and ranks states without considering differences among them that could influence those results.

California's 4th, 8th, and 12th graders, for example, scored at the bottom among states on the latest NAEP reading assessment. But, at 22 percent, the proportion of disadvantaged California children taking the assessment was almost 2-1/2 times that of the nation as a whole.

Moreover, 21 percent of the state's schoolchildren are not native speakers of English.

"This is an important issue, and it can't be washed away by saying the only thing the statistical agency should do is report results for the overall population,'' said Commissioner of Education Statistics Emerson J. Elliott.

The assessment's added prominence and high cost have also prompted the N.C.E.S. to look for ways to squeeze more information out of the test. If, for example, the test showed that students in states that emphasized a particular instructional approach fared better on NAEP, then the assessment program would have provided valuable information for all educators.

Federal statisticians, working with the Educational Testing Service, which administers the assessment, have thus far explored a number of ways to account for some of those other variables.

One method, for example, calls for analyzing how a state might fare on the assessment if its population mirrored that of the nation as a whole. Another way would be to look at the state's scores if the nation's population had the same demographic features as the state.
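The two methods described are, in effect, forms of demographic reweighting, or standardization. The sketch below is a hypothetical illustration of how such a reweighting works, with made-up subgroup labels, mean scores, and population shares; none of the numbers are actual NAEP data.

```python
# Hypothetical sketch of the two standardization approaches described above.
# Subgroup labels, scores, and proportions are illustrative, not NAEP data.

def reweighted_mean(subgroup_scores, target_proportions):
    """Average subgroup mean scores using a target population's demographic mix."""
    assert abs(sum(target_proportions.values()) - 1.0) < 1e-9
    return sum(subgroup_scores[g] * target_proportions[g]
               for g in subgroup_scores)

# Illustrative mean scores by demographic subgroup within one state.
state_scores = {"group_a": 210.0, "group_b": 190.0}

# Demographic mix of the state vs. the nation (shares sum to 1).
state_mix = {"group_a": 0.40, "group_b": 0.60}
national_mix = {"group_a": 0.70, "group_b": 0.30}

# Method 1: how the state might fare if its population mirrored the nation's.
state_as_nation = reweighted_mean(state_scores, national_mix)

# Method 2: national subgroup scores reweighted to the state's demographic mix
# (placeholder national values).
national_scores = {"group_a": 215.0, "group_b": 195.0}
nation_as_state = reweighted_mean(national_scores, state_mix)

print(state_as_nation)  # 0.7 * 210 + 0.3 * 190 = 204.0
print(nation_as_state)  # 0.4 * 215 + 0.6 * 195 = 203.0
```

A state whose low-scoring subgroups are overrepresented relative to the nation would see its reweighted average rise under the first method, which is the effect critics of the proposal object to.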

Factors Schools Cannot Affect

Mr. Elliott said the factors department officials are considering taking into account for each state include: students' gender, parents' educational attainment, minority enrollment, the number of parents in the home, the size of the special-education population, and the number of students enrolled in the federal subsidized-lunch program.

"These are largely factors schools can't do anything about,'' Mr. Elliott said.

The factors reflecting school resources--or students' "opportunity to learn''--might take into account state per-pupil spending or class sizes.

N.C.E.S. officials have already presented some of their ideas at meetings this year with a group of state testing directors and with a statistical advisory panel of educators, researchers, and statisticians.

The preliminary analyses show that California's ranking would not have improved greatly if socioeconomic factors were considered. But the District of Columbia would see scores rise, in part because its relative handful of white children tend to be among the nation's highest-scoring students.

Officials of the education-statistics agency said last week that the results of the analyses conducted so far were not yet ready for public release.

Separate Report Eyed

The testing directors, representing some 35 states, were overwhelmingly against adjusting scores as part of a regular NAEP report. But they did not oppose disseminating such results as part of a separate research effort, and federal officials said that would be the more likely course they would take.

All those examining the idea agree that, either way, absolute scores would have to be maintained in some form in order to track educational progress over time.

A few states have experimented with the idea in their own assessment programs by using "predicted scores'' based on demographic variables for schools and districts.

However, such efforts have often led to differing expectations for different groups of students.

"Rather than saying we have standards we want all students to meet, these kinds of efforts literally have given us the idea that poor children cannot learn,'' said Edward Roeber, the director of state-assessment programs for the Council of Chief State School Officers. "The implied message is, 'You have lots of poor children, so your scores should be lower.'''

Thomas Fisher, the Florida testing director, said adjusting scores to account for opportunity-to-learn factors would also be difficult because there are no agreed-upon indicators of school quality.

Even if there were, the information that comes from adjusted scores would be of little practical value, Mr. Fisher said.

"My cholesterol count is 203, and that's the low end of normal,'' he said. "But suppose my doctor told me it was 350, but, 'I've adjusted that to take into account the fact that you're overweight and overaged, so you're as healthy as an 18-year-old.'''

"That makes no sense,'' he said.

Vol. 13, Issue 24
