Fewer Than Half Of States Join NAEP 'Pre-Test'
Washington--Despite what officials here insist is an "extraordinary" growth in support for state-by-state comparisons of student achievement, only about 20 states have agreed to participate in a field test of the first assessment to allow such comparisons.
States had until late last week to decide whether to take part in the field test, which is scheduled to be conducted early next year by the National Assessment of Educational Progress.
Although the participation rate is lower than expected, the number is "quite enough for an adequate pre-test," according to Emerson Elliott, commissioner of the National Center for Education Statistics.
In addition, he and others said, many more states--perhaps as many as 40--are expected to participate in the actual assessment, which will take place in 1990.
That participation rate will demonstrate, observers said last week, the "dramatic change in attitude" that has occurred over the past decade on the issue of state comparisons.
As recently as five years ago, they noted, many educators, fearful of unfair or embarrassing comparisons, considered the idea of comparing achievement across states "heretical."
But since that time, support for the concept has grown as policymakers have turned their attention to educational outcomes, rather than "inputs," noted former Gov. Lamar Alexander of Tennessee, a leading proponent of state-by-state comparisons.
This growth in support, Mr. Alexander said, "is as good a symbol as I can think of of the change in the national education agenda."
"This came about," he said, "because the nation began asking the question, 'How are we doing?"'
"You can't tell how we're doing by measuring inputs," Mr. Alexander said. "You have to measure how we come out."
Mr. Alexander and others noted that smaller-scale pilot tests of state-level assessments have convinced educators that such comparisons can be conducted effectively and fairly. The largest such experiment, run by the Southern Regional Education Board, was considered a success by most participants. (See story on this page.)
But despite such enthusiasm, educators in a few states question whether the project will be worth the time and money it will take. The assessment, they argue, will add to students' testing burden without providing the kind of school- and district-level data that could lead to improvements in achievement.
Mr. Alexander acknowledged that state-level comparisons do not go far enough, and said that district-by-district comparisons are the next logical step. "State-by-state comparisons will, in reality, have limited usefulness," he said. "Most problems are solved community by community."
Nevertheless, he said, state comparisons are "a step in the right direction."
Law Authorized Expansion
Created by the Congress in 1969, NAEP is a federally funded assessment that tests about 100,000 students every two years in reading, writing, mathematics, and other subjects. The assessment is currently conducted by the Educational Testing Service under contract to the Education Department.
The Hawkins-Stafford Education Improvement Act of 1988 (P.L. 100-297), which reauthorized the assessment, also authorized NAEP to conduct a trial 8th-grade math assessment in 1990 "with the purpose of determining whether such an assessment yields valid, reliable, state representative data."
Under the law--which was based in large part on the recommendation of a blue-ribbon task force appointed by Secretary of Education William J. Bennett--NAEP will also conduct a field test in 1989 to give states an opportunity to review the data collection, sampling techniques, and test items that will be used in the 1990 assessment.
This "dry run," which will be conducted over a four-week period in February and March, will test a total of 7,000 8th graders, or about 350 in each participating state.
The 1988 law stipulated that participation in the assessment be voluntary, and many of the states elected not to spend the staff time required to administer the field test.
Rather, according to Mr. Elliott, many decided to invest their resources in the 1990 assessment. States have until Dec. 1 to decide whether to take part in that test.
But those that participate in the pilot will have an opportunity to influence the later test, noted Ramsey Selden, director of the state-education assessment center for the Council of Chief State School Officers.
"It seems to be in a state's interest to be involved, to know what the procedures are going to be like," he said.
He added, however, that a field test with some 20 states will be sufficient, as long as it includes a variety of states.
"You've got to have enough diversity to try the thing out under different conditions," Mr. Selden said.
No 'Political Vacuum'
In contrast to the low participation in the field test, the 1990 assessment is expected to draw a large turnout.
"In many states, the state board, the legislature, and the governor don't want to appear to be against it," said Edward D. Roeber, supervisor of the Michigan Educational Assessment Program. "They are fearful that a report will come out in 1991 and they will not be on it."
"In a political vacuum, states might say no," said Mr. Roeber, who is chairman of the association of state-assessment programs. "But they're not in a political vacuum."
Despite past concerns that such comparisons could embarrass states that perform relatively poorly, added Thomas Fisher, director of assessment for the Florida Department of Education, most state officials now consider the process beneficial.
"It's a win-win situation," he said. "If students score high, you can pat yourself on the back and say you're doing a great job. If they score low, you've discovered an area you need to work on, that needs improvement."
Weighing the Benefits
Officials in Louisiana, for example, said they intend to participate even though the state traditionally ranks near the bottom on measures of educational achievement.
"We understood the impact poverty and high minority percentage would have on test results," said Clarence E. Ledoux, director of the bureau of accountability in the Louisiana Department of Education. "We know going in that we won't rank very high. But we still want to know" how Louisiana's students performed.
States that have decided not to participate--such as Alaska--contend that the information to be gleaned will not be worth the cost and testing time the assessment will require.
The Education Department estimates that states will have to pay $140,000 in outlays or staff time for the assessment.
According to Robert J. Silverman, Alaska's director of assessment, the legislative provision prohibiting NAEP from reporting achievement data on a student, school, or district level "seriously compromises its usefulness."
"If all we get from the assessment is one number," he said, "I don't think our legislature will authorize funds for it. Funds are real tight; we have to target scarce resources."
But Gordon M. Ambach, executive director of the CCSSO, predicts that these states eventually will find the money, and a way to fit the assessment into their testing programs.
"There won't be many more testing cycles before we see 100 percent of the states volunteering to participate in the program," he said.
Paying the Bills
Such an outcome, he and others acknowledge, would have been unthinkable a decade ago. In fact, as officials noted last week, NAEP's original authorizing legislation specifically prohibited the reporting of state data.
"The chiefs and state-agency people feared that simple-minded, gross comparisons would be destructive, not helpful," Mr. Selden said.
This point of view began to change in the early 1980's, as governors and state legislators--spurred in part by the growing involvement of business leaders in education--demanded data on educational outcomes as a condition for increasing school aid, noted Michael W. Kirst, professor of education at Stanford University.
These policymakers considered educators' resistance to state comparisons "self-serving," said former Governor Alexander.
"Here is a profession that has historically graded children from A to F every six weeks," he said. "Yet when you try to look at the results of schools, and praise those that do well and help those who do not, you get a lot of resistance."
"People who pay the bills want to know the results," Mr. Alexander said. "I think they are entitled to know."
In addition to the pressure from lawmakers, educators were also spurred by former Secretary of Education Terrel H. Bell, who in 1984 began to issue the annual state-by-state "wall chart" of education indicators, which included as a measure of student achievement states' performance on the two major college-admissions tests.
State officials objected to these comparisons as inappropriate. But the wall chart--and the media attention it attracted--made it clear to state officials that "the public and the media were probably going to find some way of comparing states," according to Mr. Ambach.
"The American penchant for finding numbers is tremendously strong," he said. "The question is, if that's the case, we'd better find the best numbers we have, or develop good ones."
In response to such demands, support for expanding NAEP to allow state-by-state comparisons grew steadily, Mr. Ambach added.
By 1987, the idea was "a foregone conclusion," according to Mr. Kirst of Stanford, a member of Secretary Bennett's blue-ribbon task force.
That panel's report, and the legislation that resulted from it, convinced recalcitrant testing officials that state comparisons were "inevitable," according to Mr. Elliott of the Education Department.
"That was a marked change among testing people," he said. "And the number of states signing up for the assessment is another reflection of this very remarkable change."