The assessment-policy committee of the National Assessment of Educational Progress last week endorsed an ambitious plan for expanding the nation’s “report card” on student achievement, while cautioning that the proposal could lead to greater federal control over education.
Discussion of that plan, which would include a redesign of NAEP to provide state-by-state data, dominated a conference here of state testing officials.
“The state-by-state comparison engine has left the station,” said W. Ross Brewer, director of planning and policy development for the Vermont Department of Education. “We’re all on it. The question is, where do we go with it and what do we do with it?”
“Generations of students went through school without these comparisons,” Mr. Brewer added.
“If a piece of the next generation goes through without them, that’s all right, if we take the time to do it right.”
The endorsement by the NAEP policy committee--which oversees the work of the assessment’s current contractor, the Educational Testing Service--adds further weight to the redesign proposal, which was the focus of a report this spring by a study group appointed by U.S. Secretary of Education William J. Bennett. (See Education Week, March 25, 1987.)
“All of us believe the report was very constructive,” said Wilmer S. Cody, superintendent of the Montgomery County, Md., schools and chairman of NAEP’s policy panel.
“A lot of the specifics need to be worked out,” he continued. “But the thrust of the recommendations will move the assessment into a new and important era.”
Officials in the U.S. Education Department are drafting legislation that would implement the report’s recommendations, according to Emerson Elliott, director of the department’s center for education statistics.
In addition, he noted, a pilot for the expanded assessment is expected to take place in 1990. The pilot effort will probably test 12th-grade students in mathematics, he said.
The department is soliciting bids for a contractor to plan the pilot, which will be part of a 30-month contract to administer the assessment beginning next year. The federal government’s $20.3-million, five-year contract with the E.T.S. for the administration of the assessment expires next June.
Reservations Expressed
The NAEP policy committee, which on May 30 approved in principle its statement endorsing the study group’s report, agreed with most of the recommendations, but expressed strong reservations over the issues of state comparisons and governance.
Quoting experts convened by the National Academy of Education, whose review of the proposals was published along with the report, the policy committee argued that “simple comparisons are ripe for abuse and are unlikely to inform meaningful school-improvement efforts.”
The committee urged the federal government to move ahead with state comparisons, but “with full appreciation of these concerns.”
In addition, it warned, the “educational-assessment council,” which the study group proposed to oversee NAEP, could become subject to federal control, because its members would be selected by the Secretary of Education and the council would be part of a new entity chartered by the Congress.
“This change in governance, when combined with concerns expressed about the possible standardization of a system of state comparisons,” the committee stated, “may create an unintended impression of considerably increased federal influence over education.”
All but two members of the 22-member committee--which includes Gov. John D. Ashcroft of Missouri; Mary Hatwood Futrell, president of the National Education Association; and Sister Catherine T. McNamee, president of the National Catholic Educational Association--signed the statement.
Antonia Cortese, first vice president of New York State United Teachers, abstained because she was a member of the study group. Chester E. Finn Jr., the Education Department’s assistant secretary for educational research and improvement, also abstained, citing the department’s role in establishing the group.
‘How We Say It’
State officials attending the meeting here, which was sponsored by the Education Commission of the States and the Colorado Department of Education, expressed ambivalence about the expansion of NAEP.
While state testing directors have generally abandoned their long-standing opposition to the concept of state comparisons, they do have “real and genuine concerns about using NAEP” as the vehicle for such comparisons, according to Pascal E. Forgione Jr., chief of the office of research and evaluation of the Connecticut Department of Education.
Perhaps the most serious consideration, a number of directors said, is the way in which the results of the new assessment would be reported.
“That’s the most important component,” said Anne C. Hess, coordinator of student assessment with the Alabama Department of Education. “How we say it is going to mean everything.”
Added Mark Fetler, coordinator of the planning and information center for the California Department of Education: “If they come out with something like the wall chart, that would be a crying shame.”
The 50-state wall chart, prepared annually by federal education officials, compares states according to such factors as student performance on college-admissions tests, per-pupil expenditures, and teacher salaries. Complaints about the use of admissions-test scores to rank states have led the Council of Chief State School Officers to undertake a project to provide more accurate comparisons.
Data from the redesigned NAEP could be presented in a series of charts showing how different states match in different ways, suggested Leigh Burstein, a professor of education at the University of California at Los Angeles.
“It may not be a wall chart,” he said. “It may be a series of wall pictures.”
Others stressed that assessment results must be presented in context, to allow for specific factors, such as a large number of disadvantaged students, that might have influenced a state’s scores.
Matching the Curriculum?
Another worry, Connecticut’s Mr. Forgione and others said, is that students may perform poorly if the content of the assessment does not match their state’s curriculum.
And, if a state expects to receive a low ranking, added Allen S. Hartman, director of the bureau of research and assessment of the Massachusetts Department of Education, it may be reluctant to participate in the program, which would be voluntary.
“If I were in Mississippi, and I saw New York, Michigan, and California joining the assessment,” he said, “I’m not going to rush into a new program that’s going to advertise that we’re near the bottom.”
To avoid conflicts, the content of the assessment must be carefully drawn from all state curricula, suggested Ramsay W. Selden, director of the state chiefs’ National Assessment Center.
Interpreting the Data
And, when the assessment data are released, said Frank Newman, president of the E.C.S., educators should interpret the results, as the federal Bureau of Labor Statistics does when it releases economic data.
“Whether you label it good or bad, it will be labeled good or bad,” added Paul D. Sandifer, director of the office of research of the South Carolina Department of Education. “People releasing the data should take the responsibility of labeling it up front.”
Any ranking should show how states progress from year to year, said Terry Peterson, special assistant to the president of Wheelock College in South Carolina.
The assessment should also set a standard that enables states to know if they are performing up to their potential each year, Mr. Sandifer continued. Otherwise, he said, no matter how well a state does, it could be ranked near the bottom if all states do well. “If you put all the Nobel Prize winners in physics in this room, one of them would be the dumbest here,” he said.