Washington--Representatives from 23 nations met here recently to begin devising an international assessment mechanism that proponents hope will enable educators to make meaningful comparisons between the goals and performance of nations’ education systems.
The November conference was sponsored jointly by the Education Department and the Organization for Economic Cooperation and Development, an association of 24 Western, developed nations that serves as a forum for a variety of joint discussions and projects.
Education Department officials, who proposed the initiative more than a year ago, said their push for valid international data is a necessary complement to increased domestic interest in measuring the success or failure of the American education system--as well as to the department’s own aim of holding educators accountable for their performance.
A ‘Larger Context’
Statistics are the “indispensable baseline for accountability,” Chester E. Finn Jr., assistant secretary for educational research and improvement, told the gathering.
“Domestic information alone simply does not provide a big enough picture,” Mr. Finn said. “International information establishes the larger context; it puts domestic data in perspective.”
While a nation may take satisfaction in the math gains its students are making, “there is little cause for pride,” according to Mr. Finn, “if students in other countries are improving at twice the rate.”
Besides pinpointing areas of relative weakness, Mr. Finn said, an international assessment system would spread good educational ideas and shed light on the influence of cultural factors on student achievement.
“We need to look for links between student outcomes and the variables that parents and educators can control,” he said.
Noting Americans’ concern about the superior achievement of students in Japan and other nations, Emerson J. Elliott, director of the Center for Education Statistics, said the United States needs to ask, “Are the sample populations different, or can the differences be explained by something else different about our systems?” Mr. Elliott was the prime organizer of the conference.
The assessment effort was to be discussed at a Nov. 30 meeting of the OECD's education committee. The painstaking work of fashioning statistical measures valid across national boundaries will presumably be delegated to interested representatives, Mr. Elliott said.
“The education systems in many of these countries are asking the same questions we are asking here,” Mr. Elliott said.
Political Barriers Seen
But many conference participants also cited substantial political and practical barriers to implementation.
“The issue will be on what grounds countries can be meaningfully compared and to what extent they will be willing to be compared,” said Linda Darling-Hammond, director of the education and human-resources program at the Rand Corporation and one of several researchers to address the conference.
Several participants suggested that it will be difficult to get countries to agree, since some will inevitably compare unfavorably with others.
Other representatives said their countries would have to consider the feasibility of the enterprise and the difficulty of collecting the data before deciding to participate. Some said they had little statistics-gathering capability to build on, while others suggested their decentralized education systems would be reluctant to participate in national assessment efforts.
Alan Ruby, director of special programs for the education department of New South Wales in Australia, noted, for example, that his country had been unable to persuade the province with the largest Aboriginal population to collect data on the indigenous minority.
Logistical Complications
Participants also said it would be a difficult logistical feat to construct an international assessment mechanism that could compare participation, achievement, and expenditure across disparate systems whose traditional ages of enrollment, admission requirements, educational options, and funding sources differ widely.
“You can measure the number in school at age 17,” Mr. Elliott said, “but being in school at that age in the U.S. is not the same as in Germany, where they have vocational and academic tracks. It means different things.”
Every term to be used must be carefully defined, participants agreed. For example, should expenditures for military training be counted as educational expenditures? Should a country be allowed to count a student one month over age 15 as a 14-year-old for comparative purposes? What is the definition of truancy? When is a student considered a dropout?
“I am reminded of the danger of adding together unlike things and thinking you have a total,” said P.H. Halsey, deputy secretary of Great Britain’s Department of Education and Science.
“There can be flexibility in how a question is asked in each country, but the information must be reported in an international code,” said Neville Postlethwaite of the University of Hamburg in West Germany.
Much of the conference focused on the need to consider how a society’s resources and traditions affect the outcome of educational comparisons.
Participants agreed that a nation’s financial health must be weighed in judging its educational outcomes. But they also pointed out that less obvious social factors can be critical in explaining statistical differences and similarities.
The University of Chicago researcher James S. Coleman said his work demonstrates that the “social capital” of a community and the value it places on children and on education are critical influences on the success of its schools. He urged the conference participants to measure these intangible elements also.
“If you know the countries and know the data, you can talk these things out,” said Mr. Elliott. “The point is not to make differences go away, but to explain them.”