Getting a Global View
Late last month, thousands of students in Singapore sat down to take 90-minute tests in mathematics and science. Although they may not have known it, they were helping to kick off what many are calling the largest, most complex international study of students' math and science achievement ever undertaken.
The Third International Mathematics and Science Study, sponsored by the International Association for the Evaluation of Educational Achievement, or I.E.A., will involve more than 50 countries and one million students worldwide and will take six years to complete. The cost of simply administering the examination to the 20,000-plus U.S. students who will be taking it is expected to top $3.5 million this year alone.
But beyond the sheer size of the effort, the study is important for another reason. It will attempt to shed light on one of the most central questions in education: What works?
"I can probably sit here and predict which countries will do well and which will do poorly, and I don't think we need another study to do that," says William H. Schmidt, a Michigan State University statistics professor directing the United States' participation in the effort. "What we're really trying to understand here is why."
A Natural Laboratory
Not since the successful launch of Sputnik in 1957 have international comparisons of student achievement generated the kind of interest they are attracting now. In part, the renewed attention stems from the national education goals, which were written into federal law just this year. Among the goals is one that calls for U.S. students to be "first in the world" in math and science by 2000. International educational comparisons are intended to help figure out exactly what that means.
International studies also provide a sort of natural laboratory for education researchers.
"Given that many people are reluctant to conduct controlled experiments with our children's education," a National Academy of Sciences panel pointed out in a paper last year, "comparison of natural variation is usually the most feasible way to study the effects of differing policies and practices."
The I.E.A., a private group with headquarters in The Hague, has been conducting international comparisons since the 1960's in several subjects.
Thus far, U.S. students have turned in mixed performances on those assessments. For example, on the organization's Second International Mathematics Study, which took place over the 1981-82 school year, they produced middling scores on test items involving arithmetic and algebra, and they scored below the international average in geometry and measurement. Japanese 8th graders, meanwhile, consistently outscored those from most other nations in all four areas.
U.S. students ranked near the top, however, among the countries that participated in an I.E.A. reading assessment that generated few headlines when it was released two years ago.
Critics contend that some of those studies, like most international comparisons, were simply "horse races" that ranked students with little regard to the complex cultural, educational, and demographic differences among participating nations.
They raise a number of concerns. Were students taught what they were to be tested on? If so, in what sequence? How were they taught it? How much time was devoted to those studies?
A 1992 analysis by Ian Westbury, a University of Illinois researcher, concludes that, at least for the second mathematics study, the test's "curriculum" was tailored more closely to Japan's mathematics curriculum than to the United States'. In areas where the Japanese curriculum was less well matched, U.S. students' scores were comparable to those of their Japanese counterparts.
Attention to Contexts
The newest study, the planning for which officially began in 1990, was designed to address such criticisms.
"The words 'enormously significant' come to mind," says Andrew C. Porter, a University of Wisconsin researcher who sits on a National Research Council board that oversees U.S. participation in international comparisons. He notes that the assessment will include more subjects, more nations, and more attention to classroom contexts than any of its predecessors.
The assessment will be translated into more than 40 languages and administered to three student populations--those in the two adjacent grades containing the most 9-year-olds; those in the two adjacent grades containing the most 13-year-olds; and those in their final year of precollegiate schooling. In each country, 15,000 to 20,000 students will take the test.
The basic test consists of 70 multiple-choice questions and 30 longer open-ended questions. In addition, a smaller subgroup of students will be given an hourlong performance assessment that may require them to conduct a physics experiment or work out and explain in writing a complex math problem.
Different forms of the test will also be given to students who are specializing in math or science. In the United States, that group includes high school students taking more advanced classes in those subjects. And, according to Albert Beaton, the Boston College professor who is coordinating the international effort, that version of the math assessment is expected to be "really stiff" in part because it emphasizes more calculus than many U.S. students are used to seeing.
But those aspects of the study only shed light on what the researchers are calling the "attained curriculum," or what is learned. To gather clues on the "intended curriculum"--what is supposed to be taught--and on the "implemented curriculum"--what is actually taught--researchers have devised other measures.
They have, for example, begun to analyze the most widely used math and science curricular materials in all the participating nations.
"In countries that have very centralized education systems, that task is easy," says Schmidt, the Michigan State professor directing the U.S. participation in the exam. "In other countries, like the United States, that could mean a lot of texts."
In all, the researchers collected more than 1,200 texts and other curricular materials. The documents were analyzed for content, cut into smaller pieces, and coded page by page. The resulting information was then fed into computers.
The preliminary results of that process are expected to be available by early next year. Thus far, however, Schmidt says the analysis has turned up "astronomical differences among countries in regard to what is considered mathematics and science."
"That curricula is what creates educational opportunities," he adds.
For clues to the "implemented curriculum," the researchers are also surveying students and teachers in schools where the testing takes place. They will look at students' home backgrounds as well as their classroom experiences, and teachers will even be asked to provide sample lesson plans.
"What we worked especially hard on were questions of curriculum and opportunity to learn," Beaton, the coordinator of the international effort, says. "For example, in the area of tracking, we should know, item by item, where kids in a country are in a track and whether they should have been taught the material."
In addition, three countries--Germany, Japan, and the United States--are paying for a small group of researchers to visit schools to videotape a typical classroom lesson for a subgroup of 8th graders taking part in the assessment.
"In isolation, that would not be much, but, in the context of the larger study, it does provide useful data," Schmidt says.
Coming to a consensus on the framework for the exam's content took two years. Haggling over specific test items took almost as long.
Baffled by Fjords?
Representatives from Indonesia, for example, complained about test questions that referred to seasons. There are none in Indonesia.
The Norwegian representative to the international committee protested that sections on earth science did not include items addressing "ice forms"--common knowledge for students in that part of the world.
American students, in turn, may puzzle over questions dealing with "fjords."
"We decided since we cannot be fair to everyone, we will be equally unfair to everybody," Beaton quips.
The Singapore exams began a round of testing that will take place throughout the Southern Hemisphere this fall. Students in the United States and in other Northern Hemisphere nations will take the exams late next spring.
The final results of the study will not be available until 1996.
Ambitious though it is, the study is not expected to provide definitive answers to teachers' and policymakers' questions.
"That requires finer-grained and more carefully controlled studies," Porter says. But, he adds, "this'll go a lot further in that direction than any past international assessment of student achievement."
Vol. 14, Issue 08