PISA Results Scoured for Secrets to Better Science Scores
When results from the latest international science and math exams were unveiled last month, American media attention focused on how U.S. teenagers stacked up in those subjects against their counterparts elsewhere in the world. And the news wasn’t pretty.
But the report on the 2006 Program for International Student Assessment, or PISA, contains a wealth of other data that offer clues to what educators and policymakers might do to improve U.S. students’ middling test scores.
Launched seven years ago, PISA is unusual in its attempts to assess 15-year-olds’ ability to apply and analyze information, rather than their formal knowledge of the subjects tested. Out of the 30 industrialized countries taking part in the program’s 2006 science exams, U.S. students ranked lower, on average, than their counterparts in 16 other countries. In math, which was tested in less depth in 2006, American teenagers fared even worse. ("Poverty’s Effect on U.S. Scores Greater Than for Other Nations," Dec. 12, 2007.)
The less publicized analyses buried in the 383-page report, though, examine differences in how nations go about the business of schooling—whether local educators have a say in running their schools, for instance, or how much time schools devote to teaching science—and pinpoint which of those practices are statistically linked to better performance on the science portion of the exam.
“It’s not just that we are different. It’s how we are different,” said Andreas Schleicher, who heads the indicators and analysis division for the Paris-based Organization for Economic Cooperation and Development, which runs PISA. “There is a common pattern among countries that do well, even if the causal mechanisms are still to be explored.”
One analysis in the report shows, for example, that students’ science scores are higher in school systems or districts where local schools set their own budgets.
Every one-unit increase on the report's index of school budget autonomy translated to a gain of 27.5 points on the PISA scale score—and that's after accounting for socioeconomic differences in student enrollment. That finding suggests that some level of autonomy might be a key ingredient for successful schooling, at least in science.
At the school level, the researchers also turned up a handful of factors that continued to be important for science achievement once differences in students’ demographic characteristics were taken into account. Those included:
• Public posting of school test scores. Students in schools where student-achievement data are regularly made public in some way scored an average of 3.5 scale-score points higher than those in schools that do not make student-performance results public.
[Accompanying chart: The 2006 PISA, which tested 15-year-olds in 57 jurisdictions, analyzed how cross-border differences in school-level practices related to scores on the exam's science portion. After adjusting for socioeconomic differences, some characteristics, such as school selectivity, were linked to higher average scale scores, while others, such as time spent on out-of-school lessons, were linked to lower scores.]
• Time on learning. Students scored 8.8 scale-score points higher, on average, for each additional hour of instruction per week. Across all the OECD countries, however, only 28.7 percent of students, on average, spend four or more hours a week in science class.
• School science activities. For each additional unit on this scale, which includes activities such as science fairs and science clubs, student achievement scores rose by 2.9 scale-score points.
• Selective admissions procedures. Students in schools where academic records or recommendations from feeder schools are required for enrollment scored 14.4 scale-score points higher than students in less selective schools.
• Ability grouping. In schools where students are grouped by academic ability for all classes, students scored 4.5 points lower, on average, than students in schools that use the practice rarely or not at all.
While the findings on school selectivity and ability grouping seemed to point in opposite directions, Mr. Schleicher said they were not necessarily incompatible.
“If you are in a selective school, you do better on average,” he said. “But if you stratify the entire system, you would not see a positive impact.”
Countries that separate students into different academic tracks before age 15 fared no better on the exams than those with less stratified school systems, the report also found. Experts contend that’s because such practices tend to exacerbate the effect that poverty has on student performance, leading to greater achievement disparities between students from different socioeconomic groups.
“One of the big messages coming out of all these international studies is that when nations intentionally or unintentionally create a lot of differences for students in learning resources, there’s a drag on achievement,” said David P. Baker, a professor of education and sociology at Pennsylvania State University in State College, Pa.
Mr. Schleicher said Poland offers a case in point. Prompted in part by earlier PISA reports suggesting that tracking worsened socioeconomic disparities in that country, Poland raised by one year the age at which students are sorted into separate academic tracks between the 2000 and 2003 PISA test administrations.
Over that period, the variation in academic achievement from school to school in Poland decreased markedly, while average test scores rose. Much of that improvement came among lower-performing students, Mr. Schleicher said.
At the same time, some other factors tested, such as the extent to which schools competed for students with other schools in their area or the control that principals had over their schools' science curricula, seemed to have no effect on student performance after socioeconomic differences were taken into account.
But the handful of factors that seem to link most strongly with student achievement also don’t work in isolation from one another or from the larger social context in their countries, the report notes. For instance, schools that spend more time teaching science also tend to enroll students who are more advantaged than their peers.
When researchers combined all six of the practices that stood out in the analyses, and coupled them with social-background characteristics, they found that the entire bundle of variables helped explain 70 percent of the achievement discrepancies between participating nations, Mr. Schleicher said.
Kudos and Criticism
“This is a very rich source of information, and we should be asking ourselves, given that we know we’re not doing as well internationally, what we can learn about how schools in other countries are run and how we can improve,” said Daria L. Hall, the assistant director of K-12 policy for the Education Trust, a Washington-based research and advocacy group that promotes high standards for all students.
Gerald F. Wheeler, the executive director of the National Science Teachers Association, located in Reston, Va., echoed that sentiment. “We already know we’re in trouble,” he said, referring to the country-by-country student-performance rankings that are most associated with the PISA results. “I’m much more interested in what we can glean from these kinds of studies.”
But some researchers offered more critical reactions to the results. For instance, William S. Schmidt, a Michigan State University researcher, faulted the report for failing to account for differences between participating nations in science curricula or teacher preparation.
“If you do these analyses, and you don’t control for that, you could be misled,” he said.
And Tom Loveless, the director of the Brown Center on Education Policy at the Washington-based Brookings Institution, said the report was “fatally flawed” for drawing policy recommendations from correlational data.
“Correlational evidence is the weakest evidence we have in our arsenal as social scientists,” he said. “There’s an inherent ideological stance that PISA takes in the exams and in their interpretation of the results,” Mr. Loveless contended, citing what he sees as bias against ability grouping and in favor of a more active role for schools in promoting social equity, among other positions.
He suggested the OECD should protect against bias by building a stronger firewall between the education officials who collect the statistics and the researchers who interpret them.
However, Mr. Schleicher and Mr. Baker, who also took part in the PISA study, said the cross-national team of researchers involved took pains to avoid skewing the results to favor any particular country. They also agreed, though, that the findings from the secondary analyses, which were based on survey data from an average of 300 schools per country, were not solid enough to guide policymaking.
“These international comparisons are terrific, because they really shake things up,” said Mr. Baker, “but then other kinds of methods are needed to do the detailed work on what works for what types of populations.”
Vol. 27, Issue 17, Page 10