New research contending that current accountability pressures have yielded no real achievement gains touched off another round of skirmishing last week over the reliability of a series of union-financed studies criticizing high-stakes testing.
Titled “High-Stakes Testing and Student Achievement: Problems for the No Child Left Behind Act,” the new report concludes that students in states with test-based accountability systems have not shown improvement on national assessments.
The reports, “High-Stakes Testing and Student Achievement: Problems for the No Child Left Behind Act” and “The Impact of the Adequate Yearly Progress Requirement of the Federal ‘No Child Left Behind’ Act on Schools in the Great Lakes Region,” are posted by the Education Policy Studies Laboratory.
Those states also tend to have higher dropout rates and larger numbers of students retained in key grades, problems that disproportionately affect minority students, the study concludes.
“The theory is that if we were to exert more pressure on the key players in a state, … they will live up to that pressure by working harder, and that ultimately that will be translated into more learning,” said researcher Sharon L. Nichols, who wrote the report with Gene V. Glass and David C. Berliner. “But there is no systematic relationship with that pressure that affects student learning or learning gains.”
Released Sept. 20 by the Education Policy Studies Laboratory at Arizona State University, the report is the latest in a series of similar studies funded by a center associated with the National Education Association and its affiliates, a series that has drawn sharp criticism from education researchers who deem it “advocacy research” of questionable scholarly value.
The new study’s methodology, those critics maintain, is flawed and designed around the biases of the authors and funders. Some scholars have examined the data and concluded the opposite of the study’s findings: that strong state accountability systems can help improve student achievement.
The study was underwritten by the Great Lakes Center for Education Research and Practice, an East Lansing, Mich.-based group founded by the Michigan Education Association, an NEA affiliate. The center says its mission is to support and disseminate “empirically sound research on education policy and practice.”
The 2.7 million-member NEA filed a lawsuit against the U.S. Department of Education in April, alleging that states are illegally being required to use their own money to carry out the mandates of the federal No Child Left Behind law.
Mr. Berliner, a professor of education at Arizona State, in Tempe, released a similar study in 2003 with Audrey L. Amrein, also of Arizona State, that looked at scores on the National Assessment of Educational Progress in 28 states, as well as results on Advanced Placement and college-admissions exams.
That study was criticized for comparing gains in states with strong accountability programs with national averages on NAEP. A 2003 analysis by two Stanford University professors found that if the NAEP mathematics scores in those states were compared instead with those in states without strong accountability measures, the results were reversed. (“Study Finds Higher Gains in States With High-Stakes Tests,” April 16, 2003.)
In the latest study, Mr. Berliner and his colleagues analyzed policies in 25 states that have long had test-based accountability systems.
They developed what they call a “pressure rating index”—taking into account policies in such areas as student promotion, bonuses for teachers, and transfer options for students in failing schools—to gauge how much pressure those states placed on schools and teachers to improve test results. They then compared progress on the NAEP scores for those states with the national average between 1990 and 2005.
Eric A. Hanushek, a senior fellow at the Hoover Institution at Stanford University, contended last week that the index itself is invalid.
“They’ve spent years trying to attack testing and accountability,” said Mr. Hanushek, who with a colleague, Margaret E. Raymond, conducted the 2003 analysis of the research by Mr. Berliner and Ms. Amrein and came up with a different result. “I can’t quite take it very seriously. They make up their own index, and surprisingly their index gives them all the answers they want.”
Mr. Berliner dismissed Mr. Hanushek’s criticism. “We think our data are as strong as can be,” he said.
Teri L. Moblo, the director of the Great Lakes Center, said the study was “good, clean research, the most high-quality research we can fund.”
She added that the center commissions studies on research topics of interest to the center, but publishes them regardless of the results.
“Our goal is to get it out there and create the conversation,” she said.
A version of this article appeared in the September 28, 2005, edition of Education Week as “Union-Funded Study Finds Fault With High-Stakes Testing.”