States Set Widely Varying 'Proficiency' Bars
What students in some states must know to reach proficiency on state exams may be as much as four grade levels below what is required in the states with the most rigorous assessments, according to a study by the American Institutes for Research that uses international testing data to gauge states against a common measuring stick.
Released today, the report by the Washington-based research group makes a case for states, as they collaborate on common standards, to use national and international benchmarking to make cutoff scores more demanding and improve the descriptions of what it means for students to be proficient in reading and mathematics at each grade level.
The researchers used National Assessment of Educational Progress benchmarks to compare each state’s standards against the benchmarks for the same subjects used in two international assessments, the Trends in International Mathematics and Science Study, or TIMSS, and the Progress in International Reading Literacy Study, or PIRLS, during 2007, the most recent year all three types of assessments were administered. Researchers then analyzed the percentage of students in each state who would meet minimum proficiency according to their state standards and the common international standards.
Measured against the international benchmarks, the report notes, the gaps between states were so great that the difference in actual proficiency between students in the states with the most and least rigorous standards was double the national black-white achievement gap on the 2007 National Assessment of Educational Progress, itself equal to about two grade levels. At the 4th grade level, only Massachusetts had state standards more rigorous than the international standards, and its standards for 4th grade math were comparable to those required of a typical student in the highest-performing TIMSS countries, such as Japan, Taiwan, Singapore, and Hong Kong. For 8th grade math and 4th grade reading, only Massachusetts and South Carolina had standards comparable to those of the best-performing countries.
‘Short Selling’ Students
Gary W. Phillips, the AIR’s vice president and chief scientist, who wrote the report, called state-proficiency standards “the educational equivalent of short selling.”
“Rather than betting on student success,” he said in the report, “the educators sell the student short by lowering standards.”
A comparison of 4th grade students scoring at the proficient level in math on 2007 state assessments vs. an internationally benchmarked common standard shows dramatic differences in what is considered proficient. Of all states, only Massachusetts had more students perform at the proficient level on international standards than on state standards.
For comparison, Mr. Phillips points to two winners in the federal Race to the Top grant competition: Massachusetts and Tennessee. Massachusetts’ bar for 8th grade math proficiency is two full standard deviations above Tennessee’s; that gap, the study found, represents more than four grade levels’ difference between proficient 8th graders in the two states. Tennessee changed its achievement standards this year, but similar gaps remain among the states.
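The report's figures imply a rough conversion rate between standard-deviation gaps and grade levels: if two standard deviations correspond to roughly four grade levels, one grade level is about half a standard deviation. A minimal sketch of that back-of-the-envelope arithmetic, with the 0.5-SD-per-grade ratio taken as an assumption inferred from the numbers quoted above:

```python
# Illustrative only: convert a gap measured in standard deviations into an
# approximate grade-level equivalent, using the ratio implied by the report
# (2 SD between Massachusetts and Tennessee ~ 4 grade levels).
SD_PER_GRADE_LEVEL = 0.5  # assumption inferred from the figures above

def sd_gap_to_grade_levels(sd_gap: float) -> float:
    """Approximate grade-level difference for a given gap in SD units."""
    return sd_gap / SD_PER_GRADE_LEVEL

# The Massachusetts-Tennessee gap of 2 standard deviations:
print(sd_gap_to_grade_levels(2.0))  # → 4.0 grade levels
```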
“It documents again what we’ve long known, which is on current state tests the bar for proficiency is literally all over the map,” said Michael Cohen, the president of Achieve, a Washington-based nonprofit group that works with states to evaluate their academic-content and testing standards.
“It’s been that way for a while,” he said. “Each state gets to do this entirely on its own, and they respond differently to the pressures of [the No Child Left Behind Act].” The federal law gauges schools’ progress by gains in student proficiency against the targets the states set for performance on their respective tests.
The AIR researchers found the percentage of students who reached proficiency in 4th grade math and reading and 8th grade math was strongly inversely related to the rigor of the achievement benchmarks—to the extent that the report suggests low state proficiency bars may account for up to 60 percent of the gains states have reported in student performance in the years since the NCLB legislation was passed by Congress in 2001.
Similarly, a 2007 study by the Washington-based Thomas B. Fordham Institute found the states with the highest proficiency standards had regressed toward average standards since NCLB took effect in 2002.
The AIR report also echoes a report released this time last year, in which the National Center for Education Statistics compared state standards with those of the National Assessment of Educational Progress. The NCES study found, for instance, that across the 2003, 2005, and 2007 assessments, the distance in 4th grade reading between the five states with the highest standards and the five with the lowest was comparable to the difference between NAEP’s “basic” and “proficient” achievement levels.
Benchmarking a New Way
Mr. Phillips argues that states now tend to set descriptions of, and cutoff scores for, different content-proficiency levels using recommendations from panels of local educators, researchers, and other stakeholders. Such panels have access to information on how particular test items have been used in other countries, but usually not until the end of the standards-setting process, “when their minds are already made up,” he said.
The AIR recommends instead that states use a benchmark method to set proficiency levels.
First, the state would reach a consensus on academic-content standards and field-test a representative pool of test questions based on them. It would compile the questions in order from easy to hard, and link the scaled items statistically to equivalent questions in other states and countries. Then content experts would use both the questions and performance descriptions from other states and tests to describe what students should know and be able to do at each proficiency level. Finally, those descriptions would be used to set cutoff scores for the state content assessments.
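The ordered-item step at the heart of this method can be sketched in miniature. In this hypothetical example, field-tested items are sorted from easy to hard by a scaled difficulty value, and the cutoff score is placed at the first item that content experts judged to match the benchmarked description of proficient work; all item data, the three-field item format, and the alignment rule are invented for illustration, not drawn from the AIR report:

```python
# Hypothetical sketch of ordering field-tested items by scaled difficulty
# and reading off a cutoff score from expert judgments. All values invented.
items = [
    # (item_id, scaled_difficulty, matches_proficient_description)
    ("q3", -1.2, False),
    ("q1", -0.4, False),
    ("q7",  0.3, True),
    ("q5",  0.9, True),
]

# Compile the questions in order from easy to hard.
ordered = sorted(items, key=lambda item: item[1])

# Cutoff score: the difficulty of the first item judged to match the
# benchmarked description of what a proficient student should be able to do.
cut_score = next(d for _, d, proficient in ordered if proficient)
print(cut_score)  # → 0.3
```

In practice the statistical linking to items from other states and countries would be done with psychometric scaling models; this sketch shows only the ordering-and-cutoff logic.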
“The new method uses international benchmarking starting out, so the teachers and other panelists in the workshop have a broader foundation for what they are doing,” Mr. Phillips said.
A more detailed description of the method will be published in early 2011, in the book Setting Performance Standards, Second Edition, by Gregory J. Cizek, an educational measurement and evaluation professor at the University of North Carolina at Chapel Hill. Mr. Phillips said the AIR will also update the findings after the state, TIMSS, PIRLS, and NAEP assessments are administered together again in 2011.
Three states—Delaware, Hawaii, and Oregon—have already taken the first step.
In this year’s spring high school math assessments, Oregon embedded sample questions from the Program for International Student Assessment, or PISA, which tests the math performance of 15-year-olds in countries in the Organization for Economic Cooperation and Development. While the sample questions did not count toward students’ scores, they were used to benchmark the state test against international standards.
From there, with input from educators and researchers, the Oregon education department has recommended changing the proficiency descriptions and cutoff scores for each grade’s assessments, according to Anthony Alpert, the assessment director for the department. The state board of education is set to vote this week on the new proficiency standards for math, with other subjects in the works.
If the new standards are approved, Oregon’s proficiency cutoff scores would increase by half of a standard deviation at every grade level, Mr. Alpert said.
“For some of our communities, international benchmarking isn’t necessarily their highest priority, so we’re still talking with our communities about why vertical alignment and international benchmarking is critical to our students’ readiness to compete in the global workplace,” Mr. Alpert said. He said he hopes to create “a [testing] system that is better—more consistent with the expectations that other states have for their kids and other countries have for their kids.”
Vol. 30, Issue 10