Statistics Agency Gauging State ‘Proficiency’ Thresholds
The statistical arm of the U.S. Department of Education is conducting a study to see how states’ definitions of student academic “proficiency” compare with the standard set by the prominent national test known as “the nation’s report card.”
The goal of the study is to use that test, the National Assessment of Educational Progress, as a “yardstick” to judge the different state proficiency thresholds, said Mark S. Schneider, the commissioner of the National Center for Education Statistics, which is overseeing the project.
“For the first time, we would have a way of comparing states’ proficiency standards,” Mr. Schneider said in an interview. “Everybody’s been wanting something like this,” he added. “I’m very anxious to get this out the door.”
The proficiency levels states set on their individual assessments would be judged against those set in 4th and 8th grade reading and mathematics for NAEP, Mr. Schneider said. NCES officials hope the study is complete and ready for public release by March or April, he added.
Improving student proficiency in reading and math has emerged as a dominant national theme in K-12 education in recent years. Under the federal No Child Left Behind Act, which was signed into law in 2002, states are required to test students annually in grades 3-8, and at least once in high school, in those subjects. Scores on each state’s tests are used to evaluate whether schools and districts have made adequate yearly progress toward the goal that all students be deemed “proficient” by 2014.
But the content of state tests and the standards states use to determine which students are proficient vary enormously. In addition, the percentages of students whom states report as having achieved the target diverge widely, leading critics to question the reliability of states’ tests as measures of academic progress.
Partly because of those inconsistencies, NAEP results are scrutinized by educators and policymakers as a uniform benchmark against which students from every state can be judged. States are required to have a sample of their students participate in NAEP every two years in 4th and 8th grade reading and math to receive federal education funds.
Aside from “proficient,” NAEP’s other achievement levels for student performance are “basic” and “advanced.” NAEP’s standard for proficiency is widely regarded as more demanding than many of those set for state tests.
Mr. Schneider discussed the goals of the proficiency study at the quarterly meeting of the National Assessment Governing Board, on Nov. 17. The 26-member board sets NAEP policy.
Wesley D. Bruce, the assistant superintendent for assessment for the Indiana education department, said he first heard of the research project at a meeting of the Education Information Management Advisory Consortium last month. That organization, which Mr. Bruce chairs, advises the Washington-based Council of Chief State School Officers on school data issues.
Mr. Bruce predicted that the federal study could draw widespread attention and provide state officials with useful information. But he also said differences between the percentages of students who achieve proficiency on NAEP and those who meet that threshold on state tests could be explained partly by the different content presented at each grade level. Such explanations, however, are likely to be lost on the public when the study is released, he said.
“The concern is the rush to judgment,” Mr. Bruce said. The public might look at the results of the study and think, “If you’re not right at the NAEP level, you got it wrong,” he said.
Mr. Schneider, who was nominated as NCES commissioner by President Bush last year, said his agency’s work on the study began before he took the job, though he has taken an interest in the project. The NCES is one of three research centers housed within the Institute of Education Sciences that Congress created in 2002 in reorganizing the Education Department’s research operations.
Mr. Schneider, who is scheduled to serve as commissioner until his term expires in 2009, said the study was likely to draw broad scrutiny. “People are going to have to look at this,” he said. They will “look at the models, look at the methods, and look at the results.”
The lead researcher is Henry I. Braun, who holds the title of distinguished presidential appointee at the Educational Testing Service, the giant nonprofit research and testing organization in Princeton, N.J. His work will build on earlier NCES-sponsored research on the alignment between NAEP and state proficiency standards, Mr. Schneider said.
The study is based partly on a comparison of the cutoff scores states use on tests to judge whether students are deemed proficient against the corresponding scores used by NAEP, Mr. Braun said. The research examines how well populations of students who are deemed proficient in individual states fared on the national assessment.
Although he believes the study is likely to spark debates among policymakers and the public, Mr. Braun also said he hoped it would advance research into the various standards used by states.
“There’s a basic uncertainty that can never be eliminated by statistical methods,” he said. The study, he added, “is the most reasonable way we can think of to [examine] state proficiency standards on a purely statistical basis.”
Vol. 26, Issue 13, Page 13. Published in print: November 29, 2006, as “Statistics Agency Gauging State ‘Proficiency’ Thresholds.”