Tuesday, I posted a report from my colleagues over at Politics K-12 about Senate Republicans taking the U.S. Department of Education to task over its “results-driven accountability” process that evaluates states on how well they are teaching students with disabilities.
Among the concerns of the GOP members of the Senate Health, Education, Labor, and Pensions Committee: States are being graded in part on the gap between the scores of students with disabilities and those of their typically developing peers on the National Assessment of Educational Progress. Using the test that way turns the NAEP into a high-stakes assessment it was never intended to be, the lawmakers wrote in an Aug. 4 letter demanding that department officials explain their decision-making process.
But in an interview with my colleague Lauren Camera, an Education Department official said that NAEP scores will be used only until all states have adopted assessments that are “aligned with college and career standards.” The official also noted that states are evaluated on multiple measures, and that only 12 of the 42 points a state can earn under the new evaluation system are tied to NAEP scores. (The department has a list of each state’s scores for this year; click on the link labeled “matrix” under each state’s name.)
Since the 2004 reauthorization of the Individuals with Disabilities Education Act, states have been required to collect and report data to the Education Department, which in turn rates them on whether they are meeting the requirements of the federal law. The federal government has the authority to pull federal funding from a state based on continued low ratings, but to this point, that has not happened.
Prior to this year, states were rated only on compliance data, such as whether they evaluated students in a timely fashion or met prescribed deadlines for conducting due process hearings. From this year on, states will be graded on a combination of compliance factors and the actual academic performance of students with disabilities.
As might be expected, states did not perform as well under the new evaluation system as they had under the old process, but the department held that up as evidence that the new standards are appropriately tough.
This isn’t the first time that concerns have been raised about using NAEP scores as part of the evaluation system. The National Association of State Directors of Special Education, based in Alexandria, Va., said the department was pushing the limits of the test’s intended use when the accountability system was still on the drawing board. Among NASDSE’s concerns: Students with disabilities have traditionally been underrepresented among those who take the test; the test isn’t yet aligned to the Common Core, so it might not reflect what students are learning in the classroom; and it’s given only every two years, so there’s a lag in the results.
“We were not opposed to the focus on outcomes, but the devil is in the details, as they say,” said Nancy Reder, NASDSE’s deputy executive director for governmental relations, in an interview. “If [the department] is using NAEP as a placeholder, then maybe they shouldn’t have used it at all.”