Value of Yearly Special Ed. Reviews Questioned
Just as it has every June since 2006, the U.S. Department of Education last month delivered a rating to each state and territory based on the performance of its special education programs.
The ratings, intended to fulfill the Individuals with Disabilities Education Act’s requirement that “measurable” and “rigorous” targets be met for the 6.7 million school-age students enrolled in special education, are derived from reams of information that each state submits on a yearly basis. The data covers everything from student dropout rates in special education, to the percentage of children who were evaluated for special education needs within federally mandated timelines, to whether students with disabilities found work after they graduated.
But when you ask state and federal officials if the effort has led to better education for students with disabilities, the answer that comes back is: We’re not sure.
“We do not have the answer to that question, especially in relation to cause-and-effect,” said Alexa E. Posny, the assistant secretary of education for special education and rehabilitative services, who was also a state education chief and special education director in Kansas before assuming her federal role.
Ms. Posny said the education department is conducting an evaluation process that would help it figure out if the department is asking states the right questions to get at the essential—yet difficult-to-measure—concept of improved education for students with disabilities. At the same time, she said, the department plans to have “frank conversations” with state officials.
The timing is crucial, with the federal laws governing general education and special education under congressional scrutiny as they come up for reauthorization. Whatever federal and state officials decide they have learned from this effort may end up incorporated as changes in those laws.
“We do understand the complexity and the incredible work that is required,” Ms. Posny said. “We are making a concerted and concentrated effort to look at what indicators really do tell us. We’re taking a look at every single indicator to say, do we need all of these?”
States are required to create a “state performance plan” on a six-year cycle that sets goals for special education performance in 20 different areas for school-age students with disabilities. Goals for an additional 14 evaluation areas, or “indicators,” must be reported for the approximately 343,000 infants and toddlers with disabilities covered under the IDEA. Each district in a state also must report information in these areas and receives a rating from the state. The states report their progress in annual performance reports, and the whole process is commonly abbreviated as SPP/APR.
For the first two years of the program, while the process was still under development, all states were given the top rating, modestly titled “meets requirements.” States only began receiving different ratings in 2007.
The other three categories, in descending order, are “needs assistance,” “needs intervention” and “needs substantial intervention.” That last rating has not yet been given to any state.
Hard to Quantify
From the perspective of many state special education directors, having the federal government re-examine the process is essential. Special education directors of large and small states, in states that have done well throughout the process and in states that have had challenges meeting their goals, all suggest that the intensive data process has had positive effects, but such effects are almost impossible to quantify. In some cases, directors say the benefits of examining data closely are outweighed by the difficulty just in collecting the information.
In Alaska, a state that has gotten the top rating four years in a row, “every indicator allows us to have a conversation with each [school district],” said Arthur Arnold, the state director of special education. He acknowledged that his state’s top-notch record is partly a reflection of its relative youth—no creaky data collection systems that needed to be overhauled—and its small population.
“I don’t know if it’s changed practice, but it’s allowed us to look more closely at the 20 indicators that the [office of special education programs] has asked us to look at,” Mr. Arnold said.
But in larger states with urban systems that have historically struggled with providing special education services, whatever positives exist seem outweighed by problems, directors there say.
“The SPP and the APR continue to be overburdened and not as effective as [federal officials] want them to be,” said Rebecca H. Cort, who oversees special education for New York. New York has been in the “needs assistance” category since 2007. “There has been too much focus on strict compliance when what do we really care about? We care about whether kids are making progress, and if they’re graduating,” Ms. Cort said.
Shifting to Outcomes
Data collection has been the backbone of the federal special education law, which was first adopted in 1975 as the Education for All Handicapped Children Act. There have been long-running complaints, however, that the IDEA focuses too much on compliance data, like making sure certain reports are written on time, and not enough on outcomes, such as how many students in special education graduate ready to enter college or the workforce.
The reauthorized IDEA was supposed to change that. The law set out several priority areas that states must monitor, among them ensuring provision of a “free, appropriate public education” in the “least restrictive environment,” and guarding against disproportionate enrollment of racial and ethnic groups in special education. The monitoring requirement led to the creation of the state performance plan and annual performance report process.
But the questions that states have been asked over the years have changed, making it difficult to compare statistics from year to year. And, despite the law’s attempt to shift the focus away from compliance, state ratings are only measured on a subset of indicators, not all of them. The subset of indicators that determine the ratings are all based on compliance, such as meeting deadlines and providing timely due process hearings. Though the other “performance” or “outcome” questions are reported to the government, they don’t determine what a state’s rating will ultimately be.
“We don’t know if the U.S. education department’s focus has led to an improvement in student performance. We do know it’s led to greater compliance with the law,” said Fred Balcom, the state special education director from California, which has met federal requirements for the past two years.
Ms. Posny, the federal special education official, said that the government has held off on evaluating states based on performance indicators both because it is not required under the law, and because states could face punishments that ultimately hurt students.
“We could ultimately have to take money away, and that would be counter-intuitive to the thinking that the states that require the most amount of help probably need the most amount of dollars,” she said.
Ed Steinberg, the special education director in Colorado, says he wants to see more attention paid to performance. Right now, his state, which has only now moved up into “needs assistance” after spending three years in a row in the lower “needs intervention” category, is focused mainly on meeting the letter of the law in the SPP/APR process, he said.
“Everything we hear from [the federal office of special education programs] is that over time, we are going to have more of the indicators that are brought into the mix, specifically the performance indicators,” Mr. Steinberg said. “I think there would be a broad welcome of that; some of what we’re measured on would have a real connection with student needs.”
At the same time, Mr. Steinberg said, he understands the delay. Improving graduation rates for students with disabilities would involve improving the education system in a district or state as a whole. “These are such systemic changes on a [district] level, I can understand them saying ‘we want to give states a little bit more time to get their act together with that,’” he said.
And, just as a simple letter grade doesn’t always tell the full story, neither does a rating. Two states that “need assistance” may have weaknesses in entirely different areas, based on their goals and their indicators. That creates a handful of concerns, state directors said.
One is that it’s just difficult to explain to parents or others who aren’t deeply involved in data management what the underlying problem is with a low rating. The explanation often requires digging into specific indicators.
“I’m a big believer in government and regulatory actions being comprehensible by average citizens. Some of the way the reporting is required to be done makes that a very big challenge for us in trying to help our families understand what it all means,” said Tameria Lewis, the assistant superintendent of special education for the District of Columbia. The District has received “needs intervention” ratings for four years.
The variation in goals among states also makes it hard to compare states, even though it’s a natural inclination, said Ms. Cort, the special education director for New York. “Is it better not to meet a rigorous target, or to meet a relatively meaningless target?” she said.
Ms. Posny said that states should keep the emphasis on the good work that they are doing.
“What the states need to highlight is every single thing they are doing so right and so well. They have made such tremendous progress, and that’s what I would share with the public. Take credit for the movement you have made and be honest with them” on the issues that are still problem areas, she said.
Many of the directors mentioned that, though the process itself may seem unwieldy, they’ve been happy with the direct support they have received from federal officials as they create their annual reports. The District of Columbia has received the stiffest sanctions of any agency so far, with the federal government directing the city to spend some of its federal money in specific areas. Even then, said Ms. Lewis, the federal officials have been “very hands-on with us. They have genuinely tried to hear our concerns and our challenges.”
The National Association of State Directors of Special Education would like to see the department eventually pare down the number of indicators, as well as get rid of the duplicated data requests that come from the department, said Marcia Harding, the new president of the organization’s board and the special education director for Arkansas. That state was in the “needs assistance” category this year.
If the department can stick to one calculation—for instance, either graduation rates or dropout rates—states can collect information that is useful to compare across time, Ms. Harding said.
But even in its current state, the process has had value for Alabama, said Mabrey Whetstone, the director of special education for that state. For example, his state has used the data collected on disproportionality to work on its identification practices for special education, even though those indicators are not used for determining a state’s rating. Alabama received a “meets requirements” rating for the second year in a row, and sent out a press release to mark the occasion.
“We’re not resistant to the SPP/APR process at all,” Mr. Whetstone said. “We just want to work together to work out processes.”
Vol. 29, Issue 36