To the Editor:
In response to “Special Education’s Future” (January 9, 2019), an article that appeared in the “10 Big Ideas” special report, I would add that this broken system is supported and enforced with broken data, which themselves cost millions of dollars while delivering often dubious value. The cost in dollars and frustration is even higher if we include unfunded teacher and administrator time.
Data are not just numbers. Data derive meaning from the realities on the ground, the way that they are collected, the design of the evaluation, and even how they are used. Data that are valid for one purpose might not be valid for a different use.
For example, states have some freedom to choose eligibility requirements for certain special education programs and are also allowed some choice of assessment methods and formulas.
This state-level choice is perfectly reasonable, but it renders comparisons across states invalid: they tell us nothing about relative program quality or effectiveness.
There is a multitude of special education indicators and other required metrics. Some are fairly straightforward. Others are “correct” but of limited use because they lack context: How do we interpret an outcome unless we have comparison data for children without disabilities or children not receiving services?
Still other indicators overlap with one another, requiring duplicate calculations.
Better data do not mean more data. In fact, we should collect less, but with clear objectives and appropriate evaluation designs. We collect these data for a reason: to improve education and outcomes for children with disabilities. It is past time to have an honest discussion about whether the current special education data system is achieving that goal.
Dana Manning
Research Associate
University of Kentucky
Lexington, Ky.