Analysis Finds NCLB Waivers Too Often Maintain Flawed Accountability Practices

By Holly Kurtz — February 20, 2014 4 min read

The No Child Left Behind flexibility waivers granted by the U.S. Department of Education could have been the ultimate teachable moment, an opportunity to both innovate and learn from the lessons of the past decade.

Instead, a new analysis has found that, when it comes to the state accountability systems used to identify low-performing schools, the waivers often continue the problematic policies of the current law, for which reauthorization is now seven years overdue. These problems include an over-reliance on one-time snapshots of student performance in reading and math and a reluctance to consider non-test-based indicators such as attendance rates or longer-term postsecondary outcomes.

The analysis, which appears in the current issue of the peer-reviewed journal Educational Researcher, uses psychometric standards and research evidence to evaluate the validity, reliability, fairness, and transparency of the waivers, which have been granted to 42 states, the District of Columbia, and Puerto Rico. (Puerto Rico was not included in the analysis because its waiver was still pending at the time of publication.)

Some of the findings of the analysis directly contradict Education Department officials’ previously stated goals for the waivers, which were first issued in 2011 as a stop-gap measure to address the law’s overdue reauthorization. For instance, during a September 22, 2011, media briefing on the waivers, a senior department official said:

“The purpose [of these waivers] is not to give a ... reprieve from accountability, but rather to unleash innovation. We remain absolutely committed to accountability. We’re not interested in giving flexibility for business as usual.”

Yet the analysis finds that states continue to rely on one-time, one-year snapshots of student performance in the form of math and reading proficiency rates and inscrutable, arbitrary composite indices such as letter grades for schools.

“The problems of NCLB were well known shortly into implementation, yet little was done to mitigate the unintended consequences,” write authors Morgan Polikoff, Andrew McEachin, Stephani Wrabel and Matthew Duque.

For example, in a June 10, 2011, phone call with reporters, U.S. Secretary of Education Arne Duncan said he’d like to give states the ability to focus on student gains over time rather than proficiency rates that are one-time snapshots of performance.

“Proficiency rates are not a good measure of school performance,” said analysis lead author Morgan Polikoff, an assistant professor of education at the University of Southern California’s Rossier School of Education. “They’re a measure of primarily who the students are in a school.”

In other words, higher-poverty schools generally have lower proficiency rates, regardless of whether the schools’ instructional methods are successful. Yet more than half of the waivers (24) permit recipients to continue the NCLB practice of using proficiency rates as at least one way to identify low-performing schools.

Polikoff and his co-authors also found that, even in the 20 states that do account for student growth, change over time is just one piece of a composite measure used to identify low-performing schools. Often, that piece is quite small, as low as 14 percent for Kentucky high schools. In the meantime, five states (Arkansas, New Hampshire, Pennsylvania, Wisconsin, and West Virginia) rely entirely on proficiency rates to identify low-performing schools.

Even states that do account for student growth may be unfairly penalizing their highest-poverty schools. That’s because the department does not permit waiver recipients to use growth models that control or account for student demographics. Growth model results are much more weakly correlated with demographic factors than are snapshot-in-time proficiency rates. However, they still have the potential to place higher-poverty schools at a disadvantage.

Growth models are not permitted to account for demographic differences because of the perception that doing so would be equivalent to setting different standards for different groups of students. But Polikoff notes that NCLB’s Safe Harbor provision has long permitted exactly that. (Safe Harbor sets different improvement standards for different student groups when one or more subgroups has missed accountability goals.)

While the waivers have fallen short of fixing NCLB, the analysis finds that, in many states, they are still an improvement over the original law. The analysis identifies Massachusetts and Michigan as poster children for reform. Michigan considers test results in multiple subjects other than math and reading. Massachusetts has reduced the chance of meaningless, one-time spikes and dips by using multiple years of data to identify low-performing schools.

In addition, all but five waiver recipients consider factors other than proficiency rates when identifying their lowest-performing schools. High school graduation rates, the most common indicator, are used by 26 states. Here, again, Polikoff believes that states have missed opportunities for innovation. He suggests that states could have incorporated a wider array of non-test-based indicators, such as school-climate surveys, longer-term results from higher education, and even simple attendance rates, which are a crude proxy for student engagement.

In addition, when it comes to the tests themselves, most states (28) continue to rely on results from just two subjects, math and English/language arts. This is perhaps the biggest puzzle of all, since NCLB already requires students to be tested in an additional subject, science. So states could have incorporated science results without increasing the amount of testing.

“The unfortunate truth is that what gets measured often becomes the focus,” said Polikoff. “I think everyone by now knows that schools have a lot of important outcomes that aren’t captured fully by math and reading.”

A version of this news article first appeared in the Inside School Research blog.