
California Data Show Flaws in Federal Regulations

By Charles Taylor Kerchner — August 02, 2016

When Congress passed the Every Student Succeeds Act in 2015, it was celebrated as relief from its predecessor. But as regulations emerge, ESSA is beginning to look like an evil sibling of the discredited No Child Left Behind Act, reviving its name-and-shame policy. In a report released Monday, two California organizations are pushing back with data.

A report issued by Policy Analysis for California Education (PACE), as part of its research partnership with the state’s CORE districts, argues that the single summative list of bottom-5-percent schools required under the pending regulations would fail to identify some of the schools that need comprehensive support. The regulations would also unjustly target some schools that are making good progress but whose overall scores remain low.

Data from CORE Districts

Authors Heather Hough, Emily Penner, and Joe Witte illustrate the issue with data from California’s CORE districts, which include Los Angeles, San Francisco, Fresno, Long Beach, Oakland, and Santa Ana. Although only six districts, they contain 923 high-poverty (Title I) schools, more than do 26 states.

Both academic performance and academic growth are required elements in the federal guidelines for academic measurement. But as the graphic below shows, only a few schools fall in the lowest 5 percent on both achievement and growth in achievement (red dots). For the most part, the schools with low achievement (blue dots) are different from the schools with low academic growth (orange dots).

[Graphic: CORE district schools in the lowest 5 percent on achievement (blue dots), growth (orange dots), or both (red dots)]
Under pending federal regulations, achievement and growth, along with graduation rates and chronic absence, must be combined into a single indicator to pick the bottom 5 percent of schools for comprehensive assistance by an external body. But the analysis shows that each of the four measures operates independently. For example, only 4 percent of the schools in the bottom tier of academic achievement were also in the bottom tier of results for English learners.

Seeking a ‘Flashlight, Not a Hammer’

This report is not just picking nits with the feds. The question of which schools are labeled as failing, and what happens to them, is central to improving schools and holding accountable those that don’t improve. CORE’s guiding principle that data should be a “flashlight and not a hammer” is running up against the view that there must be a single, easy-to-understand list of schools labeled as failing.

“A summative rating is a hammer,” said Heather Hough, one of the report’s authors. “There are so many dimensions of school quality that no matter how you boil them down, it’s just a label. What we want with this multiple-measure system is to tell the story. Then we are encouraging people to highlight what needs work, but also what is working. A school can receive some credit, say, for serving a difficult population but having a good attendance rate.”

The creation of that list and its attached sanctions was the great failure of NCLB, particularly as the list of labeled schools grew longer and states proved unable either to intervene in them or to improve them. Although the list of sanctions has been removed from federal law, the proposed regulations for ESSA run the same risk.

The PACE/CORE analysis illustrates that most of the schools that might be identified as needing the most draconian intervention may, instead, need more targeted help.

An Alternative Approach

The analysis also illustrates a better way forward. Using the CORE data as an example, take the 2 percent of schools that show both low achievement and little progress and target them for comprehensive intervention. Then disaggregate the data for the next tier of schools. For schools that hit bottom on English-learner performance, target intervention on that issue. Schools that score lowest on measures of school quality, such as student access to and completion of advanced coursework, postsecondary readiness, school climate and safety, and student and educator engagement, require a different kind of intervention.

Even the proposed rules for targeted help appear unworkable when tested with real data from real schools. These rules wisely require reporting of results by student subgroup; otherwise, small groups of students get lost in a school’s averages. But the proposed regulations would single out schools for targeted intervention if any subgroup fell below the designated level. The data show that upwards of 60 percent of all schools would then require targeted support.

“From our perspective, it is a question of resourcing. If 65 percent of schools are identified, can the federal and state governments actually support help?” said Hough.

Darling-Hammond Also Urges Multiple Measures

The PACE/CORE analysis is not the only California pushback against the proposed regulations. In testimony before the Senate Health, Education, Labor, and Pensions Committee, Stanford University professor Linda Darling-Hammond argued for a dashboard approach with multiple indicators and against a single summative score. She illustrated the point by referring to the multiple measures presented in her own children’s report cards:

In all of those years of parenting, it never once occurred to me to ask any of these schools for a "single summative score" to describe my child. I didn't need it to understand how my child was doing, and in fact it would have gotten in the way. I wanted and needed to know exactly where they were doing well and where they were in need of help, so that I could support them. The school needed that information as well. In fact, in my own personal experience, two of my children are dyslexic and while they performed well overall, the need for additional support in reading would have been masked if a single rating were the measure the school focused on.

Against Counting Only ‘Proficiency’

In addition, University of Southern California professor Morgan Polikoff has written a letter, signed by dozens of academics and practitioners, pointing out the weakness of using the number of students who score at the “proficient” level as the metric for school performance. Continuing that practice from the prior law, he writes, “incentivizes schools to focus on those students near the proficiency cut score.” Students whose performance is substantially below or substantially above the proficiency threshold would receive much less attention, he argues.

Both Congress and the U.S. Department of Education have some difficult decisions to make in the next few weeks as the regulations for ESSA move forward. They will need to decide between flashlights and hammers. Are school performance data to be used primarily as a way to name and shame schools, demoralize their staffs, and make it difficult to recruit new teachers? Or will data be used to light the way toward continuous improvement?

Although the regulations may be finalized by this administration, their consequences will be visited on the next one. Both Congress and the candidates should pay attention.

The opinions expressed in On California are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.