Opinion

Closing the Measurement Gap

By Jennifer Booher-Jennings — October 29, 2007

In June, the U.S. Department of Health and Human Services released a report identifying high-mortality hospitals for heart patients. What’s striking about the report is not the number of hospitals on the list, but how few there are.

Only 41 hospitals—less than 1 percent of all hospitals nationwide—were identified as high-mortality. Yet in the 2004-05 school year, 26 percent of American schools landed on education’s comparable list—those that did not make adequate yearly progress under the federal No Child Left Behind Act.

How is it possible that education has so many more organizations on its failing list? It’s not that there is a performance gap between schools and hospitals. The trouble is the profound measurement gap between education and medicine.

Medicine makes use of what is known as “risk adjustment” to evaluate hospitals’ performance. This is not new. Since the early 1990s, states have issued annual report cards rating hospitals that perform cardiac surgery.

The idea is essentially the same as using test scores to evaluate schools’ performance. But rather than reporting hospitals’ raw mortality rates, states “risk adjust” these numbers to take patient severity into account. The premise is that hospitals caring for sicker patients should not be penalized simply because their patients were sicker to begin with.

In practice, what risk adjustment means is that mortality is predicted as a function of dozens of patient characteristics. These include a laundry list of factors outside the hospital’s control that could affect a patient’s outcomes: other health conditions, demographic factors, lifestyle choices (such as smoking), and disease severity. This prediction equation yields an “expected mortality rate”: the mortality rate that would be expected given the mix of patients treated at the hospital.
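
For readers who want to see the mechanics, here is a minimal sketch in Python of how such a prediction equation works. Everything in it is invented for illustration: the covariates, coefficients, and patient data are hypothetical, and real risk-adjustment models are far more elaborate.

```python
# Minimal, hypothetical sketch of a risk-adjustment model: predict
# mortality from patient characteristics the hospital does not
# control, then average the predictions to get an expected rate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(70, 10, n)          # patient age
diabetes = rng.binomial(1, 0.3, n)   # comorbidity flag
severity = rng.normal(0, 1, n)       # disease-severity score
X = np.column_stack([age, diabetes, severity])

# Simulated outcomes: risk rises with age, comorbidity, and severity.
true_logit = -8 + 0.08 * age + 0.5 * diabetes + 0.7 * severity
died = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Fit the prediction equation on all patients statewide.
model = LogisticRegression(max_iter=1000).fit(X, died)

# A hospital's expected mortality rate is the mean predicted risk of
# the particular patients it treated.
hospital = rng.choice(n, 200, replace=False)
expected_rate = model.predict_proba(X[hospital])[:, 1].mean()
observed_rate = died[hospital].mean()
print(f"expected {expected_rate:.3f}, observed {observed_rate:.3f}")
```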

While the statistical methods vary from state to state, the crux of risk adjustment is a comparison of expected and observed mortality rates. In hospitals where the observed mortality rate exceeds the expected rate, patients fared worse than they should have. These “adjusted mortality rates” are then used to make apples-to-apples comparisons of hospital performance.
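
In one common convention, though the exact scaling varies by state, the adjusted rate is the observed-to-expected ratio multiplied by the statewide average. A sketch of that arithmetic with invented numbers:

```python
# Hypothetical arithmetic for an adjusted mortality rate, using a
# common convention: (observed / expected) x statewide average.
observed_rate = 0.060   # this hospital's raw mortality rate
expected_rate = 0.048   # predicted from its patient mix
statewide_rate = 0.040  # average across all hospitals

adjusted_rate = (observed_rate / expected_rate) * statewide_rate
print(f"adjusted rate: {adjusted_rate:.3f}")  # 0.050
# Above the 0.040 statewide average, so these patients fared worse
# than their risk factors alone would predict.
```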

Accountability systems in medicine go even further to reduce the chance that a good hospital is unfairly labeled. Hospitals vary widely in size, for example, and in small hospitals a few aberrant cases can significantly distort the mortality rate. So, in addition to the adjusted mortality rate, “confidence intervals” are reported to illustrate the uncertainty that stems from these differences in size. Only when these confidence intervals are taken into account are performance comparisons made between hospitals.
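
As a hypothetical illustration of why size matters, a standard Wilson score interval around the same 6 percent observed rate is wide at a small hospital and narrow at a large one. The counts below are invented, and report cards use a variety of interval methods:

```python
# Hypothetical sketch: the same 6 percent observed mortality rate is
# far less certain at a small hospital (Wilson score interval).
import math

def wilson_ci(deaths, patients, z=1.96):
    p = deaths / patients
    denom = 1 + z**2 / patients
    center = (p + z**2 / (2 * patients)) / denom
    half = (z / denom) * math.sqrt(
        p * (1 - p) / patients + z**2 / (4 * patients**2))
    return center - half, center + half

print(wilson_ci(3, 50))      # small hospital: about (0.02, 0.16)
print(wilson_ci(120, 2000))  # large hospital: about (0.05, 0.07)
```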

Both the federal and state governments have acknowledged that we must take the mix of patients into account to fairly compare hospitals—so why not do the same in education?

Though the distance between the federal departments of Health and Human Services and of Education represents only a brief jaunt down Washington’s C Street, these agencies’ ideas of how to measure organizational performance are coming out of different worlds altogether. Consider the language used by the Health and Human Services Department to explain the rationale behind risk adjustment:

“The characteristics that Medicare patients bring with them when they arrive at a hospital with a heart attack or heart failure are not under the control of the hospital. However, some patient characteristics may make death more likely (increase the ‘risk’ of death), no matter where the patient is treated or how good the care is. … Therefore, when mortality rates are calculated for each hospital for a 12-month period, they are adjusted based on the unique mix of patients that hospital treated.”

What’s ironic is that the federal government endorses entirely different ideas about measuring performance in education. In medicine, governments recognize that hospitals should not be blamed for patients’ characteristics that they do not control. When educators make this point, they are censured for their “soft bigotry of low expectations.” In medicine, the government acknowledges that no matter how good the care is, some patients are more likely to die. In education, there is a mandate that schools should make up the difference, irrespective of students’ risk factors.

Perhaps most of us could agree that if the sole goal of accountability systems is to compare organizations, risk adjustment is the most fair and accurate approach. When some patients in a hospital die, we rarely claim that the hospital is failing. By the same token, the fact that some students in a school are failing does not mean that the school is failing. Sound public policy should be able to distinguish these two conditions.

Nonetheless, even if a school is not performing worse than expected given the unique risks of the students it is serving, there is still a compelling public interest in remedying the situation. Accountability systems do not seek only to measure organizations’ performance. They have the additional goal of spurring schools to improve the quality of education they provide and of leveling the playing field, especially for poor and minority children. We have rightly seen tremendous political opposition to the idea of lowering expectations based on students’ social backgrounds.

By separating the issue of measurement from the issue of social goals, risk adjustment has the potential to break the stalemate in this argument. Risk adjustment could make accountability a two-stage process, in which we first ask the question, “Is the school responsible?” By all means, when schools are performing worse than they should be, given the population they are serving, holding educators’ feet to the fire is a reasonable response.

But when a school is not a low-performing outlier, it makes no sense to castigate educators and hope for the best. We’re simply not going to get the best results for our kids that way. Instead, funding for a robust set of interventions should be made available to these schools. It is the task of education research to figure out which interventions give us the biggest bang for our buck. The key point, however, is that we cannot expect to get the same results for disadvantaged kids on the cheap.
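
The two-stage logic of the preceding paragraphs can be summarized in a short sketch; the thresholds and labels here are invented for illustration:

```python
# Hypothetical sketch of the two-stage logic: first ask whether the
# school is responsible, then choose the response.
def accountability_response(observed_failure, expected_upper):
    if observed_failure > expected_upper:
        # Worse than its student population predicts: the school
        # itself is answerable.
        return "hold educators accountable"
    # Not an outlier: the gap reflects out-of-school factors, so the
    # remedy is targeted investment, not blame.
    return "fund interventions"

print(accountability_response(0.45, 0.40))  # hold educators accountable
print(accountability_response(0.35, 0.40))  # fund interventions
```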

Which out-of-school factors should be part of the risk-adjustment scheme? Any factors out of the school’s control that could affect student outcomes. Some of these might include prior achievement, poverty, age, race and ethnicity, gender, attendance, type of disability, residential mobility, and whether the student is an English-language learner. Attention would need to be paid to the issue of “upcoding,” through which a school could create the appearance of a more disadvantaged population than truly existed. In medicine, careful monitoring and periodic audits have gone a long way to keep upcoding in check.
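
One simple audit check along these lines would compare a school’s reported share of high-risk students against a random sample of audited records. The following sketch, with invented numbers, flags reports that exceed the audit estimate by more than its sampling error:

```python
# Hypothetical upcoding check: compare a school's reported share of
# students in a risk category with the share found in a random audit.
from math import sqrt

def looks_upcoded(reported_share, audit_hits, audit_n, z=1.96):
    audit_share = audit_hits / audit_n
    sampling_error = z * sqrt(audit_share * (1 - audit_share) / audit_n)
    return reported_share > audit_share + sampling_error

# A school reports 55 percent of students in a category; an audit of
# 100 records finds only 40 qualify -> the report looks inflated.
print(looks_upcoded(0.55, 40, 100))  # True
```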

The most common objection to risk adjustment is that it lets educators off the hook. This is far from the truth. Risk adjustment places blame where it belongs. In some cases, this will be at the feet of schools and teachers. However, risk adjustment would likely reveal that, in large measure, we can blame our city, state, and federal governments for failing to make sufficient investments in public education to produce the results they demand.

If we close the measurement gap, we can begin a radically different conversation about what investments in public education are necessary to give disadvantaged kids a fair shot. The likely educational dividends for these children make risk adjustment a risk worth taking.

A version of this article appeared in the October 31, 2007 edition of Education Week as Closing the Measurement Gap

