School & District Management Opinion

Innovative Reforms Require Innovative Scorekeeping

By Lisbeth B. Schorr — August 25, 2009

President Barack Obama has made clear that we must systematically identify “what works,” both for budgetary reasons and to ensure that public money supports effective social programs and policies. The president and his budget chief recognize how tricky it is to make that determination. In June, Office of Management and Budget Director Peter Orszag released a statement describing how the administration will make sure that spending decisions are based not only on good intentions but also on strong evidence.

Serious social reformers today agree that rigorous efforts to determine “what works” are essential. But, depending on what the administration considers “strong evidence,” these efforts risk sabotaging or marginalizing some of the most innovative attempts to solve intractable social problems. I worry that, in defining what constitutes “the best available evidence” of effectiveness, the OMB and federal agencies will follow the constricted approach of the Coalition for Evidence-Based Policy and the U.S. Department of Education’s What Works Clearinghouse. These and similar organizations claim scientific rigor by insisting that public and philanthropic support go only to programs shown to be evidence-based through experimental evaluation methods, preferably involving random assignment of participants to experimental and control groups. The implication is that this methodology can determine definitively and objectively, uncontaminated by human judgment, whether any intervention—be it a pill, a model program, or an ambitious institutional change—produces a different outcome than would otherwise occur.

Unfortunately, no single, circumscribed program can turn things around in an entire community or for a whole population. Nor can complex social programs and policies be tested like new drugs. The interventions that turn around inner-city schools, strengthen families, and rebuild neighborhoods are not stable chemicals manufactured and administered in standardized doses. They are sprawling efforts with multiple components, some of which may be proven experimentally, but many that can’t be because they require midcourse corrections and adaptations to fit local circumstances.

Reformers in virtually every domain—from education to human services and social policy—have been learning that the most promising strategies are likely to be complex and highly dependent on their social, physical, and policy context. Very few efforts to improve education for at-risk students, prevent child abuse, increase labor-market participation, or reduce teenage pregnancy or homelessness succeed by applying a single, bounded intervention. They depend on community capacity to take elements that have worked somewhere already, adapt them, and reconfigure them with other strategies emerging from research, experience, and theory to make a coherent whole.

The search for silver bullets is giving way to an understanding that, to make inroads on big social problems, reformers must mobilize multiple, interacting strategies that take account not only of individual needs but also of the power of context. President Obama has urged that we stop treating unemployment, violence, failing schools, and broken homes in isolation and put together what works “to heal that entire community.” That’s the thinking behind the president’s proposed Promise Neighborhoods initiative, inspired by the accomplishments of the Harlem Children’s Zone.

What is remarkable about the collection of activities that the Harlem program comprises, and what has captured the attention of funders, reformers, and politicians, is that they build on one another; each is shaped to add to and multiply the impact of the others. Theory and experience suggest that the long-term results of these coherent efforts will ultimately be a critical mass of engaged, nurturing families, well-educated students, community values that support education and responsibility, and an infrastructure to sustain results that cannot be achieved by isolated programs aimed only at individuals.

The trouble is that scaling up such collections of reforms is hard, and determining what, exactly, works is even harder.

In assessing the success of complex, interactive efforts to improve outcomes, experimental methods cannot be the sole arbiter of effectiveness.

As a family-support program in King County, Wash., has discovered, the “rigid, narrow accountability” that funders demand forces programs to “keep doing only what worked yesterday, instead of what works today.” In an internal evaluation, it found that the very qualities that make the program effective are the qualities that make measurement so difficult.

The obstacles to demonstrating effectiveness, which become even more formidable when reforms move beyond individual programs, are best overcome with a clear focus on results.

In the 1990s, the state of Vermont established state-local partnerships so people in all domains could do everything likely to contribute to school readiness. Their focus on results encouraged innovation and local problem-solving and replaced rigid regulation of inputs with rigorous accountability for accomplishments.

Vermont leaders knew they would never be able to prove that each piece of what the partnerships did was effective, but they were able to show that the entire strategy dramatically improved lives. Trend lines that had shown increasing damage in the form of child abuse, infant mortality, school failure, and teenage pregnancy began to turn around and move in the right direction soon after the partnerships instituted policies targeting those outcomes.

The evidence came from timing (the curves began to turn in communities where the interventions were initially implemented, and then in the whole state as the interventions went statewide); from theoretical connections established by research (for example, that high-quality supports to young families can reduce child abuse and changed community norms can reduce teenage pregnancy); and from the accumulation of data (including practitioner observations and official data from hospitals, health departments, and schools).

Had the Vermont partnerships been limited to “proven” interventions, or had they tried to set up interventions as randomized experiments, they would have had neither the money nor the flexibility to provide the services that made such a remarkable difference for the state’s children and families.

When an orientation toward results pervades planning, management, and implementation of new initiatives, it is easier to meet the challenges of accountability and evaluation. Evaluation becomes a way to support rigorous, contemporaneous collection of data on progress toward clearly defined goals, rather than an after-the-fact assessment of what succeeded (or didn’t).

Developers of complex social reforms aren’t the only ones who find that experimental methods are not always the best or even most “scientific” way to obtain credible evidence. Calls to re-examine what constitutes credible evidence come even from medicine. The Roundtable on Evidence-Based Medicine of the federal Institute of Medicine recommends that randomized clinical trials no longer be considered the gold standard, as they are useful only in limited circumstances, such as for a narrow range of illnesses and for patients who do not have multiple co-occurring conditions.

Many education researchers have reached a similar conclusion. In the American Educational Research Association’s Handbook of Education Policy Research, David L. Weimer suggests that “the typical evaluation model focuses attention on one or a small number of policy impacts with unambiguous desirability, and only assesses policies already in place.” He points out that truly novel ideas cannot be assessed within this model because they have yet to produce data that can be used to measure impacts.

Policymakers radically diminish the potential of reforms if they allow themselves to be bullied into accepting impoverished definitions of credible evidence. Just as the Obama administration is on the cutting edge of reform by recognizing the importance of complexity in many arenas of social policy, so must it encourage innovation in efforts to determine “what works.”

A version of this article appeared in the August 26, 2009 edition of Education Week
