Education Opinion

The Rhetoric of Reform: Does Research Count?

By skoolboy — July 09, 2008

“Better schools. Higher scores. And satisfied parents. That’s the record of the D.C. Opportunity Scholarship Program.”

Thus begins Secretary of Education Margaret Spellings’ column in yesterday’s Washington Post. In this piece, she seeks to rally public support to renew the DC Opportunity Scholarship Program (OSP), which provides scholarships of up to $7,500 toward the costs of attending a participating private school, including tuition, fees, and transportation. The authorizing legislation stipulated that priority for scholarships was to be given first to students attending schools judged in need of improvement (SINI) under NCLB standards.

Last month, the Institute of Education Sciences, the research arm of the U.S. Department of Education, which Spellings heads, released the results of the Congressionally mandated evaluation of the OSP, which reports impacts after two years. As the first federally funded private school voucher program in the U.S., the OSP is a political football, and this evaluation report and its predecessors have been pored over by policy wonks across the land. The statute that authorized the OSP mandated that it be evaluated in terms of its impact on student test scores and school safety, as well as a more ambiguous criterion of “success,” which was operationalized in the study as parents’ and students’ satisfaction with their schools. The evaluation used a randomized controlled trial (RCT) to assess the impact of the OSP.

The executive summary of the report tells the tale, in unambiguous terms. (a) After two years, there was no effect of the OSP on reading or math test scores, either for students who were offered a scholarship or for those who actually used one. (b) If we look at 10 different subgroups of students—girls or boys, students attending SINI or non-SINI schools at the time of application, elementary or high school students, those from application cohort 1 or cohort 2, or students performing relatively higher or lower at the start of the study—there were no statistically significant effects of participating in the OSP on math for any subgroup, and for reading, three subgroups (students attending non-SINI schools at the time of application, relatively high-performing students, and students from cohort 1) might have done better than their nonparticipating peers. But even here, the evaluators caution that the statistical significance of these effects did not hold up when conventional adjustments for multiple comparisons were made. In other words, these subgroup effects might be due to chance, given how many comparisons were being made at the same time. Notably, the subgroup specifically identified in the legislation—students who had attended a SINI public school under NCLB—did not do better in either reading or math.
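The evaluators’ caution about multiple comparisons is worth making concrete. A minimal sketch of the idea, using a Bonferroni adjustment and invented p-values (the numbers below are hypothetical for illustration, not figures from the OSP report): when 10 subgroup tests are run at once, each test must clear a stricter threshold, and a result that looks significant at the conventional 0.05 level may no longer qualify.

```python
# Illustrative only: these p-values are made up, not taken from the OSP report.
# Running many subgroup tests inflates the chance of at least one false
# positive, so a Bonferroni adjustment divides the significance threshold
# by the number of tests performed.

alpha = 0.05
n_tests = 10  # ten subgroups were examined in the evaluation

# Hypothetical nominal p-values for the three reading subgroups
p_values = {
    "non-SINI at application": 0.03,
    "higher-performing": 0.04,
    "cohort 1": 0.02,
}

adjusted_alpha = alpha / n_tests  # 0.05 / 10 = 0.005

for subgroup, p in p_values.items():
    naive = p < alpha            # looks significant in isolation
    adjusted = p < adjusted_alpha  # fails the stricter familywise bar
    print(f"{subgroup}: p={p:.2f}, unadjusted={naive}, Bonferroni={adjusted}")

# Why adjust at all: with 10 independent tests each run at alpha=0.05,
# the chance of at least one spurious "significant" result is about 40%.
familywise_error = 1 - (1 - alpha) ** n_tests
print(f"chance of >=1 false positive across {n_tests} tests: "
      f"{familywise_error:.2f}")
```

The point is not that these particular subgroups showed nothing, but that nominally significant results from a batch of simultaneous tests are weak evidence unless they survive this kind of correction.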

skoolboy isn’t crazy about using public funds to support private schools, but he’s a big supporter of using public funds to support the education of children in D.C., who historically have been among the lowest performers in the nation. Congress authorized this program, it’s survived legal scrutiny, and it’s deserving of a fair shake. But distorting the results of an evaluation doesn’t serve the public good. If Ms. Spellings wants to argue that the program should be renewed by Congress because parents are more satisfied with their child’s school, or because they are less likely to report serious concerns about school danger, she’s welcome to make that argument. Those are good outcomes, and some might argue that they’re ample justification for renewing the program. (Others might point out that students who received scholarships did not report higher levels of satisfaction with their school, or better school safety.) Or, alternatively, one could argue that the program needs more time to mature in order to be successful. But let’s not kid ourselves, Madame Secretary: the evidence on the academic success of the D.C. Opportunity Scholarship program—measured on your preferred metric, scores on standardized reading and math tests—is far too weak to make a persuasive case. Misrepresenting the evidence does honor neither to education research nor to education policy.

The opinions expressed in eduwonkette are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.