Truthiness in Education
At a time when America’s education policymakers have nominally embraced the idea of tying school reform to “scientifically based research,” many of the nation’s most influential reports are little more than junk science. A hodgepodge of private “think tanks” at both the state and national levels wield significant and very often undeserved influence in policy discussions by cranking out an array of well-funded and slickly produced—yet ideologically driven—research.
Often written by people with little discernible expertise and invariably not subjected to peer review, these reports consistently end with a findings section that supports the ideological preferences of the research sponsor.
Moreover, the research offered by many private think tanks commonly violates the standard canons of social science inquiry. The American Psychological Association’s research standards, for example, lead to the following questions about original research:
• Is the research question significant, and is the work original and important?
• Have the instruments used in the research been demonstrated to have satisfactory reliability and validity?
• Are the outcome measures clearly related to the variables with which the investigation is concerned?
• Does the research design fully and unambiguously test the hypothesis?
• Are the participants representative of the population to which generalizations are made?
• Is the research at a stage advanced enough to make the publication of results meaningful?
Most think tank publications are not original research. A large number are what might best be called “policy briefs.” Quality policy briefs, however, generally include a comprehensive consideration of previously published research and synthesize what is known in order to draw out policy implications. Accordingly, the usefulness of such briefs to guide sound policy is strongly related to the adequacy of their reviews of the literature. Sadly, the influential think tank reports we have been reading rarely provide either a comprehensive review of the literature or a defensible interpretation of the findings of whatever scant research is cited. They tend to opt instead for highly selective reviews of the literature and a necessarily skewed reading of the insights offered by that research.
In addition to these policy briefs, think tanks publish a fair number of reports that offer analyses of data. Research design and methodology questions thus come to the fore. Valid research findings require that the data used are sound, and that the methods used to do the analyses are appropriate and properly executed. As we note below, these reports are at least as problematic as the policy briefs, with problems ranging from severely flawed data to inappropriate methods, to broad conclusions not supported by the evidence provided.
Despite their lack of scientific value and, in some instances, their lack of logic or coherence, think tank reports are widely disseminated through mainstream media outlets. To a remarkable degree, they shape policy debates in state legislatures and Congress, as well as news coverage of key education topics in the press (including Education Week). Their findings become part of the conventional wisdom without ever having been subject to expert review. This may be good partisan politics, but it is terrible social science, and it harms efforts to improve the nation's schools.
Unfortunately, academic experts rarely review or criticize think tank reports. As a rule, social scientists consider most of these reports to be of little value and best ignored. The primary purpose of social science review and criticism is to further a deliberative process in which knowledge is advanced, methods improved, and conclusions tested. Most think tank reports are, from this perspective, literally a waste of time. Yet for millions of children and their parents, teachers, and communities, these reports are of vital importance, since they are repeatedly used by policymakers to shape the nature and scope of available educational opportunities.
Because the stakes for America’s children are so high, the two academic centers with which we’re associated—the Education Policy Research Unit at Arizona State University and the Education and the Public Interest Center at the University of Colorado at Boulder—together launched the Think Tank Review Project (www.thinktankreview.org) to provide expert reviews of think tank reports. The reviews, written for a general audience, assess reports in much the same fashion as would a reviewer for a scholarly, peer-reviewed journal.
In 2006, the first year of the project, 13 think tank reports were reviewed. The results are disturbing. Only one, or perhaps two, could be considered to have even minimally passed expert muster. Moreover, the same flaws emerged repeatedly, over different reports reviewed by different scholars. For instance, empirical analyses have been shockingly shoddy, and the findings, conclusions, and recommendations have consistently extended beyond those analyses. That is to say, even setting aside the flaws in the analyses, the conclusions reached by the authors have been unsupported by the analyses. Also, as already noted, the ideological beliefs of the authors (and their think tanks) appear to have distorted the methods used, shaped the literature reviewed, and determined the results and recommendations of the reports.
As the Think Tank Review Project enters its second year, we thought it would be useful to look back at 2006 and highlight some of the worst work reviewed. Toward that end, we have created the Bunkum Awards in Education.
This year’s grand prize, the Caveat Emptor Award, goes to the Lexington Institute, headquartered in Arlington, Va., for a report that claims to demonstrate the success of California’s Proposition 227, an anti-bilingual-education ballot initiative passed in 1998 that imposes English-only teaching methods in most classrooms. The findings of the Lexington report, which is titled “Immersion Not Submersion, Vol. III,” rest on a smorgasbord of bad data, severely flawed methodology, and a willful disregard of a large body of conflicting research evidence. In fact, our reviewer found that if the authors had correctly identified the teaching used by the districts studied, they would have reached the opposite result: that the English-only districts were doing the worst.
The first runner-up, to which we give the Truthiness in Education Award, is the Washington-based Thomas B. Fordham Institute, an arm of the like-named foundation. Fordham issued two reviewed reports, “Trends in Charter School Authorizing” and “The State of State Standards 2006.” For each, Fordham authors collected data, analyzed it, and then presented conclusions that their own data and analyses flatly contradicted. The reviewer of “The State of State Standards 2006” described it as “selectively data-mined” and “seriously lacking in methodological rigor.” Yet a New York Times editorial on Dec. 31, 2006, uncritically cited the report for the proposition that “states that commit to rigorous standards and accountability systems can make progress” in closing the gap between “low-income students [and] their affluent counterparts.”
A policy center at Harvard University and the New York City-based Manhattan Institute share the second-runner-up honor, the Damned Lies Award for Statistical Subterfuge. The Program for Education Policy and Governance at Harvard won for a report called “On the Public-Private School Achievement Debate,” while the Manhattan Institute was recognized for two reports it issued claiming that Florida’s policy of holding students back (flunking them) improved those students’ later performance: “Getting Ahead by Staying Behind: An Evaluation of Florida’s Program to End Social Promotion” and “Getting Farther Ahead by Staying Behind: A Second-Year Evaluation of Florida’s Policy to End Social Promotion.”
All three of these reports demonstrated a flair for the resolute use of statistics to achieve a desired outcome. The Harvard report, however, deserves special recognition. Dissatisfied with the work of other researchers, who found private schools to have worse academic results than public schools when educating comparable students, the authors of the report offered an alternative model using, at best, tangentially related statistics that failed to factor in the student demographic differences that were supposedly at the core of the analysis.
Honorable-mention awardees include the Cato Institute, based in Washington, for a report about teacher quality, “Giving Kids the Chaff: How to Find and Keep the Teachers We Need.” After sensibly describing the importance of high-quality teachers, its authors take a leap of faith, ungrounded in their own research or the larger body of existing evidence, to conclude that choice and vouchers offer the best strategy for recruiting and retaining high-quality teachers.
American politicians often tell us how much they value our schools. Yet many continue to base important decisions on this sort of shoddy research. Imagine if doctors or NASA engineers based their decisions on studies so poor that they could never survive the scrutiny of peer review by experts in the field.
We do not consider the Think Tank Review Project reviews to be the final word, nor is our goal to prevent think tanks’ participation in the public dialogue over school reform. That dialogue is, in fact, what we hope to improve and encourage. In our view, the best ideas come about through rigorous critique and debate, and ideas presented in think tank reports should be part of the process.
Vol. 26, Issue 25, Pages 32, 44. Published in print: February 28, 2007, as "Truthiness in Education."