“For want of a nail, a shoe was lost. For want of a shoe, a horse was lost. For want of a horse, a leader was lost. For want of a leader, a battle was lost. For want of a battle, a kingdom was lost.”
Such is the homily recited to alert all to the significance of detail. And it is neglect of detail that puts the federal Institute of Education Sciences at risk of losing an important battle it has, until now, been winning.
Education research in this country is locked in a crucial conflict between those who think hard and some who think hardly at all. Grover J. “Russ” Whitehurst, the director of the institute, has, with his staff, constructed a powerful battle strategy and has been winning stunning victories in this war. In relatively few years, the institute—a reorganization of the research arm of the U.S. Department of Education—has reshaped much of the conscious paradigm regarding American education research. The Institute of Education Sciences, along with the National Institute of Child Health and Human Development and the National Science Foundation, is supporting research initiatives using rigorous randomized experiments to evaluate educational products and practices. The eventual goal is to provide schools and instruction with a far deeper foundation of scientific evidence.
Those who advocate “critical theory,” “social justice analysis,” “deconstructionism,” endless exploratory and ungeneralizable case studies, thick but otherwise unrepeatable “description,” and various forms of politically correct analysis have been overcome in waves by Whitehurst and company’s intellectual volleys, intrepid criticisms of fatuous inquiry and mindlessness, and insistence upon research rigor.
For a measure of the progress Whitehurst and his Education Department colleagues have made in reshaping research, one need only read requests for proposals now emanating from the Institute of Education Sciences and compare them with those issued in other eras, when we knew the federal education research agency as the National Institute of Education or the Office of Educational Research and Improvement. The quality of proposal submissions and the capacity of those submitting proposals are also like night and day, the bright part of the cycle favoring today’s institute.
The imposition of a hard-sciences format on education research has yet to pay off in practical breakthroughs regarding reading and mathematics instruction, school organization, or important policy issues such as those surrounding class size or performance pay for teachers. But one cannot reasonably expect fast turnarounds on these fundamental dimensions. The issues are too complicated, and the institute has to fund what is submitted to it, not necessarily what it would prefer to be submitted. More progress could be made in the upcoming competitive bidding of the Education Department’s technical-assistance centers and regional educational laboratories.
Still, despite a record of strategic success that few bureaucratic entities ever see, the Institute of Education Sciences’ mission, along with the scientific research paradigm itself, remains at risk. Whitehurst and crew deserve kudos for progress, but the time has come to ensure that the horse’s shoes are firmly attached. Success necessitates not only doing the right things, but also doing things right. Here are illustrative problem areas for the Institute of Education Sciences that need major managerial attention:
Recent competitive bidding on eight research centers has not gone well. Topics around which competitions have been organized appear to be a crazy quilt of politically determined priorities rather than a logical pattern of intellectually comprehensible inquiries, the answers to which would be capable of advancing education effectiveness.
For example, instead of a concern for enduring priorities such as effective reading and mathematics instruction, teaching and teachers, performance incentives, and accountability, the institute has requested bids on peripheral or amorphous topics, such as rural education, higher education, and state and local policy. This landscape is not a total waste. Centers on significant issues such as measurement have also been established through bidding. Still, progress and prospects for more progress are uneven.
A second problem is management of center competitions. Of the eight advertised competitions for research centers, three have apparently ended in no result. Presumably, none of the dozens of submissions met minimum standards for a center on higher education, early-childhood education, or state and local policy. While such low quality is possible, it is unlikely; and the institute has jeopardized its credibility by not being forthcoming about problems.
Any scientific competition that elicits such a huge swath of submissions, occupies literally hundreds of thousands of respondent hours, and then results in a 37.5 percent default rate is flawed. It was either ill conceived, asking the wrong questions, or ill managed, judging submissions poorly. Evidence supports both suppositions.
Education research resources are now remarkably thin in the United States. Large foundations, with the exception of Spencer, Smith Richardson, and a very few others, have virtually forsaken the field and instead pursue various forms of advocacy, usually without being encumbered by empirical evidence. Therefore, the loss of approximately $30 million in federal money that could have flowed from fully funded competitions is a large one, both in percentage terms and in research opportunities.
There are microlevel problems too. Center-proposal reviews have been distressingly uneven. Some are exemplary in their rigor and adherence to the canons of scientific peer judgment. Others are infested with individual reviewers’ values, naive observations or ignorance about education, and gratuitous comments.
One hopes upcoming competitions for research endeavors such as technical-assistance centers and regional laboratories will be better managed.
National Center for Education Statistics
Collection and distribution of statistical information further illustrates the present-day need to pay attention to management detail. The NCES has made progress over the past quarter-century, and is now far closer to being a credible and helpful federal statistical agency. Several past commissioners launched the modern agency onto a productive path and, sometimes, bravely defended it against efforts at inappropriate political intrusion. Cognoscenti will remember the NCES “reading anomaly” and Clinton administration efforts to influence reported findings.
Despite an upward trajectory, the NCES lately has been treated as an Education Department stepchild. It has been deprived of stable leadership, subjected to inappropriate politicization, and starved for adequate resources. At a time when education matters more than ever before in the nation’s history, NCES data regarding important issues such as education resources and spending levels are routinely issued two or three years after the fact. The NCES appears, under such circumstances, to be irrelevant to the policy debate.
Here is an example of a particularly egregious situation. Most readers will be surprised to learn that the NCES does not routinely collect or report teacher-salary information in a manner generalizable to the nation. The major suppliers of teacher-salary data are teachers’ unions, the National Education Association and the American Federation of Teachers. Each of these organizations is generous in sharing its data, and it should be mentioned that they strive to be accurate in their reporting. However, it is unconscionable that the nation’s education policy analysts and researchers must routinely rely on remuneration information supplied by interested parties: school employees themselves.
The list of neglected data-collection areas could be extended, but the point would be the same. The NCES needs the same kind of practical attention from Director Whitehurst and his staff as they have given to reconceiving America’s education research paradigm.
What should be done? Here is a beginning set of suggestions:
• Come clean regarding completely unfunded competitions. Acknowledge what went wrong that can be corrected.
• If centers are “recompeted,” ensure that new themes are of enduring consequence.
• Strive for consistency, objectivity, and professionalism in proposal reviews.
• Provide the NCES with stable leadership, political insulation, and proper staffing.
• Rearrange NCES priorities to ensure full coverage of policy-relevant topics and timely reporting.
It should be said on Whitehurst’s behalf that he, too, operates in a less-than-perfect world. He has inherited much of his staff, and his budget is seldom what it should be. Still, one would not like to see the remarkable progress that has already been made on major strategic issues lost to inattention to important practical details.
Good luck, Mr. Whitehurst.