Fixing Education Research And Statistics (Again)

Washington's education research effort is sorely troubled. Newly passed legislation holds out hope for major reform.

With little fanfare and scant public awareness, the House Subcommittee on Early Childhood, Youth, and Families did something remarkable some weeks back: By a unanimous, bipartisan vote, it adopted HR 4875, the proposed Scientifically Based Education Research, Statistics, Evaluation, and Information Act of 2000. ("House Plan Would Create Research 'Academy,'" Aug. 2, 2000.) If this measure survives the rest of the legislative gantlet in anything like its present form, it will work a long-overdue transformation in Washington's handling of education research, statistics, program evaluation, and assessment. For even pointing the way toward such a major reform, subcommittee Chairman Michael N. Castle, R-Del., and his colleagues deserve plaudits.

One sign that they're heading in a good direction: The American Educational Research Association is beside itself with anxiety that these changes might actually come to pass. Another sign: The mandarins of program evaluation at the U.S. Department of Education are apoplectic. (A lot of other education groups have signaled their support for the bill, however, which gives rise to the suspicion that it still may not go far enough!)

We've known for ages that Washington's education research effort is sorely troubled. Ever since the National Institute of Education was created some 28 years ago, this domain of federal activity—now housed in the U.S. Department of Education's office of educational research and improvement—has been beset by woes of every sort: shoddy work on trivial topics; research bent to conform with political imperatives and policy preferences; a skimpy budget that gets gobbled up by greedy, ineradicable "labs and centers" and other porky projects; avoidance of promising but touchy topics; studies that seldom follow the norms of "real" science (or even social science); research that is mostly inconclusive and, when conclusive, is weakly disseminated and widely ignored; terminal confusion about where research ends and "school improvement" begins; and an ever-shifting set of priorities presided over by an ever-changing cast of directors, assistant secretaries, and policy boards, most of them firmly under the thumb of the public school establishment.

Most serious policymakers and education reformers have simply come to ignore OERI-sponsored research. This is not new and, while discouraging to those who labor in this vineyard, is not fatal. Sure, it's a waste of money and opportunity. But the waste is as modest as the budget, certainly modest compared with the agony of trying yet again to set matters right. Besides, much sound education research is being done by other public and private sponsors. (Consider, for example, the superb work on reading at the National Institute of Child Health and Human Development.)

In recent years, however, some important cousins of research have also slid into trouble. Vexing problems now beset education statistics, program evaluation, and the National Assessment of Educational Progress. Rescuing them is worth the effort. And maybe the federal research effort can also be boosted along the way.

Washington's oldest and most seminal mission in all of education, dating back to the Civil War, is the collection and dissemination of statistics. That key role is entrusted primarily to the National Center for Education Statistics, also housed within the OERI. This once-sleepy backwater among government statistics agencies has grown conspicuously more important as energized reformers and determined policymakers demand more and better education data.

Much of its work is still sound. But the NCES now suffers from a deteriorating professional staff, ever-tighter supervision by the Education Department's political types, mounting pressure to "spin" its findings to accord with White House policy preferences, a lot of serious data gaps, and systems that are too slow and old-fashioned to keep pace with today's appetite for timely statistics. A particularly damaging blow fell in May 1999, when the center's well-regarded commissioner, Pascal D. Forgione Jr., was forced out by the White House. ("Renomination Blocked, Forgione To Depart," May 26, 1999.) The place has had "acting" leadership ever since. Today its very integrity is at risk.

Consider, for example, how its annual "back to school" press release, once an impeccably neutral source of straight facts (enrollments, spending, and the like) projected over the new school year, has been turned into a platform for advocating the administration's current policy passions—this year, school construction. Much the same fate has befallen the annual Condition of Education volume.

Integrity, alas, has long since vanished from program evaluation, even as this activity has become steadily more important to a Congress keen to know what is and isn't working among the hundreds of federal education programs and tens of billions (and counting) of dollars being spent upon them.

Here the current structure contains a built-in conflict of interest. The government's main program-evaluation unit is the same as the U.S. secretary of education's principal policy shop. Called the "planning and evaluation service," it was brought under the direct control of former Undersecretary Marshall Smith, one of the Clinton administration's most formidable policy wonks. But these problems predate Mr. Smith. It's simply unrealistic for Congress to expect impartial program evaluations from the same office that is helping the White House strategize about how to impose its policy preferences on those programs, how to manipulate public opinion about them, and how to press the Congress to go along.

Yet dozens of evaluations of major federal programs (for example, Title I) have been entrusted to this office and to panels, experts, and consultants chosen by it. In the past few years, as Maris Vinovskis and others have shown, several occasions have arisen when the evaluation office was, if not exactly cooking the books, certainly rushing out those findings that accorded with the administration's proposals and dragging its heels on data that contradicted those proposals. The upshot: Members of Congress and their staffs have come to believe that they can't trust the department to evaluate its own programs candidly and objectively. As any 12-year-old might say, "Duhhhhhhh."

Along with the troubles besetting statistics and program evaluation, the third big problem that the Castle bill seeks to solve is the subjugation of the National Assessment of Educational Progress to various political agendas. Though its policies are supposed to be set by the independent National Assessment Governing Board, numerous decisions about NAEP's actual operations, methods, and data reporting are in fact made by other offices at the department, and the assessment itself is run by the NCES. The potential for conflict is immense.

That these problems have been kept within bounds in the past few years is largely due to the fact that Secretary of Education Richard W. Riley is himself an alumnus of the governing board and a friend of the current NAGB chairman, Mark Musick. But if a more manipulative or NAEP-wary person were to occupy the secretary's chair, the present set-up would be a formula for compromising the credibility of the country's most valued gauge of K-12 student achievement.

The proposed Scientifically Based Education Research, Statistics, Evaluation, and Information Act of 2000 tackles these three problems and a bunch of others. It makes two sweeping reforms in today's vexed arrangement, and one worthy secondary change.

The first big improvement is structural. All functions currently contained in the OERI, plus program evaluation and a few of the Education Department's miscellaneous activities (such as its library), are swept into a new agency with the clumsy name of National Academy for Education Research, Statistics, Evaluation, and Information (NAERSEI, which sounds to me like a depilatory, but let's not dwell on terminology). To gain the assent of Democrats on his subcommittee, Chairman Castle amended his original proposal for a completely separate agency and agreed to keep NAERSEI nominally within the Education Department. This has the potential for ambiguity, to be sure, but the bill says that NAERSEI's director (a presidential appointee who is supposed to possess specific qualifications and to enjoy a six-year term) will have charge of "all functions for carrying out" the bill's many provisions. That sounds like it's supposed to mean autonomy.

Within NAERSEI would be separate centers (for education research, program evaluation, and statistics), each with its own commissioner (appointed by the president with Senate confirmation). Sundry boards and committees at every level of this structure, while cumbersome, are meant to provide sage policy counsel, set durable research priorities (rather than have Congress forever insisting on its own pet topics and pork-barrel projects), and help assure the independence and integrity of the programs. Also within the proposed academy, the National Assessment Governing Board would gain full control of all aspects of NAEP, making the national assessment fully independent of political and bureaucratic control for the first time in its history. And we find several hopeful efforts at information dissemination and clearinghouse activities, as well as technical assistance to educators around the country. (This part is a mixed blessing. NAERSEI would undeniably win more friends and dollars if it's seen as useful to practitioners and parents. But if it slipped from objective, truth-seeking "audit" agency into "school improvement" program, it would be whipsawed by the usual disputes and interests associated with education reform in America today.)

The second big change wrought by HR 4875 would be substantive, not structural. The bill sets strict criteria for what constitutes sound research and program evaluation, and says that only projects satisfying those criteria could be funded. The phrase "scientifically based" recurs frequently. There's a strong push for bona fide experiments, complete with control groups, which are normal in hard science and biomedical research but staunchly resisted by education researchers enamored of what is politely termed "qualitative methods." Various safeguards are put in place to ensure that NAERSEI's constituent centers wouldn't fund or engage in projects that failed to satisfy those norms—and existing university-based research centers would be given just two years to prove themselves or lose their privileged access to the federal treasury.

The bill's worthy secondary change tackles the infamous regional labs, which have been around practically forever and have long since outlived whatever value President Lyndon B. Johnson ascribed to such a structure 35 years ago. Yet they've clung to ever-larger budgets with leech-like tenacity, meanwhile giving education research a bad odor and the OERI a very mixed profile on Capitol Hill.

The subcommittee was heavily lobbied not to cut the labs off altogether. So it created a new, slightly gimmicky way to determine their future: by entrusting federal technical-assistance dollars in block grants to boards established by state governors in 10 regions that would consolidate several of the inconsistent geographic clusters that today define the Education Department's "regional" programs. Each board would then decide how to spend its technical-assistance dollars and where to purchase the services it desired. A regional board might choose an extant lab or opt for something different. If this worked, these politically freighted decisions would at least be decentralized rather than focused entirely on appropriations committees in Washington.

There's much more in this 116-page bill, even as a lot of issues remain unresolved and questions unanswered. Some fine-tuning is still needed. (The process for appointing NAEP governing board members, for example, is badly flawed, more apt to yield interest-group representatives than big-picture education statesmen.) Scuffles lie ahead over funding levels, research priorities, the labs' status, and more. Only a few optimists think Congress will finish the job this year. Still, the bill's progress already sets the stage for serious attention in 2001, by which time we will also have a new administration downtown.

Chairman Castle and his colleagues (especially ranking Democrat Dale Kildee of Michigan) deserve kudos for taking on this complicated and mostly thankless project—and getting as far as they have with it. As Mr. Castle remarked in July: "Education research is broken in our country, and Congress must work to make it more useful, more independent of political influence, and less bureaucratic than the current system."

Vol. 20, Issue 03, Pages 33, 48

Published in Print: September 20, 2000, as Fixing Education Research And Statistics (Again)