Fixing Education Research And Statistics (Again)

By Chester E. Finn Jr. — September 20, 2000
Washington’s education research effort is sorely troubled. Newly passed legislation holds out hope for major reform.

With little fanfare and scant public awareness, the House Subcommittee on Early Childhood, Youth, and Families did something remarkable some weeks back: By a unanimous, bipartisan vote, it adopted HR 4875, the proposed Scientifically Based Education Research, Statistics, Evaluation, and Information Act of 2000. (“House Plan Would Create Research ‘Academy,’” Aug. 2, 2000.) If this measure survives the rest of the legislative gantlet in anything like its present form, it will work a long-overdue transformation in Washington’s handling of education research, statistics, program evaluation, and assessment. For even pointing the way toward such a major reform, subcommittee Chairman Michael N. Castle, R-Del., and his colleagues deserve plaudits.

One sign that they’re heading in a good direction: The American Educational Research Association is beside itself with anxiety that these changes might actually come to pass. Another sign: The mandarins of program evaluation at the U.S. Department of Education are apoplectic. (A lot of other education groups have signaled their support for the bill, however, which gives rise to the suspicion that it still may not go far enough!)

We’ve known for ages that Washington’s education research effort is sorely troubled. Ever since the National Institute of Education was created some 28 years ago, this domain of federal activity—now housed in the U.S. Department of Education’s office of educational research and improvement (OERI)—has been beset by woes of every sort: shoddy work on trivial topics; research bent to conform with political imperatives and policy preferences; a skimpy budget that gets gobbled up by greedy, ineradicable “labs and centers” and other porky projects; avoidance of promising but touchy topics; studies that seldom follow the norms of “real” science (or even social science); research that is mostly inconclusive and, when conclusive, is weakly disseminated and widely ignored; terminal confusion about where research ends and “school improvement” begins; and an ever-shifting set of priorities presided over by an ever-changing cast of directors, assistant secretaries, and policy boards, most of them firmly under the thumb of the public school establishment.

Most serious policymakers and education reformers have simply come to ignore OERI-sponsored research. This is not new and, while discouraging to those who labor in this vineyard, is not fatal. Sure, it’s a waste of money and opportunity. But the waste is as modest as the budget, certainly modest compared with the agony of trying yet again to set matters right. Besides, much sound education research is being done by other public and private sponsors. (Consider, for example, the superb work on reading at the National Institute of Child Health and Human Development.)

In recent years, however, some important cousins of research have also slid into trouble. Vexing problems now beset education statistics, program evaluation, and the National Assessment of Educational Progress. Rescuing them is worth the effort. And maybe the federal research effort can also be boosted along the way.

Washington’s oldest and most seminal mission in all of education, dating back to the Civil War, is the collection and dissemination of statistics. That key role is entrusted primarily to the National Center for Education Statistics, also housed within the OERI. This once-sleepy backwater among government statistics agencies has grown conspicuously more important as energized reformers and determined policymakers demand more and better education data.

Much of its work is still sound. But the NCES now suffers from a deteriorating professional staff, ever-tighter supervision by the Education Department’s political types, mounting pressure to “spin” its findings to accord with White House policy preferences, a lot of serious data gaps, and systems that are too slow and old-fashioned to keep pace with today’s appetite for timely statistics. A particularly damaging blow fell in May 1999, when the center’s well-regarded commissioner, Pascal D. Forgione Jr., was forced out by the White House. (“Renomination Blocked, Forgione To Depart,” May 26, 1999.) The place has had “acting” leadership ever since. Today its very integrity is at risk.

Consider, for example, how its annual “back to school” press release, once an impeccably neutral source of straight facts (enrollments, spending, and the like) projected over the new school year, has been turned into a platform for advocating the administration’s current policy passions—this year, school construction. Much the same fate has befallen the annual Condition of Education volume.

Integrity, alas, has long since vanished from program evaluation, even as this activity has become steadily more important to a Congress keen to know what is and isn’t working among the hundreds of federal education programs and tens of billions (and counting) of dollars being spent upon them.

Here the current structure contains a built-in conflict of interest. The government’s main program-evaluation unit is the same as the U.S. secretary of education’s principal policy shop. Called the “planning and evaluation service,” it was brought under the direct control of former Undersecretary Marshall Smith, one of the Clinton administration’s most formidable policy wonks. But these problems predate Mr. Smith. It’s simply unrealistic for Congress to expect impartial program evaluations from the same office that is helping the White House strategize about how to impose its policy preferences on those programs, how to manipulate public opinion about them, and how to press the Congress to go along.

Yet dozens of evaluations of major federal programs (for example, Title I) have been entrusted to this office and to panels, experts, and consultants chosen by it. In the past few years, as Maris Vinovskis and others have shown, several occasions have arisen when the evaluation office was, if not exactly cooking the books, certainly rushing out those findings that accorded with the administration’s proposals and dragging its heels on data that contradicted those proposals. The upshot: Members of Congress and their staffs have come to believe that they can’t trust the department to evaluate its own programs candidly and objectively. As any 12-year-old might say, “Duhhhhhhh.”

Along with the troubles besetting statistics and program evaluation, the third big problem that the Castle bill seeks to solve is the subjugation of the National Assessment of Educational Progress to various political agendas. Though its policies are supposed to be set by the independent National Assessment Governing Board, numerous decisions about NAEP’s actual operations, methods, and data reporting are in fact made by other offices at the department, and the assessment itself is run by the NCES. The potential for conflict is immense.

That these problems have been kept within bounds in the past few years is largely due to the fact that Secretary of Education Richard W. Riley is himself an alumnus of the governing board and a friend of the current NAGB chairman, Mark Musick. But if a more manipulative or NAEP-wary person were to occupy the secretary’s chair, the present set-up would be a formula for compromising the credibility of the country’s most valued gauge of K-12 student achievement.

The proposed Scientifically Based Education Research, Statistics, Evaluation, and Information Act of 2000 tackles these three problems and a bunch of others. It makes two sweeping reforms in today’s vexed arrangement, and one worthy secondary change.

The first big improvement is structural. All functions currently contained in the OERI, plus program evaluation and a few of the Education Department’s miscellaneous activities (such as its library), are swept into a new agency with the clumsy name of National Academy for Education Research, Statistics, Evaluation, and Information (NAERSEI, which sounds to me like a depilatory, but let’s not dwell on terminology). To gain the assent of Democrats on his subcommittee, Chairman Castle amended his original proposal for a completely separate agency and agreed to keep NAERSEI nominally within the Education Department. This has the potential for ambiguity, to be sure, but the bill says that NAERSEI’s director (a presidential appointee who is supposed to possess specific qualifications and to enjoy a six-year term) will have charge of “all functions for carrying out” the bill’s many provisions. That sounds like it’s supposed to mean autonomy.

Within NAERSEI would be separate centers (for education research, program evaluation, and statistics), each with its own commissioner (appointed by the president with Senate confirmation). Sundry boards and committees at every level of this structure, while cumbersome, are meant to provide sage policy counsel, set durable research priorities (rather than have Congress forever insisting on its own pet topics and pork-barrel projects), and help assure the independence and integrity of the programs. Also within the proposed academy, the National Assessment Governing Board would gain full control of all aspects of NAEP, making the national assessment fully independent of political and bureaucratic control for the first time in its history. And we find several hopeful efforts at information dissemination and clearinghouse activities, as well as technical assistance to educators around the country. (This part is a mixed blessing. NAERSEI would undeniably win more friends and dollars if it’s seen as useful to practitioners and parents. But if it slipped from objective, truth-seeking “audit” agency into “school improvement” program, it would be whipsawed by the usual disputes and interests associated with education reform in America today.)

The second big change wrought by HR 4875 would be substantive, not structural. The bill sets strict criteria for what constitutes sound research and program evaluation, and says that only projects satisfying those criteria could be funded. The phrase “scientifically based” recurs frequently. There’s a strong push for bona fide experiments, complete with control groups, which are normal in hard science and biomedical research but staunchly resisted by education researchers enamored of what is politely termed “qualitative methods.” Various safeguards are put in place to ensure that NAERSEI’s constituent centers wouldn’t fund or engage in projects that failed to satisfy those norms—and existing university-based research centers would be given just two years to prove themselves or lose their privileged access to the federal treasury.

The bill’s worthy secondary change tackles the infamous regional labs, which have been around practically forever and have long since outlived whatever value President Lyndon B. Johnson ascribed to such a structure 35 years ago. Yet they’ve clung to ever-larger budgets with leech-like tenacity, meanwhile giving education research a bad odor and the OERI a very mixed profile on Capitol Hill.

The subcommittee was heavily lobbied not to cut the labs off altogether. So it created a new, slightly gimmicky way to determine their future: by entrusting federal technical-assistance dollars in block grants to boards established by state governors in 10 regions that would consolidate several of the inconsistent geographic clusters that today define the Education Department’s “regional” programs. Each board would then decide how to spend its technical-assistance dollars and where to purchase the services it desired. A regional board might choose an extant lab or opt for something different. If this worked, these politically freighted decisions would at least be decentralized rather than focused entirely on appropriations committees in Washington.

There’s much more in this 116-page bill, even as a lot of issues remain unresolved and questions unanswered. Some fine-tuning is still needed. (The process for appointing NAEP governing board members, for example, is badly flawed, more apt to yield interest-group representatives than big-picture education statesmen.) Scuffles lie ahead over funding levels, research priorities, the labs’ status, and more. Only a few optimists think Congress will finish the job this year. Still, the bill’s progress already sets the stage for serious attention in 2001, by which time we will also have a new administration downtown.

Chairman Castle and his colleagues (especially ranking Democrat Dale Kildee of Michigan) deserve kudos for taking on this complicated and mostly thankless project—and getting as far as they have with it. As Mr. Castle remarked in July: “Education research is broken in our country, and Congress must work to make it more useful, more independent of political influence, and less bureaucratic than the current system.”

A version of this article appeared in the September 20, 2000 edition of Education Week as Fixing Education Research And Statistics (Again)
