What a dull topic, you are already thinking, but try to bear with me for a few more paragraphs. It may not be riveting, but it is important. Consider, for example, the much-publicized exercise in goal setting that President Bush and the governors have been engaged in since September’s education summit. Establishing goals for education in an information vacuum is like making ceramics without a kiln. The results may look genuine, but they’ll dissolve in the first rain.
The present situation is not a total vacuum. The quantity, quality, and timeliness of data emanating from the National Center for Education Statistics have never been better. The agency’s resources have risen, too, from $12.3 million in fiscal year 1986 to $40.3 million this year. Another sizable increase is sought in the 1991 budget that has just emerged. This growth shows the results of conscious efforts by the executive branch to repair the data base, even in a period of fiscal stringency. The Congress and “the field” have been willing to go along. And in the N.C.E.S. commissioner, Emerson J. Elliott, we have had the right person to begin a transformation of this enterprise.
So far, so good. But what a distance there still is to go, and how slowly we’re moving, especially as the post-summit data needs become clearer. It is a near certainty that those annual education “report cards” the White House and governors have vowed to issue will have a lot of “incompletes” on them unless urgent efforts are made to gather data that nobody now has.
Setting goals for education is a fine thing. Absent a clear destination, there can be no accountability for movement toward it. But the crucial element is a reliable information feedback system by which progress--or its absence--can be tracked. Unfortunately, the nation’s data-gathering system is not yet robust enough to support such a burden.
Its frailties are many. We have no uniform definition of “school readiness,” for example, of “dropouts,” or of “high-school equivalency.” What is commonly meant by “grade level” is simply the achievement level where average students happen to be, not where youngsters ought to be. (Hence, the oft-stated hope that all pupils will be “at or above grade level” is nonsensical.)
Despite much talk at the summit about foreign competition and the performance of American youngsters on “international achievement tests,” the sad truth is that today nobody is responsible for giving such tests on a regular basis. That we occasionally see some comparative data is due to private organizations that intermittently gather transnational results in whatever subjects or skills interest them. But nowhere in our federal or state establishments is there a soul whose job it is to make this happen.
Domestically, we’re better supplied with information about the country as a whole. For two decades, the National Assessment of Educational Progress has been generating reliable--if often depressing--data about students’ skills and knowledge. The mounting significance and visibility of these data were apparent in early January, when the 1988 NAEP reading and writing results were issued. This was front-page news around the country and led the evening television news reports as well.
Yet NAEP furnishes no information about achievement at the state or local levels, even though that’s where real education policies are made. For such data, we have relied on a motley array of commercial tests and state assessment programs. Sloppiness and cheating are rampant in many instances, and even when the numbers are sound, they cannot be compared from state to state or state to nation, much less to other lands.
It’s staggering, given the vast amount of testing in our schools, that no governor can tell how the reading and mathematics skills, or science and history knowledge, of youngsters in his state stack up alongside those in other jurisdictions.
In 1988, going partway down a path blazed by the Alexander-James study panel in 1987, the Congress authorized an experiment with state-by-state assessment on a voluntary basis. This year, 37 states are participating in a project that is confined to 8th-grade math. In 1992, the interstate NAEP adds 4th-grade reading and math.
This is a sizable undertaking for NAEP; two years ago, it was even judged a wee bit audacious. But so fast has been the pace of change in American education in recent months and so keen is the appetite for information on learning outcomes that the present plans appear woefully inadequate. As late as 1993, we will have no state-level achievement data in science, history, or writing, nor any for 12th graders in any subject, nor any for youngsters who have dropped out. After 1992, NAEP has no authority even to continue testing 4th and 8th graders in math and reading on a state-by-state basis. What is more, under current law, the tests cannot be used within states to compare the performance of local school systems or particular schools.
I once thought that only the “quality” data were weak, mainly in the area of school outcomes, and that we were well supplied with solid numbers on the “quantity” and “input” side of the ledger. The E.P.I.'s report shows that this isn’t so, at least not when we want to make international comparisons. The school-spending data on which the authors relied, gathered and reported by the United Nations Educational, Scientific and Cultural Organization several years after the fact, simply cannot sustain much analysis. The countries involved do not use the same definitions, employ the same measures, or report their information reliably.
(The report’s authors also made some highly questionable assumptions and used techniques that I judge to have been chosen to reach certain predetermined--and politically motivated--conclusions. But that’s another story.)
There are many other holes in the education data base. Those I’ve cited are merely the most vexing for policymakers. But they’re enough to menace any set of national goals, however timely and wise, turning it into little more than a wish list.
The President having proclaimed six ambitious national goals, none of which can be satisfactorily monitored with present data bases, the Administration is weighing an initiative to buttress the education-information system. The National Governors’ Association will address these shortcomings in late February. Senator Jeff Bingaman, Democrat of New Mexico, has taken a vigorous and knowledgeable interest in this problem and, based on hearings he conducted in the autumn, is trying for a legislative solution. Several influential education groups, such as the Council of Chief State School Officers, as well as individuals such as Albert Shanker, president of the American Federation of Teachers, and Ernest L. Boyer, president of the Carnegie Foundation for the Advancement of Teaching, have urged a wholesale upgrading of this enterprise.
In early December, NAEP’s own independent governing board recommended sweeping reforms in the assessment program--changes that would widen its scope, systematize and accelerate the reporting of state and national results, end the ban on intrastate use, and modernize testing methods. Doing this right could cost another $100 million a year, possibly more--a sizable sum in the world of federal statistics, to be sure, yet a mere droplet in the education bucket.
With interest and anxiety mounting on several fronts, the new year may bring renewed efforts to reconstruct the data base. But if past experience is any guide, this endeavor will be impeded by school groups hostile to testing; by publishers who profit from the present hodgepodge of measurements; by reluctance to spend more federal dollars on programs that do not provide direct services to children; by entrenched incrementalism in the Office of Management and Budget and the House of Representatives; by the limited capacity of the N.C.E.S. itself to manage its growing workload; and by the sheer complexity of gathering uniform information about an enterprise as confused and decentralized as American schooling.
Only if the governors and the President put as much energy into overhauling the information system--intricate, technical, and politically juiceless though this task is--as they’ve been putting into their high-visibility goal-setting exercise is the latter apt to succeed. An improved data base is the kiln that will give their newly formed goals some strength and durability. And only if that endeavor proves both lasting and strong have we much hope for boosting the shameful levels of performance that characterize American education.
A version of this article appeared in the February 07, 1990 edition of Education Week