The U.S. Education Department’s annual State Education Performance Chart--the so-called “wall chart”--ought to be relegated to the dustbin of historical frauds.
The chart claims to measure student achievement on a number of indicators across state boundaries. But in fact it is an intellectual deceit, clouding rather than illuminating the task of improving education.
This year’s chart, the sixth in the series, instantly became front-page news in every state when it hit the streets last month. In releasing the data, Secretary of Education Lauro F. Cavazos expressed alarm, noting that in half the states, dropout rates had increased, while college-entrance-test scores had declined. “We have not made any progress in the last three years,” he declared.
Reactions in the states were mixed--depending upon where states placed on the chart. In those that ranked relatively high, like Nebraska, the release of the chart elicited smug smiles; educators hailed the data as proof of the great job they’re doing. In states that ranked low, the chart prompted another round of hand wringing and school bashing; educators retreated, while policymakers called for still higher standards.
But the wall chart should offer no state grounds for complacency about the condition of its education system. The baseline data by which the chart measures “progress” each year were gathered in 1982 and are about as reliable as the economic data used to project the federal budget deficit.
With little data on hand in 1982, many states simply made up information to comply with the Education Department’s directive. Subsequent wall charts have not accounted for refinements in data collection and reporting, and in no event does the chart consider changes in the real world.
Add to these weaknesses the incomparability of measures of quality across state boundaries, and you have a ranking so ill-defined that it does not go much beyond the framed motto hanging over Mammy Yokum’s mantelpiece: “Good is better than evil because it is nicer.”
The wall chart also fails to address some of the most salient questions about the quality of education. Isn’t it possible, for instance, that the differences among the states on the chart result as much from the strengths and weaknesses of local communities as from anything the schools do? Isn’t it possible that all states have some good schools and some not-so-good ones? Isn’t it possible that there is no cause-and-effect relationship between the caliber of education a student receives and a state’s ranking on the chart?
After all, most schools look much as they did when our grandparents and parents attended them. The fact that only a few points in scores on the American College Testing program separate the highest state on the chart from the lowest suggests that most state education systems are more alike than they are different.
It should not be surprising that dropout rates are higher in large, urban states, where distractions are numerous and poverty endemic, than they are in sparsely populated rural states, where the school is a central community attraction and family life is more stable. (Unfortunately, this too is changing, as more and more at-risk students are turning up in rural states as well.)
Nor should it be surprising that when demographic elements such as socioeconomic status and poverty are taken into account in a high-ranking state like Nebraska, its students perform about the same as their counterparts in other states.
But you wouldn’t know any of this by looking at the wall chart.
The chart actually conceals more than it reveals. It is not a diagnostic instrument; it can’t tell us why things are as they are, or how to change them--which may explain the Secretary’s befuddled but revealing comment on the alleged lack of progress in recent years: “It’s hard to tell why.”
And the wall chart fosters two false and harmful assumptions about the quality of education in the states. One, trumpeted mostly by educators in states that rank high, says: “Things look good here, we’re pretty content, and if we can just get more money, without additional strings attached, we’ll do even better.”
The other, heard most often from policymakers in states on the lower end of the chart, says: “We’ve got to do better, we’ve got to impose higher standards and more requirements, and if we can convince the taxpayers, we’ve got to get more money.”
Like the chart itself, which largely measures inputs, these perspectives share the premise that the quality of education depends on doing what we’ve always done--adding more of this and more of that to the existing structure of schooling.
Fortunately, another perspective is emerging--one that could fulfill the promise of American education. This outlook suggests that, regardless of the relative quality of our schools, nothing short of a fundamental restructuring will suffice if we are to prepare young people for the challenges and opportunities they will face in the future.
The proponents of restructuring advocate making what we know about how children learn best and how teachers teach best the basis for instruction; currently, such practice is the exception rather than the norm. They propose that the school be designed to fit the student instead of making the student fit the school.
The premises that guide the restructuring movement reflect the recognition that, to paraphrase one of the movement’s leaders, if we continue doing what we’ve always done, we’re going to get what we’ve always gotten.
Despite the best efforts of educators and the infusion of vast sums of money into the education system over the last decade, the performance of American students continues to deteriorate.
Clearly, something is out of whack--our best intentions and efforts have been misdirected.
But the wall chart sheds no light on this problem. It’s time to scrap the chart and get on with the job of making schools a success for every student.
A version of this article appeared in the June 07, 1989 edition of Education Week as E.D.'s ‘Wall Chart’ Should Be Scrapped