Opinion

The Misinterpretation of S.A.T. Scores

By George H. Hanford — December 04, 1991

News reports about the fall-off in Scholastic Aptitude Test scores earned by last year’s high-school seniors were marked by shortsighted treatment in the press and spurious speculation by educators who should have known better. The news was that the Class of 1991 scored, on average, a total of 4 points less (out of a possible 1,600) on the S.A.T. than the Class of 1990. The most egregious faults were the failure to put the one-year decline in proper perspective and the tendency to attribute changes in S.A.T. scores to the presence of more minority students in the test-taking population.

Most at fault were those who assumed that the one-year change had significance in and of itself, when any educator worth his or her salt and any reporter who had read the background material supplied by the College Board, the sponsor of the S.A.T., should have known better. Changes in S.A.T. scores for any population of students--a school’s, a district’s, a state’s, as well as the nation’s--have meaning only when observed over time. Nevertheless, it is not surprising that some commentators chose this year to take that short-run view; after all, that’s what former Secretary of Education Terrel H. Bell did in 1984, when he gave credit to the Reagan Administration’s education policies for a one-year upturn in S.A.T. scores. Although for obvious political reasons the current Administration chose not to take responsibility for the most recent one-year decline, a few observers did speculate that it may have been due to more minority test-takers. In the process they ignored not only the College Board’s warnings about over-interpreting one-year changes but also the data it provided: the change in the ethnic mix of the S.A.T. population from 1990 to 1991 could not conceivably account for the one-year, 4-point drop in scores.

Others, on the other hand, did choose to take a longer view, although still not long enough. A few took a 10-year look, observed that the scores were down from their highs in the mid-1980’s, and speculated that the 10-point decline since then was due to the presence of more minority students in the S.A.T.-taking population. Again, the data supplied by the College Board would seem to refute that hypothesis; the gap between the average scores earned by minority students on the one hand and majority (white) students on the other has been steadily closing. The facts aside, however, invoking the presence of more minority students in the test-taking population is a blatant admission of failure. We have been aware for more than a decade that both the proportion and the absolute number of minority young people in the college-age cohorts were growing, and we should by now have adopted measures to ensure that, as more of them choose to apply to college and take the S.A.T., they won’t bring national average S.A.T. scores down.

The longer view that should have been taken would have made clear that the 431 verbal high from which the 1991 scores were down was in fact a pretty dismal high when compared with the national average verbal S.A.T. score at the start of the decline in 1963. It was 478 then, as compared with 422 for the Class of 1991: a 56-point drop over nearly three decades.

Indeed, it was the 38-point drop over the 11 years from 1963 to 1974 that proved to be the catalyst for the educational-reform movement that began to be mounted in the 1980’s. That decline confirmed what a lot of people already suspected: The nation’s schools were in trouble. Just as the 4-point falloff between 1990 and 1991 generated a lot of speculation as to its causes, so too did that 38-point decline. To make sense of the plethora of suggestions being offered back in 1975, the College Board and the Educational Testing Service appointed a blue-ribbon panel to sort them out. After two years of intensive study, the panel concluded that the decline was real but found itself unable, in view of the “circumstantial” nature of most of the evidence available to it, to attribute specific points in the decline to particular causes except in one instance. It did conclude that “most--probably two-thirds to three-fourths--of the score decline between 1963 and 1970 was related to ‘compositional’ changes in the group of students taking this college-entrance examination.”

That seven- or eight-year decline was on the order of 24 points, and the panel’s conclusion would suggest that 16 to 18 of them were due to the presence of more minority students in the test-taking population. After that, the composition of the test-taking population stabilized to the point where the panel figured that changes in the “mix” were no longer a significant factor.

Now take 1971 as the base year, assuming for the moment that all of the decline up to that point was due to compositional changes. The average S.A.T. verbal score that year was 454. Over the next 10 years scores plummeted 30 points, to 424 in 1981, recovered to 431 in 1985 and 1986, only to fall back to 422, the lowest point ever, in 1991. If those who hold minority students responsible for changes in national average S.A.T. scores were right, the data would show a marked increase in the number of them between 1971 and 1981 (which they don’t), a decrease between 1981 and 1986 (which they also don’t), and then an upturn between then and 1991. What happened instead was a slow, steady increase over the 20-year period--a period in which, while the number and the percentage of minority students in the S.A.T.-taking population were increasing, the gap between the average scores of minority and majority students was closing. However one views those circumstances, one simply cannot hold minority students responsible for the declines in S.A.T. verbal scores from 1971 to 1991, from 1981 to 1991, from 1986 to 1991, or from 1990 to 1991.

While putting the blame on minority youngsters was the most egregious fault in the coverage of the fall-off in S.A.T. scores from 1990 to 1991, there were other faults, of both omission and commission. For instance, the failure to look back at 1981 created the erroneous impression that, if 1991’s 422 was an all-time low, 1986’s 431 was an all-time high, and that a decline of 9 points was all the nation had to worry about when, as already noted, the 30-year high-water mark was 478 and it is a 56-point differential that ought to concern us. Taken in conjunction with references to the appearance of A Nation at Risk, it also fostered some mistaken inferences about the course of the educational-reform movement mounted in the aftermath of the disclosure of the decline in S.A.T. scores from 1963 through 1974. The scores continued to drop for another seven years, to a then all-time low of 424 in 1980 and 1981. Then, in 1982, a year before the publication of A Nation at Risk, they began their rebound to their decade high of 431 in 1985 and 1986. When Secretary Bell took credit on behalf of the Reagan Administration for the minimal improvement from 1983 to 1984, I observed that any improvement in high-school seniors’ S.A.T. scores had to be a function of improvements in the schools that had begun well before the publication of A Nation at Risk, and suggested that On Further Examination, the 1977 report of the panel appointed to study the S.A.T. score decline, might have been a more pertinent catalyst.

By that commentary, I do not mean to take anything away from the significance of A Nation at Risk. It is a seminal document, the first recognition by the federal government of the importance of education to our national well-being. But the fact remains that within four years of its publication national average S.A.T. scores started going down again, and the educational reform that it was supposed to have generated either never took hold or soon ran out of steam.

In this connection it is ironic that, while the last Administration chose to give itself credit for the presumed improvements in the nation’s schools reflected in the slight increase in scores in 1984, President Bush has chosen to put the onus for the 1991 downturn on the nation’s homes. As I said in a speech at Plymouth State College in 1986, “Above all else be consistent in what you do, for nobody will believe you when you say declining scores are no indication of the health of the enterprise if you have been using rising scores to claim that it is thriving.” President Reagan and President Bush can’t both have been right in their short-term interpretations.

Taking a 30-year look at the record of national average S.A.T. scores in 10-year bites provides a more meaningful if not very encouraging perspective. Beginning in 1963 there was a nearly 10-year decline that was due primarily to changes in the population of students taking the test. That drop was followed by a second decade of falling scores, attributed by the blue-ribbon panel appointed to study the decline, on the basis of circumstantial evidence, to a variety of causes, including but not limited to lower academic standards in the nation’s schools. Somewhere in the middle of that second 10 years something happened to begin the process of halting the 20-year slide. The scores bottomed out in 1980 and 1981 and, with relatively minor variations from year to year, we’ve been on what is from a long-range perspective essentially level ground for the past 10 years.

What is needed is a look at the data that have been accumulated over the past 20 years to see if some determination can be made as to the reasons for our inability to get out of our current 10-year slough. As in the case of the original score-decline panel, the findings will have to be based on evidence that, while supported by data, is essentially circumstantial. For example, the panel opined that the “period covered by the score decline [was] an unusually hard one to grow up in.” Much was said, for instance, about the unpopularity of the Vietnam War, the threat of world atomic conflict, the diversions of television, the unfortunate lot of “latchkey kids” of working parents, and the growing number of one-parent families. The data should give us some feel for how circumstances have changed in some of these regards. In these and similar terms, were the 1980’s an easier time to grow up in than the 1970’s and, if so, could that difference account in part for the bottoming out of S.A.T. scores? But, also, if so, why didn’t the scores go up in the 1980’s?

My own speculation, and it is purely that, is that President Bush, Daniel B. Taylor, and John I. Goodlad are all right. Part of the reason scores haven’t rebounded can be attributed, as the President does, to the deterioration of family life in the United States. Another reason they haven’t, and one which explains why American students fare so poorly in academic comparisons with youngsters from other developed nations, is a matter of time on task. As Mr. Taylor, deputy executive director of the National Assessment Governing Board, pointed out in a recent Commentary (“Half-Time Schools and Half-Baked Students,” Sept. 11, 1991), students in the United States spend far less time in school and on schoolwork than their counterparts from other countries. And, like Mr. Goodlad, I persist in my belief that academic standards in the schools need to be raised.

Society, the family, and the schools all share the blame for the sorry state of things. Perhaps another careful look at the data accumulated in connection with the S.A.T. might provide some hints as to where our national efforts might most profitably be directed.

A version of this article appeared in the December 04, 1991 edition of Education Week as The Misinterpretation of S.A.T. Scores
