Our long-running SAT saga enters a new era.
The ongoing debate over the SAT is beginning to resemble the interminable case of Jarndyce and Jarndyce in Bleak House, that “scarecrow of a suit [that has] become so complicated that no man alive knows what it means,” as Dickens put it. Do pity the parents, students, state legislatures, and other members of the court of public opinion who endure American education’s own intractable case of Jarndyce and Jarndyce. Like that indefatigable battery-powered bunny on TV commercials, the SAT debate drones on and on, just as it has for decades, and there’s little reason to believe that the argument will ever end unless the terms of the debate are dramatically changed.
Much of the controversy about the SAT over the years has been a relatively arcane debate about the social science backing it up, particularly whether the exam is a sufficiently accurate predictor of success in college. More than 20 years ago, in 1980, for instance, the Harvard Educational Review published a bitter exchange of papers between Warner V. Slack and Douglas Porter of the Harvard Medical School, on one side, and Rex Jackson of the Educational Testing Service, the firm that administers the SAT for the College Board, on the other. Slack and Porter concluded from their analysis of available data that the Scholastic Aptitude Test (as it was then called) provided colleges with very little additional information about a student’s chances of success in college beyond what could be predicted from one’s high school record alone. Jackson countered that, while the “continuing appraisal” of the SAT was “healthy,” Slack and Porter’s critique had failed “the tests of fairness, accuracy, or responsibility.”
And so this seemingly endless contest has gone. The SAT’s academic critics present evidence to challenge the ETS and College Board claims that the SAT is a useful predictor of success in college, which inevitably is followed by protests from the ETS, the College Board, and others that the exam’s critics are merely “shooting the messenger,” and that the SAT, “while not perfect,” is the best common yardstick colleges have to compare the academic credentials of prospective students.
The standoff over the truth about the SAT surely has been confusing to students, parents, and others who must wonder who is right. In reality, it hardly matters: The SAT’s sponsors have been brilliant at casting just enough reasonable doubt on the critics’ charges against the exam to keep the SAT entrenched as a rite of passage for millions of young people.
However, the latest salvo in the SAT debates may well trump all previous efforts to challenge the SAT’s hegemony. It came earlier this year, when the president of the University of California, Richard C. Atkinson, proposed that the nation’s largest and most selective public university scrap the 75-year-old SAT “reasoning” test of math and verbal “developed abilities.” That, at least, is how the College Board and the ETS describe what the SAT is supposed to measure. But for Mr. Atkinson, whatever cognitive abilities the SAT is intended to measure remained a bit too mysterious to be useful. Hence, he proposed that the reasoning test (now called the SAT I, after yet another in a series of name changes over the years) be replaced with subject-related achievement tests, such as the SAT IIs. Unlike the SAT I, Mr. Atkinson reasoned, the various SAT II exams in such subjects as math, history, and languages assess something reasonably close to what young people actually study in high school, and they predict college success as well as, or better than, the SAT I.
Above all, Mr. Atkinson’s effort to eliminate the SAT I at the University of California represents one of the bravest attempts in recent history to shift the terms of the debate about the exam’s use in university admissions. The social-scientific debate, until now, has centered on the exam’s utility to colleges and universities in their sorting and selection of new students. Mr. Atkinson’s move, however, brings the debate squarely into the realm of ethics, by challenging the University of California and other institutions to consider not what’s necessarily beneficial and efficient for college-admissions staffs, but what’s in the best interests of young learners themselves.
To be sure, the SAT is a bureaucratically convenient way to justify admissions decisions, which has no doubt been a driving factor in its continued popularity among admissions staffs. But why aren’t colleges and universities more interested in the benefits, if any, of the SAT to the very students whom the whole enterprise is ultimately supposed to serve? How does the SAT enhance or hinder teaching and learning in schools and colleges? How does it encourage or discourage students from working hard in high school? How does it help or punish a young person from a poor family who dreams of going to college?
When put to these tests, the SAT’s continued use as an important gatekeeper to many colleges is ethically suspect. Recalling how he had observed a class of 12-year-olds at an elite private school already engaged in intense prepping for the SAT, Mr. Atkinson said: “The time involved was not aimed at developing the students’ reading and writing abilities but rather their test-taking skills. What I saw was disturbing. ... I concluded what many others have concluded—that America’s overemphasis on the SAT is compromising our educational system.”
For its part, the College Board, as the owner of the SAT exams, knows well that efforts by prominent education leaders like Richard Atkinson to reframe the debate about the test could be dangerous for the entire SAT enterprise. Once the debate moves from the realm of bureaucratic efficiency into the realm of ethics, the argument can’t be easily obfuscated or passed off as an arcane disagreement about technical details beyond the public’s grasp. Indeed, the College Board has redoubled its efforts to sponsor studies that purport to show that the SAT is an “objective” and “valid” predictor of college success, and thus to bolster public confidence in the status quo.
Consider, for example, one recent and widely publicized study sponsored by the College Board. In that study, several University of Minnesota researchers found, lo and behold, that the SAT was a “valid” predictor of success in college. With an uncanny sense of timing, the study received prominent attention in the press within weeks of the University of California president’s proposal to quit the SAT. That the study got so much press attention was especially curious given that it hadn’t been published in a peer-reviewed journal and added little to the existing body of knowledge about the predictive powers of the exam.
Still, it’s clear why the study (“The Predictive Validity of the SAT: A Meta-analysis”) got the media’s attention. First, the report’s release followed closely on the heels of Mr. Atkinson’s stunning announcement. Second, its conclusions were bold, unambiguous, and, of course, purportedly rooted in objective social science. Indeed, defending the seemingly clinical scientific objectivity of the SAT as a means to rate college aspirants has been a crucial strategy of the exam’s stakeholders. At the same time, the SAT’s proponents have painted its critics as unduly influenced by “political correctness” and other mushy sentiments that threaten to diminish the academic standards of America’s universities.
A recent op-ed piece in the San Diego Union-Tribune by Gail Heriot, a law professor at the University of San Diego (who co-chaired the California Proposition 209 campaign to abolish affirmative action in that state), illustrates the point. Mr. Atkinson’s move to quit the SAT, Ms. Heriot claimed, “is a reaction to political pressure ... to alter the racial composition of the UC class.” She went on to suggest that those who have questioned the SAT’s utility (full disclosure: she names me as one of those wrongheaded persons) “would make a statistician cringe. Research has repeatedly found a strong correlation between the SAT and student performance. It’s not perfect, of course, but nothing is.”
Even in the context of the scientific debate, SAT backers such as Ms. Heriot have, perhaps unwittingly, misled the public and the press about the SAT’s benefits to colleges. As the latest College Board-sponsored study bluntly proclaimed, “The SAT is a valid predictor of performance in college,” which, of course, was the conclusion that shaped headlines in the media. But just how “valid” and just how “strong”? Unfortunately, press reports glossed over those details, taking at face value the more quotable claims of the study. According to the Educational Testing Service, which has tracked what’s known as the “predictive validity” of the SAT for years, scores on the exam formerly known as the Scholastic Aptitude Test have accounted for an average of almost 17 percent of the variation in first-year grades for college freshmen. This means that more than 83 percent of the differences in college grades among freshmen have been attributable to factors other than SAT performance.
By comparison, that research showed that one’s high school grades packed a considerably stronger predictive punch than the SAT, explaining about 23 percent of the variation in grades among college freshmen. Combined, the SAT and high school grades have predicted about 30 percent of the variation in first-year grades, only a small improvement over what high school grades alone would predict. More recent, unpublished data on the SAT I exam show slightly stronger correlations to college performance, the College Board says.
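A quick bit of arithmetic reconciles the “strong correlation” language favored by the exam’s defenders with the “percent of variation explained” figures above: the share of variation explained is simply the square of the correlation coefficient. The correlations below are rough approximations implied by the percentages cited in this article, not figures reported by any single study.

\[
\begin{aligned}
\text{SAT alone:} \quad r^2 &\approx 0.17, & r &\approx \sqrt{0.17} \approx 0.41,\\
\text{High school grades alone:} \quad r^2 &\approx 0.23, & r &\approx \sqrt{0.23} \approx 0.48,\\
\text{SAT and grades combined:} \quad R^2 &\approx 0.30, & R &\approx \sqrt{0.30} \approx 0.55.
\end{aligned}
\]

In other words, even a correlation in the neighborhood of 0.4 or 0.5, which sounds impressive in isolation, leaves most of the variation in students’ college grades unexplained.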
Another way to look at the SAT’s effectiveness is the degree to which it improves the number of “correct” admissions decisions. In their book The Case Against the SAT, James Crouse and Dale Trusheim demonstrated that if colleges want to admit students who are likely to achieve a first-year grade point average of 2.5, then using high school rank alone results in about 62 “correct” decisions in 100. Adding the SAT improves that figure by only about two in 100. And if the objective is to maximize the number of students who go on to complete a bachelor’s degree, adding the SAT to high school rank is actually counterproductive, lowering correct decisions from 73.4 percent to 72.2 percent.
In fact, the new College Board study found correlations between SAT scores and first-year college grades similar to those that have been reported for years. What’s more, the study’s latest data confirm that the SAT’s ability to predict college performance declines throughout the four years of college. Nevertheless, the authors of the new College Board study put the best face possible on those results, proclaiming that “overall, these results indicate that the SAT predicts academic performance both early and late in college.”
It should come as no surprise that the SAT’s effectiveness as a predictor declines as one progresses through school. That result is consistent with other findings about the predictive powers of standardized tests in educational and workplace settings. As the level of sophistication of academics and work rises, standardized tests prove to be increasingly weaker predictors of performance at those higher levels. For example, scores on the Medical College Admissions Test have modest ability to predict science grades during the first two years of medical school, but the correlations between test scores and performance weaken considerably as medical students enter their clinical rotations.
In reporting the College Board’s new SAT study, the press completely missed the real story. In general, if one wants to predict future performance on standardized tests, it’s best to examine past performance on such tests. But if one is interested in predicting future performance in school or on the job, then it’s best to look to one’s record of accomplishment in school or work. In fact, in a companion report to their SAT study (also supported by the College Board, to its credit), which the press either ignored or didn’t know about, the University of Minnesota researchers also examined high school performance as a predictor of later college performance, measured by grades. Despite the arguments of many SAT supporters that high school grades are unreliable, inconsistent, and tainted by grade inflation, the research from this very large sample of cases showed that high school grades were a considerably more powerful predictor of college performance than the SAT, explaining more than 36 percent of the variance in first-year college grades.
Those results illustrate the enormous value of looking at one’s actual accomplishments in the real world, rather than at abstract test-taking exercises, as indicators of future performance. The question before educational policymakers, then, is how to structure incentive systems that motivate young minds to do their best work.
Whether policymakers in other states will follow President Atkinson’s lead is a matter of ethical choice, of philosophy about what education is and should be. What is merit, after all, and how do different definitions of merit change the behavior of students, teachers, parents, and schools? Whose interests are best served by continuing to rely heavily on standardized tests as measures of individual merit? Students? Colleges and universities? Testing companies? Poor families or wealthy ones?
There’s no question that test scores provide institutions with a relatively painless and cheap way to sort young people—a fact that testing companies and U.S. News & World Report, in its annual ranking of colleges, happily take to the bank. Many colleges and universities themselves benefit from this madness, as they compete with one another for prestige in an educational marketplace in which SAT scores and other such data have become coins of the realm.
And yet, overreliance on such tests has so skewed the incentives facing schools, families, and young people that 12-year-olds drill obsessively on word analogies that mimic SAT test items. It’s a skill that might make these children better test-takers but may have little relevance to doing meaningful and thoughtful work in school and beyond.
Peter Sacks is an independent education analyst and the author, most recently, of Standardized Minds: The High Price of America’s Testing Culture and What We Can Do to Change It (Perseus, 2000).