It seems the debate over the value of the SAT as part of the college-admissions process won’t ever end (“Everything You Need to Know About the New SAT,” The New York Times, Nov. 1). Although the new version is getting much attention, the truth is that its mission is the same as it always has been.
Despite the changes, the SAT must still engineer score spread in order to deliver on its promise to let admissions officers rank applicants. If its designers loaded the test with the most important subject matter effectively taught in school, scores would likely bunch together, making comparisons difficult, and the test wouldn't stay in business for long. Therefore, the test deliberately includes items that have been shown to spread scores out.
I don’t blame the College Board, which administers the SAT, for trying to survive. But I do fault it for lack of candor. So much has changed in the admissions game since I applied. At the time, the College Board vigorously maintained that practice would do little to improve scores. It was only after Stanley Kaplan proved otherwise that the truth finally emerged. That’s why I remain highly skeptical about anything that the College Board now proclaims. It has too long a history of disinformation.
For example, the College Board says the new SAT is more relevant and less gimmicky than its predecessor. That's not saying very much, because the ultimate objective is still to spread scores out so that not all test takers get an 800 on both sections. Therefore, I expect it will continue to include items deliberately designed to confuse or distract, as if answering those items correctly were somehow relevant to identifying students highly likely to succeed.
Frankly, I'd like to know why any college or university bothers to use the SAT anymore. A study by William Hiss, former Bates College admissions dean, and researcher Valerie Franks of 123,000 student records from 33 colleges with test-optional admission policies concluded that high school GPAs, even at high schools with easy curriculums, were better at predicting success in college than any standardized test.
Now that's a remarkable finding. The strongest case until theirs was that standards varied so dramatically among high schools across the country that GPAs were poor indicators of student ability; therefore, a single test administered to all applicants on a particular day was the only effective way of evaluating them. The argument had great intuitive appeal, but the evidence has not supported it. Nevertheless, I expect the SAT to continue to be used. Tradition dies hard in education.
The opinions expressed in Walt Gardner’s Reality Check are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.