The four-hour ordeal known as the SAT has given rise to expensive test-prep courses that parents hope will boost their children’s scores. The question is whether they promise more than they deliver (“Do SAT Prep Courses Help Test Takers?” The Wall Street Journal, May 2).
To answer that question, it’s necessary to take a closer look at the SAT. Despite the changes over the years, its ultimate goal is still to engineer score spread so that test-takers can be ranked. I wrote about this in detail before (“UnSATisfactory,” Education Week, Jun. 14, 2006). The revised SAT retains this primary objective (“The Big Problem With the New SAT,” The New York Times, May 5).
If the SAT were loaded up with items that measured only the most important material taught effectively by teachers, scores would be bunched closely together, making rankings extremely difficult. To avoid that risk, designers include items that measure material highly unlikely to be taught. This strategy effectively produces the desired score spread, but it’s hard to defend on any other basis.
What continues to confuse the public, furthermore, is the difference between an aptitude test and an achievement test. The former is designed to predict how well a test taker is likely to perform in a future setting; the latter is designed to measure the knowledge and skills a test taker possesses in a given subject. Scores on the two may be related, but they do not necessarily correlate. That's why the claim that students who pay attention in class and learn the material will do well on the SAT is only partially true.
Stanley Kaplan was the first to recognize this reality. He refused to believe the College Board's claim that the SAT was not coachable, and in 1946 he established the test-prep company that still bears his name. His secret was constant practice followed by immediate feedback. The changes in the test's name over the years reflect the lingering confusion about what it actually measures.
I’ve always been skeptical about the usefulness of multiple-choice tests because I don’t think they can possibly assess creativity and independent thinking. But I realize they are not as subject to charges of subjectivity as essays are. Moreover, they are cheaper to score. That’s why they’ll always be used.
The predictive value of the SAT, however, is another story. In 2004, Bates College released the results of its 20-year study, which found virtually no difference in the four-year academic performance and on-time graduation rates of 7,000 submitters and non-submitters of SAT scores. Since then, the list of colleges making the SAT optional has grown dramatically.
If I were applying to college today, I would choose schools that are test-optional. I wouldn't want to spend the time, effort, and money on prep courses. They can raise scores, but the price paid is too great.
The opinions expressed in Walt Gardner’s Reality Check are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.