Education Opinion

The SAT-ACT Duel

By Walt Gardner — June 25, 2010 2 min read

The school year is finally over, but zealous students will not be spending the summer months as they did in days of yore. Instead, they’ll be honing their skills either for the SAT or the ACT, which still count heavily for admission to most marquee-name colleges.

For years the SAT was the overwhelming choice of high school seniors, but the ACT’s growth is now outpacing its competitor’s. The rivalry pits the College Board, owner of the former, against American College Testing, owner of the latter. Long more popular in the Midwest and the South, the ACT is overtaking the SAT even in such longtime strongholds as California. This narrowing of the lead is impressive because the ACT has existed only since 1959, while the SAT goes back to 1926.

What’s the difference between the two tests, and which one should high school seniors choose? The ACT claims to measure what students learn in school (achievement) rather than what abilities students possess (aptitude). While the two constructs often overlap, they are not the same, and the line between them blurs on both the ACT and the SAT.

In a Commentary published in Education Week on June 14, 2006 (“UnSATisfactory”), I explained that the names of tests themselves do not mean very much. In 1926, when the test was first conceived by the psychologist Carl C. Brigham, it was called the Scholastic Aptitude Test in the belief that it assessed innate ability. By 1994, however, the College Board had second thoughts and renamed its premier brand the Scholastic Assessment Test because the original designation was too closely associated with eugenics. Then, in 1997, the College Board changed the name to simply the SAT, which stands for nothing.

What students and their parents need to bear in mind is how both the SAT and the ACT are designed. If psychometricians loaded up their respective tests with items measuring material that was taught effectively in class, scores might bunch together, making comparisons among students difficult. Since both tests are marketed to colleges as a way to help them rank applicants for admission, their makers can’t afford to risk not delivering what they promised. To engineer score spread, both the SAT and the ACT rely heavily on items that reflect the socioeconomic backgrounds of test takers, a strategy experience has shown to be most productive for this purpose.

There are, however, some differences between the two. The ACT is slightly cheaper than the SAT and takes less time to complete. But I doubt that these two factors weigh heavily in the thinking of most high school seniors. What is most on their minds is which test is easier. The answer seems to be that they’re about the same in difficulty, since the vast majority of students score comparably on both tests, albeit on different scales.

Most important of all is both tests’ low predictive value for academic success in college. There are simply too many variables that enter into the picture, and no test has been able to measure them. Bates demonstrated this when it made the submission of SAT scores optional more than two decades ago. In the fall of 2004, it reported finding virtually no differences in the four-year academic performance and on-time graduation rates of 7,000 submitters and non-submitters.

In the final analysis, therefore, students would be well advised to follow their instincts in making their choice.


The opinions expressed in Walt Gardner’s Reality Check are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.