As President Bush promotes his plans for expanding high-stakes testing in the nation’s high schools, a new study has found that states that already have such exams in place have lower graduation rates and college-entrance-exam scores than states that don’t have them.
The study, published Jan. 21 in the electronic journal Education Policy Analysis Archives, zeroes in on 18 states that require high school students to pass an exam in order to graduate. When compared with students in 33 states or territories without such requirements, students in the exit-exam states tended to have both lower scores on the SAT admissions exam and lower graduation rates.
“The Relationship of High School Graduation Exams to Graduation Rates and SAT Scores” is available online from the Education Policy Analysis Archives.
The research is the latest of half a dozen or more recent studies that attempt to gauge the impact of the growing push to step up student testing and to hold schools or students accountable for the results.
But much like its predecessors, the new report, by researchers from Ball State University in Muncie, Ind., ran into criticism.
“This study can’t possibly tell you what they’re trying to show,” said Martin Carnoy, a Stanford University researcher whose own work has linked high-stakes testing to student learning gains. Mr. Carnoy, an economics professor, faulted the new study for relying on a single year of testing and graduation data and for using what he sees as a flawed measure of graduation rates.
However, other researchers said the study signals the need for caution at a time when national policymakers are debating how far to extend the testing requirements of the No Child Left Behind Act into high schools. Under the 3-year-old federal law, most states, beginning with the 2005-06 school year, will have to test students at least once in high school to meet the law’s accountability requirements. Mr. Bush has proposed adding two more years of high school testing to that requirement. But the president’s plan does not call for tying those tests to diploma requirements.
Defending the Findings
“Is [this study] worth a big story? Probably not,” said David C. Berliner, an education professor at Arizona State University in Tempe who has also conducted research on high-stakes tests. “Is it worth a warning flag? I’d probably say yes.
“There does seem to be a danger,” he said, “that you could narrow the curriculum and hurt achievement on broader tests like the SAT that measure critical-thinking skills.”
Gregory J. Marchant, the study’s lead author and an educational psychology professor at Ball State, said the study focused on 2001 SAT scores and 2002 graduation rates as a way to examine the tests’ impact on both ends of the high school population: students struggling to stay in school as well as college-bound teenagers.
Previous studies have shown that states with high school exit exams tend to be located in the South and to have high concentrations of poor and minority students—two groups that often score low on standardized tests. So Mr. Marchant and his co-author, Sharon E. Paulson, factored in data on family incomes, racial characteristics, parents’ educational levels, and high school grades in order to compare students from similar backgrounds across states.
To calculate graduation rates, they compared fall 1999 freshman enrollments with the numbers of graduating seniors in 2002. The 64 percent graduation rate they found for exit-exam states was 9 percentage points lower than the rate for other states.
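The arithmetic behind that measure is straightforward. To illustrate with hypothetical numbers, not figures from the study: if a state enrolled 100,000 freshmen in fall 1999 and graduated 64,000 seniors in spring 2002, its computed rate would be 64,000 ÷ 100,000, or 64 percent.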
“Any time you raise standards at the high school level, there is potentially no place for some of these kids to go but out,” Mr. Marchant said.
Among the findings for SAT scores, the report shows that high-achieving white students in exit-exam states lagged behind their peers in non-exam states by 13 to 16 percentile points. Those differences narrowed somewhat, though, when the researchers compared scores for individual students across states rather than lumping together an entire state’s scores.
But Mr. Carnoy of Stanford said the study’s measure of graduation rates failed to account for differences in the size of states’ 9th grade “bulges”—in other words, the upward blip in enrollments that occurs that year as larger numbers of students are held back. Some states, particularly those that ban the practice known as social promotion, are likely to retain more students than others. The study’s method of determining graduation rates also does not account for students who move out of a state.
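A hypothetical example shows how such a bulge could skew the comparison. If 100 first-time freshmen enroll in a state alongside 15 students repeating the 9th grade, the fall freshman count is 115; even if 90 of the first-timers go on to graduate in four years, the computed rate is 90 ÷ 115, or about 78 percent, well below the 90 percent rate for the actual entering cohort.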
Mr. Marchant countered that his method is at least as good as those that rely on schools’ own reports of dropout rates, many of which have recently been shown to be faulty, and even false.