Published in Print: April 21, 2004, as Test-Score Averages Don’t Tell Whole Story About Schools, Study Says

Test-Score Averages Don’t Tell Whole Story About Schools, Study Says


A new study suggests that the methods most states are using to judge schools’ academic progress may produce an incomplete—and possibly misleading—picture of the jobs the schools are doing.


Based on two years of test-score data for 230,000 students in 22 states, the study was released last week by the Northwest Evaluation Association, a nonprofit group that provides testing help to 1,200 member school districts.

The findings are noteworthy, researchers said, because they come as states are gearing up to comply with the federal testing requirements outlined in the No Child Left Behind Act. Under the law, states are required to use test scores to determine whether schools are making "adequate yearly progress" and to hold schools accountable when students’ scores fail to rise.

To meet that requirement, most states are using simple test-score averages, said G. Gage Kingsbury, the director of research for the Portland, Ore.-based group. The problem with that method, he said, is that it might not reveal much about how much students are learning—in other words, their academic growth over time.

For example, two schools might have identical test-score averages and thus earn similar ratings under the federal law. What the numbers might not reveal is that students at one school started out farther behind students at the other school.

"There’s not too much difference between these schools in terms of NCLB," said Mr. Kingsbury. "But if I was a parent, I’d want my child in the school that’s causing more growth in students’ learning."

Similar, But Different

To explore how such a growth-oriented evaluation approach might play out, the Oregon researchers analyzed scores for students in grades 3-8 from 723 of the association’s member districts. Students took the tests in the spring of 2002 and 2003.

The goal was to gauge the amount of unexpected score gains that students made from one year to the next.

As they had predicted, the researchers found that schools with similar performance ratings actually varied widely in the test-score gains their students were making.

More than 20 percent of the schools that were rated high-performing by their states fell into the bottom quarter when measured by the researchers’ test-score-growth index.

The reverse was also true: Several schools with low average test scores had score gains as large as those of the highest-performing schools. The differences between similarly ranked schools sometimes added up to as much as two-thirds of a year in a student’s academic growth.

"If a low-status school started out with students with low status and yet moved them dramatically forward, I would argue that might actually be a success story," Mr. Kingsbury said.

Under provisions of the federal law that allow students to transfer out of failing schools, though, that school could lose students, Mr. Kingsbury said. The students who transfer could also conceivably end up at higher-ranked schools where they might make less academic progress than they did at the schools they left behind.

Likewise, Mr. Kingsbury argued, high-performing schools that seemed to be doing little more than nudging students’ learning along have no incentive to work harder under the evaluation methods that states are using now.

Students at high-performing schools didn’t "top out" on the tests because the tests were given on computers that adapted the test questions to students’ actual test performance, he added. In other words, the programs assigned progressively harder questions to students as they answered more questions correctly on particular topics.

The drawback to such growth or "value added" measurement techniques, Mr. Kingsbury said, is that tracking students’ progress is difficult when students move from school to school.

To address that problem, he said, states should assign students identification numbers that can be tracked electronically through their testing systems.

Still, he said, combining the growth measure with a measure of average test scores might give a clearer picture of the kinds of jobs that schools are doing in enhancing students’ academic progress than the methods that states rely on now.

Currently, Mr. Kingsbury said, most states turn to academic-growth measures only after a school has been labeled failing.

Vol. 23, Issue 32, Page 10

