Accountability Ratings
Read the study, “Testing the Testers 2003: An Annual Ranking of State Accountability Systems,” from the Princeton Review.
New York has the best state system of school accountability in the country, according to “Testing the Testers 2003.” The second annual ranking of such systems by the New York City-based Princeton Review, a for-profit provider of test-preparation and college-admissions services, placed Massachusetts second and Texas third.
The study ranked each state and the District of Columbia on 22 indicators in four areas: alignment of tests with the state’s curriculum standards; test quality; the openness of the testing program to public scrutiny; and the extent to which test data are used to support better teaching and learning.
The criteria were weighted at 20 percent, 15 percent, 30 percent, and 35 percent, respectively. States also were assigned letter grades on an A-to-F scale for each of the four criteria.
“Accountability, we believe, matters a lot,” said Steven Hodas, the study’s author and the executive vice president of Princeton Review. “The reason that people fight over it is because they recognize that it’s a strong motivator of behavior. We think that it’s useful to highlight how states are approaching these issues and best practices.”
The company, he said, adjusted the weightings and dropped some of the indicators this year, as it refined its measures. It also gave less weight to indicators now required by federal law, such as disaggregated data.
Those changes led to some large swings in the ratings, according to Seppy Basili, a vice president at Kaplan K-12, a New York City-based competitor. Virginia, for instance, jumped from 18th to fifth place, while Kansas plummeted from eighth to 41st.
“Certainly, it raises lots of questions about methodology and importance,” Mr. Basili said. When big states with large markets wind up at the top of the list, he argued, “it does make you start to wonder.”
Mr. Hodas said year-to-year comparisons “are not really going to be that illustrative, because not only will we add and drop indicators, but we are going to adjust the ratings to reflect facts on the ground. ‘Best practice’ is a moving target.”
No state received an A for either of the report’s two most heavily weighted criteria: the openness of state testing programs and the use of test data to support teaching and learning.
Mr. Hodas said the results should be interpreted cautiously. “There’s always an inherent element of subjectivity,” he said. “Of course, that’s true of any accountability system.”
—Lynn Olson lolson@epe.org