For those of you who haven’t been following the trials and tribulations of New York state’s Regents tests, on Monday the board agreed to change the assessment’s scoring system in an effort to cut down on the number of students who earn proficient scores without actually being proficient in the material.
Changes may be in the works, but retired Board of Education analyst Fred Smith took to the New York Post yesterday to explain the roots of the scoring problem. After all, better tests can’t be created until test-makers know why the old ones were bad.
According to Smith, the test-developers’ problem was twofold. First, there were problems with trying out new questions: “Field-test questions that the test-writers determined were difficult for students were actually not that hard for them,” Smith explains, and gaps between estimated and actual performance were evident as early as 2006. Second, there were problems judging the difference between results on multiple-choice questions and “constructed response” questions, which require a written answer. While multiple-choice scores were rising, falling scores on the other sections went unreported.
These disparities were so great that, according to Smith, “our lowest-achieving children could show progress simply by guessing randomly.” And with the same officials and advisers set to overhaul the Regents, Smith worries the test’s previous issues won’t be fixed.
Do standardized tests accurately assess students’ skills? And if they don’t, how should test-makers shape assessments to better test students?
UPDATE: Mark Burke, an education consultant at Accurate Assessments, furthers the assessment conversation by asking what kind of tests we should really be giving students. Do we continue down the road of instant-gratification multiple-choice assessments, or do we create tests that require “analysis, creation, and evaluation of complex systems”? What’s your take?
A version of this news article first appeared in the Teaching Now blog.