Documents trickling out of the groups of states aiming for Race to the Top assessment money show that they consider it very important to design testing systems that permit comparisons among students no matter what state they live in.
You remember (right?) from your attentive reading of this blog and a recent story on edweek.org that the groups, or “consortia,” of states applying for chunks of that $350 million have boiled down from six to two (not counting the one that is aiming solely at the $30 million for high school end-of-course assessments).
Most of that winnowing happened in private winter meetings convened by the Council of Chief State School Officers and the National Governors Association, the two groups leading the work to draft common standards, to which these new-age assessments are supposed to be aligned. (With me so far? Good.) They got leaders of five of the six consortia (which you read about here) to merge into two groups and outline their plans. Using those plans, the CCSSO and the NGA put together a paper describing the “common vision” and priorities shared by both groups, as well as their differences. They released the paper today, and the consortia’s outlines are included as appendices.
A good deal of the paper is stuff we’ve heard before, since it reflects administration priorities such as making sure the new tests yield data about the growth in student achievement (not just the status), take advantage of new technologies, involve teachers in their design, and incorporate a mix of tasks that go far beyond long lists of multiple-choice questions. But it’s interesting to read in more detail about the types of tests the consortia are planning to design.
And there’s an interesting bit in there about comparability. The CCSSO and NGA say that this is a high priority for the governors and state education chiefs in designing new assessments, so the two groups are leading a joint project to make sure that scores from the summative assessments created by the two consortia can be compared from state to state. So even if West Virginia is using assessments created by a different consortium than the one Florida belongs to, scores from a school in one state could still be compared to scores in the other (although this wouldn’t be possible on a student-by-student basis). The two groups are going to convene testing experts to see how this can be done. And they plan to ask each consortium to sign a memorandum pledging support for this comparability effort.
When I chatted with NGA and CCSSO leaders, they made no bones about their wish that the two consortia merge further into one. They could benefit from better economies of scale, and one system of assessments used across all the states that adopt common standards would enable student-to-student comparisons, they said. (There is some simple division here: one consortium gets a bigger chunk of the RTT money than two would. Of course, a huge portion goes to the test developers—whoever they turn out to be. But the idea is that by banding together and getting a bigger chunk of RTT money, the money is used more efficiently in developing one set of tests for everyone, and also, I’m guessing, that each state has more money to use in implementation than if two consortia—with overlapping membership, as things stand currently—remain.)
Whether the two will merge into one is still an open question. As you read this, the jostling is still going on. And how folks will feel about one set of tests offered to everyone is anyone’s guess (the NGA and CCSSO folks didn’t see this as potentially controversial). But until the tea leaves become clearer, you can read this paper and the appendices and give us a shout-out right here.