The state assessment consortia are working on designing tests for the common standards that have been adopted by 43 states and the District of Columbia. Their work raises a seemingly endless list of questions about design and content. And, this being Washington, lots of outside folks are jumping in to try to shape the dialogue.
The latest string of questions comes in the form of a white paper from Pearson, which has begun assembling a family of related papers about “next generation assessment” in one spot on its website. The paper released this week details a laundry list of things that the consortia and policymakers are going to need to get right to make a success of this behemoth of an assessment project.
Take this, for instance: In any given grade and subject, will all of the standards in the common core be deemed eligible for assessment, or only some? Some demands of the standards are pretty tough to assess in an end-of-the-year summative test, the authors say. They cite as examples the expectations that students will “conduct sustained research projects” and “write routinely over extended time frames.” The paper notes that “through-course assessments” like those envisioned by the Partnership for Assessment of Readiness for College and Careers consortium “could possibly address” that problem, but its authors maintain that such skills would be “virtually impossible” to assess in an end-of-year summative test.
In math, the Pearson authors note the expectation that high school students show they “understand that rational expressions form a system analogous to the rational numbers, closed under addition, subtraction, multiplication and division by a nonzero rational expression.” The paper says: “An intellectual abstraction at this cognitive complexity level would be difficult to assess in any standardized assessment.”
The paper’s authors advise test-makers to cast their net more broadly in deciding what skills are assessable, in the hope that creativity and evolving technology will make it easier to test standards that now seem unassessable.
Another question posed in the paper is whether all assessable standards will be tested each year, or whether they will be sampled. One factor to grapple with here is the sheer number of standards that could be tested; if test-makers include all 175-plus of the skill sets in the common standards in math, for instance, the test could prove too long, the paper’s authors say. By “sampling,” or testing, just some of the skills in the standards each year, test-makers could manage length and also “reflect a growing national consensus” to focus on essential skills, the paper says. But they also run the risk of sending the message that certain skills or standards don’t need to be taught.
To deal well with the issue of which standards are tested and whether they will be sampled, the authors advised, the test-making teams need to solicit input from policymakers and stakeholders, particularly classroom teachers. If sampling is decided upon, the rationale for doing so must be clearly articulated, the paper says.
These are only two of a dozen questions the authors pose as the tests are designed. The folks at the center of the projects have raised long lists of questions themselves. If you haven’t read the detailed descriptions of their plans, you might want to do so. (They’re in the groups’ applications for Race to the Top money, which is what’s funding all of this.) The PARCC consortium’s application is here. Check pages 43-59 for the heart of the discussion about its test design ideas. The SMARTER Balanced consortium’s application is here. The section describing test design is on pages 40-55.
A version of this news article first appeared in the Curriculum Matters blog.