By guest blogger Catherine Gewertz
States that have adopted new tests, or made significant changes to their old ones, will have to undergo peer review by the U.S. Department of Education within the next four to eight months, according to department officials.
That timetable made its debut alongside the issuance of much-awaited new federal guidance on the peer-review process. That guidance, published Friday, marks the official re-launch of the peer-review process after nearly three years of suspension. It details the requirements states must meet to undergo the federally mandated reviews by panels of experts.
The document itself says that states must undergo peer review within six months of when they first give a new, or substantially changed, assessment.
But in an interview with Education Week, Ann Whalen, a special adviser to U.S. Secretary of Education Arne Duncan, said that states actually will have three review periods to choose from: January, March, or May.
They will be expected to submit detailed documentation several weeks in advance, and the department will work with states to help them select a period that works for them, said Whalen, who has been delegated the duties of the assistant secretary of elementary and secondary education.
The testing landscape shifted profoundly in 2014-15, with many states giving new tests to reflect the common core, and half using federally funded tests developed by two state consortia--PARCC and Smarter Balanced.
One-Time-Only Tests Can Skip Review
But some states plan to move to a new test in 2015-16, and they worried privately that they’d have to undergo peer review--notoriously time- and labor-intensive--for a test they’d used for only one year and then left behind. Whalen said states would not have to do that. The department would rather have them focus on ensuring that their 2015-16 tests “are high-quality and will sail through peer review,” she said.
Elements of the guidance were reorganized and expanded to emphasize their importance, and to reflect 2014 updates to the testing industry’s Bible, the Standards for Educational and Psychological Testing. Sections requiring states to submit evidence that their assessment systems include adequate security measures and properly protect student data privacy, for instance, have new prominence and detail. (See the previous version of peer review guidance here.)
Another aspect of the updated approach to federal review is an emphasis on ensuring that states’ tests seek to measure students’ “higher-order thinking skills.” The kinds of evidence that states might submit to show that “critical element” in the review might include test blueprints, or samples of test-question specifications that lay out the questions’ “cognitive complexity.”
A Spotlight on Numbers Taking, and Skipping, Tests
Participation data, too, has new prominence in the updated guidance. States were always required to report the numbers of students who took state-mandated assessments, to show that the tests are being given to “all students,” as required by law. But the new version sets off the participation-data requirement by itself, with a grid to illustrate that states must supply the number of students enrolled, and the number and percentage of students tested, in each of grades 3-8 and at the high school grade level chosen for testing. That data takes on new resonance for the 2014-15 testing season, since rising antipathy to testing sparked a massive opt-out movement in some places.
The peer-review process was suspended in December 2012 to enable the department to revise its guidelines in light of many states’ new standards and tests, and the changes affecting assessment because of technology. States have been impatiently awaiting the new guidance, which was originally slated to come out in summer 2014, but was repeatedly delayed.
The reviews, required by the Elementary and Secondary Education Act, are aimed at demonstrating that states meet federal requirements to have, among other things, rigorous academic standards and high-quality, valid, and reliable assessments.
The panels of assessment experts, appointed by the department, review evidence of those things, rather than reviewing the standards and tests themselves.
To show that their standards are rigorous, for instance, states must submit stacks of various kinds of information, such as endorsements from their state university systems that the standards reflect college-ready expectations, or documentation that content-matter experts were involved in the standards’ creation.
Importantly, states that are participating in one of the two consortia--or are using another test that’s common to a significant number of states--can work on their peer-review submissions together.
That could give states that are using tests from either the Partnership for Assessment of Readiness for College and Careers or the Smarter Balanced Assessment Consortium an easier road, especially given the relatively tight timeline for peer review, said Scott Marion, the associate director of the Center for Assessment.
“I think the six-month timeline for submission is too fast for states,” Marion said in an email. “Look at all the analyses required for fairness, comparability, validity, etc. A six-month timeline implies that the state has all necessary analyses conducted and they just need to package things up. This is a tremendous advantage for the consortia, which is fine with me, but [the department] needs to acknowledge this.”
But Wes Bruce, the former chief assessment director in Indiana, had a different take. “I think there’s probably a little bit of truth to the fact that this encourages consortia or collaboration writ large,” he said in an interview.
But he added, “I don’t think it favors any set of consortia; PARCC and Smarter Balanced aren’t more advantaged than if Alaska and Hawaii got together and decided to build” their own test.
And Bruce thinks the timeline is doable, especially for state assessment officials who have experience with peer review.
“There may be some folks in some states who are surprised by this, [especially] if you haven’t already had to do peer review,” he said. “There certainly are some changes and it’s not going to be a slam dunk,” but, he added, “I don’t think it’s a lot of scrambling.”
And Louisiana’s former assessment director, Scott Norton, who is now the director of standards, accountability, and assessment at the Council of Chief State School Officers, agreed.
“States will step up and meet the challenge,” he said. But, “it’s a big amount of work for the assessment staff to take on.” The key new elements make sense, however, he said, including the new focus on data privacy and test security, and the changes for states that are using computer-adaptive tests.
The CCSSO, he said, will be lending a hand. The organization is already planning to get state assessment directors on the phone for initial reaction, and it will host a one-day meeting for directors in November, with experts on call to help field questions. And most importantly, he said, the CCSSO will work with the department on any common issues that emerge as states move forward with the review process.
Another key question, raised by Chris Domaleski, a senior associate at the Center for Assessment: How will the guidance on policies for including all students work for states that have adopted, or are considering, opt-out laws?
Here is a map of the six “critical elements,” and subcategories of those elements, that the education department reviewers will examine. The guidance includes many examples--though not a complete list--of the kinds of evidence states can submit to gain approval.
Assistant Editor Alyson Klein contributed to this report.