The PARCC and Smarter Balanced assessments do a better job gauging the depth and complexity of important academic skills and knowledge than do the ACT Aspire or Massachusetts’ MCAS exam, according to a study released Thursday.
The study, performed by teams of assessment and content experts for the Thomas B. Fordham Institute, evaluates two aspects of the tests at the 5th and 8th grade levels: how well each emphasizes the content that’s most important at each grade for students on the path to college readiness, and how well each requires students to demonstrate a wide range of thinking skills, especially the higher-order skills that have historically been shortchanged in states’ tests.
A report by the Human Resources Research Organization, HumRRO, also released Thursday, examines the same tests at the high school level.
The Fordham researchers also probed the extent to which each test demands higher-order thinking skills of students who take it. While they found some variability on that question, they concluded that all four tests “overwhelmingly” are “more challenging—placing greater emphasis on higher-order skills—than prior state assessments,” particularly in math.
The two research teams fashioned their studies to reflect the priorities in the Council of Chief State School Officers’ “Criteria for Procuring and Evaluating High-Quality Assessments,” released in October 2013. They evaluated how well each of the four tests reflected key aspects of test quality, such as depth and complexity of content covered, and they assigned ratings of “excellent,” “good,” “limited/uneven” or “weak” to show how well the tests matched the CCSSO’s criteria for good tests.
The overall ratings gave stronger marks to PARCC and Smarter Balanced at grades 5 and 8 than to ACT Aspire or MCAS.
The high school results showed similar patterns. (Both studies include more detailed breakdowns, as well.)
The researchers emphasized that some aspects of the ratings measured how well the tests reflected the CCSSO criteria, and others measured how well the tests reflected the “depth of knowledge” in the common-core standards. As a result, mismatches—and thus low ratings—emerged not only when the assessments demanded less cognitive complexity than the standards, but also when they demanded more.
For instance, PARCC received a “low” match rating on some aspects of its high school English/language arts exam because evaluators thought it contained a heavier distribution of higher-order thinking skills than did the common standards themselves, Sheila Schultz, who led the HumRRO study, said in a conference call with reporters.
Criticism and Responses
Digging into more detail on each test’s strengths and weaknesses, the Fordham team urged ACT to revise its tests to put more emphasis on “close reading” skills, and on students’ ability to cite evidence from their reading to support an argument. It suggested that ACT rework its math content to more heavily emphasize the “major work” of each grade level, a big priority of the common core. In a response appended to the report, ACT said it is planning a number of changes in its Aspire test for 2015-16, including more items that require students to cite evidence in their writing. (It includes a chart that indicates its test would be 25-30 minutes longer, too.)
The researchers concluded that Massachusetts’ MCAS focuses insufficiently on the “major work of the grade” in grade 5 math, and doesn’t do a good enough job of requiring students to show their skill in research and inquiry, reading informational texts, and citing evidence in their writing. In a response statement, the Massachusetts department of education says the MCAS was a “terrific 20th century assessment,” but has “reached a point of diminishing returns,” which is why Massachusetts plans to blend the best of MCAS with the best of PARCC into a new test for 2016-17.
Weaknesses in PARCC, Smarter Balanced
While they outranked the MCAS and ACT Aspire overall, PARCC and Smarter Balanced didn’t escape criticism from the Fordham team.
PARCC’s math test suffers from problems with “technical quality” and “editorial accuracy,” the report says, and it could do a better job of focusing on the major work of 5th grade math. Its English/language arts test could be improved by adding more research tasks designed to require students to use multiple sources, and, eventually, by assessing listening and speaking skills, it adds. A “better balance” between informational and literary texts would also be a benefit, the Fordham team concluded.
In its response, PARCC defends its assessment, noting, among other things, the research simulation tasks that require students to synthesize multiple texts, and its “robust” set of instructional supports that focus on speaking and listening.
The Smarter Balanced test in English/language arts could benefit from choosing better vocabulary items, demanding more higher-order thinking skills in 5th grade, “ensuring that all items meet high editorial and technical quality demands,” and, eventually, developing a way to assess speaking skills (the test already gauges listening skills), the report says. In math, a stronger focus on the “major work of the grade” in grade 5 would improve the test, as would solving “a range of problems (from minor to severe)” with “editorial accuracy” and technical quality, the Fordham researchers write.
In their response, Smarter Balanced leaders Tony Alpert and Luci Willits say they “recognize that there is always room for improvement” in the test, and would consider the study’s recommendations. They say the evaluation overlooked key strengths of the exam, however, including its computer-adaptive nature, which adjusts the level of difficulty as students progress through the test, and its strong accessibility features, including support for 90 percent of the consortium’s English-learners.
The Fordham study was funded by seven foundations that support the common core, including the Bill & Melinda Gates Foundation, which also supports Education Week‘s coverage of standards and curriculum. Philanthropic support of common-core-related work has been a sore point among common-core opponents, who argue that big foundations exert too much influence on education policy. The introduction to the Fordham report anticipated that objection:
“It is ... true that the study was funded by a number of foundations that care about assessment quality and the Common Core,” write Thomas B. Fordham Institute Executive Director Michael J. Petrilli and Amber Northern, the institute’s senior vice president for research. “If you think that big private foundations are ruining public education, this study is also not for you.”