Assessment Opinion

Assessing Deeper Learning: New Evidence About Common-Core Tests

By Robert Rothman — February 16, 2016

Three years ago, Joan Herman and the late Bob Linn from the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) set out to determine whether the assessments being developed by two consortia of states would be able to measure deeper learning competencies: specifically, the ability to think critically, solve problems, and communicate effectively. Using the materials available at the time--the design frameworks and prototype items--Herman and Linn concluded that, yes, the assessments were likely to do a far better job than existing state tests of measuring deeper learning.

Since then, the two consortia--the Partnership for Assessment of Readiness for College and Careers (PARCC) and the Smarter Balanced Assessment Consortium--have completed the development of the new assessments, and the tests were administered in half the states last spring.

Last week, two new studies provided fresh information about the assessments. And these studies, like the preliminary work by Herman and Linn, were heartening about the extent to which the new assessments measure deeper learning competencies.

The new studies did not examine deeper learning per se. Rather, they analyzed whether the assessments, along with ACT Aspire and the Massachusetts Comprehensive Assessment System (MCAS), measured the expectations for student learning incorporated in the Common Core State Standards. One study, by the Fordham Institute, examined fifth- and eighth-grade tests in English language arts and mathematics; the other, by the Human Resources Research Organization (HumRRO), examined high school assessments.

To conduct the studies, the two organizations convened groups of teachers, content experts, and assessment experts to analyze each test item and determine whether it measured the content described in the Common Core, whether it demanded the depth of knowledge required by the standards, and how the assessments rated for overall quality.

On the whole, the two consortia's assessments held up quite well. In both English language arts and mathematics, the match between those assessments and the standards' content was strong, while the match of ACT Aspire and MCAS was weaker. The match to the standards' depth of knowledge was good for both consortia's tests, and excellent for PARCC's ELA assessment.

Why does this matter? For at least two reasons. First, if a test fails to measure the deeper learning competencies included in the standards, then its results will provide misleading information about the extent to which students have developed those competencies. And second, a test that does not measure these abilities sends a signal to schools that the competencies are not important, and might encourage teachers to ignore them in favor of the lower-level abilities the tests do measure.

State officials should take the results of these studies seriously. More than forty states have adopted the Common Core, or some version of it, and it is imperative to use assessments that accurately reflect what the standards expect. These studies show that the PARCC and Smarter Balanced assessments are reasonable measures of those expectations. States using other tests should conduct similar studies--and make them public--to show educators and parents whether their tests really reflect the deeper learning competencies they expect all students to develop.

The opinions expressed in Learning Deeply are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.