States using the PARCC test are setting higher expectations for students than those using SBAC or the ACT Aspire, a new federal study concludes.
The study, released Thursday by the statistical wing of the U.S. Department of Education, maps states’ cut scores—the point at which a student is deemed proficient—onto the testing scale used by the National Assessment of Educational Progress. This allows for comparisons of the tests’ technical difficulty, despite variations in each test’s emphasis and format.
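The report doesn't publish its computation, but the general idea of this kind of mapping can be sketched with a simple equipercentile-style approach: if some percentage of a state's students clear the state test's proficiency cut, find the NAEP score that the same top share of that state's NAEP sample exceeds. The function name and the simulated scores below are illustrative assumptions, not values from the study.

```python
import numpy as np

def map_cut_to_naep(pct_proficient, naep_scores):
    """Hypothetical sketch: map a state's proficiency cut onto the NAEP scale.

    If pct_proficient percent of a state's students reach "proficient" on the
    state test, return the NAEP score that the same top share of the state's
    NAEP sample exceeds, i.e. the (100 - p)th percentile of the NAEP scores.
    """
    return np.percentile(naep_scores, 100 - pct_proficient)

# Illustrative example with simulated data (not actual NAEP results):
rng = np.random.default_rng(0)
sample = rng.normal(265, 35, 10_000)  # pretend 8th grade NAEP reading scores
cut = map_cut_to_naep(40, sample)     # 40% proficient on the state test
```

A higher NAEP-scale equivalent for the cut score then indicates a more demanding state standard, regardless of how the state test itself is formatted.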
It is the sixth report the Education Department has released using this method, but the first to look at how the shared tests developed in the wake of the Common Core State Standards stack up.
In general, the report shows that most states demand a significantly higher level of student performance than they did a decade ago. Some of that growth appears to be due to decisions made by the two federally funded testing consortia, SBAC and PARCC. (They are formally known as the Smarter Balanced Assessment Consortium and the Partnership for Assessment of Readiness for College and Careers.)
Both of those testing groups explicitly aimed to set a more stringent, “college and career ready” bar when they set their benchmarks in 2014. In fact, the move to SBAC and PARCC assessments created lots of headaches for states when worried parents saw their kids’ results on the newer, harder tests.
When it comes to head-to-head comparisons, though, PARCC clearly emerges as the hardest of the shared tests in each subject and grade studied—4th and 8th grade in reading and math. Indeed, PARCC's definition of "proficient" performance actually surpasses NAEP's in both 4th and 8th grade mathematics.
Here, for example, is a graphic showing the range of 8th grade reading expectations.
“Overwhelmingly, the PARCC standards are higher than the other two, ACT [Aspire] or Smarter Balanced,” said Peggy Carr, the associate commissioner of the National Center for Education Statistics, the statistical agency that produced the report, in a conference call with reporters. “Unfortunately, this doesn’t mean that [student] performance is also high, and there is little to no relationship between the performance of students in these states and how high the bar has been set.”
Comparing Testing Standards
Here’s a look at some of the other highlights in the study.
- In all grade and subject combinations, the range of where states set the proficiency bar has narrowed, mainly because states at the lower end raised their standards.
- In 4th and 8th grade math, no state set the bar below NAEP’s “basic” benchmark. Just four states did so in 4th grade reading, and one in 8th grade reading.
- Expectations on the ACT Aspire, a more recent competitor in the state-testing sweepstakes that has signed up two states, were below SBAC’s in math at both grade levels, but equal to or higher than SBAC’s in reading.
- Kansas set some of the highest proficiency bars overall, even though it doesn’t use any of the shared tests.
The findings also echo two other recent reports, both of which conclude—using a less rigorous methodology than NCES’s—that states have generally narrowed the gap between NAEP expectations and those on their own state tests.
Here’s one really tricky thing to remember: States in PARCC and Smarter Balanced agreed to report scores using a common definition of proficiency. But they were permitted to make their own decisions about which level would be used for their own school ratings or consequences, like graduation.
So while Ohio and Louisiana administered PARCC in 2015, they chose to use “approaching proficiency”—a lower standard—for accountability. NCES treated those states separately from the other PARCC states. And its analysts also dropped from the study states that had serious testing disruptions or didn’t follow all of the consortia’s test-administration rules.
Federal officials, meanwhile, underscored that a higher bar is not necessarily a better one.
“It’s important to evaluate states on the relative stringency of their standards in comparison to other states, but states need to look within their population of students and their own goals. I think it’s not an absolute answer,” Carr said.
“What we’re seeing, though, is that states are raising the bar and becoming more alike in terms of what they identify as proficient performance.”
A version of this news article first appeared in the Curriculum Matters blog.