Consider this a major wonk alert.
The headline probably tipped you off. How many people sit around dinner tables talking about text complexity, for crying out loud? Probably only hard-core reading wonks, right?
That might have been the case a couple of years ago, but the Common Core State Standards are starting to change that. Because they specifically address text complexity, more and more people—ordinary people, not just wonks—are having to grapple with how to size up a text’s complexity and match students with appropriate readings. And a new analysis could play a part in how people do that.
Before we wade into it, let’s make sure no one has missed the part of the standards that deals with text complexity. If you are just getting acquainted with this, you might want to go back and look at the English/language arts document’s 10 “anchor standards” for reading (Page 10 of this document). Those are like the wood frame of your house; they’re the big reading ideas that everything else hangs on. Take a look at Standard 10: “Read and comprehend complex literary and informational texts independently and proficiently.” If you trace that standard up through the grade-level standards, you’ll see how it manifests itself in each grade, across literature as well as informational material.
Appendix A of the standards, already a household phrase among the readability wonks, outlines the standards’ approach to figuring out a text’s complexity. It takes a three-part approach: quantitative, which means using computerized readability formulas like the well-known Lexile or Flesch-Kincaid to judge things like word frequency and sentence length; qualitative, which requires a human to judge a text’s structure, meaning, and other factors; and reader-task, meaning another set of judgments about why you are asking a student to read a given text, and what the reader brings—or doesn’t bring—to the experience of reading it.
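If you’re wondering what those computerized formulas actually measure, here is a minimal sketch, in Python, of the publicly documented Flesch-Kincaid grade-level formula. (The syllable counter below is a crude vowel-counting heuristic added for illustration; commercial tools such as Lexile and ATOS rely on proprietary measures, including word frequency, that this sketch doesn’t attempt.)

```python
import re

def count_syllables(word):
    # Crude stand-in: count runs of consecutive vowels.
    # Real readability tools use pronunciation dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # The published Flesch-Kincaid grade-level formula:
    # 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Longer sentences and longer words push the grade level up, and that is essentially all a formula like this can see, a limitation that will matter later in this post.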
This is where we can begin our update. A supplement to Appendix A, issued today, expands on the original appendix’s approach to text complexity, drawing on a new study conducted to inform that work.
The supplement expands the number of quantitative tools that it sees as having value in judging those aspects of a text. The original Appendix A discussed only Flesch-Kincaid, MetaMetrics’ Lexile Framework for Reading, Renaissance Learning’s ATOS, and the University of Memphis’ Coh-Metrix. The supplement adds discussion of Questar Assessment’s Degrees of Reading Power, Pearson’s Reading Maturity, and ETS’ SourceRater.
The research on which the supplement is based found six of these tools equally effective at predicting how much difficulty students would have reading a given text. (Although the research team found the Coh-Metrix tool valuable for evaluating a text, it wasn’t included in the comparison because it doesn’t generate a single number to reflect a text’s difficulty.)
There is some variation among the quantitative tools in the grade bands to which they assign texts, but each one climbs “reliably—though differently—up the text-complexity ladder,” according to the Council of Chief State School Officers and the National Governors Association, which released the supplement and its underlying research (and, as you’ll recall, co-led the initiative that produced the standards themselves).
The supplement suggests that teachers use such tools to assign a grade band to a text, and then move on to the “qualitative” and “reader-task” sorts of judgments to narrow it down further to a specific grade level.
It also offers a new chart that allows users to see how the ranges each tool assigns to texts correspond to the grade-band expectations in the common standards. An accompanying guide for that chart walks users through how to put a text through its paces using the quantitative tools. All of these resources are on a page of the Student Achievement Partners’ website that focuses on the quantitative considerations of text complexity. SAP, you might recall, is the New York City nonprofit that counts among its founders the two people who co-led the writing of the ELA standards: David Coleman and Sue Pimentel.
SAP has created a page on its website that focuses on the qualitative factors of text complexity, as well. That page offers guides to the things to think about as you are making those judgments, advice on how to place a text’s qualitative features on a scale of difficulty, and a sample analysis of one text.
“Measures of Text Difficulty,” the research study that drove this new flock of guides and the supplement to Appendix A, was led by Jessica Nelson of Carnegie Mellon University, Charles Perfetti of the University of Pittsburgh, and David Liben and Meredith Liben of Student Achievement Partners. Working with the six quantitative tools, the researchers tested how well each one’s text-difficulty predictions were borne out by student performance on national standardized reading tests. They also checked how closely each tool’s text-difficulty assignments matched experts’ judgments of difficulty.
If all these links have you bouncing around too much, you can go to the landing page for all Student Achievement Partners’ material on text complexity, and navigate wherever you like from there.
In the end, while the supplement seeks to expand on the thinking in the original Appendix A, it also echoes the original in saying that the process of determining a text’s complexity is limited and “imperfect.” It notes that even the three-part approach cannot be applied to poetry and plays, or to texts for kindergarten and 1st grade pupils. And it notes that even the first step of the process—using quantitative measures to assign a text to a grade band—is particularly imperfect when applied to narrative fiction in the upper grades.
A prime example? John Steinbeck’s The Grapes of Wrath. Widely used quantitative tools often place that novel at grade 2 or 3, the supplement notes, because its simple syntax masks its more complex meaning. Such an example serves as a reminder that in some cases, the qualitative factors should trump the quantitative ones in sizing up a text’s complexity.
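To see that blind spot in action, feed the flesch_kincaid_grade sketch from earlier a passage written in similarly plain syntax. (The sentences below are invented for illustration; they are not quoted from the novel or analyzed in the supplement.)

```python
# Invented sentences with simple syntax; the formula cannot see
# whatever thematic weight a passage like this might carry.
passage = ("The man sat in the truck. The dust blew past. "
           "He did not look back at the house.")
print(round(flesch_kincaid_grade(passage), 1))  # well below grade 2 by the numbers
```

Short sentences and one-syllable words drive the score down to primer level, no matter what the words add up to. That is precisely the case where the supplement says human judgment has to take over.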
A bit of art, a bit of science, blended into a conversation that will soon move—if it hasn’t already—from the wonks’ dinner tables to a dinner table near you.