Study: Formulas Yield Widely Varied Graduation Rates
The way states calculate graduation rates could have a dramatic effect on the results and influence states' ability to meet the accountability provisions under the "No Child Left Behind" Act of 2001, a study concludes.
The analysis by the Urban Institute was presented here this month at a meeting sponsored by the Washington-based research organization.
Under the federal law, a reauthorization of the Elementary and Secondary Education Act, states must use graduation rates as indicators of whether their schools and districts are making annual progress on academic performance at the secondary level.
The researchers, though, found that "calculating an apparently simple value—the percent of students who graduate from high school—is anything but simple and the best way to go about it is anything but apparent."
The law defines the graduation rate as the percentage of public secondary students who graduate from high school with regular diplomas. Only students who receive such diplomas—as opposed to other state-issued credentials or General Education Development certificates—are to be counted as graduates, according to final federal rules.
But the rules give states considerable flexibility, noting that the U.S. secretary of education may approve a state-developed definition that "more accurately measures" the graduation rate.
"This flexibility has the potential to impact the integrity of accountability systems," said Christopher B. Swanson, the co-author of the new paper and a research associate at the Urban Institute.
Same Data, Different Results
Mr. Swanson and co-author Duncan Chaplin reviewed the "accountability workbooks" that states had filed with the Education Department as of April 30. Such plans show how states propose to meet the No Child Left Behind law's accountability mandates.
Of the 45 plans that were publicly available, the researchers found that only eight intended to use a longitudinal graduation rate that follows individual students over time. Twenty planned to use a method devised by the National Center for Education Statistics that compares the number of students who earn diplomas in a given year against the total number of graduates plus all students who dropped out in each of the past four years. The remaining 14 states proposed a wide variety of other strategies.
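The NCES calculation the article describes can be sketched in a few lines. This is an illustrative reading of the method as summarized above, not the agency's official implementation, and the district figures are hypothetical:

```python
def nces_rate(diplomas, dropouts_by_grade):
    """NCES-style 'leaver' rate, as described in the study: diplomas
    awarded in year t divided by those diplomas plus the dropouts
    attributed to the same cohort -- grade 12 dropouts in year t,
    grade 11 in t-1, grade 10 in t-2, and grade 9 in t-3.

    dropouts_by_grade maps each grade (9-12) to the dropout count from
    the year that grade's students belonged to this graduating cohort.
    """
    leavers = diplomas + sum(dropouts_by_grade[g] for g in (9, 10, 11, 12))
    return diplomas / leavers

# Hypothetical district: 850 diplomas and modest reported dropout
# counts. Because dropouts appear only in the denominator, any
# undercounting of dropouts pushes the rate upward.
rate = nces_rate(850, {9: 40, 10: 35, 11: 30, 12: 45})  # 0.85
```

The sketch makes the study's later point concrete: the formula depends entirely on how completely dropouts are reported.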
The study compared the NCES method, embraced by the largest number of states, with two other methods for calculating graduation rates. The authors applied the three different formulas to data for the class of 2000, using information from the Common Core of Data, a census of public schools and districts collected by the federal government.
In addition to the method crafted by the NCES, the researchers used a formula they've developed themselves, called the "Cumulative Promotion Index," and a method adapted from one developed by Jay Greene of the New York City-based Manhattan Institute, which they call the "Adjusted Completion Ratio."
Mr. Greene's approach compares the number of graduates in a given year with the size of the 9th grade class four years earlier, adjusting for changes in district enrollment.
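A minimal sketch of that comparison, assuming the enrollment adjustment is a simple ratio of total district enrollment between the base year and the graduation year (the published method involves additional smoothing; all numbers here are hypothetical):

```python
def greene_rate(graduates, ninth_grade_base, enroll_base, enroll_final):
    """Greene-style completion ratio: graduates in year t measured
    against the 9th grade class of year t-3, with the base cohort
    scaled by the change in total district enrollment to account for
    students moving in or out of the district."""
    adjusted_cohort = ninth_grade_base * (enroll_final / enroll_base)
    return graduates / adjusted_cohort

# Hypothetical district: 750 graduates from a class of 1,000 ninth
# graders four years earlier, with total enrollment unchanged.
rate = greene_rate(750, 1000, 20000, 20000)  # 0.75
```

If district enrollment had grown over the four years, the adjusted cohort would be larger and the computed rate correspondingly lower.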
The Urban Institute researchers' promotion index estimates the probability that a student entering 9th grade will complete high school on time with a regular diploma. It does so by multiplying the proportion of 12th graders who earn diplomas in a given year by the percentages of students in grades 9, 10, and 11 who are promoted to the next grade that same year.
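The index described above reduces to a product of four single-year ratios. Here is an illustrative sketch with hypothetical enrollment counts, using fall enrollment by grade as a stand-in for the promotion data the researchers draw from:

```python
def cpi(enroll_t, enroll_t1, diplomas_t):
    """Cumulative Promotion Index sketch: the product of one-year
    promotion ratios (grade 9->10, 10->11, 11->12, measured from fall
    of year t to fall of year t+1) and the diploma ratio (diplomas in
    year t over grade 12 enrollment in year t)."""
    p9_10 = enroll_t1[10] / enroll_t[9]
    p10_11 = enroll_t1[11] / enroll_t[10]
    p11_12 = enroll_t1[12] / enroll_t[11]
    p12_grad = diplomas_t / enroll_t[12]
    return p9_10 * p10_11 * p11_12 * p12_grad

# Hypothetical district losing roughly 10 percent of students at each
# step: the compounded index lands near 0.66, well below any single
# year-to-year promotion ratio.
rate = cpi({9: 1000, 10: 900, 11: 810, 12: 729},
           {10: 900, 11: 810, 12: 729},
           diplomas_t=656)
```

Because the four ratios all come from the same school year, the index needs no multi-year cohort tracking, which is part of its appeal when longitudinal data are unavailable.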
Using a modification of the NCES method that counts only regular diploma holders (and not other state certificates or credentials), the researchers arrived at a graduation rate for the average district of 85 percent. Estimates using the Greene and Cumulative Promotion Index methods, in contrast, produced much lower graduation rates: 75 percent and 73 percent, respectively.
At the state level, the latter two formulas yielded an average graduation rate of about 68 percent for the class of 2000.
In comments made at the meeting, Mr. Swanson speculated that the considerably higher graduation rate using the NCES formula reflected the method's heavy reliance on dropout rates, which may be substantially undercounted.
Indeed, the researchers found that the NCES method could be used to determine graduation rates for only 24 states, because most states did not collect information on dropouts or failed to do so according to the Common Core of Data's standards. Similarly, the NCES indicator could be calculated for only 38 percent of districts nationwide using the Common Core of Data.
When the researchers used the formulas to calculate graduation rates for each of the nation's 100 largest school districts, they found a "surprising range of values" depending on which formula they used.
To take one example, the estimated graduation rate for Boston differed by 20 percentage points (60 percent under the Greene method and 80 percent under the Cumulative Promotion Index).
Once again, the NCES method could be used to calculate graduation rates for only 27 of the 100 districts, and typically could not be used to calculate rates for various minority groups, as required by the federal law.
"I think what we may really need here is less flexibility rather than more," in deciding which methods states use, Mr. Swanson said at the event.
He advocates developing scientifically based standards for calculating graduation rates that could be used in state accountability systems, and safeguards to ensure the data are both complete and accurate.
'Apples to Apples'
Another panelist at the event, Michael D. Casserly, the executive director of the Washington-based Council of the Great City Schools, which represents large urban school districts, said that "by and large, none of the indicators summarized in the paper are really ready for prime time."
When the graduation rates for a school system like Cincinnati vary from 17.6 percent on the Cumulative Promotion Index, to 28.4 percent using the Greene methodology, and to 53.6 percent under the NCES formula, he argued, it's not good enough to say the variations are inexplicable.
"There's something going on underneath these numbers that requires considerably more work," Mr. Casserly said.
"I think I would come down more on the side of additional flexibility, at least for the moment," he concluded, "while these methods are being sorted out."
Christine Wolfe, the director of policy for Undersecretary of Education Eugene W. Hickok, another panelist, said that "there are many folks who would have liked a national definition in the statute," but that congressional lawmakers didn't believe such a definition was appropriate.
Simply calculating graduation rates is a "significant challenge for most states," Ms. Wolfe said, because they lack robust data systems. That's particularly true when it comes to reporting the data at the school level for each of the racial, ethnic, and other subgroups of students required in the law.
The panelists at the Urban Institute's gathering noted that the method states choose may make less difference if their standards for adequate progress require schools and districts to improve graduation rates rather than to clear specific bars.
Because states, districts, and schools just have to show that those rates are improving, there may be "less compulsion" to choose a less rigorous definition, said Carmel Martin, the chief counsel to Sen. Jeff Bingaman, D-N.M., one of the primary sponsors of the law.
Even the best technical fix may not make it easy to compare graduation rates across states, Mr. Swanson pointed out, because states can set their own standards for graduation, coursetaking, and exit exams.
"Even if you can figure out a way to measure graduation rates that's scientifically based and methodologically rigorous from a technical standpoint," he said in an interview, "it may still be hard to compare graduation rates from state to state in an apples-to-apples kind of way. That's not something that the technical methodologies will fix."
Vol. 22, Issue 37, Pages 17,22