To many New Jersey residents, the news might seem reason to celebrate.
A report from the National Education Goals Panel last November showed the state’s high school completion rate had increased from 90 percent in 1990 to 92 percent in 1994--an apparent glimmer of hope picked up by some New Jersey newspapers.
But the news left Philip Burch, a research professor at Rutgers University in New Brunswick, scratching his head.
“I’m more than dubious about the figure,” he said.
According to a report from the New Jersey education department, the statewide graduation rate actually dropped from 85.8 percent in the 1992-93 school year to 83.5 percent in 1994-95.
So which figure is right?
As someone who’s spent years scrutinizing New Jersey’s graduation and dropout rates, Mr. Burch knows that both numbers can be right but that they’re measures of different things.
The confusion caused by the two figures in New Jersey underscores the struggle faced in nearly every state by experts trying to find comparable data on such crucial indicators as dropout and graduation rates and student test scores.
And people trying to evaluate a state’s progress often are left with apples and oranges, making any meaningful comparison difficult.
The goals panel’s completion rate, for example, draws on U.S. Bureau of the Census surveys of the number of 18- to 24-year-olds who said they had completed either high school or some equivalent. The New Jersey Department of Education figure, published in an annual report called “Vital Education Statistics,” represents the ratio of graduating students to that class’ statewide freshman enrollment.
Statisticians call the first measure a “status” rate because it asks young adults about their current educational status.
Mr. Burch is critical of the approach because the Census Bureau survey doesn’t distinguish between those who graduated with their high school class and those who dropped out and later got a General Educational Development certificate or completed an adult-education program. He and others also suspect that some survey respondents may falsely say they are high school graduates to avoid embarrassment.
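The difference between the two kinds of rate is simple arithmetic. The sketch below uses made-up enrollment and survey figures, not any state's actual data, purely to show why the numbers diverge:

```python
# Two different "graduation rates," computed from hypothetical figures.

# Cohort-style rate (the state education department's approach):
# graduating students divided by that class's statewide freshman enrollment.
freshman_enrollment = 100_000    # hypothetical statewide 9th-grade count
on_time_graduates = 83_500       # hypothetical graduates four years later
cohort_rate = on_time_graduates / freshman_enrollment

# "Status" rate (the Census Bureau survey approach): share of 18- to
# 24-year-olds who say they completed high school OR an equivalent,
# such as a GED or an adult-education program.
young_adults_surveyed = 700_000  # hypothetical survey population
report_diploma = 610_000         # hypothetical on-time graduates
report_ged_or_equivalent = 34_000  # hypothetical later completers
status_rate = (report_diploma + report_ged_or_equivalent) / young_adults_surveyed

print(f"cohort rate: {cohort_rate:.1%}")  # counts only on-time graduates
print(f"status rate: {status_rate:.1%}")  # higher: equivalents count too
```

With these invented numbers, the cohort rate comes out near the low 80s while the status rate lands in the low 90s, illustrating how both of New Jersey's figures can be accurate at once.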
Mr. Burch said he believes the state education department’s approach is the better measure, but it’s not the one the goals panel uses in its annual progress reports.
To compare states’ graduation rates fairly, the federal panel must use the same kind of rate for all 50 states.
But states collect and report their information in more than a dozen ways. So the goals panel uses the best comparable information it has: the “status” graduation rate reported by the Census Bureau, which surveys populations in each state the same way.
“There probably is a piece that is missing,” said Leslie Lawrence, an education associate with the goals panel. “But now there isn’t data out there that are comparable.”
The pressure to improve available data has built significantly as the public has demanded that its schools be held more accountable and as school reformers have sought to evaluate the success of improvement efforts.
Some education groups have responded. This month, the Council of Chief State School Officers convened a task force to help draft the kind of comparable graduation rate the National Education Goals Panel doesn’t yet have.
But many observers of education reform contend that the pressure hasn’t yielded results fast enough and that much of the information available remains incomparable or inaccurate, or isn’t yet collected.
“What we have is very marginal information, and part of the problem is that for a long time the schools didn’t want to be held accountable,” said Frank Newman, the president of the Denver-based Education Commission of the States.
“If you don’t present accurate information to the public, they’re going to lose confidence in their institutions,” said Pascal D. Forgione Jr., the commissioner of the National Center for Education Statistics. The U.S. Department of Education’s NCES is charged with collecting information on the nation’s educational progress.
Though efforts are under way to improve the reporting of both dropout and completion rates, researchers say it’s unlikely that all 50 states will be on board by 2000, the deadline many states have given themselves for showing significant progress in their students’ performance.
With all of these data, the problem is one of definition.
The NCES in the 1991-92 school year set a standard definition for what a dropout is. That year, about 14 states provided information on dropout rates that fit the NCES definition, which said a dropout was someone who had been enrolled any time during the previous year but was not enrolled on Oct. 1 of the current year.
This way to measure dropout rates is called an “event” rate, because it examines just one year, although it considers all high school grade levels. An event rate would not tell you how many students dropped out over the four years of high school.
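The event-rate arithmetic, and its blind spot, can be sketched with hypothetical enrollment figures (these numbers are illustrative, not from any state's reports):

```python
# "Event" dropout rate: students who left during one year, divided by
# that year's enrollment across all high school grades. Hypothetical data.
enrolled_previous_year = 400_000  # grades 9-12, hypothetical
not_enrolled_oct_1 = 16_000       # enrolled last year, gone by Oct. 1

event_rate = not_enrolled_oct_1 / enrolled_previous_year
print(f"event dropout rate: {event_rate:.1%}")  # one year's attrition only

# The event rate alone says nothing about a class's four-year attrition.
# Even a rough four-year estimate requires compounding yearly rates:
share_remaining = (1 - event_rate) ** 4
print(f"share remaining after four such years: {share_remaining:.1%}")
```

A modest-looking single-year rate compounds over four years into a noticeably larger loss, which is why an event rate understates what a cohort measure would show.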
The statistics center found that many states did not enforce an Oct. 1 cutoff date. Others varied in whether they counted students who drop out over the summer as having quit school in the current or previous year. Many didn’t account for students who dropped out and then returned.
NCES officials say the number of states that now provide the center with dropout rates based on the common definition has grown to 25, plus the District of Columbia.
The standardized definition of high school completion being designed by the Council of Chief State School Officers will not depend on the status rate provided by the Census Bureau.
“I think [the status rate] doesn’t really tell you the information we want,” said Barbara Clements, who directs the chiefs’ NCES-funded effort. “It doesn’t tell us what our expectations are for schools, and for most parents, that is that kids graduate on time.” She convened the task force of educators, researchers, and policymakers to “come up with a rate we can all live with,” and hopes to have a recommendation by next month.
NAEP and State Assessments
Dropout statistics are not the only area where meaningful information is hard to come by.
Open a 1996 Louisiana education department study, “Louisiana Progress Profiles: State Report,” and look at the section on student performance. As the foreword to the annual report explains, the study is meant to give “policymakers, educators, parents, and the general public a clear and concise overview of public education in Louisiana.”
There you’ll find results for students’ performance on statewide criterion-referenced tests.
The results show that at least 90 percent of the state’s 3rd and 5th graders passed both the math and language arts sections; more than 80 percent of 7th graders passed both parts.
Now look at a similar report published by the Kentucky education department. It shows that just 28 percent of the state’s 4th graders scored at the proficient level or above on its statewide reading assessment.
So which state does a better job educating its students?
The answer is anything but clear, according to Mark D. Musick, the president of the Southern Regional Education Board, an Atlanta-based interstate cooperative effort that works to improve education in member states.
Comparisons are impossible based on these reports because Louisiana and Kentucky use their own state tests, and each has its own idea of how good is good enough to pass.
Mr. Musick recently compared more than a dozen state assessment results with results on the only national ongoing assessment of how well students perform in the core academic subjects: the National Assessment of Educational Progress.
The so-called nation’s report card is a congressionally mandated test managed by the NCES. It seeks to find out how many students have achieved at least basic, proficient, or advanced levels of performance, based on expert judgments about what students in grades 4, 8, and 12 should know and be able to do.
When Mr. Musick compared each state’s NAEP results against its own assessment results, he found wide variations.
For example, in 1994, the last year the NAEP reading test was given to 4th graders, just 15 percent of Louisiana’s students scored at a proficient level, meaning they had mastered challenging content for their grade. This result was among the lowest percentages for the 39 states that took part in the NAEP reading assessment that year.
The same year, 26 percent of Kentucky’s 4th graders achieved at least a proficient score on NAEP--a higher proportion than Louisiana’s, even though the states’ own reports imply the opposite: that Louisiana’s students on average outperform Kentucky’s.
Louisiana has begun designing new assessments with the hope of adopting higher-level standards, according to Louann Bierlein, the governor’s education adviser. The state plans to begin using new tests in spring 1999.
Mr. Musick found similarly wide variations when he compared several other states.
While no state he looked at had 40 percent or more scoring at the proficient level on either the NAEP reading or writing test, the states were reporting that anywhere between 11 percent and 88 percent of their students reached proficient levels on the states’ own exams.
“It appeared that NAEP was a relatively high standard and that several states were setting relatively low standards,” said Mr. Musick, who also is a member of the National Assessment Governing Board, which oversees the test.
“Overall, I would argue that the kind of things we need are more legitimate reference points,” he said. “States need to be paying more attention to what each other are doing and to the national assessments.”
What concerns him more than the gap between state and NAEP results is how much states seem to vary in what they consider proficient.
“What does it mean to say that in Kentucky 29 percent of its students are meeting its proficiency standard and Louisiana says 80 percent?” Mr. Musick said. “Do we believe those differences? My answer is no.”
Though the current debate over the reliability of the U.S. Consumer Price Index shows that even long-established economic measures are open to question, education experts said they would love to have the array of data available to their counterparts in economics.
As the chief education number-cruncher at the federal level, Mr. Forgione often envies the economists who generally can depend on the abundant data churned out on the nation’s economic health.
“We are playing catch-up with the education indicators,” he said. “When you realize how much work goes into just getting the CPI, we don’t have anything like that that goes in collecting education data.”
Several key pieces of education-related information are not even collected in most states, although many experts have said they would be good indicators of progress. These include the number of violent incidents in schools, the percentage of students who need remedial help in college, and the portion who have daily access to new technology.