U.S. Secretary of Education Margaret Spellings has indicated she may be open to new ways of measuring school progress, known as “growth” models, under the No Child Left Behind Act. But state officials are still unsure what the federal government might find acceptable.
In a speech last month, Ms. Spellings said the federal Department of Education would convene a panel of experts to consider ways of allowing states to incorporate that kind of gauge into their accountability systems. (“States to Get New Options on NCLB Law,” April 13, 2005.)
Unlike the current setup, which requires states to set, and schools to meet, annual targets for the percentage of students rated “proficient” or higher on state reading and math tests, growth models give schools and districts credit for raising student achievement even if they have not yet reached the proficiency goals.
Defining what’s meant by “growth” and how to measure it, though, is far from clear.
“There have been enough different versions of what that might be that I don’t think there’s yet a coherent view,” said Brian Gong, the executive director of the Center for Assessment, a nonprofit group in Dover, N.H., that provides research and consulting services on assessment and accountability systems.
While a handful of states have proposed using some kind of growth model, said Kerri L. Briggs, a senior policy adviser in the U.S. Department of Education, Ms. Spellings and Raymond J. Simon, the acting deputy secretary of education, want to hear from researchers and experts in the field before approving any of the current proposals. The department has yet to name any members to the working group.
“It’s really too soon to get into any details,” Ms. Briggs said last week. “I think they want to know what’s possible, given the statutory parameters. I think they welcome the idea in the big picture.”
The Massachusetts Model
Massachusetts has already won federal approval for one such model. The state rates schools with a performance index that awards 100 points for every student who scores at the proficient level or higher on state tests and gives reduced credit for students at lower performance levels.
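The index described above can be sketched in a few lines. This is an illustrative sketch only: the point values assigned to the lower performance levels, and the level names themselves, are assumptions for illustration, not the state's official scale.

```python
# Illustrative point scale: full credit at proficient or above,
# partial credit at lower performance levels. The specific values
# below the top level are assumed, not Massachusetts' actual scale.
POINTS = {
    "proficient_or_above": 100,
    "needs_improvement_high": 75,
    "needs_improvement_low": 50,
    "warning_high": 25,
    "warning_low": 0,
}

def performance_index(student_levels):
    """Average points per student across a school's test-takers."""
    total = sum(POINTS[level] for level in student_levels)
    return total / len(student_levels)

# A school where 60 of 100 students are proficient still earns
# partial credit for the other 40.
levels = (["proficient_or_above"] * 60
          + ["needs_improvement_high"] * 25
          + ["warning_high"] * 15)
print(performance_index(levels))  # 82.5
```

A school at 82.5 on this index is closer to the 100-point goal than its 60 percent proficiency rate alone would suggest, which is the point of giving partial credit.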
Under the existing NCLB rules, schools and districts must meet steadily increasing targets both for their total enrollments and for specific groups of students—such as those who are poor or minority—until all youngsters score at the proficient level in 2013-14.
To make adequate yearly progress in Massachusetts, schools must meet the state’s current target on the performance index for the total student population and for each subgroup. Schools or subgroups within a school that are far below the target can still make AYP, however, if their rate of improvement on the index, based on where they started, is steep enough that all students would reach proficiency by 2014 if it continued.
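The on-track test described above can be expressed as a simple projection. This is a sketch under assumptions: the function name is hypothetical, and it assumes the state projects improvement linearly from a baseline year to 2014 against a 100-point full-proficiency index.

```python
# Sketch of the on-track test: a school or subgroup below the current
# index target still makes AYP if its rate of improvement, continued
# linearly from where it started, would reach the full index
# (100 points, all students proficient) by 2014.
def on_track_for_2014(baseline_index, current_index,
                      baseline_year, current_year, goal_year=2014):
    years_elapsed = current_year - baseline_year
    if years_elapsed <= 0:
        return False
    annual_gain = (current_index - baseline_index) / years_elapsed
    projected = current_index + annual_gain * (goal_year - current_year)
    return projected >= 100.0

# A school that climbed from 40 to 58 between 2002 and 2005 gains
# 6 points a year; projected to 2014, that is 58 + 6 * 9 = 112.
print(on_track_for_2014(40, 58, 2002, 2005))  # True
```

The steepness requirement falls out of the arithmetic: the further below the target a school starts, the larger the annual gain it must show to project to 100 by 2014.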
“We ought to distinguish between those schools down there that are making great progress versus those that aren’t, and they ought to get some credit for it,” said David P. Driscoll, Massachusetts’ commissioner of education.
Other states—including Minnesota and Oklahoma—use a similar method under the federal law’s “safe harbor” provision, designed to provide a second look at schools that don’t make AYP initially. It permits a school to make adequate progress if the subgroup that missed its target reduces its percentage of non-proficient students by 10 percent from the previous year. But many consider such a reduction too large to be realistic for many schools.
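The safe-harbor arithmetic above is worth making concrete, since the 10 percent is relative to the prior year's non-proficient share, not a flat 10 percentage points. The function name and the sample numbers below are illustrative.

```python
# Safe harbor: the percentage of non-proficient students in the
# subgroup that missed its target must fall by at least 10 percent
# (of itself) from the previous year.
def meets_safe_harbor(pct_nonproficient_last_year, pct_nonproficient_now):
    required = 0.90 * pct_nonproficient_last_year
    return pct_nonproficient_now <= required

# If 60 percent of a subgroup was non-proficient last year, safe
# harbor requires at most 54 percent this year: a 6-point drop.
print(meets_safe_harbor(60.0, 54.0))  # True
print(meets_safe_harbor(60.0, 57.0))  # False
```

A 6-point one-year drop in the non-proficient share is the "jump" critics call unrealistic for many schools.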
Like most states, Massachusetts compares the performance of different cohorts of students over time to determine whether a school is progressing. Because a school’s student body, and the profile of a particular grade, can vary from year to year, critics contend that’s like comparing apples and oranges.
In addition, because test scores are strongly correlated with such student characteristics as poverty, researchers argue that such “status” models say more about who attends a school than about how much its educators contribute to student learning.
In contrast, Tennessee has proposed using a “value added” system to measure growth under the federal law.
Value-added models track the performance of individual students over time and judge schools based on how much academic growth each student makes from year to year.
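The core of the value-added idea described above is matching each student's score to that same student's score a year earlier and judging the school on the average gain, rather than comparing this year's cohort to last year's. A minimal sketch, with assumed data shapes and names:

```python
# Minimal value-added sketch: judge a school on the average
# year-to-year gain of its individual students, skipping students
# who are missing either score (missing data is one of the technical
# issues researchers raise with these models).
def mean_student_gain(scores_by_student):
    """scores_by_student maps student id -> (last_year, this_year)."""
    gains = [now - before
             for before, now in scores_by_student.values()
             if before is not None and now is not None]
    return sum(gains) / len(gains) if gains else 0.0

school = {"s1": (420, 445), "s2": (390, 410), "s3": (500, None)}
print(mean_student_gain(school))  # 22.5
```

Because each student serves as his or her own baseline, the measure is less sensitive to which students happen to enroll in a given year than the cohort-to-cohort comparisons criticized above.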
“Some people think that by following individual student growth, it will reveal good performance that is not shown in the status and improvement models that are incorporated in NCLB,” said Mr. Gong. Whether that will prove true is unclear, he added.
So far, federal officials have been leery of value-added models largely because they are not designed to ensure that all students reach the proficient level by 2014. Instead, such models typically ask whether schools are making more or less progress with their students than comparison schools.
Tennessee has tried to address that issue. Under its proposal to the federal Education Department, a school that did not meet the state’s proficiency targets could still make adequate progress as long as the percentage of students on track to pass the state’s high school exit test by graduation had increased by 10 percent over the previous year.
“I feel it’s another option to show progress in those areas where no progress is apparent,” said Connie J. Smith, who directs the office of innovation, improvement, and accountability in the Tennessee education department. The state based the proposal on a dozen years of tracking individual student data in grades 3-8.
“We’ve got a technique that we think shows progress over time and improvement toward meeting the objective for high school graduation,” Ms. Smith said. “We think we can be a model for the United States.”
She’s been talking with other states, including Arkansas, California, and Pennsylvania, about Tennessee’s plan, which had not been approved by the federal government as of last week.
Minnesota received approval to use a growth model similar to that in Massachusetts for its accountability plan. But the state would like to incorporate an additional growth calculation, subject to federal approval, once it is testing students in each of grades 3-8.
Patricia D. Olson, the assistant commissioner for accountability and improvement in the Minnesota education department, said the state would like to use a value-added model starting with its spring 2007 tests. The state will be conducting pilot studies of different growth models next year.
“It gives us a way to use multiple measures to make a determination,” Ms. Olson said, “and it should be fairer to everybody.”
“We do have schools that are doing a wonderful job at pulling their kids up, but they can’t quite hit that bar yet,” she said. “You want to be able to acknowledge that.”
Wait and See
Other states are interested in using growth models to satisfy the demands of the NCLB law, but are waiting to see what happens.
“I know that we support, in concept, the idea that states should have the option to use growth models,” said Pete Bylsma, the director of research, evaluation, and accountability for the Washington state education department. “We’re not there yet.”
Pennsylvania officials did some calculations to see what would happen if the state used an index similar to Massachusetts’. While a number of schools would have made AYP, an equal number that made adequate progress under the law’s existing safe-harbor provision would not have done so.
“So at that point, we decided that we were not going to pursue that for this year,” said John Weiss, the acting chief of the division of performance analysis and reporting in the Pennsylvania education department. The state also is piloting a value-added model that tracks individual students’ progress.
In Louisiana, Assistant Superintendent of Education Robin Jarvis said the state’s accountability commission is interested in value-added analyses, but wants to wait until the state has several years of data from new state tests before making a decision. “I don’t expect them to come back to that until spring 2007,” she said, “but they may surprise me.”
How Much Is Enough?
One of the biggest issues, said Mr. Gong of the Center for Assessment, is what rate of growth should be expected and how to determine it.
Should the same rate apply to all students, for example, or should different trajectories be set for students with disabilities or those who speak limited English? While some people think the latter is more realistic, others argue that setting different expectations for students based on their characteristics is precisely what the federal law was trying to get away from.
It’s unlikely that federal officials would approve growth models that set growth expectations so low that most youngsters wouldn’t reach proficiency by 2014.
Sandy Kress, a former adviser to President Bush who helped craft the federal law, said: “Margaret [Spellings] is opening the door, but I think the key word is ‘proficiency.’ I think there may be an openness to growth models that show an expectation of proficiency, in which students who are not now proficient but are on a clear path to proficiency can be credited for the growth.
“I do not see people getting growth models approved under NCLB language that are not what I call proficiency-rooted or proficiency-based,” he said.
At a meeting on the use of such longitudinal-data systems at the Washington-based Urban Institute this month, researchers also raised cautions about the technical issues associated with value-added models. Those concerns included the quality of the tests, whether to adjust for student and school characteristics, and what to do when some data on individual students are missing.
What’s clear is that support is growing for at least exploring value-added and other growth measures.
“I think we have to proceed very carefully,” said Steven Rivkin, an economist at Amherst College in Massachusetts, who has done value-added analyses using Texas data. But, he added, the way schools are now judged “is just clearly wrong, and my sense is using the value-added or growth models is going to get you closer to the right answer.”
Rob Meyer, who directs a center on value-added research at the University of Wisconsin-Madison, agreed: “The attainment model is so obviously flawed that we ought to work really hard to figure out whether we can get value-added to work.”