
New Uses Explored for ‘Value Added’ Data

By Debra Viadero — May 28, 2008 6 min read

With “value added” methods of measuring student-learning gains continuing to grow in popularity, policymakers and researchers met here last week to explore possible new ways of using the sometimes controversial approaches and to debate their pluses and pitfalls.

The May 23 conference at the Urban Institute, a think tank based here in the nation’s capital, examined the policy implications of value-added statistical designs, which typically measure students’ learning gains from one year to the next. Such methods have been spreading since the early 1990s.

While value-added designs are still imperfect technically, various speakers at the gathering said, they can provide new information to help identify ineffective teaching and the impact of certain programs and practices, for example. The data they provide can help educators reflect on their own practices, give administrators grounds for denying tenure to poorly performing teachers, or be used by states to calculate whether districts are making adequate yearly progress under the federal No Child Left Behind Act.

And value-added models can answer such important research questions as what makes a good teacher and whether problems in retaining early-career teachers actually harm or help schools, speakers said.

Yet when it comes to high-stakes decisions, supporters, critics, and scholars of value-added research models seemed to agree on one point: Value-added calculations, if they’re used at all, should be one among several measures used in judging the quality of schools or teachers.

“Assessment results are one critically important measure,” said Ross Wiener, the vice president for programs and policy at the Education Trust, a Washington-based research and advocacy group that focuses on educational inequities. “There are other things that teachers do that are important.”

Last week’s Urban Institute event piggybacked on an April conference at the University of Wisconsin-Madison, where researchers aired technical cautions about value-added research methodology and shared some other research supporting its usefulness. (“Scrutiny Heightens for ‘Value Added’ Research Methods,” May 7, 2008.)

An organizer of the Wisconsin meeting said at the Washington event that the limitations of value-added designs should be kept in perspective. Both the Washington conference and the Wisconsin gathering that preceded it were sponsored jointly by the Carnegie Corporation of New York, the Joyce Foundation, and the Spencer Foundation. (All three philanthropies underwrite coverage in Education Week.)

“I ask you not to lose sight of what I think is the main message,” said Adam Gamoran, the director of the Madison-based Wisconsin Center for Education Research, “which is that value-added models are better than the alternatives.”

Measuring Change

When it comes to accountability efforts, the alternatives for most education systems are techniques that rely on snapshots of student achievement at a single time, such as percentages of students who meet state academic targets.

The theoretical appeal of value-added accountability systems, which measure learning gains from one year to the next, is that educators would get credit only for the progress students made in their classrooms and not get penalized for the learning deficiencies that students brought with them to school.

In practice, though, various value-added models are proving controversial. A case in point is the New York City school system’s efforts to use such techniques to rate schools and evaluate teachers’ job performance, noted Leo E. Casey, the vice president for academic high schools for the 200,000-member United Federation of Teachers, the local teachers’ union.

At the conference, Mr. Casey faulted the school system’s teacher-evaluation project for relying on scores from tests taken by students in January, for failing to take into account the fact that students are not randomly assigned to classes, and for employing statistical calculations that he said are unintelligible to nonstatisticians.

“It’s really important that teachers, students, and parents believe the system on which they are being graded is a fair system,” he told conference-goers.

Opposition from his group, which is an affiliate of the American Federation of Teachers, and other teachers’ unions led New York state lawmakers in April to legislate a two-year moratorium on any efforts by districts to link student-performance data to teacher-tenure decisions. In the meantime, a state task force will be formed to study the issue.

Teacher Characteristics

Studies that try to identify which characteristics of teachers are linked to students’ learning gains are another, less controversial use of value-added methodology. Do veteran teachers do a better job, for example, than novices?

Studies examining such questions have shown that, while experience has proved to be important in some ways, possession of other credentials, such as a master’s degree, seems to have no impact on student performance, according to Douglas N. Harris, an assistant professor of educational policy studies at the Wisconsin research center.

Given the cost of a master’s degree—about $80,000, by his calculations—value-added methods might be a less expensive way to reward good teachers and signal which ones a school system ought to hire, Mr. Harris suggested.

“But we still need a path to improvement, and existing credentials might serve that function,” he said.

Value-added research models can also provide more information than experimental studies about the long-term effectiveness of particular programs or interventions in schools, said Anthony S. Bryk, a Stanford University scholar who is the incoming president of the Carnegie Foundation for the Advancement of Teaching, based in Stanford, Calif.

Mr. Bryk is currently using the statistical technique to track the progress of a professional-development program known as the Literacy Collaborative in 750 schools. He said that, while randomized studies are considered the gold standard for research on effectiveness, they can’t provide information about the different contexts in which a particular program works, the range of effect sizes that are possible, or whether the improvements change over time.

“You can only get so far by weighing and measuring,” he said. “What I’m arguing for is the use of value-added models toward building a science of improvement.”

From Data to Decisions

Whether schools will know how to make use of data collected through value-added statistical techniques is an open question, however.

Daniel F. McCaffrey, a senior statistician in the Pittsburgh office of the Santa Monica, Calif.-based RAND Corp., studied 32 Pennsylvania school districts taking part in the first wave of a state pilot program aimed at providing districts with value-added student-achievement data in mathematics.

He and his research colleagues surveyed principals, other administrators, teachers, and parents in the districts involved in the program and compared their responses with those from other districts having similar demographic characteristics.

“We found it was really having no effect relative to the comparison districts,” Mr. McCaffrey said.

Even though educators, for instance, seemed to like the data they were getting and viewed the information as useful, few were doing anything with the results, he said. Twenty percent of the principals didn’t know they were participating in the study, Mr. McCaffrey said, noting also that the program was still young at that point in the evaluation process.

Despite such challenges, other speakers at the conference argued that the use of value-added methodology should become more widespread. Said Robert Gordon, a senior fellow at the Center for American Progress, a Washington think tank: “The way we will learn about implementation problems, I think, is to implement.”
