'Value Added' Concept Proves Beneficial to Teacher Colleges
The use of “value added” information appears poised to expand into the nation’s teacher colleges, with more than a dozen states planning to use the technique to analyze how graduates of training programs fare in classrooms.
Supporters say the data could help determine which teacher education pathways produce teachers who are at least as good as—or even better than—other novice teachers, spurring other providers to emulate their practices.
The two states with the most experience using such data, Louisiana and Tennessee, have shown that it can be a powerful catalyst for change. Both can point to programs that have seen improvements in value-added scores after altering aspects of their programming. Nevertheless, teacher-educators and state officials alike continue to wrestle with how best to translate what are, in essence, fairly blunt measures of program effectiveness into a regular cycle for improving teacher education curricula.
“All value-added can do is signal to you that something’s the matter,” said George H. Noell, a professor at Louisiana State University, in Baton Rouge, who helps oversee the production of the value-added reports for teacher preparation in the state. “There aren’t many institutions that have practice getting student-level outcome data from their program graduates and figuring out what to do with it. It’s a whole new skill.”
For a concept that is only about a decade old, value-added is poised to expand rapidly in teacher education.
Some 14 states are in the process of using value-added to examine how the graduates of preparation programs are faring, many of them drawing on federal Race to the Top competition funding. The Obama administration has set its sights even higher: It wants all states to report similar information on teacher-preparation programs and is pursuing that aim through requirements in the Higher Education Act.
The basic idea behind value-added is to examine growth in student test scores, holding constant factors like poverty or family characteristics that could skew scores, and then to determine what impact teachers had on that improvement. For teacher education, the process goes a step further, by analyzing how graduates of particular programs have done in the aggregate to raise their students’ scores.
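In stripped-down form, that aggregation logic can be sketched as follows. This is an illustrative toy model with hypothetical data, not the statistical machinery any state actually uses; real systems control for poverty, prior achievement, and other student characteristics with far richer covariate-adjusted models.

```python
# Toy sketch of the value-added idea: teacher effect = how much a
# teacher's students grew relative to expected growth; program effect =
# the average effect of that program's graduates. Hypothetical data.
from statistics import mean

# Each record: (teacher, preparation program, prior score, current score)
records = [
    ("t1", "ProgramA", 50, 58), ("t1", "ProgramA", 60, 67),
    ("t2", "ProgramA", 55, 60), ("t2", "ProgramA", 45, 51),
    ("t3", "ProgramB", 50, 53), ("t3", "ProgramB", 62, 64),
]

# Step 1: expected growth, here crudely proxied by the average gain
# across all students (a real model would adjust for student factors).
avg_gain = mean(cur - pri for _, _, pri, cur in records)

# Step 2: a teacher's effect is the mean residual (actual gain minus
# expected gain) over that teacher's students.
by_teacher = {}
for t, prog, pri, cur in records:
    by_teacher.setdefault((t, prog), []).append((cur - pri) - avg_gain)
teacher_effects = {tp: mean(res) for tp, res in by_teacher.items()}

# Step 3: a program's effect aggregates its graduates' teacher effects.
by_program = {}
for (t, prog), eff in teacher_effects.items():
    by_program.setdefault(prog, []).append(eff)
program_effects = {p: mean(effs) for p, effs in by_program.items()}
```

Under this toy data, ProgramA's graduates outgain the average and ProgramB's fall below it; the state reports flagging "negative outliers" described later in this article rest on the same kind of program-level aggregate, just with much more careful statistical adjustment.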
The concept has been controversial among teacher-educators, however, in part because of how it has historically been implemented. A preliminary Florida effort to issue value-added data to teacher colleges, in 2009, brought complaints from schools of education that the data were simplistic, flawed, and subject to misinterpretation.
Tennessee’s attempts to provide such information, beginning in 2008, were initially mired in faulty data and confusing reports, officials there say. “In the first few years, we realized that there were a lot of errors in the data that we needed to clean, and there was quite a bit of pushback,” said Katrina M. Miller, an official at the Tennessee Higher Education Commission who oversees the teacher-preparation portions of the state’s Race to the Top plan.
“Part of the reason we included value-added in the Race to the Top application was to start anew to develop a database, in collaboration with institutions, that would house all these data,” she said, “and make sure the linkages between teachers and higher education institutions were sound, and the reports were useful.”
Louisiana’s system, the most mature to date, has gradually gained acceptance among teacher-preparation programs, though some teacher-educators say they still have qualms about using test scores as a primary gauge of program effectiveness. (More measures will be added in a new set of performance standards, scheduled to debut in spring 2013.)
“It was frustrating at first. Based on earlier assessments, we always had exemplary status, and then to get these data showing some weaknesses—well, it was a shock,” said Gerald M. Carlson, the dean of the University of Louisiana at Lafayette. “But we ultimately said, ‘Let’s roll up our sleeves and see what we can find out.’ ”
The state’s system had identified the school, in 2008, as producing language arts teachers whose performance put them below that of other novice teachers. Similar problems appeared in a handful of other content areas in subsequent years’ reports. In response, the school set up teams of faculty to look at the curriculum, switched the sequencing of elementary math courses, and is now requiring faculty members to spend more time observing student-teachers, Mr. Carlson said.
Lackluster results in reading sent the Louisiana Resource Center for Educators scrambling to improve its training, said Nancy S. Roberts, its executive director. The Baton Rouge, La.-based alternative program, one of two private providers in the state, couples an intensive summer institute with on-the-job mentoring to train teachers. Officials realized after some deliberation that their training had not included enough explicit reading instruction.
“We realized we were teaching reading in the content areas, but not enough on reading fundamentals. The districts had told us that they wanted to teach reading their own way, and to back off of the fundamentals,” Ms. Roberts said. “We will never make that mistake again.”
After hiring a reading specialist to help revamp the curriculum, the program added some 35 hours of reading content to its summer coursework. Results have begun to tick upward; in fact, the center’s reading results were the highest among alternative programs on the most recent report.
Improvements in curricula generally take a while to show up in the value-added data, analysts say, since it takes at least two years for new candidates to be trained in the revamped classes, and then several more years for those teachers to instruct enough students to run the calculations.
The impetus for improvement aside, programs continue to grapple with how to home in on the specific changes that need to be made to their programming. Mr. Noell and other Louisiana officials now supply deeper-level analyses of the data to programs at their request.
They include such descriptive information as student scores on particular areas of the test—word problems, mathematical reasoning, or computation, for instance—as well as results broken out by teacher-certification type.
Though they don’t always show clear patterns, such “drill down” data have proved helpful to the University of Louisiana at Lafayette, where the data showed that students taught by the university’s elementary teachers struggled with essay questions. The university’s teacher-educators have worked with colleagues from the liberal arts department to require more writing instruction in introductory English composition courses.
So far, Tennessee officials have been relying on public disclosure of the value-added information to encourage weaker programs to collaborate with the higher-performing ones. But the appetite for doing that varies, state officials said.
“Some programs have reached out for training on value-added,” Ms. Miller of the higher education commission said, “but I can tell you in all honesty that some are ignoring it.”
Teacher-educators say part of the reason is that they’re still trying to locate other sources of data to help them interpret the results. While it has generally performed well on the value-added system, Lipscomb University struggled to determine why some math teachers weren’t as effective as others. It eventually drew on surveys of program graduates, as well as data from classroom observation, to target a graduate credentialing program for additional math coursework, said Candice McQueen, the dean of the Nashville, Tenn.-based university’s college of education.
Ms. McQueen, who sits on the executive committee for the state’s association of teacher colleges, believes an intensive discussion about how programs are making use of the value-added data will be a focal point of upcoming meetings.
“Ultimately, we need to stop being defensive about this and to start looking at what we can use, and being proactive and specific about what other data points we need from the state,” she said.
To an extent, programs are also bumping up against limitations caused by privacy laws, which typically prohibit states from disclosing which teachers have been included in the value-added calculations. That has proved frustrating for programs that want to interview those candidates about their experiences or offer them additional training free of charge.
“Our future plan is to have candidates sign a document saying they agree to release their test scores,” said Mr. Carlson of the University of Louisiana. “Right now, we see numbers, but we don’t know who they are or where they came from.”
At least one major policy question about the systems is outstanding: Should the value-added information be used only for diagnostic purposes—or should it be formally integrated into states’ program-review and -approval processes?
So far, Louisiana alone appears to have consequences for programs based on the data. Some half-dozen programs have entered “programmatic intervention”—essentially, additional state oversight of the particular pathway or content area in which there appear to be weaknesses—though no programs have been decertified.
Tennessee officials want to integrate the student-achievement information into its program-accreditation cycle. The state is in the beginning stages of drafting such rules, Ms. Miller said.
But according to a review by the Washington-based Center for American Progress, only five of the 12 states that won Race to the Top grants now plan to go beyond the reporting of scores to using them for program accountability. Its author, Edward Crowe, thinks that’s not enough.
“It strikes me as pretend accountability,” said Mr. Crowe, a consultant with the Woodrow Wilson National Fellowship Foundation, which runs a grant program to improve schools of education. “The history in teacher education is that nothing will happen, unless there is sustained external pressure to force programs to take a look at their own results.”
Another question that remains unanswered is whether public reporting will affect the larger marketplace in which such programs operate. None of the university administrators interviewed said they had experienced a notable increase or decrease in applications that they could trace back to the publicly reported value-added results. Nor is it clear whether districts and schools are using the information when hiring.
Some, like Mr. Noell of LSU, hold out hope that value-added will eventually become so integrated into program improvement that it spurs even greater changes.
“There could come a day where there are either no or few negative outliers,” Mr. Noell said. “And then I think you reset the conversation altogether. How can you raise the bar for teacher education overall? Wouldn’t it be nice to have new teachers who are as good on average as veteran educators?”
Vol. 31, Issue 21