Data Yield Clues to Effectiveness
The ways of understanding teacher quality are changing as information systems allow states and districts to track educators and their students over time.
If you know how much difference good teachers can make—and how hard it might be to spot one from a résumé—you can appreciate the value of the new analytical research being done by those who toil in the realm of “administrative” data kept on teachers.
That label refers to the information kept by states and school districts to track teachers for such purposes as pay and licensure, which is proving to be a treasure trove for officials seeking a better understanding of teacher quality.
The day is not far off, teacher-quality advocates say, when a host of professional and policy decisions could be informed by analysis of data from thousands of teachers and students observed over time. Such longitudinal data allow researchers to measure changes in student achievement—and to link them with teacher characteristics.
“They hold the potential of answering the questions that are important to quality teaching and, ultimately, the best education for our kids,” says Jacqueline J. Paone, the executive director of Colorado’s Alliance for Quality Teaching, a coalition of business leaders, state policymakers, and educators.
For instance, teacher-preparation programs could be slated for overhauls—or not—depending on how well their graduates perform. Or state policies could reflect new knowledge about which qualifications indicate teacher effectiveness.
Already, researchers using teacher data from Florida, New York, North Carolina, and Texas have begun drawing some important conclusions:
• Teacher effectiveness varies enormously within schools and districts, although teachers are consistently weakest in their first year or two.
• After the novice years, the path a teacher took into the classroom seems to make little difference, and the value of experience does not build in equal increments with years on the job.
• A few good teachers in a row can raise students’ achievement significantly.
Ultimately, says Jane Hannaway, the principal investigator for the National Center for Analysis of Longitudinal Data in Education Research, or CALDER, at the Urban Institute, such data systems could give “real-time feedback at the classroom level.”
That feedback, eventually, could allow specific types of students to be matched with specific types of teachers, paired with observations of whether the matches helped the students learn more.
The teacher data sets go back decades in some cases, and remain useful in their own right. For example, researchers with the Illinois Education Research Council mined the state’s records on who is teaching in Illinois, discovering that new-teacher attrition from the profession has remained fairly constant since the late 1980s, and that in the vast majority of Illinois schools it does not constitute the crisis that has been widely claimed nationally. The finding highlighted the special problems of a subset of schools that overwhelmingly serve children from low-income families.
Peering Inside the Box
But even the riches of data like those the research council used pale in comparison with data linking teacher characteristics and circumstances over time with student-assessment data. Such a link allows researchers to peer into the “big black box in education,” says Hannaway.
Researchers have figured out that teaching dwarfs other in-school contributors to student academic growth, but still don’t know much about how that happens.
CALDER was founded last year as a partnership between the Washington-based Urban Institute and scholars from six universities to take advantage of the burgeoning data on student achievement engendered by state and federal accountability systems, especially in those states that have good information on teachers. The researchers intend to focus initially on Florida, Missouri, New York, North Carolina, Texas, and Washington state, which have such comprehensive databases.
The center’s work is funded by the Institute of Education Sciences, the research arm of the U.S. Department of Education, which last summer awarded more than $62 million in grants to 13 state education departments for the design and upgrade of state longitudinal-data systems. That was the second round of grants under the program, which seeks to improve the quality and comprehensiveness of such systems for the purposes of federal reporting, research, and decision-making.
It’s not an easy job, just from a technical point of view, according to Hannaway. Typically, data on student achievement and on teacher characteristics are housed in different “silos,” sometimes in more than one computer system, even within the same government agency.
“Getting all these data to talk to each other is not a trivial task,” Hannaway says. “They need to be linked together and linked over time.”
The Data Quality Campaign, a 2-year-old effort to promote state longitudinal-data systems in education, calls a unique teacher identifier one of the 10 essentials of doing the job right. The identifier—a number or code that pinpoints one individual—should also have the ability to match teachers to students, according to the campaign, which is based in Austin, Texas, and financed by the Bill & Melinda Gates Foundation.
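The kind of linkage the campaign describes can be pictured as a simple join on the shared identifier. The sketch below is illustrative only—the field names (teacher_id, score_gain, and so on) are hypothetical, not any state’s actual schema:

```python
# Minimal sketch of linking two data "silos" -- a teacher licensure file and a
# student assessment file -- by a unique teacher identifier. Field names here
# (teacher_id, score_gain, etc.) are illustrative, not a real state schema.

teachers = [  # licensure/payroll silo
    {"teacher_id": "T001", "years_experience": 1, "cert_route": "traditional"},
    {"teacher_id": "T002", "years_experience": 8, "cert_route": "alternative"},
]

student_results = [  # assessment silo: each record ties a student-year to a teacher
    {"student_id": "S10", "teacher_id": "T001", "score_gain": 4.0},
    {"student_id": "S11", "teacher_id": "T001", "score_gain": 6.0},
    {"student_id": "S12", "teacher_id": "T002", "score_gain": 9.0},
]

def link_silos(teachers, student_results):
    """Join student records to teacher records on the shared identifier."""
    by_id = {t["teacher_id"]: t for t in teachers}
    linked = []
    for rec in student_results:
        teacher = by_id.get(rec["teacher_id"])
        if teacher is not None:  # unmatched IDs would be a data-quality flag
            linked.append({**rec, **teacher})
    return linked

linked = link_silos(teachers, student_results)
print(linked[0]["cert_route"])  # each student gain now carries teacher traits
```

In a real system the join would run over millions of records and multiple years, which is why the campaign treats a stable, unique identifier—rather than names or Social Security numbers that change or leak—as foundational.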
But, according to the group, just 13 states have data systems far enough along to answer the question: Which teacher-preparation programs produce the graduates whose students have the strongest academic growth?
Louisiana is one of the states that, at least partially, can answer that question. Officials there unveiled last fall their first official results from a data system built specifically to gauge the effectiveness of Louisiana’s teacher-training programs.
A 2007 survey by the Data Quality Campaign finds that all but four states and the District of Columbia assign unique identification numbers to all teachers. Of the states that track teachers, only 12 can link teacher IDs to data on their students' performance.
Novice teachers grouped by undergraduate preparation program are being measured against experienced teachers, using student test-score gains. The idea is not to assess the teachers themselves, but to uncover the strengths and weaknesses of their preparation.
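The comparison Louisiana describes can be sketched in a few lines. This is an illustrative computation, not the state’s actual statistical model; the program names and gain figures are invented:

```python
# Illustrative sketch (not Louisiana's actual model): mean student score
# gains for novice teachers, grouped by preparation program, compared with
# a baseline built from experienced teachers' classrooms.

records = [
    # (prep_program, or None for an experienced teacher; student score gain)
    ("Program A", 3.0), ("Program A", 5.0),
    ("Program B", 7.0), ("Program B", 9.0),
    (None, 6.0), (None, 8.0),  # experienced teachers form the baseline
]

def program_effects(records):
    """Return each program's mean gain minus the experienced-teacher mean."""
    groups = {}
    for program, gain in records:
        groups.setdefault(program, []).append(gain)
    baseline = sum(groups[None]) / len(groups[None])
    return {program: sum(gains) / len(gains) - baseline
            for program, gains in groups.items() if program is not None}

print(program_effects(records))
# Negative values mean a program's novices trail the experienced baseline.
```

A production model would also adjust for student background and prior achievement before attributing any difference to the preparation program.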
“We’re just going to another whole level of measuring the effectiveness” of training, says Jeanne M. Burns, the Louisiana education department’s associate commissioner for teacher education initiatives.
In the past, she says, the state’s accountability system for teacher-preparation programs centered on such measures as aspiring teachers’ passing rates on the certification exam, results from surveys of new teachers, and numbers of teachers produced for shortage areas.
The new system will also allow researchers to gauge the effectiveness of various local versions of a state-required program for supporting new teachers.
Burns says Louisiana’s system ran into little public opposition because it grew out of the work of a blue-ribbon commission on teacher quality that linked effectiveness to student learning gains.
“If they had not identified growth of learning as part of our teacher-preparation accountability system, I don’t think we’d be where we are today,” she says. Officials and advocates have been clear, she adds, that the endeavor is not “about getting rid of teacher-preparation programs.”
In Colorado, proponents of more advanced data systems that include teacher characteristics have further to go. The state founded an “educational data warehouse” in 2001, and it has made strides in tracking students over their school careers. The state education department also collects teachers’ Social Security numbers as part of licensing.
But in practice, says Paone, of the Alliance for Quality Teaching, advocates are left without a ready source of information that they can use to track teachers from job to job.
“We hear, anecdotally, that teachers move from urban districts to more affluent suburban ones,” she says, “but we have no data to support that.” Knowing whether those suburban districts are, indeed, drawing teachers would help school leaders design policies that would keep more experienced teachers in place, Paone says.
Yet tensions remain around building such data systems. Teachers and their unions, in particular, worry about systems that link teacher data with student-achievement records.
While such systems have the potential to yield rich information on differences that affect student learning, they also raise a thorny question: Might teachers be ranked, assigned, or fired on the basis of such data?
To steer clear of the question, some experts advise a clear focus on student improvement, measured by assessment data, as the goal of any teacher database.
Whatever the exact output of the data system, those who design it must get local districts to see the value of the work beyond the administrative tracking that serves them directly, the experts say.
“If people resist this, what gets in the system is very bad data,” says Jay Pfeiffer, who heads the Florida education department’s accountability, research, and measurement division. “You have to get them enthusiastic about it.”
Pfeiffer warns, too, that the system’s partners—those who actually collect the information—must be confident that the data on individuals will not leak out. “Protecting the identifiable information is paramount in this,” he says. “One mistake unravels everything.”
In the 1980s, with Pfeiffer playing key roles along the way, Florida began building what is today one of only four state education data systems that meet all 10 criteria of the Data Quality Campaign. The other states in that category are Arkansas, Delaware, and Utah.
Those four states have information, for instance, from student transcripts and can match student records among the elementary, secondary, and postsecondary levels.
Recent papers by Douglas N. Harris and Tim R. Sass, economists at Michigan State University and Florida State University, respectively, dig into the Florida data for student learning gains that might be linked to teacher characteristics: college entrance- or placement-test scores, training, and certification by the National Board for Professional Teaching Standards.
The researchers found that undergraduate teacher preparation has little influence on student achievement, though content-focused professional development appears to help middle and high school math teachers raise scores. Harris and Sass discovered no evidence that teachers’ college entrance- or placement-test scores affected their students’ achievement gains.
On national certification, they concluded that the voluntary credential’s ability to indicate teacher effectiveness as measured by student test scores was “highly variable.” Also, the process of becoming nationally certified does not appear to boost teacher effectiveness, nor do nationally certified teachers appear to influence their colleagues in that regard.
The economists’ paper is just one example of what can be learned through such data.
“There’s lots of research potential in these databases,” Pfeiffer says.
So much potential, he continues, that the education and research communities need to find ways to work together to realize it. Ultimately, students will be the winners.
Vol. 27, Issue 18, Pages 20, 22-24. Published in Print: January 10, 2008, as Data Yield Clues to Effectiveness.