The U.S. Department of Education’s final teacher-preparation rules will require states to collect new data on starting teachers in a bid to ensure that the nation’s schools of education are turning out classroom-ready graduates. But questions loom on how the new requirements will be applied and how effective they will ultimately be.
Under the new rules, released Oct. 12, states will be required to rate teacher-prep programs annually based on several criteria, such as the number of graduates who get jobs in high-needs schools, how long those graduates stay in the teaching profession, and their impact on student-learning outcomes.
The department’s hope is that the data collected will provide more transparency on program effectiveness and help improve training. But some groups representing teachers, including the two national unions, worry that some programs may make bad choices in order to meet numerical goals that don’t actually measure a teacher’s effectiveness, much less the quality of his or her training program.
“Let’s say we actually come up with a good indicator that shows how students are progressing,” said Lily Eskelsen García, the president of the National Education Association. “Here’s the thing: There is no research, none, it doesn’t exist, that says these kids in this school in this situation, their report cards, their test scores, their growth, has anything to do with this school of education way over here, three or four levels away from them.”
While the final rules give states flexibility in determining what measures of student learning to use, American Federation of Teachers President Randi Weingarten also sharply criticized the emphasis on judging teacher-training programs on their graduates’ impact on student achievement. She and García argue that the requirement could encourage teacher-training programs to steer their graduates away from schools where new teachers are likely to face more challenges.
“The regulations will punish teacher-prep programs whose graduates go on to teach in our highest-needs schools, most often those with high concentrations of students who live in poverty and English-language learners—the exact opposite strategy of what we need,” Weingarten said in a statement.
But others have defended the Education Department’s approach.
The U.S. Department of Education’s already-controversial final teacher-preparation regulations build on the annual reporting requirements established under the Higher Education Act in an effort to gather more discrete information on the performance and impact of individual teacher education programs. The final regulations also include a number of changes from the proposed rules issued in November 2014.
Under the new rules, states will be required to use federally set criteria to evaluate individual teacher-preparation programs, including alternative-route and distance-learning programs. The criteria include feedback from graduates and employers, candidate-placement and -retention rates, and graduates’ impact on student learning. The final rules give states flexibility in determining relevant measures of student learning, as well as flexibility in weighing the various criteria to determine program ratings. In developing their reporting systems, states must consult with a diverse range of stakeholders involved in or affected by the teacher-prep field.
Based on the results, states will be required to categorize programs in one of at least three performance tiers: “low-performing,” “at-risk,” or “effective.” The final rules remove the requirement to use a fourth tier, “exceptional.” States must provide technical assistance to programs rated as low-performing.
States are expected to develop their reporting systems during this academic year and are permitted to use the 2017-18 year to test them. The systems must be fully in place in 2018-19. Results must be reported on institution report cards and state report cards annually in April and October, respectively, based on data collections from the previous year. Institutions must post their report card information prominently on their websites.
Effective in 2021-22, only programs that were rated effective in at least two of the previous three years will be eligible to offer federal TEACH grants (for students who commit to teach in low-income schools or high-need fields). In a change from the proposed regulations, there will not be a separate TEACH-grant-eligibility classification for STEM programs.
In another change from the proposed rules, the final regulations will not require programs to establish selective-admissions standards. That change is intended to help programs enroll diverse student bodies. However, the final rules maintain a requirement for “rigorous exit standards.”
The rules clarify that a state must issue ratings for any distance-education program—defined as a program in which 50 percent or more of the required course work is offered online—that has produced 25 or more certified teachers in the state in the reporting year.
Louisiana, for example, has been collecting similar data on new teachers since 2002, and education officials there say it’s been worth the effort.
Jeanne Burns, the associate commissioner for teacher and leadership initiatives for the Louisiana board of regents, said the University of Louisiana at Lafayette made changes to its elementary education curriculum when student-test data indicated that some graduates were performing below par in language arts instruction.
Three years after the changes, the data showed the school’s graduates had boosted their scores, Burns said.
“The biggest lesson for us is that even a program that looks strong—if you go in and break down how they’re doing in specific grade spans and content areas—probably has an area it can improve in,” said Burns.
Arizona has also already begun collecting data on new teachers as part of an effort to improve training, and officials at Mary Lou Fulton Teachers College at Arizona State University say the information has proved helpful.
The school learned from the state’s reports that its graduates stayed, on average, only one year in high-needs districts, leading to a concerted effort to better prepare students to meet the demands of teaching in poor communities, according to Nancy J. Perry, an associate dean. Whereas teacher-candidates previously received about 15 weeks of student teaching, most prospective educators in the school now get a full year of classroom training, often in high-needs schools. The students also receive regular feedback from professors as well as mentor teachers in the schools where they teach, thanks to a mobile app that ASU created.
“Everyone, including the teacher-candidate, professors, and mentor teacher, has real-time data on where the candidate is strong and where she might need help,” said Perry. “The data collected in the app means we know we need to provide professional development to faculty based on candidate weaknesses, and the candidate can get support right away.”
And retention rates for ASU’s graduates have improved, Perry said, with 92 percent staying through a third year of teaching. The state’s three-year retention rate is 76 percent.
Arizona’s data-reporting system is not fully in place yet. Right now, Perry said, her school receives only teacher-retention data from the state department of education, but it will soon have access to teacher-evaluation records and student-achievement data.
“Data from the state is crucial to our program,” she said. “We don’t want to wait three years for our graduates to become good teachers. We want to make sure they’re good teachers on day one.”
But some states may find it more difficult than others to put a data-collection system in place, depending on how much disagreement there is about how to judge a teacher’s effectiveness.
Karen Syms Gallagher, the dean of the University of Southern California’s Rossier School of Education, expects California to have a tough time applying the new rules.
“[United Teachers Los Angeles] doesn’t want any teacher judged on the basis of students’ work, and it doesn’t want to share teacher evaluations, either,” she said. “That is going to require a national push on how to measure teacher effectiveness. The regs say each state can decide how to judge this, but it will be a battle.”
Gallagher predicts the state will ultimately use multiple measures of achievement, including portfolios, grades, student surveys, and analyses of students’ course trajectory in a subject.
“These measures are not as exact as achievement tests, but they do indicate some influence a teacher might have on what happens to students,” said Gallagher.
Aside from the thorny issue of judging teacher effectiveness, some states may simply not have the capacity to build new data-collection systems in accordance with the Education Department’s specifications, according to Deborah Koolbeck, the director of government relations for the American Association of Colleges for Teacher Education.
“The feds aren’t paying for this,” Koolbeck said. “So where is the money coming from?”
State education departments already have many issues to grapple with, Koolbeck noted, including how to apply the Every Student Succeeds Act, and some are in the midst of completely revamping their teacher-evaluation systems. Now they have the added burden of building a new data system.
“Teacher prep has been trying to get access to this information for a long time,” Koolbeck said. “It’s not like they don’t want to improve, and this information could help them improve. But this is a very NCLB-style ratings structure with all these high-stakes consequences attached, like losing federal funding,” she said, referring to the recently replaced No Child Left Behind Act.
Most concerning for Koolbeck is the uncertainty of what exactly the data can show.
“We don’t know if these variables are the right variables,” she said, echoing the concerns of the teachers’ unions. “Some of these things we’re measuring are beyond the reach of the teacher-prep program.”
A version of this article appeared in the October 26, 2016 edition of Education Week as Teacher-Prep Regs Demand Data