Accountability Opinion

Best Practices for Evaluating Teacher Ed. Programs

By Mary Brabeck & Frank C. Worrell — November 04, 2014 4 min read

As we continue to await the U.S. Department of Education’s proposed accountability regulations for teacher-preparation programs—whether state colleges or universities, private institutions, or alternative-certification routes—it’s important to keep in mind the steps that can help these pathways produce top-notch teachers, including program assessment and the smarter use of data.

Evidence shows that effective teachers are the most important in-school contributors to student learning. A majority of our nation’s teachers attend college and university programs of teacher education, so how can these programs do a better job of preparing educators for the pre-K-12 classroom? Given the ethical and professional responsibility to ready effective teachers for the full range of diverse learners, what data should these schools use to demonstrate the quality of their graduates?

The new single accreditor of teacher-preparation programs, the Council for the Accreditation of Educator Preparation, or CAEP (whose board Mary Brabeck chairs), has announced it will evaluate programs based on what teacher-candidates can do and how effectively they can teach, as demonstrated through reliable assessments, including classroom observations and students’ standardized-test scores. The forthcoming proposed regulations from the Education Department follow failed negotiations more than two years ago between the department and college and university teacher-preparation programs.

Most people following this issue expect that the regulations will require these programs to report their outcomes, using the best available, reliable, valid, and fair assessments. The department will likely require survey evidence to assess the satisfaction of principals with their teachers’ performance and teachers with their preparation programs, and perhaps even the satisfaction of pre-K-12 students themselves. Similar to the CAEP standards, the rules will also probably require reliable evidence that graduates can teach effectively and have a positive effect on pre-K-12 learning—commonly measured through assessments of student learning growth. (Student learning-growth assessments include the method of value-added modeling, which looks at changes in K-12 students’ achievement scores.)

All of these data points can inform program faculty members and the public about how well a teacher education program is doing. However, data can be subject to error, and bad decisions follow when inaccurate data are reported or when reliability and validity evidence is misinterpreted. In addition, if the assumptions underlying a statistic are not met, the “information” is at best useless and at worst dangerous.

Under what conditions can we trust the data to inform decisions about teacher-preparation programs? We invite the Education Department to look at our recent task force report.

The American Psychological Association, or APA, with support and encouragement from CAEP, convened a task force earlier this year (which Frank Worrell chaired) that published a practical resource, “Assessing and Evaluating Teacher Preparation Programs.” This report provides teacher education practitioners and policymakers with some best practices for the use of data, in order to make decisions about improving educator-preparation programs.

This report examines three methods for assessing the effectiveness of teacher education programs: value-added assessments of student achievement; standardized observation protocols; and teacher-performance surveys. These methodologies can be used to demonstrate that teacher-candidates who complete a program are well prepared to help all students learn. The report highlights both the usefulness and limitations of these three methodologies. And it provides a set of recommendations for their optimal use by teacher education programs and other stakeholders in teacher preparation, including states and professional associations.

The report addresses critical concepts such as reliability, validity, intended and unintended consequences of assessment, and overall fairness. It describes a host of factors that could degrade the validity of an assessment system and the quality of decisions made. It emphasizes that drawing on multiple sources of data provides a stronger basis for making valid judgments.

Collecting these data will require investment. Universities must assign resources—time, infrastructure, technical capacity, funding, and personnel—to collect pupil and teacher data of high integrity. States must build the appropriate data systems to successfully evaluate their teacher education programs.

Preparation programs must also develop data-collection expertise and the tools to analyze these evaluations. They will have to identify the elements or candidate attributes that make positive contributions to pre-K-12 student learning and use them to improve existing programs.

Faculty members, school and university administrators, state policymakers, and accrediting bodies must reach agreement about the merits of teacher-preparation programs.

Decisions about teacher education programs by the federal Education Department and all other stakeholders must be made now using the best data and methods available, even as we consider and acknowledge the limitations of these methods. The APA report serves as a guide to developing policy and practice that will allow teacher-preparation programs to demonstrate their progress toward readying the teachers we need.

A version of this article appeared in the November 05, 2014 edition of Education Week as Best Practices for Assessing Teacher Education Programs

