As we continue to await the U.S. Department of Education’s new proposed regulations on accountability rules for teacher-preparation programs—be they state colleges or universities, private institutions, or alternative-certification routes—it’s important to keep in mind certain steps that can help ensure these pathways produce top-notch teachers, including program assessment and the smarter use of data.
Evidence shows that effective teachers are the most important in-school contributors to student learning. A majority of our nation’s teachers attend college and university programs of teacher education, so how can these programs do a better job of preparing educators for the pre-K-12 classroom? Given the ethical and professional responsibility to ready effective teachers for the full range of diverse learners, what data should these schools use to demonstrate the quality of their graduates?
The new single accreditor of teacher-preparation programs, the Council for the Accreditation of Educator Preparation, or CAEP (whose board Mary Brabeck chairs), has announced it will evaluate programs based on what teacher-candidates can do and how effectively they can teach, as demonstrated through reliable assessments, including classroom observations and students’ standardized-test scores. The forthcoming proposed regulations from the Education Department follow failed negotiations more than two years ago between the department and college and university teacher-preparation programs.
Most people following this issue expect that the regulations will require these programs to report their outcomes, using the best available, reliable, valid, and fair assessments. The department will likely require survey evidence to assess the satisfaction of principals with their teachers’ performance and teachers with their preparation programs, and perhaps even the satisfaction of pre-K-12 students themselves. Similar to the CAEP standards, the rules will also probably require reliable evidence that graduates can teach effectively and have a positive effect on pre-K-12 learning—commonly measured through assessments of student learning growth. (Student learning-growth assessments include the method of value-added modeling, which looks at changes in K-12 students’ achievement scores.)
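To make the idea concrete: at its simplest, a value-added approach compares how much students' scores changed over a year across teachers or programs. The sketch below uses hypothetical scores and the most basic version of the idea, a comparison of mean score gains; real value-added models are regression-based and adjust for student and classroom characteristics.

```python
from statistics import mean

# Hypothetical prior- and current-year scores for students of two teachers.
# A real value-added model would also adjust for student and classroom
# covariates; comparing mean score gains is the simplest version of the idea.
teacher_a = {"prior": [50, 55, 60, 65, 70], "current": [56, 60, 66, 70, 76]}
teacher_b = {"prior": [52, 58, 63, 68, 72], "current": [53, 59, 65, 69, 74]}

def mean_gain(group):
    """Average change in achievement from the prior year to the current year."""
    return mean(c - p for p, c in zip(group["prior"], group["current"]))

gain_a, gain_b = mean_gain(teacher_a), mean_gain(teacher_b)
print(round(gain_a, 1), round(gain_b, 1))  # → 5.6 1.4
```

In this made-up example, teacher A's students gained 5.6 points on average and teacher B's gained 1.4, the kind of difference a value-added analysis would then probe for statistical and practical significance before drawing any conclusion about the teachers' preparation.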
All of these data points can inform program faculty members and the public about how well a teacher education program is doing. However, data can be subject to error, and bad decisions follow when inaccurate data are reported or when reliability and validity evidence is misinterpreted. In addition, if the assumptions underlying a statistic are not met, the "information" it yields is at best useless and at worst dangerous.
Under what conditions can we trust the data to inform decisions about teacher-preparation programs? We invite the Education Department to look at our recent task force report.
The American Psychological Association, or APA, with support and encouragement from CAEP, convened a task force earlier this year (which Frank Worrell chaired) that published a practical resource, “Assessing and Evaluating Teacher Preparation Programs.” This report provides teacher education practitioners and policymakers with some best practices for the use of data, in order to make decisions about improving educator-preparation programs.
This report examines three methods for assessing the effectiveness of teacher education programs: value-added assessments of student achievement; standardized observation protocols; and teacher-performance surveys. These methodologies can be used to demonstrate that teacher-candidates who complete a program are well prepared to help all students learn. The report highlights both the usefulness and limitations of these three methodologies. And it provides a set of recommendations for their optimal use by teacher education programs and other stakeholders in teacher preparation, including states and professional associations.
The report addresses critical concepts such as reliability, validity, intended and unintended consequences of assessment, and overall fairness. It describes a host of factors that could degrade the validity of an assessment system and the quality of the decisions based on it. And it emphasizes that drawing on multiple sources of data yields higher-quality evidence on which to base valid judgments.
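One routine reliability check for the classroom-observation protocols mentioned above is inter-rater agreement: do two trained observers score the same lesson the same way? The sketch below computes Cohen's kappa, a standard agreement statistic that corrects for chance; the ratings are hypothetical, and kappa is only one of several reliability statistics a program might report.

```python
from collections import Counter

# Hypothetical ratings (1-4 scale) given by two trained observers to the
# same ten lessons under a standardized observation protocol.
rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
rater_b = [3, 2, 4, 2, 1, 2, 3, 4, 3, 3]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # raw agreement

# Chance agreement: probability both raters pick the same category at random,
# given each rater's marginal rating frequencies.
count_a, count_b = Counter(rater_a), Counter(rater_b)
expected = sum(count_a[k] * count_b.get(k, 0) for k in count_a) / n**2

# Kappa rescales agreement so 0 = chance level and 1 = perfect agreement.
kappa = (observed - expected) / (1 - expected)
print(round(kappa, 2))  # → 0.71
```

A kappa around 0.7, as in this invented example, is commonly read as substantial agreement; values near zero would signal that observers need retraining or that the protocol's scoring rules are ambiguous.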
Collecting these data will require investment. Universities must commit resources—time, infrastructure, technical capacity, funding, and personnel—to collect pupil and teacher data of high integrity. States must build the data systems needed to evaluate their teacher education programs successfully.
Preparation programs must also develop data-collection expertise and the tools to analyze these evaluations. They will have to identify the elements or candidate attributes that make positive contributions to pre-K-12 student learning and use them to improve existing programs.
Faculty members, school and university administrators, state policymakers, and accrediting bodies must reach agreement about how the merits of teacher-preparation programs will be judged.
Decisions about teacher education programs by the federal Education Department and all other stakeholders must be made now using the best data and methods available, even as we consider and acknowledge the limitations of these methods. The APA report serves as a guide to developing policy and practice that will allow teacher-preparation programs to demonstrate their progress toward readying the teachers we need.
A version of this article appeared in the November 05, 2014 edition of Education Week as Best Practices for Assessing Teacher Education Programs