Opinion Blog

Peter DeWitt's

Finding Common Ground

A former K-5 public school principal turned author, presenter, and leadership coach, Peter DeWitt provides insights and advice for education leaders. Former superintendent Michael Nelson is a frequent contributor.

Education Opinion

The Pseudo-Science of Evaluating Teachers Via a “Student Learning Objectives” Strategy

By W. James Popham — December 11, 2013

Today’s guest post is written by W. James Popham, Professor Emeritus at the University of California Graduate School of Education and Information Studies.

All across the land, teachers are being evaluated, at least in part, by the use of a procedure typically described as a “student learning objectives” (SLO) strategy. This is a serious mistake. Let me tell you why. First, however, I need to swat away a few straw-person arguments--rejoinders that might disincline otherwise sensible people to agree with me.

For openers, I definitely believe that teachers ought to be evaluated, both formatively (when the evaluation is aimed at instructional improvement) and summatively (when evaluation’s aim is to make such high-stakes decisions about teachers as rewarding them monetarily, identifying them for instructional refurbishing or, after a time, dismissing them). Both formative and summative teacher evaluation can end up better educating kids, and better educating kids is why educators exist in the first place.

Second, I am altogether supportive of using students’ learning as a heavy-duty factor in the formative and the summative evaluation of teachers. After all, although we may consider other factors when sprucing up a teacher’s instruction or when appraising a teacher, surely what a teacher accomplishes in the way of helping kids learn ought to be at or near the top of any teacher-appraisal framework. Moreover, when summatively determining a teacher’s impact on students’ learning, I believe that one of the best ways to do so is to use some sort of pretest-posttest model in which we use a pair of assessments to get a fix on a teacher’s contribution to students’ learning.
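To make that pretest-posttest idea concrete, here is a minimal sketch, not drawn from the article itself, of the simplest possible version: comparing a class’s average posttest score with its average pretest score. All of the numbers are hypothetical.

```python
# Minimal sketch (not from the article) of the simplest pretest-posttest
# comparison: the difference between a class's mean posttest and mean
# pretest scores. All scores are hypothetical percent-correct values.

def mean(scores):
    """Average of a list of scores."""
    return sum(scores) / len(scores)

pretest = [42, 55, 38, 61, 47]
posttest = [68, 74, 59, 80, 71]

gain = mean(posttest) - mean(pretest)
print(f"Average gain: {gain:.1f} points")  # 70.4 - 48.6 = 21.8
```

Real pretest-posttest models are, of course, far more elaborate than a simple difference in averages, but this is the core arithmetic being relied on.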

Okay, with these two disclaimers out of the way, what is it that I detest so much about the evaluation of teachers using a “student learning objectives” approach? Well, let’s first agree on what the essential features of this evaluative approach actually are.

Here’s a definition of SLOs that you can find on the Ohio Department of Education’s web site. The definition is representative of what most U.S. educators seem to think SLOs are. As our Ohio colleagues put it, “A student learning objective is a measurable, long-term academic growth target that a teacher sets at the beginning of the year for all students or for subgroups of students. Student learning objectives demonstrate a teacher’s impact on student learning.” For purposes of this analysis, I’m happy with the Ohio definition, although the instructional period involved might well be somewhat shorter lumps of time than a full academic year.

The way any teacher hops aboard the SLO evaluation express involves five separable steps:

First Step - the teacher identifies a learning objective for students, hopefully a worthwhile one, to be mastered during a hefty chunk of instructional time--months rather than weeks or days.

Second Step - the teacher either builds or borrows a test that can be used to measure the degree to which students have attained whatever was chosen as the learning objective.

Third Step - the teacher administers this test as a pretest at the outset of the instructional period involved, for instance, a semester or a school year.

Fourth Step - based on an analysis of students’ performance on the pretest, the teacher establishes a growth target, that is, predicts how well students will perform at the close of instruction, for instance, at the end of the school year. Such predictions typically specify the level of posttest performance expected of individual students as well as the proportion of students who will, in fact, attain that specified level. The prediction might be formulated along the following lines: “At least 75 percent of the students in my class will earn a score of 80 percent or higher on the posttest.”

Fifth Step - the final step in the SLO process is to ascertain whether the teacher’s projections regarding students’ success have been achieved by re-administering the original pretest, but now using it as a posttest. Alternatively, if a test is available that’s equivalent to the pretest (and there almost never is), then this alternate form could be administered as the posttest.

Okay, there it is in all its pretended precision. I hope you see its shortcomings.
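Purely for illustration, and not as anything the SLO literature itself prescribes, here is a minimal sketch of the arithmetic behind the fourth and fifth steps: checking whether enough students reached the predicted posttest level. The 75 percent and 80 percent thresholds echo the example prediction quoted above; the scores and the function name are hypothetical.

```python
# Hypothetical sketch of the growth-target check in Steps Four and Five.
# The 75% / 80% thresholds echo the example prediction quoted above;
# the scores and the function name are illustrative assumptions.

def target_met(posttest_scores, passing_score=80, required_proportion=0.75):
    """Return True if enough students reach the predicted posttest level."""
    if not posttest_scores:
        return False
    reaching = sum(1 for s in posttest_scores if s >= passing_score)
    return reaching / len(posttest_scores) >= required_proportion

# Example: posttest percentages for a hypothetical class of eight students.
scores = [92, 85, 78, 88, 81, 64, 90, 83]
print(target_met(scores))  # 6 of 8 students (75%) score 80 or higher -> True
```

The arithmetic itself is trivial; the trouble, as argued below, lies entirely in where the target comes from.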

The SLO strategy, as I am sure you recognize, is completely dependent on the accuracy with which its growth targets have been established. And asking teachers to identify appropriate targets for lengthy instructional periods, often based on quite skimpy interactions with students, even when abetted by students’ performances on a pretest, is simply too big an ask.

What teachers come up with for growth targets is often remarkably removed from how well students will actually perform at posttest time. Yet the SLO approach appears to have created a quasi-legitimate, almost “scientific” measuring stick to be used in evaluating a teacher, namely, the proportion of students who attain the prophesied growth target. Remember, though, that this target is almost never predicted with any degree of accuracy.

I have no problem with using students’ pretest and posttest performances to help us evaluate teachers. After all, if what a teacher wants students to learn represents a reasonably defensible curricular aim, then let’s discover how much student movement toward that target has been promoted. Teacher evaluators can consider the quality of the teacher’s intended learnings, see how well students could perform before and after instruction, then arrive at an evaluative judgment about the quality of the teacher’s efforts. But to structure this evaluative process around students’ attainment of capriciously established, even if well-intentioned, growth targets--that’s downright silly!

The opinions expressed in Peter DeWitt’s Finding Common Ground are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.