
Report Cards for Teachers

By Nancy Flanagan — February 28, 2011

First, take a look at this--the ultimate report card.

I’ve been trying to wrap my head around what a similarly full-featured report card would look like--one that compared student to student, teacher to teacher, or school to school, based on a range of critical indicators of progress and context. Information that would help all of us understand the factors--including the ever-important teacher skill and student effort--that combine to cause (verb used intentionally) valuable student learning.

In addition to the usual indicators of context and progress, suppose we threw in measures like: Does the student have a library card and use it regularly? It’s always good to suggest variables that might lead to improvement, after all.

In June of 1998, the Detroit Free Press did something like this. They hired economists to run a multiple regression analysis--a statistical technique familiar to all graduate students in the social sciences--and looked at statewide assessment scores, per-pupil expenditures and other key data, to determine which districts were providing the best bang for the buck. It was a revealing piece of journalism, to say the least.

I can’t find the Freep piece online, but here’s Education Week’s description:

To measure the effects of poverty and other nonschool factors on achievement, the Free Press and other newspapers use such sophisticated statistical techniques as multiple regression analysis. Such methods can determine to what extent variations in test scores are related to differences in such factors as family income, student mobility, or limited English proficiency. The findings are used to create projections of likely test results for a school or district based on its student population.
Schools or districts whose actual test scores are much better than predicted are judged to be particularly effective at serving their students. Based on its study, the Free Press concluded that the Detroit public schools were beating the odds, while some wealthier suburbs could be doing more.
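For the statistically curious, here’s a minimal sketch of how that kind of analysis works--in Python, with invented data. The variable names, the made-up “true” relationship, and the synthetic districts are all illustrative assumptions on my part, not the Free Press’s actual model:

```python
import numpy as np

# Hypothetical district-level data: each row is one district.
# All three predictors are nonschool factors of the kind the
# Free Press used (names assumed for illustration).
rng = np.random.default_rng(0)
n = 50
poverty = rng.uniform(5, 80, n)    # % of students in poverty
mobility = rng.uniform(2, 30, n)   # % of students changing schools mid-year
lep = rng.uniform(0, 25, n)        # % limited English proficient

# Invented relationship plus noise, just so we have scores to fit.
scores = 90 - 0.4 * poverty - 0.3 * mobility - 0.2 * lep + rng.normal(0, 3, n)

# Ordinary least squares: predict scores from the nonschool factors.
X = np.column_stack([np.ones(n), poverty, mobility, lep])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)

predicted = X @ coef
residual = scores - predicted  # actual minus demographically expected

# Districts whose actual scores most exceed their predicted scores
# are the ones "beating the odds," whatever their raw ranking.
beating_the_odds = np.argsort(residual)[::-1][:5]
print("Top 5 districts by score above prediction:", beating_the_odds)
```

The point is the residual, not the raw score: a high-poverty district scoring above its prediction is outperforming, even if a wealthy suburb posts higher absolute numbers.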

Funny thing, though. The public and the press weren’t really interested in this kind of rich, contextualized data--the layered analyses that might point schools and policy-makers toward the factors with the greatest impact on achievement. They wanted to know two things: Is School X (read: my child’s school) better than other schools? How does my child compare to other children?

Many elementary schools have created comprehensive reporting systems to give parents a clear picture of their child’s academic strengths and weaknesses--long checklists, carefully calibrated rubrics, deconstruction of separate skills involved in reading for meaning, and so on. And many of those same schools have later reversed course and gone back to simple report cards, including letter grades for young children.

In other words, don’t confuse me with all this information. I don’t care about decoding, fluency, comprehension, vocabulary and voice. Can my kid read? At grade level (whatever that is)? And--is he going to school with other kids whose parents care about reading?

States have created report cards for public schools, too--vastly oversimplified and reassuring to parents in the “right” neighborhoods that their hefty mortgage payments are worth the strain.

Now we have John Merrow suggesting that teachers should be evaluated in whole-school groups (as if that were a brand new concept, rather than the driving force behind the selection process advantaged parents use when seeking a school for their child):

The days of what I think of as trade union dealing are over; teachers have to bargain for more than pay and privileges. They need to be in the forefront of connecting their evaluations with student achievement. They need to be at that table, and I believe they ought to be arguing for school-wide evaluations. If it’s just teacher-by-teacher, we will end up with even more bubble testing in more subjects. If it’s school-wide, then everyone--down to custodians and secretaries--has a personal, vested interest in student success.

Well. I taught in a school with a high percentage of dedicated, intelligent teachers--and competent secretaries and kindly, caring custodians who lived in the community (until the district let them go and privatized the custodial staff). Way before test scores “measured” the effectiveness of my school, employees had a personal, vested interest in school success. We accepted responsibility. Without tests.

What Merrow’s missing here is the key reason we assess students. The purpose of testing students is to inform further instruction. It’s how we find out where the weak spots are--and develop a plan to address them. As long as we’re testing students only to evaluate their teachers--or their classroom aides and the cafeteria ladies--there will be gaming, even when we’re using the school as the unit of measurement.

There are better ways to evaluate teachers. Aren’t there?

The opinions expressed in Teacher in a Strange Land are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.