Remember that time a few years ago when Carolyn Abbott was named the worst 8th-grade math teacher in all of New York City? You can bet she does.
It was a crushing blow to an apparently very good teacher: looking at the evidence, it seems pretty clear that Abbott wasn’t the bad teacher she was made out to be. She taught a group of seventh graders one year to scores that landed them in the 98th percentile on the state’s math test. Not bad. Based on complex statistical modeling—“value-added” modeling—it was calculated that the same students would score in the 97th percentile the next year, a year in which they would also have Abbott as their teacher. But she let them down. Their scores, collectively, landed them in the 89th percentile. Still good, but not nearly good enough.
Or did they let her down? No one asked the obvious questions: what if the students changed, not the teacher? What if the drop had to do with something else entirely, like the quality of the test the students took? Instead, the die was cast. Abbott missed her mark. She had instantly become a Bad Teacher. Wham, bam, thank you VAM.
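If you’ve never seen the logic spelled out, here is a deliberately toy sketch of the calculation that condemned her. Real value-added models pile on covariates and statistical machinery, and their parameters aren’t public, so the `persistence` factor below is an invented stand-in chosen only to reproduce the reported percentiles; the point is the shape of the reasoning, not the actual model:

```python
# Toy value-added logic: predict this year's score from last year's,
# then pin the gap on the teacher. Real VAMs are far more elaborate.

def predicted_percentile(prior_percentile, persistence=0.99):
    """Naive prediction: assume students roughly hold their rank.
    `persistence` is a made-up stand-in for the model's fitted
    coefficients, which are not public."""
    return prior_percentile * persistence

def value_added(actual, predicted):
    """The residual the model attributes to the teacher."""
    return actual - predicted

prior = 98                                # Abbott's students, year one
predicted = predicted_percentile(prior)   # ~97, the model's target
actual = 89                               # where the same students landed

print(f"predicted: {predicted:.0f}th percentile")
print(f"actual:    {actual}th percentile")
print(f"value added, per the model: {value_added(actual, predicted):+.0f} percentile points")
```

Notice what that residual cannot tell you: whether the eight-point gap came from the teacher, the students, the test, or plain noise. The model simply hands the whole thing to the teacher.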
If dumb were dirt, the idea of using value-added models to evaluate teachers would cover acres and acres of otherwise arable educational farmland.
In my last post I wrote about some of the large-scale impacts I see VAM approaches having on schools and took a stab at explaining why they continue to have credibility anyway. I have more to say about that, but first I want to add a couple more logs to the fire, especially as value-added modeling gains traction as a tool for evaluating teacher educators as well as teachers.
So what happens if you are a teacher whose future hinges on value-added assessment scores and your principal asks you to open your classroom to a student teacher? Here in Pennsylvania, the shortest student teaching assignment is 12 weeks long, and many programs (ours at Gettysburg College is one) require student teachers to spend even more time in the classroom. Even if a student teacher spends only two weeks teaching full time (again, that’s the Pennsylvania minimum; most of us require more), theoretically we’re talking about at least a unit’s worth of material being taught by a teacher with precisely zero experience as a professional educator. No wonder our cooperating teachers are scared to open their classrooms to beginners.
To put this in context, let’s say you had a sales job where you got paid only if you sold 2,000 units of something over a six-month period. Miss the mark and you risk being fired. Now let’s say you had to take on an amateur apprentice for four of those six months, and for two-thirds of one whole month you couldn’t sell anything because your apprentice needed to, you know, gain experience and learn the job. What would you do in that situation? You might try to cram more selling into less time, or loom over the apprentice at all times so you could still close the sale if things got dicey. Maybe you’d let your apprentice sell only to the buyers you know are able and willing to buy, denying those buyers a better experience and denying your apprentice the chance to confront the kinds of challenges he’ll face once he’s selling without your help. You might even hand the apprentice specific, detailed, step-by-step instructions instead of letting him figure out how to sell things on his own.
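To make the squeeze concrete, here’s the back-of-the-envelope arithmetic implied by that scenario. The only assumption I’m adding is that the apprentice’s solo stretch produces zero sales, which is what the scenario says:

```python
# Back-of-the-envelope math for the sales scenario above.
quota = 2000            # units you must sell over the evaluation period
months_total = 6        # length of the period
months_lost = 2 / 3     # the stretch when only the apprentice is selling

rate_normal = quota / months_total
rate_squeezed = quota / (months_total - months_lost)

print(f"pace needed without an apprentice: {rate_normal:.0f} units/month")
print(f"pace needed with one:              {rate_squeezed:.0f} units/month")
# About a 12.5% faster pace the rest of the time, just to stand still.
```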
But if you’re like most people, you’d probably do everything in your power to have the apprentice go spend time with somebody else so you could do your job. Someone less experienced and, therefore, with less to lose. Who can afford to mentor others when their own jobs are at stake?
This is the conundrum we find ourselves in. It’s okay, though; we can dig ourselves out. Remember, you’ve already been told that your 2,000-unit sales goal was determined by a complicated statistical model you could never possibly understand, and that your innate ability to sell any units at all can be gauged by applying this model. Now imagine being told that the same model assures you that taking on that apprentice will not hurt your ability to sell your 2,000 units. Voilà! It’s a magical model, see: it assesses your ability to contribute to the company’s bottom line by establishing a sales goal; it predicts your ability to meet that goal, even though you can’t control the attitudes or proclivities of your potential buyers; and it is so complex that it knows having an inexperienced person step in for you for a good chunk of the sales year won’t affect your ability to meet your goal. Amazing!
Believe it or not, that’s exactly what we’re now being told. Don’t worry about taking on a student teacher, the modelers are saying, because student teachers don’t really have an impact on a cooperating or mentor teacher’s value-added assessment scores after all. Or at least we don’t think they do. Go ahead, read the link; that’s a real document.
And let it sink in.
Are you thinking what I’m thinking? I thought the value-added method was predicated on the idea that the teacher is the single most important variable in a student’s educational experience, and, since that’s the case, that we could use convoluted statistical modeling to isolate and measure the teacher’s impact on student learning. But here’s the thing: either the teacher makes a difference or the teacher doesn’t. You can’t have it both ways. If a student teacher doesn’t make a difference, how do we know that a regular classroom teacher does? What’s the point of trying to measure something that doesn’t make a difference?
Note that the document linked above does not say that any effort was made to isolate the student teacher’s impact on student learning from the cooperating teacher’s. It says: “Based on the preliminary results of this pilot study, student-teachers have very little impact on the value-added report of licensed teachers.” It says, in other words, that you can put any teacher in another teacher’s classroom without having that new teacher affect the regular teacher’s value-added score. It says—just to be perfectly, crystal clear—that the teacher doesn’t actually make a difference. Or, if the teacher does, that the model isn’t sophisticated enough to measure it. Lo and behold.
The thing that might be most insulting about value-added modeling as a means of assessing teaching is that it assumes we know and agree on what good teaching is—because we would have to know that before we could create a method to assess it, no matter how simple or complex that method might be. But we don’t know for sure, partly because there are so many variables in teaching that they could never possibly be isolated and controlled, and partly because not everyone shares the same vision of what schools are for. Proponents of value-added modeling skip right past this problem, assuming that a teacher is effective if that teacher helps his or her students earn higher test scores. Not only is that demonstrably not true, but it assumes that the only meaningful outcome of education is academic knowledge, and that it can be assessed in a standardized way. No, it can’t. And there’s more to learn in school than just math and reading.
Enough is enough. Fixing the problem won’t be easy—again, I’ll have to save that discussion for another post—but understanding the problem is a crucial first step to solving it. For the record, I’m not against all tests. I give them routinely to hold my students, and myself, accountable for making sure that learning happens. But I’m against the misuse of tests—make no mistake about that. We all should be.