In the June 10th edition of On Board, Cathy Woodruff wrote, “Some members of the Board of Regents are expressing strong reservations about a plan to boost the weight given to standardized student test scores in evaluating teachers and principals under New York’s Annual Professional Performance Review (APPR) program.” One such regent who is uncomfortable with using standardized test scores is Roger Tilles.
The Board of Regents runs the public school system in New York. Actually, let me rephrase that. Merryl Tisch, the Chancellor of the Board of Regents, said they don’t run the school system; they simply set the policies that govern schools in New York State. Either way, it was welcome news when some of the regents said they wanted to slow down the process.
Using standardized test scores to evaluate teachers and principals is just wrong. It goes against so much of what education is supposed to be about. Standardized test scores don’t assess creativity, cooperative thinking, or inquiry-based learning. At a time when educators should be using more technology, they are instead working on test prep. As Tom Whitby from #edchat says, educators are being forced to focus on “drill, kill and bubble fill.” On top of all that, it sets up a negative dynamic between students and teachers.
Carol Burris, an award-winning principal at one of the nation’s best high schools, wrote in the Washington Post, “The basic rule is this: No measure of performance used for high-stakes purposes should put the best interests of students in conflict with the best interests of the adults who serve them. That, in my opinion, is the ethical standard by which each evaluation component should be judged.”
Burris goes on to write,
This argument holds true for teachers as well. Many New York teachers of grades 3-8 were deeply conflicted this spring. On the one hand, they had a clear incentive to "test prep" for the recent Common Core exams, but they also knew that test prep was not the instruction that their students needed and deserved. These are the real-life consequences of the policies that are thoughtlessly being put into place because of the new evaluation systems.
Is There Value in Value-Added?
Woodruff went on to write,
Other Regents including Kathleen Cashin have also expressed concerns about the plan, which involves using a more sophisticated formula, known as "value-added," when student test score data is used in teacher and principal evaluations. Under APPR legislation approved in 2010, adoption of the new value-added model (VAM) by the Regents would trigger an increase in the weight of state test scores to 25 percent, up from the 20 percent share allotted under the current model, which is called growth.
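To make the stakes of that weighting change concrete, here is a minimal sketch, assuming a simple weighted-average composite on a 0-100 scale. The subscores, the scale, and the single “all other measures” component are illustrative assumptions; the actual APPR composite is more complicated than a two-term average.

```python
# Hypothetical illustration of the APPR weighting change described above.
# The 0-100 scale and the lumped "other measures" subscore are assumptions
# for illustration only; the real APPR formula has more components.

def composite_score(test_score, other_measures, test_weight_pct):
    """Weighted average of the state test score and all other measures.

    Integer percentage weights keep the arithmetic exact.
    """
    return (test_weight_pct * test_score
            + (100 - test_weight_pct) * other_measures) / 100

# Same underlying performance: a weak testing year, strong everything else.
test, other = 60, 90

growth = composite_score(test, other, 20)  # current "growth" model: 20%
vam = composite_score(test, other, 25)     # after VAM adoption: 25%

print(growth)  # 84.0
print(vam)     # 82.5
```

Five percentage points may look small, but in this sketch a weak testing year drags the same educator’s composite down by a point and a half, which near a rating cutoff can matter.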
In a memo that is dated June 10th, NY State Education Department Deputy Commissioner Ken Slentz said,
After much deliberation and discussion, the Department has decided not to recommend moving forward with a proposal that the Board of Regents consider adoption of a value-added model (VAM) for the 2012-2013 school year for teachers and principals in grades 4-8 ELA, Math and/or principals of schools with grades 9-12. Instead, the Department recommends use of an "enhanced growth model" for the 2012-2013 and 2013-2014 school years.
Hopefully, they will use the year to see how devastating the combination of VAM and an increased test-score percentage would be to public education. However, this would not be the first time the NY State Education Department said it would do something and then changed its decision after the fact. There is very little hope for Commissioner King and Chancellor Tisch, but there are other regents who seem much more open-minded.
Data-Driven Decision Making
Data-driven decision making has changed in the past five years. What was once a process with integrity has been replaced by a heavy-handed process that treats children as data points. In the world of corporate reform, test scores are looked upon as if they were profit. This is unfortunate, because when tests served student-centered purposes, they were one piece of a puzzle that could drive instruction in a valuable way.
Now, however, testing is used to prove that educators are not doing their jobs. If that weren’t the reason, why would cut scores change after the tests have been given? Why would the state base a growth score on two years of tests that are completely different (i.e., longer, more rigorous)? Why would they consider raising the percentage to 25% after an agreement was made?
A few years ago, the district where I work engaged in some value-added training. At first, it seemed like a valuable method for seeing growth in student learning. However, as the three days of training went on and we learned about data points, we all became increasingly concerned about using state assessments to track growth. The tests are different from year to year, and growth should be measured in many more ways than state assessments alone.
I foolishly walked away saying, “Surely the state wouldn’t move in this direction?”
As the years have gone on, states have moved in this direction, and they have set new rules as well. Teachers and administrators have no idea what is on the test. They can’t open the tests until an hour before administering them and have to send them back right after the very narrow testing window closes. What’s worse, teachers get scores but absolutely no breakdown of the areas (e.g., comprehension, vocabulary) in which students did poorly.
In the end, educators know that state assessments are not about the students, but about the teachers.
Honest Reflection is Needed
Regent Tilles said, “I don’t want to see the Common Core lost for the sake of too speedy an implementation” of APPR and elements related to student testing.
The problem is that the whole implementation has been rushed, and most of us know that it is also greatly flawed. Woodruff wrote,
However, measurements of student achievement were a required element of teacher evaluation programs for states to qualify for federal Race to the Top funding. Student performance on standardized tests also is a key element in the federal No Child Left Behind law passed in 2001, which remains in effect.
The only way to correct a rushed implementation is by taking the time to learn from it and make it better. Unfortunately, “Chancellor Merryl Tisch expressed concern, however, about changing the pace now, after individual districts have negotiated APPR agreements with the understanding that the value-added technique will be introduced.”
“It’s a funny message to say, ‘Sorry, we were just kidding. Go back and re-negotiate,’” Tisch said. Tisch went on to say, “We are at a very significant crossroads.” Yes, we are. We are talking about a “growth model” that really compares apples to oranges, and the state education department doesn’t acknowledge that fact. They believe they should move forward on their Titanic-like voyage.
In addition, those in power believe that tying standardized tests to teacher and administrator evaluations is the only way to assess whether educators are doing their jobs, while those in the trenches who actually evaluate teachers see test scores as just one piece of the puzzle, one that should not be tied to evaluation at all.
Connect with Peter on Twitter
Please also read:
New Problems with New York’s Teacher Evaluation Plan Found by Carol Burris