Last week, we reported that teachers at a Syracuse high school were charging that they’d received erroneously low evaluation scores due to an apparent calculation glitch. In fact, all of the city’s teachers had received scores that Teachers College sociologist Aaron Pallas deemed improbably low.
The day that blog post went up, Henninger High School teachers found out that their scores would stand, according to Syracuse’s The Post-Standard. The teachers’ claim that they should have received 12 points for the school’s gains on Regents tests was based on a (complicated) misunderstanding. The district’s evaluation plan doesn’t just compare the percentage of students who passed the exam last year to the percentage who passed the year before, The Post-Standard explained:
Instead, it compares how the 2013 “cohort” of students compared to the 2012 cohort. The 2013 cohort includes all students who entered ninth grade in 2009—and ideally would have been seniors in the spring of 2013—regardless of what grade level they were actually in last year. Likewise, the 2012 cohort includes all students who entered high school in 2008.
That means students who dropped out or never took the tests were included in the calculation as well, likely deflating the test-score gains. The district, however, is standing by that measure, The Post-Standard reports.
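To see why that matters, here’s a minimal sketch of the deflation effect, using invented numbers rather than the district’s actual data or formula: because dropouts and non-test-takers stay in the denominator, the same exam results produce a smaller year-over-year gain under the cohort rule than among test-takers alone.

```python
# Hypothetical illustration of a cohort-based pass-rate comparison.
# All numbers here are invented; this is not the district's actual
# data or its exact formula.

def cohort_pass_rate(passed: int, cohort_size: int) -> float:
    """Pass rate over the entire entering cohort, counting students
    who dropped out or never sat for the exam in the denominator."""
    return passed / cohort_size

# 2012 cohort: everyone who entered 9th grade in 2008 (say, 500 students,
# of whom 60 dropped out or never took the exam).
# 2013 cohort: everyone who entered 9th grade in 2009 (say, 500 students,
# of whom 80 dropped out or never took the exam).
rate_2012 = cohort_pass_rate(passed=300, cohort_size=500)  # 0.60
rate_2013 = cohort_pass_rate(passed=320, cohort_size=500)  # 0.64

# Pass rates among students who actually sat for the exam:
tested_rate_2012 = 300 / (500 - 60)   # ~0.68
tested_rate_2013 = 320 / (500 - 80)   # ~0.76

print(f"Cohort-based gain: {rate_2013 - rate_2012:+.2%}")                # +4.00%
print(f"Tested-only gain:  {tested_rate_2013 - tested_rate_2012:+.2%}")  # +8.01%
```

Under these made-up figures, the students who took the exam improved by about 8 points, but the cohort-based measure credits the school with only a 4-point gain.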
Meanwhile, the New York State Department of Education has now released the preliminary, composite evaluation results for teachers across New York state (with the exception of New York City, where implementation of the evaluation plan was delayed). As you can see from the chart below, the outcome was quite different from Syracuse’s. Nearly 50 percent of teachers statewide were rated “highly effective,” compared with just 2 percent in Syracuse. And while 40 percent of Syracuse teachers fall into the “developing” and “ineffective” categories and will therefore need improvement plans, only about 5 percent of teachers statewide will need them.
We know that Rochester, another urban district, had results similar to Syracuse’s. So what’s really going on here? State Education Commissioner John King told The Post-Standard, “I would observe that Syracuse and Rochester also have very different student performance outcomes than the state as a whole.” He emphasized, however, that the details of the evaluation plans were developed locally, by each district together with its bargaining unit. (It’s also worth noting that the decision to give all teachers at a school the same growth measure—in this case, a cohort’s growth on the Regents—was made at the local level.) King added that it’s important to revisit and possibly “tweak” the plans.
The president of the Syracuse Teachers Association told the newspaper that “the union agreed to count student improvement on Regents exams as part of the evaluations, but not the way the district did it.”
It will be interesting to keep tabs on where all this goes, especially when the state releases data for each individual district in the next few months. We’ll see whether urban/suburban patterns emerge and how the nuances of the locally designed plans affected outcomes for teachers.