Education Funding Opinion

Teachers: How Do We Propose to Measure Student Outcomes?

By Anthony Cody — January 25, 2011 6 min read

This week a colleague at Teacher Leaders Network raised a provocative pair of questions.

1. In an era where numbers are currency, what alternative set of metrics and numbers (beyond assessment) can we suggest that reformers and policymakers consider when weighing teacher/school effectiveness? (e.g., parent/student satisfaction surveys, levels of funding, graduation rate, rate of enrollment in AP classes, rate of employment or enrollment in college after graduation)
2. Given the limits of numerical accountability, what alternatives can we offer to reformers who are open to considering results that cannot be accounted for by a number? What are the softer variables that cannot be easily measured? (e.g., student engagement, attitudes towards school, divergent thinking)

I have heard different forms of this conversation several times over the past few weeks. On the one side we have people, largely from the world of business, who have developed what seems to them a perfect way to improve our work. This method amounts to a four-step cycle. First, set some measurable goals for ourselves. Then do our best to meet those goals. Then review our outcomes to see where we fell short and where we succeeded. Finally, use this data to guide a revision of our methods so that our outcomes will improve.

This logic, coupled with the accountability mechanisms built into No Child Left Behind, has amounted to an almost irresistible set of pressures on teachers to become “data driven.” Some have succumbed, but many of us still resist, clinging to quaint ideals about the value of the whole child, the need for critical thinking and curiosity, and other things which are difficult to measure on standardized tests. But then we face a challenge, which my colleague has captured in these two questions.

What could possibly be wrong with the improvement model offered above? Lots of things. First of all, the data most readily available for measuring outcomes is usually standardized test scores. This leads us into the test preparation sinkhole in which most of our high-needs schools find themselves, where instruction is continually narrowed to focus on improving those scores - to the detriment of many other learning goals that we value.

So then we get to the next question, which my colleague posed above. If we do not accept the test scores as an adequate marker of our effectiveness, what do we wish to offer in their place, since we must be accountable for student learning in some concrete and measurable way?

I believe that any answer to this must encompass the complexity of learning, and of our goals as educators.

The way I get my mind around this is to think about the ways that I have seen teachers take responsibility for student learning in meaningful ways. I cannot discuss this in the abstract. So here are some real models of authentic assessment.

Lesson Study: In this process teachers begin by discussing what it is that they desire for their students. What do they value most? What do they want to see from their students at year’s end? But this is a truly open-ended question. It is not “which standards do we want to choose to emphasize.” If the teachers are most concerned about how their students are treating one another, this would be a perfectly acceptable focus for their lesson study. Once they select the focus for their work, then they collaborate to create a set of lessons that will result in students learning this. The lessons are taught, and carefully observed, with close attention being paid to evidence of student learning. This, to me, is an example of teachers taking responsibility for student outcomes.

National Board certification likewise asks candidates to gather solid evidence of the impact their instruction has had on students, and document this with student work samples. Candidates must show concretely how student work reflects growth over time, and how their instruction made a difference. Videotapes of student-teacher interactions also shed light on this.

Oakland history teachers have been working for more than a decade on an assessment system in which teachers district-wide give their students a common writing task: to respond to a question while drawing on evidence from a selection of primary historical documents. Students are given editorial cartoons, photographs, and written documents from the period in question, and asked to apply what they have learned about the events as they answer the question. Teachers then bring samples of their students’ work to district-wide scoring sessions, which allows them to compare the work their students are doing to work being done elsewhere in the district. This has helped to create a rich environment for collaboration and the sharing of strategies, as teachers whose students’ work is especially strong can share the techniques they found effective.

In the mentoring program I help direct, TeamScience, we use the Formative Assessment Tools associated with the Beginning Teacher Support and Assessment program (BTSA). Central to this process is a protocol called Assessment of Student Work, in which we collect all the student work from a given assignment, sort it into different levels of accomplishment, then work with our mentee to figure out how to move students at each level forward, based on the evidence we see.

All four of these are meaningful ways that teachers are learning about their teaching from looking closely at the work their students are producing. This is raw data come to life, as we delve into what our students are producing, and seek to overcome the obstacles we uncover.

From my point of view as an educator, the best reason to look at data is in order to get useful feedback to guide us in becoming better as teachers. We want to know, if we have a goal that our students are able to write a coherent analysis of a historic event, citing evidence, what is it they are actually able to do? Where are they falling short? How can we build these skills so they are successful?

The entire structure of No Child Left Behind has created a whole other purpose for gathering and looking at data, and that is to hold teachers and schools “accountable” for student test scores. Thus we have high-stakes consequences - and ever more of them - for student achievement. This is a different purpose than we have as teachers, and unfortunately, when accountability drives assessment, we get a whole host of unintended consequences that we have become all too familiar with.

Assessment for accountability is, by necessity, going to look very different from assessment for the improvement of instruction. It must be standardized, it must be taken by large numbers of students at the same time in order to allow “fair” comparisons, and it must be cheap to score. Teachers are far more interested in the more authentic assessments I describe here, because they actually help us improve and better serve our students. But we are deeply concerned with data that shows how our students are learning, and our best professional growth often revolves around collaborative reflection on our instruction and the student work that results from it.

This does lead us in an improvement cycle similar to the one offered by the business model. But in the test-score driven cycle above, the question is almost always the same: “How can we boost these scores?” In the inquiry cycle represented by the examples I offer, the questions really vary, according to the challenges we have identified as teachers. The collection of data remains a critical step, but the data is more varied, and sometimes more qualitative. The teachers’ role as active agents of change is much stronger, as they must determine the focus of their inquiry and figure out the strategies they will pursue in order to improve their outcomes. We must look at student outcomes, but we cannot let the constraints of assessment for accountability purposes determine the nature of those outcomes.

And what might an evaluation look like connected to this? How about one that asked, as National Board portfolios do, for a teacher to share a collection of student work that demonstrates growth over time? How about one that took into account evidence that a teacher is engaged in the reflective processes described above? How about an evaluation where the evaluator spent time in the teacher’s class to see how he was applying the lessons he learned from examining last year’s student work?

What do you think? What sort of “measurable outcomes” should we be seeking as teachers?

The opinions expressed in Living in Dialogue are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.