Too Much of a Good Thing: Making Data Work for Schools
Data is absolutely useless. It’s what you do with it that matters.
Teachers know this, of course. But in an era when data is the coin of the realm, we have seen persistent confusion on the part of some legislators, policymakers, and administrators about what data can and can’t do.
Data can give useful information to parents, teachers, and kids about what students know, what they don’t, and how much growth they’ve made. It can highlight strengths and weaknesses on the part of individual teachers, schools, districts, states, and nations.
Data can also highlight kids lost in the mean—the small numbers of English-learners, high-poverty students, or African-American children who may be struggling in a school where the majority of students are doing fine.
What data can’t do is fix any of the problems it unearths. It can’t make a kid smarter, a teacher more effective, or a state more committed to its growing numbers of English-learners.
Obvious, right? Not to everyone.
At a meeting of state education chairs last summer, I made the case for a shift that every teacher I know would support. Take away some of the money that goes into developing tests, administering and scoring tests, and monitoring the test-takers and test-givers so they don’t cheat. Then put it instead into all those things that actually help students get smarter—like tutoring, home libraries, and meaningful professional development for teachers.
One of the state chairs told me, “Well, the reason we keep giving so many tests is that we’re not seeing the results we want.”
I responded with a bastardized Arkansan version of the old "weighing the pig" metaphor:
“If I keep weighing my pig, but I never feed my pig, I might get frustrated. ‘I keep weighing this darn pig, but it’s not getting any fatter!’ Testing kids over and over will never make them smarter.”
We’ve seen this phenomenon ever since the advent of No Child Left Behind: a glut of money going to the dull business of standardized testing, a lack of investment in those things that make schools stronger, and a student achievement needle that has barely budged by most reputable measures (e.g., NAEP, PISA).
Making Data Work
So data itself is useless. But in the hands of skilled teachers and principals, the right data can translate into powerful changes for kids.
My school was one of a handful in the nation profiled this month by a film crew from the U.S. Department of Education. The brief video highlights the way our staff uses data to help high-poverty kids achieve, thrive, and go on to lead the lives they dream. It’s well worth the four minutes it will take you to watch: Improving Education: The View from Jones Elementary School.
What can teachers, principals, and policymakers do to make sure that data is a boon, not a bludgeon, to the students it’s intended to serve? Here are three thoughts; I’d love to hear your own.
What Teachers Can Do
We need to make sure we spend as much time analyzing data as we do gathering it. Given the time crunch all teachers experience on a daily basis, that’s often difficult to do.
A few months back, I did the Fountas and Pinnell Benchmark Reading Assessment—which takes 10 or 15 minutes per student—with each child in my class. I recorded the results and turned them in to my principal. Then I jammed the thick stack of stapled paper into a manila folder, stuck the folder in the top drawer of my filing cabinet, and pretty much forgot about it for a few weeks.
Finally I made the time to pull it out and go through the results. I listed the groups of kids who need guided reading lessons on various specific skills: fluency, comprehension, strategies for unknown words, certain phonics patterns, inferential questions, and so on. Next month, I’ll mix up my guided-reading block to meet with groups organized by specific skill rather than level.
Until the first day I meet with those groups, the four hours I spent giving the assessment will have been utterly useless. Yes, I completed the assessments with fidelity. I recorded the scores on report cards and on that sheet for my principal. But there has been no benefit to my students yet, and there won’t be until I translate all that data into individualized teaching that gives each student what she or he needs.
I did a fine job weighing the pig. I just haven’t gotten around to feeding it yet.
What Principals Can Do
The job of a principal has become close to impossible. I am in awe of principals like mine who somehow balance the mountain of logistics, the competing demands of district, state, and federal policies, and the needs of students, their families, and their teachers.
That said, even excellent principals sometimes devote too much time to making sure assessments are administered, and not enough time making sure that teachers have the knowledge, time, and tools to translate that data into more effective instruction.
So far this year, my students have taken four computerized tests, two Fact Fluency assessments, the Benchmark Reading Assessment, several grade-level phonics assessments, various district literacy tasks, and a one-on-one assessment of their oral English.
Every one of these assessments took time away from instruction. That doesn’t mean giving them was a bad idea. Each assessment furnished me with useful information about what my students understand and what they don’t, and unlike the state Benchmark exams, the data was available immediately.
But too much of a good thing can be a bad thing. There have been times this year when the testing-to-teaching ratio tilted too far to the testing side.
Principals need to make sure they’re getting that balance right. Any assessment the kids take needs to translate into more effective teaching and learning. If that doesn’t happen, the hours taken from teaching time won’t be worth the sacrifice.
What Policymakers Can Do
Principals and superintendents don’t have a choice about many of the tests they give. In April, my students will take the Iowa Test of Basic Skills (ITBS), and no one seems to know why.
The test gives almost no useful information, and the results tend to align very closely with the socioeconomic status and native language of the students who take it, regardless of those students’ actual abilities in reading, writing, and math.
Many high-performing nations don’t test every year. They choose quality over quantity, administering better, more expensive assessments but testing less frequently. These assessments go beyond the basic “pick-the-right-bubble” skills a monkey could master to measure the more complex abilities that tend to determine success in college, a career, and life.
We are asking tests to do too many things: give useful information to parents; motivate students; evaluate teachers; rank schools, districts, and states; and on and on. In a thicket of tests that vary wildly in quality, it’s easy to forget the true purpose of school: providing an excellent education to every child who walks through the doors.
Data can work toward that end, but only when assessments measure the right things, are used in a way that supports schools rather than punishing them, and are given in moderation so we don’t sacrifice too much instructional time.
Policymakers can’t just ask, “Does this new test give us useful data?” They have to ask the harder question, “Does this new test give us data that justifies the cost, the logistical burden, and the lost instructional time it will impose?”
Education policy doesn’t advance in a straight line. It swings from side to side, and the swings can be wild.
There was a time when we didn’t have enough data. It was too easy for teachers to say of a struggling student, “She has come so far,” without any real justification for that claim. Kids fell through the cracks, especially underserved kids in schools and districts with populations that were doing well in the aggregate.
But that time has long passed. The pendulum has swung further in the opposite direction than anyone thought it could swing. That pendulum seems to be powered by the bizarre assumption that the more data we generate, the smarter our students will become.
Data, shmata. It’s what we do with it that counts.