In many ways, data-driven decision making (D3M) in education is an old idea packaged as a new one. As far back as anyone can remember, teachers have given their students regular quizzes, projects, and tests. When students performed poorly, "data-driven" teachers retaught the material or tried to figure out what went wrong. Without the benefit of spreadsheets or data displays, teachers have attempted to tailor their instruction to different groups of students. To be sure, there have been faulty assumptions, blind spots, and overlooked kids, but the fundamental idea of teaching, assessing, figuring out what works for whom, and re-teaching is as old skool as Tupac.
What’s new is the formalization of this process. Student learning is now quantified in test scores, stored in data warehouses, and made available for teachers to analyze. What’s also new is the creation of two very different camps of data-driven reformers, which Sol Stern recently referred to as instructionists and incentivists.
Instructionists, like the authors of the book Data Wise, see data as a useful tool for identifying problems of teaching practice, investigating them, and addressing them. Their focus is on improving student learning, not just test scores, and they are very clear about this distinction. They worry about the dangers of test score inflation and gaming the system, and advise schools to take steps to ensure that their improvements are not simply the result of shortcut practices that do not improve student learning.
Instructionists define data broadly - student work, student attitudes, and more are all relevant. In this view, data are not a replacement for expertise, and data don’t make decisions. Rather, data are a useful tool for educators to harness to improve instruction. Instructionists stress that D3M requires a collaborative learning process, and are concerned about approaches that use data to blame individuals rather than support educators’ professional growth. For example, Boudett, City, and Murnane wrote in Data Wise:
Agreeing on norms like “no blame” is an essential first step in creating an atmosphere that supports productive data discussions. It is important to emphasize from the beginning that data will not be used to punish teachers, but to help them figure out how to teach their students more effectively.
Incentivists’ view of D3M is different. Perhaps best captured in adversarial approaches like CompStat (represented in The Wire as CityStat), D3M here is a way to hold people’s feet to the fire. This approach is agnostic about the “how” of fixing the numbers, and thus rewards better statistics with little attention to how those numbers were produced. As such, incentivists see little need to study diverse kinds of data - in fact, non-quantified data are dismissed as anecdotal. In this view, data, not educators, are the experts.
Unfortunately, D3M is being (has been?) hijacked by incentivists. The focus is on the numbers, not the process of arriving there, which invites all kinds of mischief. Most of the D3M that I’ve observed has involved schools figuring out how to cut corners in order to make test score ends meet. And to those who are ready to crucify the teachers for doing so, let me reiterate that it is unrealistic to expect schools to ignore pressure from city, state, and federal muck-a-mucks to rapidly improve test scores, not learning. As long as we only reward end game numbers and ignore the process through which schools get there, I predict that the instructionist approach to D3M will have a hard time getting off the ground. This is regrettable, as the ongoing, systematic analysis of multiple forms of data is an immensely promising strategy for improving teaching and learning.
Tomorrow I’ll provide examples (good, bad, and ugly) of how teachers are currently using data. On Friday, I’ll investigate the corporations that have entered the K-12 marketplace to provide “data solutions” for school districts.