Have been out at Stanford the past couple days. Gave a talk yesterday at the Ed School about Cage-Busting Leadership, where local heavyweights (Terry Moe, Mike Kirst, Bill Evers, Rick Hanushek, et al.) pushed me to explain how far you can really get with cage-busting. Meanwhile, my old friend and dissertation chair, Harvard’s Paul Peterson, had his own fun. Pointing out that Bill Gates has now suggested that data is the answer to all our myriad problems, he asked, why worry about this cage-busting stuff at all?
While having some impish fun, Paul was also teeing up a critical point. Cage-busting requires precision, crisp definitions of problems and solutions, and the use of appropriate data. Too often, however, data are treated as a shortcut or a quick fix rather than a tool. Check out Cage-Busting Leadership for the full story. It may also be worth a look at a piece that Harvard's savvy Jal Mehta and I penned in the most recent Educational Leadership. In "Data: No Deus ex Machina," we argue that "Data expose inequities, create transparency, and help drive organizational improvement." That said, we note, "Data can be a powerful tool. But we must recognize that collecting data is not using data; that data are an input into judgment rather than a replacement for it; that data can inform but not resolve difficult questions of politics and values; and that we need better ways to measure what matters, rather than valuing those things we can measure."
We note, “Too often, as we talk to policymakers, system leaders, funders, advocates, and vendors, we get a whiff of deus ex machina, the theatrical trick of having a god drop from the heavens to miraculously save the day. (The phrase’s literal meaning is ‘god from the machine.’) Like a Euripides tragedy in which an unforeseen development bails out the playwright who has written himself into a corner, would-be reformers too often suggest that this wonderful thing called ‘data’ is going to resolve stubborn, long-standing problems.”
This is not the first time we’ve been down this road. Data have long promised easy answers, sometimes with discomfiting results. Frederick Kelly created the first modern multiple-choice test in 1914. Others quickly followed suit. By 1923, more than 300 standardized scales were available.
As I’ve noted in The Same Thing Over and Over, Stanford’s iconic dean of education, Ellwood Cubberley, cheered such assessments, insisting, “We can now measure an unknown class and say, rather definitely, that, for example, the class not only spells poorly but is 12 percent below standard.” Cubberley explained, “Standardized tests have meant nothing less than the ultimate changing of school administration from guesswork to scientific accuracy. The mere personal opinions of school board members and the lay public ... have been in large part eliminated.”
In hindsight, would-be reformers have consistently overestimated the potential of data and have used new data in inappropriate and troubling ways. We’d do well to keep this in mind if we intend to do more than repeat past mistakes. In the piece we flag some key problems and suggest possible responses. Among the key points:
The Problem of Judgment: There’s a seeming confidence that data will enable schools to just do “what works,” allowing science to stand in for imperfect judgment. But this is based on a caricatured view of scientific inquiry. As any real scientist or detective will tell you, questions of which hypotheses to pursue, what data to collect, and how to interpret ambiguous information rely heavily on human skill, judgment, and expertise.
The Problem of Politics: Some reformers suggest that if we just “do what the data tell us,” we’ll be able to sidestep many messy political fights about policy and practice. This is an old hope: the spinning wheels of politically driven reform have long impeded sustained school improvement. But it’s naive to think that data can or should replace political debates over the values, goals, and purposes of public schools.
The Problem of Purpose: We’re often imprecise about what kind of data to use for what purpose. Crude data designed for public accountability are now being used to manage performance in ways that were never intended. One result is “data-driven” systems in which leaders give short shrift to the operations, hiring, and financial practices that are the backbone of any well-run organization and essential to supporting educators.
We believe in the promise of data--but as a tool, not as a talisman. We offer various suggestions on this count.
Build Human Expertise: Data don’t use themselves. At one level, building human expertise entails building technical skills so educators are better able to use data. More ambitiously, it involves building a field in which analysis and inquiry are central to the work.
Create Structures to Support Data Use: Teachers need training, support, and time to use data well. Increasingly sophisticated data use over time requires practice and coaching from mentors or colleagues who are experienced and skilled in using data wisely. Unbundling teacher roles to explicitly create teacher leaders with this expertise is one promising path.
Collect a Wider Range of Data: Test scores are useful and important, but they’re not the only form of data. Student writing samples, videos of classroom teaching, and the number of parents who attend parent-teacher conferences are other forms. What data you should collect depends in part on what problem you’re trying to solve.
Anyway, if you’re interested, you may want to check all of this out.
The opinions expressed in Rick Hess Straight Up are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.