Opinion

Is “Scalable” a Code Word for Top-Down Reform?

By Anthony Cody — February 16, 2013

About a week ago, John Thompson posted a note in which he conceded that “Educators who oppose the testing mania must admit that our preferred strategies would require high-quality implementation, and neither do we know how to scale them up.”

I want to explore this idea of “scalability,” because I think we may have a problem here.

As I wrote in a comment to his post, we have been told that the only way schools can improve is through “scalable” reforms. What does that mean? It seems to mean that whenever we come up with some great initiative, the only way it can make a difference is if it can be packaged and replicated. Or even more often, that we can only improve if we take some model that has been “proven” elsewhere and replicate it ourselves. There is certainly value in sharing great models, and many can be built upon and re-created anew. But I believe there is an underlying bias towards uniform solutions that are packaged and sold as innovations.

I think of the great wave of Professional Learning Communities that swept through a few years ago. We had been working in Oakland in a variety of collaborative formations doing teacher research, lesson study, common assessments, and so forth. But we were told we had to do the “new” PLCs, and thousands of dollars were spent on expert consultants to show us how.

I think we DO know how to improve ourselves as educators, and we have been doing it for years. We know how to look at student work, we know how to conduct systematic inquiry into our practice, and so forth. But far too often, we have too little time to do this work, because this sort of intellectual work is not considered part of our job.

And periodically, at least in the past in Oakland, we would be told to stop whatever ongoing professional development we were doing to attend the latest whizbang research-proven data-driven pre-packaged program someone had convinced the District to shell out our scarce professional development funds for. In the 1990s we had Efficacy training for a year, to help us raise our expectations for all students. Seven years later, we got Standards in Practice, which was a process where teachers would gather, bring lessons to share, and be shown how to align them with grade level standards. For some reason, these processes never seemed to take hold.

We begin to see some of the trouble when we look closely at how reforms are rolled out, and what “replicability” means in practice. Carol Burris, an outspoken principal from New York, wrote last week about her experience being trained in the new teacher evaluation system there.

We would have four sessions to prepare for Calibration Day. We would learn "the tool," and watch teaching videos for two days. Day Three--the pre-test. Day Four--Calibration Day and the Calibration Event. We would see a video of a teacher, use the rubric to rate her, and then try to sync up with the Master Coder.
"If you miss one or two, you might not be misaligned," one of the Ambassadors reassured us.
"Will my skill improve? Will I be scored on the teaching evidence I include?" You could hear the frustration in the questioner's voice.
"As long as we get the right number with the Master Coder, that is all that matters. The ultimate goal is you want to be calibrated," was the reply.
I was starting to feel sorry for the calibrated presenter. In an attempt to make sense out of nonsense, she blurted, "Think of it this way. In first grade we teach kids how to fill in the bubbles...today we are learning to fill in the bubbles."

Carol Burris has stumbled onto one of the problems with scalable reforms. In order to be “replicable,” they must be carried out in the same way everywhere, by everyone.

I also recall from Oakland the fearful warnings that would come down from above. “The model will only work if it is implemented with fidelity!” This was the reason why we must all be aligned. If we do not implement the model as it was designed, then we cannot blame anyone but ourselves if it doesn’t work!

There are some huge problems with this approach to school change. First of all, we are assuming that solutions developed in one situation will work everywhere. This disregards the different contexts we find in every community. We also have short-circuited an important part of the change process, because we are not doing the work to figure out the assets and problems particular to our community, and we are not relying on the creative talents of the people doing the work to come up with solutions. And this is the biggest flaw of all, because as we know from Daniel Pink’s work on motivation, one of our greatest sources of drive is autonomy. People need to be challenged to come up with original solutions. When they are given packaged, scalable, replicable models to implement with fidelity, the air leaves the room. People slump in their chairs. They may resign themselves to “being aligned,” but there is no joy or inspiration to be found there.

Monty Neill added an interesting thought to the original discussion.

On scalability: taking specific programs and insisting they be designed to apply to highly divergent circumstances is educational stupidity or destructiveness. We have made some progress in identifying factors, elements, that are important in school improvement (some of which take serious money, some do not). This is not a question of scaling this or that specific model of professional collaboration, but of insisting educators have time to collaborate. This may be scaling of core factors, but it is not the scalability that 'deformers' have promoted.
That said, some strong things can be scaled up. Take the Learning Record. It is a structure that allows - requires - serious teacher autonomy, encourages collaboration, is child centered, provides for an organized gathering of information about each child and thus for individuation, yet has developmental reading scales that can be widely applied. In short, it is the kind of thing that can be scaled and - I think, as we cannot be certain - would not cause the one-size-fits-few damage of most 'scalability' projects.


So when we hear that solutions must be “scalable,” let's take time to question what that means. If it means that we must buy into some prepackaged professional development model and implement certain behaviors “with fidelity,” I think the professional educators in our schools could usually do better by taking on the challenge themselves. That does not mean we have to start completely from scratch. There are some wonderful models out there - Lesson Study, teacher research, and more. But when we choose the model ourselves, and figure out how to adjust it to fit our needs, we become active leaders as well as learners. We may not be completely aligned, we may not be replicable, but we will be true to ourselves and our community of learners.

What have your experiences been with “scalable” and “replicable” reforms? Is this a code word for top-down reform? Or should we implement proven models with fidelity?

Continue the dialogue with me on Twitter at @AnthonyCody

The opinions expressed in Living in Dialogue are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.