During my 32 years as a school superintendent, leading some of the country’s largest districts, I found that most schools do not struggle with planning.
They have school improvement plans, pacing guides, professional development calendars, and detailed strategies for improvement. In many cases, they have more plans than they can reasonably execute.
What I see in my work with districts today is a disconnect between those plans and classroom practice. Many teachers have never seen their school’s improvement plan, pacing guides are inconsistently used, and instruction is often driven by years-old lesson plans or textbook sequences. None of it seems to help in an absolutely fundamental area: measuring whether students are actually learning and measuring frequently enough to change instruction.
The problem is not that schools ignore assessment. School systems invest heavily in it. Quarterly benchmarks are administered, results are analyzed, and reports are generated. Widely used tools like the Northwest Evaluation Association Measures of Academic Progress (MAP Growth), one of the most common benchmark platforms in the country, generally run upward of $12 per pupil for the basic testing bundle—before platform costs, administration time, or professional development are factored in.
Yet, these assessments rarely change what happens next. It’s like driving by looking in the rearview mirror. By the time results are returned and reviewed, the moment to intervene has passed. More important, even when data show that students have not learned, there is often no clear expectation that instruction will change immediately.
What gets measured, in practice, is not student learning but adult activity: We taught the lesson. We implemented the strategy. We gave the assessment. All of this can be true—and students still may not have learned.
Decades of research on formative assessment—going back at least to education researchers Paul Black and Dylan Wiliam—have emphasized the importance of checking for understanding and adjusting instruction. Yet, the discipline is inconsistently applied.
If schools are serious about improving outcomes, they must begin asking different questions: Did students learn what we just taught, and what are we doing right now if they did not? Those questions cannot be answered quarterly. They cannot be answered at the end of a unit. And they certainly cannot be answered after a state assessment. They must be answered every one to two weeks. In high-performing organizations, this kind of frequent monitoring is standard practice—often referred to as tracking key performance indicators or KPIs.
In high-performing schools, leaders track evidence of learning (or some other desired behavior, such as attendance), not just evidence of teaching. This requires a shift to short-cycle measurement using clear, actionable indicators.
A few examples show how straightforward the process can be.
In the Newport News City public schools in Virginia, an elementary school focused on early literacy checks every two weeks which students cannot accurately produce the sounds of targeted letters or phonics patterns—and how many reach mastery after reteaching. Instruction is adjusted immediately, and every student who has not mastered the skill receives targeted reteaching and is reassessed within days.
In one of the district’s middle schools, math teams monitor not only how many students pass a common assessment but how many students who initially struggled are able to demonstrate mastery after targeted reteaching. The focus is not on content coverage but on whether students learn.
And in a district high school, leaders working to reduce absenteeism track how many chronically absent students improve their attendance over a two-week period and adjust outreach and support when they do not. This shifts the focus from reporting attendance to changing it.
These are not complicated systems. But they are disciplined, and they require buy-in from the entire staff. They require schools to measure what matters most: Are students learning right now because of what we are doing?
When schools adopt short-cycle monitoring, several important shifts occur. Instruction becomes responsive—teachers no longer move on because the pacing guide says it is time. They move on because students have mastered the material. Reteaching becomes expected: If students do not learn, the system responds immediately. Team conversations improve, shifting from discussing what was taught to analyzing what was learned and what needs to change in the teaching. Accountability becomes clear, not in a punitive sense but in a results-oriented one. As effective schools researcher Ron Edmonds once told me, “Improvement depends less on how much we teach and more on whether students actually learn it.”
In many schools, initiative overload dilutes focus. It is easier to measure activity than impact. Too few systems are designed to monitor learning frequently and respond quickly. This challenge is often compounded by teachers planning in isolation, without a consistent, standards-aligned curriculum, leading to gaps in student learning. Schools are busy, yes, but not necessarily effective.
Improving schools does not require more programs, more assessments, or more plans. It requires a disciplined focus on a simple idea: If students are not learning, instruction must change—immediately. That only happens when schools consistently know whether students are learning. Not at the end of the quarter. Not after the test. But every one to two weeks.
Until that becomes common practice, many schools will continue doing what they have always done—working hard, covering content, and hoping results improve. Hope is not a strategy. Measurement and response are.