There is an increasingly popular narrative in the armchair-ed-reform community today which suggests that the primary barrier to improved student results is teachers’ ignorance of effective teaching strategies, or their failure to implement them.
If only—the thinking goes—if only we could identify the best teaching practices used by the best teachers and get all teachers to implement them, student learning would increase, achievement gaps would close, and all would be right with the world.
Let me start by laying my cards on the table: I believe that educational performance is determined in large part by teacher practice. In other words, how teachers carry out their work matters a great deal for students; this much is abundantly clear from research.
But what’s also clear, as Mary Kennedy argues in her brilliant article on fundamental attribution error in measuring teaching quality, is that student learning outcomes depend on a broad array of factors in addition to teacher practice. Learning occurs in an ecosystem that includes other teachers, students’ peers, school climate, the broader community, and (above all else) parents and the home environment.
The armchair reformers want us to believe that education works like this:
Teacher + Student = Learning
If we aren’t getting the level of learning we want, and we can’t change much about the student, the teacher is the logical place to focus. But what this simplistic model omits is the ecosystem in which the teaching-learning relationship exists.
Value-added models have shown that teachers are responsible for 20-40% of the variance in student performance, meaning 60-80% of student learning is dependent on factors other than the teacher. So let’s start by acknowledging that teaching simply does not have the power to solve all of society’s problems. Poverty matters. Parents matter. School culture matters. Student health matters. Teachers matter too, but they are far from the only salient factor in student learning.
I recently finished Phil Rosenzweig’s excellent book The Halo Effect and the Eight Other Business Delusions That Deceive Managers, which argues that studying highly successful exemplars does not necessarily give us transferable lessons that we can follow in order to ensure our success. While Rosenzweig’s book is about the financial performance of corporations, the “delusions” he describes are strikingly common in educational research and facile applications thereof. From an Amazon reviewer’s summary, the nine delusions are:
1. Halo Effect: Tendency to look at a company's overall performance and make attributions about its culture, leadership, values, and more.
2. Correlation and Causality: Two things may be correlated, but we may not know which one causes which.
3. Single Explanations: Many studies show that a particular factor leads to improved performance. But since many of these factors are highly correlated, the effect of each one is usually less than suggested.
4. Connecting the Winning Dots: If we pick a number of successful companies and search for what they have in common, we'll never isolate the reasons for their success, because we have no way of comparing them with less successful companies.
5. Rigorous Research: If the data aren't of good quality, the data size and research methodology don't matter.
6. Lasting Success: Almost all high-performing companies regress over time. The promise of a blueprint for lasting success is attractive but unrealistic.
7. Absolute Performance: Company performance is relative, not absolute. A company can improve and fall further behind its rivals at the same time.
8. The Wrong End of the Stick: It may be true that successful companies often pursued highly focused strategies, but highly focused strategies do not necessarily lead to success.
9. Organizational Physics: Company performance doesn't obey immutable laws of nature and can't be predicted with the accuracy of science - despite our desire for certainty and order.
Of course, these are basic tenets of scientific inquiry. Anyone who has been trained in any research discipline should recognize the pitfalls on this list. Most jarring in Rosenzweig’s analysis, though, is the prevalence of halo-ridden thinking in popular books such as Good to Great, which (though a business book) has been well-received and widely read in the education sector as a blueprint for improvement.
The halo effect works like this: When we see an outstanding success, we try to imitate it, assuming that the attributes we noticed caused the original success and will work for us as well. The problem is that we knew about the success to begin with, and went looking for a plausible explanation. We assume that successful people do everything right, simply because they are successful overall. For example, if test scores in a school are high, you might assume the principal is a good communicator, when in fact there is no evidence that this is the case.
When we then try to bring about improvement by imitating the successful cases, we often imitate the wrong things. The success of a company, teacher, or school depends on a wide array of factors (including unnoticed internal attributes, chance, external influences, and other things we can't control), not necessarily the attributes we noticed.
When we know in advance which teachers are successful, and attempt to identify what they do differently, we may not identify practices that it would be helpful to imitate; we may instead be identifying factors that are correlated with success but are not the cause of it. Rosenzweig illustrates this brilliantly by describing the effect sizes of several different “best practices,” each of which explains 20-30% of the variance in performance. After describing effect sizes that total 100%, he asks, in effect, “Are these factors the ONLY factors that influence performance?” Of course not—there are more than three or four factors that determine a company’s performance. When we start adding factors and quickly reach 100%, we know we’re examining factors that are correlated with each other, not factors that explain separate pieces of the performance pie.
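Rosenzweig's point can be sketched with a toy simulation (the numbers below are illustrative, not from his book): if three "best practices" are really just noisy reflections of one underlying factor, each one taken alone appears to explain about 25% of outcome variance, so the naive sum is about 75%. Put them in a regression together and their combined explanatory power is far smaller, because they overlap.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# One underlying factor (call it overall school quality) drives everything.
quality = rng.normal(size=n)

# Three "best practices" that are each just noisy views of that same factor.
practices = [quality + rng.normal(size=n) for _ in range(3)]

# The outcome also depends on quality, plus lots of other unrelated stuff.
outcome = quality + rng.normal(size=n)

# Individually, each practice "explains" roughly 25% of outcome variance (r^2),
# so the naive sum across the three is roughly 75%.
individual = [np.corrcoef(p, outcome)[0, 1] ** 2 for p in practices]
naive_sum = sum(individual)

# Jointly, a regression on all three explains much less than the naive sum,
# because the three practices are proxies for the same underlying factor.
X = np.column_stack([np.ones(n)] + practices)
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
resid = outcome - X @ beta
joint_r2 = 1 - resid.var() / outcome.var()

print(f"naive sum of individual r^2: {naive_sum:.2f}")
print(f"joint R^2 from regression:   {joint_r2:.2f}")
```

With these assumptions the naive sum comes out near 0.75 while the joint R² sits near 0.38: the pieces of the pie were never separate pieces to begin with.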
Let’s take a look at Bob Marzano’s bestselling book School Leadership That Works, which is certainly a helpful resource for principals. On pp. 42-43, Marzano lists 21 “responsibilities” and the correlation each has with student academic achievement; “communication,” for example, has a reported correlation of .23. If we pretend these correlations are separate slices of the performance pie and add all 21 of them up, we get 506%. In other words, if we believe that we can simply implement best practices and get perfect results, Marzano’s research seems to tell us something even better (yet totally nonsensical): We’ll get perfect results times five. Clearly, our model is wrong: Best practices make a difference, but they do not ensure results.
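A quick back-of-the-envelope check shows why that 506% total cannot be a sum of variance shares. Only the .23 correlation and the 506% figure come from the table as cited above; the rest is arithmetic. A correlation is not a percentage of variance explained; the variance share is the correlation squared, and even squared values can't simply be added when the underlying factors overlap.

```python
# A correlation r is not a share of variance explained; that share is r**2.
r_communication = 0.23                  # reported correlation for "communication"
variance_share = r_communication ** 2   # about 0.053, i.e. only ~5% of variance

# Treating the 21 correlations as percentages and summing them gives 506%,
# which implies an average correlation of about .24 per responsibility.
average_r = 5.06 / 21                   # about 0.241

print(f"variance share for r=.23: {variance_share:.3f}")
print(f"implied average correlation: {average_r:.3f}")
```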
But what if we fix our model? What if we could implement all of these best practices? How much of a difference would that make? Rosenzweig cites a different study that found that implementing all known best practices would lead to an improvement in performance of around 10%. Not exactly compelling, but much more realistic.
I have no quarrel with efforts to learn more about effective teaching, learning, and school leadership. As a scholar of school leadership, I am deeply committed to exploring ways leaders can bring about better results for our students.
But when we think we can brute-force our way to better schools simply by forcing teachers to imitate certain practices, we’re leaving out many pieces of the puzzle, such as the conditions under which educators work, and who the educators are in the first place.
My hunch is that the greatest potential for improving educational outcomes lies at two levels:
1. Improving our systems, the widespread conditions under which teaching and learning occur, and
2. Improving the caliber of people we attract and retain in the profession.
The calls to improve student learning solely through the identification of best practices and the forced implementation of these practices are growing louder, but are as hollow as they’ve always been.
The opinions expressed in On Performance are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.