
Classroom Q&A

With Larry Ferlazzo

In this EdWeek blog, an experiment in knowledge-gathering, Ferlazzo will address readers’ questions on classroom management, ELL instruction, lesson planning, and other issues facing teachers. Send your questions to lferlazzo@epe.org. Read more from this blog.


Correlation? Causation? Effect Sizes? What Should a Teacher Trust?

By Larry Ferlazzo — June 10, 2025

Today’s post is the third, and final, one in a series providing guidance to teachers on how to interpret education research.

Who Cares About Effect Sizes?

Cara Jackson currently serves as the president of the Association for Education Finance & Policy. She previously taught in the New York City public schools and conducted program evaluations for the Montgomery County public schools in Maryland:

Education leaders need to know how much to expect a program or practice to improve student outcomes. Such information can inform decisions about what programs to invest in and which ones to stop, saving teachers’ time and energy for programs with the most potential.

In this post, I discuss what “effect sizes” are, why effect sizes from well-designed studies are not the same as correlational evidence, and why that matters.

What is an “effect size,” and how is it measured?

An effect size is a standardized measure of how large a difference or relationship between groups is, expressed in standard deviation units. While researchers may translate standard deviation units into “days of school” or “months of learning” for practitioner audiences, research suggests such translations can lead to erroneous interpretations and improbable conclusions.

Such translations can also be manipulated: an effect that is small in standard deviation units can be presented in days, weeks, or months of learning to make the intervention look better than it is.

One study reported that compared with traditional public school students, charter school students’ performance is equivalent to 16 additional days of learning in reading and six days in math. But as pointed out by Tom Loveless, these are quite small differences when expressed in standard deviation units.

For that reason, I focus here on interpreting the standard deviation metric. If you see an effect size presented in “days of school” or “months of learning,” be aware that this could be misleading.
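To make the standard deviation metric concrete, here is a minimal sketch of one common effect-size formulation, Cohen's d, which divides the difference in group means by a pooled standard deviation. The test scores below are invented for illustration, and published studies may use other variants (for example, adjusting for covariates):

```python
import math

def cohens_d(treatment, control):
    """Mean difference between groups divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = sum(treatment) / n1, sum(control) / n2
    # Sample variances (n - 1 denominator)
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical test scores, invented for illustration.
treatment = [72, 85, 78, 90, 81, 76, 88, 83]
control = [70, 82, 77, 87, 79, 74, 86, 80]
print(round(cohens_d(treatment, control), 2))  # prints 0.38
```

Notice that an effect of 0.38 standard deviations here corresponds to a raw gap of only about 2 points; the standardized metric, not the raw gap or a "days of learning" translation, is what allows comparison across studies.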

Why does “correlation, not causation” matter for effect sizes?

In studies designed to identify the causal effect of a program, effect sizes as low as 0.10 standard deviations are considered large (Kraft, 2019). This may come as a surprise to fans of Hattie’s Visible Learning, which argues that the “zone of desired effects” is 0.40 and above. But that benchmark makes no distinction between correlational and causal evidence.

As noted in the previous post, the correlation between some program or practice and student outcomes can reflect a lot of different factors other than the impact of the program, such as student motivation. If we want to know whether the program causes a student outcome, we need a comparison group that:

  1. hasn’t yet received the program, and
  2. is similar to the group of students receiving the program.

The similarity of groups matters because any difference between groups offers an alternative explanation for the relationship between the program and student outcomes. For example, we would want both groups to have similar levels of academic motivation, because differences in motivation could explain differences in outcomes. Correlational studies can control for some characteristics of students that we can observe and measure, but they do not rule out all alternative explanations.
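The motivation example can be made concrete with a small, entirely made-up simulation. In the sketch below the program has zero true effect on scores, yet because more-motivated students both enroll more often and score higher anyway, a naive comparison of enrolled and non-enrolled students still shows a sizable gap:

```python
import random

random.seed(1)

# Illustrative simulation with made-up numbers: the program has ZERO true
# effect on scores, but more-motivated students both enroll more often and
# score higher anyway.
students = []
for _ in range(10_000):
    motivation = random.gauss(0, 1)
    enrolled = random.random() < (0.7 if motivation > 0 else 0.3)
    score = 70 + 5 * motivation + random.gauss(0, 5)  # no program term at all
    students.append((enrolled, score))

in_program = [score for enrolled, score in students if enrolled]
out_of_program = [score for enrolled, score in students if not enrolled]
gap = sum(in_program) / len(in_program) - sum(out_of_program) / len(out_of_program)
print(f"Score gap despite zero true effect: {gap:.1f} points")
```

The gap here is pure selection: it reflects who chose the program, not what the program did. A well-chosen comparison group is what breaks this link.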

The R3I Method for reading a research paper recommends looking for certain keywords in the methods section to distinguish between correlation and causation. In studies designed to make causal inferences, the methods section will likely mention one or more of the following words: experiment, randomized controlled trial, random assignment, or quasi-experimental.

Look for a table that describes the students who receive the program and students not receiving the program. Particularly if the study is quasi-experimental, it’s important to know whether students are similar prior to participating in the program. For example, a study of a program implemented with 4th grade students might use 3rd grade standardized-test scores to assess whether the groups are similar. This helps rule out alternative explanations for the findings.
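As a sketch (with invented 3rd grade scale scores), a baseline comparison can be as simple as checking group means before the program begins; if the groups already differ substantially at baseline, later differences are harder to attribute to the program:

```python
# Hypothetical prior-year (3rd grade) test scores for each group,
# invented for illustration.
program_group = [310, 325, 298, 340, 315, 307, 333, 321]
comparison_group = [308, 322, 301, 338, 317, 305, 331, 324]

def mean(xs):
    return sum(xs) / len(xs)

baseline_gap = mean(program_group) - mean(comparison_group)
print(f"Program group baseline mean:    {mean(program_group):.1f}")
print(f"Comparison group baseline mean: {mean(comparison_group):.1f}")
print(f"Baseline gap: {baseline_gap:.1f} points")
```

A small baseline gap supports, but does not by itself prove, that the groups are comparable; published studies typically also standardize this gap and check other characteristics, such as demographics and prior attendance.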

In “The Princess Bride,” Inigo Montoya says, “You keep using that word. I do not think it means what you think it means.” While effect sizes are influenced by many factors, distinguishing between correlation and causation is fundamental to a shared understanding of the meaning of the word “effect.” And that meaning has implications for effect-size benchmarks.


Why do effect-size benchmarks matter?

The issue is not simply a distaste for effect sizes larger than 1.0. As noted by past contributors to EdWeek, “Holding educational research to greater standards of evidence will very likely mean the effect sizes that are reported will be smaller. But they will reflect reality.”

Confusing correlation and causation may lead decisionmakers to have unrealistic expectations for how much improvement a program can produce. These unrealistic expectations could leave educators disappointed and pessimistic about the potential for improvement. Education leaders may avoid implementing programs or stop programs with solid evidence of effectiveness because they perceive the potential improvement as too small.

Key takeaways

Questionable translations of research findings and the presentation of correlations as “effects” can mislead people about whether a program actually causes an improvement in student outcomes. Here are three things to look for in different sections of a study.

  • Methods: Does the study include a comparison group of students who did not receive the program or practice?
  • Findings: Does the study describe the groups in the study and whether they looked similar prior to the program or practice being implemented?
  • Results or technical appendix: Does the study include the effect size in standard deviation units?

Thanks to Cara for contributing her thoughts!

Consider contributing a question to be answered in a future post. You can send one to me at lferlazzo@epe.org. When you send it in, let me know if I can use your real name if it’s selected or if you’d prefer remaining anonymous and have a pseudonym in mind.

You can also contact me on Twitter at @Larryferlazzo.

Just a reminder: you can subscribe and receive updates from this blog via email. And if you missed any of the highlights from the first 13 years of this blog, you can see a categorized list here.


The opinions expressed in Classroom Q&A With Larry Ferlazzo are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.


