Opinion Blog

Classroom Q&A

With Larry Ferlazzo

In this EdWeek blog, an experiment in knowledge-gathering, Ferlazzo will address readers’ questions on classroom management, ELL instruction, lesson planning, and other issues facing teachers. Send your questions to lferlazzo@epe.org. Read more from this blog.

Teaching Opinion

Correlation? Causation? Effect Sizes? What Should a Teacher Trust?

By Larry Ferlazzo — June 10, 2025 5 min read

Today’s post is the third, and final, one in a series providing guidance to teachers on how to interpret education research.

Who Cares About Effect Sizes?

Cara Jackson currently serves as the president of the Association for Education Finance & Policy. She previously taught in the New York City public schools and conducted program evaluations for the Montgomery County public schools in Maryland:

Education leaders need to know how much to expect a program or practice to improve student outcomes. Such information can inform decisions about what programs to invest in and which ones to stop, saving teachers’ time and energy for programs with the most potential.

In this post, I discuss what “effect sizes” are, why effect sizes from well-designed studies are not the same as correlational evidence, and why that matters.

What is an “effect size,” and how is it measured?

An effect size is a standardized measure of how large a difference or relationship is between groups. Researchers use standard deviation units to measure the difference. While researchers may translate the standard deviation units into “days of school” or “months of learning” for the practitioner audience, research suggests this can lead to erroneous interpretations or unreliable and improbable conclusions.
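To make the standard-deviation metric concrete, here is a minimal illustrative sketch of the most common effect-size calculation (a standardized mean difference, often called Cohen's d). The test scores are invented for illustration and do not come from any study mentioned in this post.

```python
import math

def cohens_d(treatment, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled SD."""
    nt, nc = len(treatment), len(control)
    mt = sum(treatment) / nt
    mc = sum(control) / nc
    var_t = sum((x - mt) ** 2 for x in treatment) / (nt - 1)
    var_c = sum((x - mc) ** 2 for x in control) / (nc - 1)
    pooled_sd = math.sqrt(((nt - 1) * var_t + (nc - 1) * var_c) / (nt + nc - 2))
    return (mt - mc) / pooled_sd

# Hypothetical test scores, for illustration only
treated = [74, 78, 81, 69, 77, 80, 72, 76]
control = [71, 75, 78, 66, 74, 77, 70, 73]
print(round(cohens_d(treated, control), 2))
```

Because the difference is divided by the pooled standard deviation, the result is unit-free: two effect sizes can be compared even when the underlying tests use different score scales.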

Such translations can also be manipulated: an effect that is small in standard deviation units might be presented in days, weeks, or months of learning to make an intervention look more impressive than it is.

One study reported that, compared with traditional public school students, charter school students’ gains were equivalent to 16 additional days of learning in reading and six days in math. But as pointed out by Tom Loveless, these are quite small differences when expressed in standard deviation units.

For that reason, I focus here on interpreting the standard deviation metric. If you see an effect size presented in “days of school” or “months of learning,” be aware that this could be misleading.

Why does “correlation, not causation” matter for effect sizes?

In studies designed to identify the causal effect of a program, effect sizes as low as 0.10 standard deviations are considered large (Kraft, 2019). This may come as a surprise to fans of Hattie’s Visible Learning, which argues that the “zone of desired effects” is 0.40 and above. But that benchmark is based on making no distinction between correlational and causal evidence.

As noted in the previous post, the correlation between some program or practice and student outcomes can reflect a lot of different factors other than the impact of the program, such as student motivation. If we want to know whether the program causes a student outcome, we need a comparison group that:

  1. hasn’t yet received the program, and
  2. is similar to the group of students receiving the program.

The similarity of groups matters because any difference between groups offers an alternative explanation for the relationship between the program and student outcomes. For example, we would want both groups to have similar levels of academic motivation, because differences in motivation could explain differences in outcomes. Correlational studies can control for some characteristics of students that we can observe and measure, but they do not rule out all alternative explanations.

The R3I Method for reading a research paper recommends looking for certain keywords in the methods section to distinguish between correlation and causation. In studies designed to make causal inferences, the methods section will likely mention one or more of the following words: experiment, randomized controlled trial, random assignment, or quasi-experimental.

Look for a table that describes the students who receive the program and students not receiving the program. Particularly if the study is quasi-experimental, it’s important to know whether students are similar prior to participating in the program. For example, a study of a program implemented with 4th grade students might use 3rd grade standardized-test scores to assess whether the groups are similar. This helps rule out alternative explanations for the findings.
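The baseline check described above can be sketched as a standardized difference in prior-year scores between the two groups. The data, the function, and the 0.25 SD rule of thumb (a threshold similar to the one the What Works Clearinghouse uses for baseline equivalence) are illustrative assumptions, not taken from any study in this post.

```python
import statistics

# Hypothetical 3rd grade baseline scores for students who later did / did not
# receive a 4th grade program (invented data, for illustration only).
program_baseline = [212, 220, 208, 215, 218, 210, 216, 214]
comparison_baseline = [211, 219, 207, 216, 217, 209, 215, 213]

def standardized_baseline_diff(a, b):
    """Difference in baseline means, scaled by the combined sample SD."""
    pooled_sd = statistics.stdev(a + b)  # simple pooled spread for illustration
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

diff = standardized_baseline_diff(program_baseline, comparison_baseline)
print(round(diff, 2))
# A common rule of thumb: a baseline gap above ~0.25 SD casts doubt on
# whether the comparison group is a fair benchmark.
print(abs(diff) <= 0.25)
```

A baseline difference near zero supports the claim that the groups started out similar; a large one means outcome differences may simply reflect who was in each group.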

In “The Princess Bride,” Inigo Montoya says, “You keep using that word. I do not think it means what you think it means.” While effect sizes are influenced by many factors, distinguishing between correlation and causation is fundamental to a shared understanding of the meaning of the word “effect.” And that meaning has implications for effect-size benchmarks.


Why do effect-size benchmarks matter?

It’s not that I simply dislike effect sizes larger than 1.0. As noted by past contributors to EdWeek, “Holding educational research to greater standards of evidence will very likely mean the effect sizes that are reported will be smaller. But they will reflect reality.”

Confusing correlation and causation may lead decisionmakers to have unrealistic expectations for how much improvement a program can produce. These unrealistic expectations could leave educators disappointed and pessimistic about the potential for improvement. Education leaders may avoid implementing programs or stop programs with solid evidence of effectiveness because they perceive the potential improvement as too small.

Key takeaways

Questionable translations of research findings and presenting correlations as “effects” can mislead people about whether a program causes an impact on student outcomes. Here are three things to look for in different sections of a study.

  • Methods: Does the study include a comparison group of students who did not receive the program or practice?
  • Findings: Does the study describe the groups in the study and whether they looked similar prior to the program or practice being implemented?
  • Results or technical appendix: Does the study include the effect size in standard deviation units?

Thanks to Cara for contributing her thoughts!

Consider contributing a question to be answered in a future post. You can send one to me at lferlazzo@epe.org. When you send it in, let me know if I can use your real name if it’s selected or if you’d prefer remaining anonymous and have a pseudonym in mind.

You can also contact me on Twitter at @Larryferlazzo.

Just a reminder: you can subscribe and receive updates from this blog via email. And if you missed any of the highlights from the first 13 years of this blog, you can see a categorized list here.

The opinions expressed in Classroom Q&A With Larry Ferlazzo are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.
