Opinion Blog

Classroom Q&A

With Larry Ferlazzo

In this EdWeek blog, an experiment in knowledge-gathering, Ferlazzo will address readers’ questions on classroom management, ELL instruction, lesson planning, and other issues facing teachers. Send your questions to lferlazzo@epe.org. Read more from this blog.


How Teachers Can Judge the Credibility of Research

By Larry Ferlazzo — March 21, 2025

We teachers are bombarded with “research-backed” this or “evidence-supported” that.

Maybe we have the time to read it and maybe we don’t.

But what are the questions we should be asking about it?

Today’s post is the second in a three-part series (see Part One here) offering a checklist that teachers can use to judge the credibility of the research behind the actions we are being told we should take in the classroom.

Don’t Waste Teachers’ Time

Cara Jackson currently serves as the president of the Association for Education Finance & Policy. She previously taught in the New York City public schools and conducted program evaluations for the Montgomery County public schools in Maryland.

Educators are often skeptical of claims that a specific program can improve student outcomes. Research evidence could help assess these claims, enabling school districts to make better spending decisions. Yet, educators’ use of research to make purchasing decisions is limited. Even when educators want to use evidence, they have limited time, and companies selling the programs are unlikely to be forthcoming about the limitations of studies that demonstrate effectiveness.

Educators should ask whether the study is designed in a way that allows us to say whether a program caused changes in student outcomes. You’ve probably heard the phrase “correlation isn’t causation,” but what does that really mean?

Correlational studies measure the relationship between two things, but the correlation could be explained by something other than the two things of interest. For example, teachers might see that students who turn in more homework assignments tend to have higher test scores. Homework could cause higher test scores, but it could also be that students who are more motivated are both more likely to do their homework and to do well on tests.
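To make that concrete, here is a small Python simulation in which a hidden trait (motivation) drives both homework completion and test scores, while homework itself has zero causal effect. The two still end up strongly correlated. Every number here is invented for illustration:

# Toy simulation: a confounder (motivation) produces a homework-scores
# correlation even though homework has no causal effect on scores.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
motivation = rng.normal(0.0, 1.0, n)               # unobserved confounder
homework = motivation + rng.normal(0.0, 1.0, n)    # motivated students do more homework...
scores = motivation + rng.normal(0.0, 1.0, n)      # ...and also score higher, with no help from homework

r = np.corrcoef(homework, scores)[0, 1]
print(f"correlation(homework, scores) = {r:.2f}")  # roughly 0.5, despite zero causal effect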

A key question to ask is, “Can we rule out all plausible alternative explanations?” To begin ruling out rival explanations, I would look for a control or comparison group and whether the study accounts for prior performance.


Does the study have a control or comparison group?

If a study only has data from one group of students who took part in a program, changes in outcomes could be explained by many factors other than the program. Imagine that a group of 1st graders receive a reading intervention and their test scores increase between fall and spring. It’s possible that test scores improved because students matured, became more familiar with the test, or read at home. We need to compare them with another group of students to rule out those possibilities.
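A quick simulation shows how much a single-group design can mislead. In this hypothetical sketch, every 1st grader gains 10 points from fall to spring through maturation alone, and the intervention adds only 3 points on top; a study without a comparison group sees the full 13-point gain and has no way to separate the two. The numbers are assumptions, not estimates from any real study:

# Single-group fall-to-spring gains conflate maturation with the program;
# a comparison group isolates the program's contribution.
import numpy as np

rng = np.random.default_rng(1)
n = 200
maturation = 10.0    # points every student gains by spring, program or not
true_effect = 3.0    # the intervention's actual contribution

fall_t = rng.normal(100, 15, n)                                   # program group, fall
fall_c = rng.normal(100, 15, n)                                   # comparison group, fall
spring_t = fall_t + maturation + true_effect + rng.normal(0, 5, n)
spring_c = fall_c + maturation + rng.normal(0, 5, n)

single_group_gain = (spring_t - fall_t).mean()                    # ~13: looks impressive
relative_gain = single_group_gain - (spring_c - fall_c).mean()    # ~3: the program's real effect
print(f"program group's gain alone:  {single_group_gain:.1f}")
print(f"gain relative to comparison: {relative_gain:.1f}")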

The difference between control and comparison groups is a critical one. A control group is formed by random assignment: researchers randomly decide who takes part in a program, or randomness arises naturally, as when a lottery selects participants. With a comparison group, researchers instead compare students who receive the program with a group of similar students who do not. Either way, the goal is two groups that are alike (as seen in the fruit baskets below).

[Figure: two similar baskets of fruit, illustrating the goal of comparable groups. Figure by Cara Jackson]

Does the study account for group differences?

Students in the two groups may be slightly different, and those differences could explain the outcomes. Group differences are particularly important to consider when using comparison rather than control groups, since students in the group taking part in the program may have sought out a learning opportunity.

For example, students who opt into voucher programs, honors or Advanced Placement courses, or gifted programs may have higher motivation or prior performance than the students who make up the comparison group. Accounting for prior performance helps to address concerns you might have that the findings reflect preexisting group differences.

Even if the researchers randomly assigned students, groups could be different by chance. Also, it’s not unusual for some students to have missing data, and the groups of students with complete data might be less similar than the original sample. Accounting for prior performance can help address these concerns, though we might still be concerned about differences between groups that are hard to observe.
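For readers curious what "accounting for prior performance" looks like in practice, here is a minimal sketch: students with higher pretest scores are more likely to opt into a program, so a naive comparison of outcomes overstates the program's effect, while a regression that includes the pretest recovers something close to the truth. The data and parameter values are simulated assumptions, not any study's actual model:

# Adjusting for prior performance: regress the outcome on a treatment
# indicator plus the pretest, so preexisting differences between groups
# don't masquerade as a program effect.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
pretest = rng.normal(0.0, 1.0, n)
treated = (pretest + rng.normal(0.0, 1.0, n) > 0).astype(float)   # higher scorers opt in
true_effect = 0.2
posttest = 0.8 * pretest + true_effect * treated + rng.normal(0.0, 0.5, n)

naive = posttest[treated == 1].mean() - posttest[treated == 0].mean()

X = np.column_stack([np.ones(n), treated, pretest])               # intercept, treatment, pretest
coef, *_ = np.linalg.lstsq(X, posttest, rcond=None)

print(f"naive group difference:    {naive:.2f}")                  # inflated by self-selection
print(f"pretest-adjusted estimate: {coef[1]:.2f}")                # close to the true 0.2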

Example: parent involvement

To illustrate the difference between correlation and causation, let’s take a look at studies of parent involvement. A New York Times article claimed parent involvement was overrated, while a CNN article countered with examples of studies of initiatives to increase parent involvement that improved student outcomes. So, which is right?

The authors of the “parent involvement is overrated” article analyzed longitudinal surveys with nationally representative samples of over 25,000 students in elementary, middle, and high school. They examined 63 different measures of parent involvement, including communicating with teachers or administrators, observing classrooms, helping to pick classes, homework support, and volunteering at school. They concluded that most parental involvement fails to improve student outcomes.

The authors of the counterpoint article cite 10 studies in which participants were randomly assigned to an intervention to increase parent involvement. For example, in a study of 10,000 middle and high school students in West Virginia, researchers randomly assigned parents to receive, or not receive, weekly automated alerts about their child’s missed assignments, grades, and class absences. They found that the alerts reduced course failures by 27 percent, increased class attendance by 12 percent, and increased student retention (Bergman & Chan, 2021).

Why were the conclusions different?

The first study, which analyzed longitudinal survey data, is a correlational study that did not involve assigning parents or students to different groups. It’s possible that parent involvement does have a negative impact on student achievement—if, for example, parents misunderstood the subject they were trying to help with or if their involvement caused their child severe stress. But correlational studies cannot rule out alternative explanations for the relationship between parent involvement and student achievement.

One alternative explanation has to do with parents self-selecting into different levels of involvement. Parents might become more involved when they sense their child needs help and less involved when their child is doing well. While the authors control for prior student achievement, such controls might not account for all differences between students whose parents are more or less involved.

Student characteristics such as motivation, or social and behavioral issues, might have prompted greater parental involvement. If so, the negative correlation between parent involvement and student outcomes could exist only because parents get more involved when their children are struggling in ways that cause poor achievement.

In contrast, studies in the counterpoint article can rule out self-selection as an explanation for their findings. Parents did not self-select into receiving the weekly alerts: Rather, the researchers randomly decided who did and did not receive alerts and then compared the two groups. As a result, we can be reasonably confident that introducing the weekly alerts caused reduced course failures, increased class attendance, and increased student retention. For educators interested in improving such outcomes, the study demonstrates effectiveness of a specific practice for middle and high school students.
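As a rough sketch of why randomization supports that confidence: when a coin flip decides which families get the alerts, the two groups differ on average only in the alerts themselves, so a simple comparison of failure rates estimates the causal effect. In the simulation below, the 20 percent baseline failure rate is an invented assumption; the 27 percent relative reduction mirrors the study’s reported result:

# Random assignment removes self-selection: compare failure rates directly.
import numpy as np

rng = np.random.default_rng(3)
n = 10000
alerts = rng.integers(0, 2, n).astype(bool)          # coin flip, not self-selection

p_no_alerts = 0.20                                   # assumed baseline failure rate
p_alerts = p_no_alerts * (1 - 0.27)                  # 27% relative reduction, as reported
failed = rng.random(n) < np.where(alerts, p_alerts, p_no_alerts)

rate_t = failed[alerts].mean()
rate_c = failed[~alerts].mean()
print(f"failure rate with alerts:    {rate_t:.3f}")
print(f"failure rate without alerts: {rate_c:.3f}")
print(f"relative reduction:          {1 - rate_t / rate_c:.0%}")  # about 27%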

Implications for educational leaders

We should not waste instructional time or squander teachers’ goodwill by spending time and money on programs that don’t help anyone but the companies selling them. This requires education leaders to know the difference between correlation and causation. When examining evidence, ask:

  • Does this study include a control or comparison group?
  • Does it account for differences between groups that could explain away the findings?
  • How confident am I that the study rules out other explanations for the findings?

In the next post, I’ll move on to another question: How large a causal effect can we reasonably expect from an educational program?


Thanks to Cara for contributing her thoughts!

Consider contributing a question to be answered in a future post. You can send one to me at lferlazzo@epe.org. When you send it in, let me know if I can use your real name if it’s selected or if you’d prefer remaining anonymous and have a pseudonym in mind.

You can also contact me on Twitter at @Larryferlazzo.

Just a reminder: You can subscribe and receive updates from this blog via email. And if you missed any of the highlights from the first 13 years of this blog, you can see a categorized list here.

The opinions expressed in Classroom Q&A With Larry Ferlazzo are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.
