
Personalized Learning Pilot Program Reports Gains in Literacy Scores

By Sarah Schwartz — March 26, 2018 4 min read


Students saw significant gains in literacy scores in a personalized learning pilot program in Chicago schools, according to a new report from LEAP Innovations, a nonprofit that supports tech-powered personalized learning.

LEAP’s year-long study tested literacy and math personalized learning ed-tech products with students in grades 3-8 across 14 schools in Chicago. At the end of the 2015-16 academic year, researchers collected NWEA Measures of Academic Progress (MAP) scores for the students in the pilot cohorts and for a comparison group of students attending traditional public and charter schools in Chicago.

In the five schools piloting literacy programs, students in the pilot cohort scored, on average, 2.94 points higher on the literacy MAP than students in a comparison group—an increase of 13 percentile points. The eight schools piloting math programs saw mixed results, with no significant difference between students’ scores in the pilot network and comparison group.

A LEAP research team conducted the study, with design and analysis support from an external advisory board including members from RAND Education and researchers from Stanford University, Loyola University Chicago, and DePaul University.

The LEAP program is selective: Both schools and ed-tech companies have to apply to be part of the experimental cohort. The organization chose five public schools, eight charter schools, and one Archdiocese school in Chicago, resulting in a demographic breakdown that was representative of the city’s student population, said Jake Williams, a research analyst with the project.

Providers representing the 16 approved tools met with cohort educators at a “match day,” where they presented demos tailored to individual school needs. Each school chose its own subject area focus—literacy or math—and the approved product that it would use. Altogether, the pilot cohorts chose eight of the 16 tools—Lexia Reading Core5 and myON for literacy, and six different math platforms.

Looking at the data by product, only one tool—Lexia Reading Core5, piloted in four schools—showed a significant positive effect on student achievement. The other literacy program, myON, which was piloted in only one school, did not show a statistically significant impact on scores. As results from math cohorts varied greatly across schools, LEAP plans to more closely monitor individual classroom implementation plans in future pilots.

To determine cohort students’ MAP score gains, researchers used a method of data analysis called propensity score matching. They compared the MAP scores of cohort students to the scores of statistically very similar students who weren’t participating in the LEAP program—controlling for prior achievement and characteristics like grade level, gender, race, and free and reduced-price lunch status. (When a randomized experiment isn’t possible, researchers say propensity score matching can be used to estimate the effects of an intervention.)
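For readers curious about the method, the matching approach described above can be sketched in a few lines of Python. This is a minimal, generic illustration of propensity score matching on synthetic data—the covariates, model, and matching rule here are assumptions for demonstration, not LEAP’s actual analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic student records (hypothetical fields; LEAP's actual
# covariates and model specification are not described in the article).
n = 500
X = np.column_stack([
    rng.integers(3, 9, n),       # grade level (3-8)
    rng.integers(0, 2, n),       # gender indicator
    rng.integers(0, 2, n),       # free/reduced-price lunch indicator
    rng.normal(200.0, 15.0, n),  # prior MAP score
])
treated = rng.integers(0, 2, n).astype(bool)  # in the pilot cohort?
# Simulated end-of-year MAP score with a small treatment effect baked in.
y = X[:, 3] + 5.0 + 3.0 * treated + rng.normal(0.0, 5.0, n)

# 1. Estimate each student's propensity score: P(treated | covariates).
model = LogisticRegression(max_iter=1000).fit(X, treated)
ps = model.predict_proba(X)[:, 1]

# 2. Match each pilot student to the non-pilot student with the
#    closest propensity score (1-nearest-neighbor matching).
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

# 3. Estimate the effect as the mean outcome difference across
#    the matched pairs.
effect = (y[treated] - y[~treated][idx.ravel()]).mean()
print(f"estimated effect: {effect:.2f}")
```

The idea is that, after matching on the propensity score, the two groups should look alike on the measured covariates, so differences in outcomes can be more credibly attributed to the intervention—though, unlike randomization, this only accounts for the characteristics the researchers observed.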

Importance of Implementation Support

This study reports the findings from the second group of LEAP cohorts. The first pilot in the 2014-15 school year—which only tested literacy materials, not math—also showed student gains, though the effect size was smaller: a 1.07 point increase in test scores, corresponding to 6 percentile points.

The gap between the first study and the second demonstrates how important professional development and educator support are to personalized learning, said Beth Herbert, the chief of staff at LEAP Innovations, in an interview. Any implementation, she said, “really needs to be about the personalized learning practices: the teaching and learning strategies that personalize the learning experience for the students.”

In the first pilot, educators attended a match day and then attended one or two professional development sessions about the purpose of the project and how to use their ed-tech product, said Herbert.

In the second year, the supports were far greater. Schools received semester-long professional development courses, starting in the winter before the pilot school year, that provided guidance for staffing, scheduling, and teaching strategies. Coaching and professional development for teachers and their teams continued throughout the pilot year. And before making a final decision on a personalized learning product, schools ran “mini-pilots” to decide which one best fit their needs.

This structured planning time gave teachers the tools to make more informed choices about what platform to use, and a “better runway” to plan their implementation strategies, said Herbert. Teachers created new routines and fostered a new instructional culture.

These results are consistent with previous studies of personalized learning. The RAND Corporation, which has studied the topic in depth, has found some evidence of small achievement gains in personalized learning implementations—but in all of these studies, schools received extra funding or support for organizational redesigns.

And some preliminary evidence from the LEAP pilot suggests that introducing the tech without the teacher training wouldn’t produce the same outcomes. Students who attended schools housing pilot network cohorts, but were not in a cohort themselves, didn’t see as large an effect on test scores as cohort students, even though they were using the same ed-tech products. These results are only preliminary, the report notes, but LEAP plans to further investigate the effects of implementation fidelity in future pilots.


A version of this news article first appeared in the Digital Education blog.