Note: Today’s guest post was written by Mary Brown, a reading intervention specialist at Franklin Local School District. She can be reached at mary.brown@franklinlocalschools.org.
Everywhere you turn in the education community these days, people are talking about data-driven instruction. And that’s great! More information about what students understand and where they still need help is a good thing.
But we’re educators, not data analysts, and sometimes all these numbers can feel like, well, a whole lot of numbers that aren’t connected to real students with real learning successes and struggles. However, by taking a step-by-step approach to analyzing and acting on data, a school or district can turn all those numbers into improved student learning.
As a literacy specialist for nearly two decades, I have had the opportunity to use a slew of intervention programs and, though they were all perfectly fine, it was always incumbent upon me to determine the specific foundational skills students needed to work on to grow and progress toward grade-level benchmarks. For the last four years, my district has truly embraced tech-enabled literacy instruction. Here are six steps we’ve taken to make data work for us.
1) Begin with solid assessments and reports.
We begin with assessments that generate reports to help our literacy team and district collect data and put it to work for our students. The assessments and the data they generate allow for both diagnostic and prescriptive instruction. Unlike in years past, I have every tool I need to provide data-driven instruction, as well as to generate reports for local and state accountability.
The standards report we use illustrates the level of mastery for each Ohio State Standard for every student, as well as their projected growth toward each. The Pathway to Proficiency Report has strong correlations to our state test, and the line recalibrates each time the student is assessed, so teachers instantly know who is on track for reaching proficiency.
2) Establish your benchmarks early...but not too early.
The next step is to assess students to establish benchmarks of where they are when the school year begins. Accurate information means not wasting time on intervention for students who really don't need it, and keeping those students moving forward in their classrooms with daily, rigorous, district-adopted reading instruction.
It can still be a challenge to ensure benchmarks are accurate, so we try to improve the accuracy of early assessments in a few ways.
- First, we allow time for students to get past their “back-to-school mode” after summer vacation by not testing for the first week or two that they’re back.
- Teachers make sure to provide quiet testing environments with limited distractions. Some even offer noise-cancelling headphones.
- Finally, teachers give clear pre-test instructions and explain the importance of the assessment to their students.
3) Look beyond obvious metrics.
Students’ reading proficiency isn’t the only thing you can benchmark during these assessments. For example, we all know that some students will persevere where others may not. To help build their stamina, I track the actual time they spend on the test and reward their effort when they spend a little more.
4) Build groups.
Using the benchmark assessments, we build our reading groups with students scoring above the 40th percentile constituting the non-intervention group, those between the 40th and 25th percentile being on watch, and students who performed below the 25th percentile flagged as being in need or in urgent need of intervention. This school year we are raising those benchmarks.
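The grouping logic above boils down to a few cut scores. Here is a minimal sketch of it in Python; the split between "in need" and "in urgent need" is my assumption (the 10th percentile), since the exact dividing line comes from each district's own benchmark settings:

```python
def reading_group(percentile):
    """Classify a student's benchmark percentile into a reading group.

    Cut scores follow the grouping described above: above the 40th
    percentile means no intervention, 25th-40th is "on watch", and below
    the 25th is flagged for intervention.
    """
    if percentile > 40:
        return "no intervention"
    elif percentile >= 25:
        return "on watch"
    elif percentile >= 10:  # assumed cutoff between need and urgent need
        return "intervention"
    else:
        return "urgent intervention"
```

Keeping the thresholds in one place like this also makes it easy to raise them, as we are doing this school year.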
I form groups so I can target instruction and provide practice for interventions based on Star’s skill recommendations for each range of scores. I also use learning progressions to determine the prerequisite skills my students need to understand and determine the most critical skills they need to learn next.
5) Intervene and monitor.
We set goals for students who need intervention and monitor them for progress every two weeks to ascertain if the interventions are appropriate and effective. Once we have four data points from progress monitoring, we begin building trend lines.
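For readers who want to see what building a trend line from those biweekly data points looks like, here is a simple least-squares sketch in Python (the function name and the four-point minimum are illustrative; any spreadsheet or assessment platform will do the same fit):

```python
def trend_line(scores):
    """Fit a least-squares line to a list of biweekly monitoring scores.

    Returns (slope, intercept). The slope is the average score gain per
    monitoring period, so a positive slope suggests the intervention is
    working. We wait for at least four data points before trusting it.
    """
    if len(scores) < 4:
        raise ValueError("need at least four data points for a trend line")
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept
```

For example, scores of 10, 12, 14, and 16 give a slope of 2.0, meaning the student is gaining about two points per monitoring period.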
With each test, we have immediate access not only to a student's percentile ranking, but also to their grade-level proficiency and their zone of proximal development (ZPD). This allows teachers to gear students' independent and instructional reading assignments to the Lexile range where the level of challenge is optimal and the most reading growth will occur.
A new tool I’m looking forward to using this year is Renaissance’s new custom assessments tile, which will allow me to plan assignments based on the median score of my intervention groups and pull up activities that correlate with the skills my students have demonstrated they need, saving time in planning targeted instruction.
As students are monitored, I like to look back at how each of my intervention students performed during the quarterly benchmarks of the previous year to see if any trends emerge.
6) Abandon habit for choice.
“Not choice, but habit rules the unreflecting herd,” said William Wordsworth. Sometimes teachers, just like anyone else, can fall into the rut and comfort of habit.
As we have learned over the past four years, accurate data applied in a consistent and thoughtful manner can remind us that we have a choice in how we reach out to students, and it can empower that choice significantly.