Q&A: Lessons Learned From Next Generation Learning Challenges
Q&A With Andrew Calkins
Revamping the way a whole school or district meets students' needs requires changing the mindset of educators and experimenting with new approaches that may or may not work.
Over the past four years, the six-year-old Next Generation Learning Challenges grant program has poured millions of dollars into K-12 school models that use technology, personalized learning, and new forms of assessment in an attempt to overhaul the traditional educational process.
Underwritten by organizations including the Bill & Melinda Gates Foundation and The William and Flora Hewlett Foundation, and overseen by the education nonprofit EDUCAUSE, the Learning Challenges program has awarded $40 million, directly or through partners, to 130 schools, districts, and organizations looking to overhaul and disrupt traditional models to boost college readiness and completion, said Andrew Calkins, the deputy director of the grant program. (The Gates Foundation provides support for coverage in Education Week of college- and career-ready standards and the use of personalized learning.)
During that time, grant officials have learned a lot about the challenges, tools, and strategies needed to implement those innovative learning models and evaluate their success. In June, the organization released a report, "Measures That Matter Most," which took a closer look at how Next-Gen educators gauge their success.
Calkins spoke with Education Week about the evolution of grantees' efforts. The interview has been edited for brevity and clarity.
What is the goal of the Next Generation Learning Challenges?
[Our grants] are intended to surface people, organizations, and schools trying to rethink the entire thing: the nature of learning models, the definitions of student success, the organization and budget models necessary to drive the learning models. It's a little different from just thinking of personalized learning—it's part of a bigger opportunity to reimagine the entire experience students have in public schools. Personalized learning is a big part of it, but Next-Gen learning incorporates aspects of what is gradually coming to be thought of as personalized learning, as well as competency-based learning, blended or technology-enabled learning, and experiential learning all pitched around these richer, deeper definitions of student success.
What trends are you seeing around personalized learning and the role it's playing in pushing the entire Next-Gen model forward?
Districts and schools are working on learner profiles in lots of different ways and figuring out how they can know each student better than they have in the past. Some of those efforts are technology-fueled, and some start as early as kindergarten. There's the development of individualized-learning pathways for students, going as fast as they can or as slow as they need. They're doing this through a mixture of learning modalities, sometimes using individual diagnostic prescriptive software or involving some team-based project work, peer-to-peer tutoring, direct instruction in small groups—a whole set of different kinds of experiences. Almost all of them are doing some form of competency-based student progression. Students are not all proceeding as a cohort in lock step.
How are schools measuring student progress with these new strategies?
All of our models are deeply immersed in experimenting with assessment design. Some assessments are the classic ones that we all recognize as tests, but schools are making them part of a much more comprehensive, multimodal set of approaches designed to generate a more nuanced view of what students need, how well they're progressing, and how they themselves can use the assessment data to fuel and inform their own learning.
So how can these Next-Gen assessments home in on how much an individual student is learning, especially if that student is on a personalized learning path?
A whole bucket of [the assessments] belong to the category of performance-based assessment. A student's demonstration of progress or competency happens in the presentation of their work product. Sometimes, that happens in an exhibition or an oral presentation that often includes some form of written work. Sometimes, that happens as part of a team presentation. Sometimes, it is a performance of a certain identified skill set.
Another bucket of assessment is the use of professional observation by teachers. But increasingly, there's interest in peer-to-peer evaluation at the student level and in nuanced perspectives on roles that students have played as part of a team. Another emerging assessment tracks keystrokes. It's diagnostic software that can track exactly where students go when they're making mistakes and how they go about using the system to address that problem. It tells you about their self-efficacy and their readiness to be an effective learner.
How are these Next-Gen schools and districts assessing their own efforts in the personalized learning process?
They are consumed with the responsibility they feel to report out on their performance against this richer, deeper goal set. Everybody can see the scores on state tests, and that's part of the story, but how they measure and articulate the rest of it is very much still an emerging science.
Those converting from existing schools will report out on student behavior, declines in absenteeism and other forms of behavioral issues that lead to suspensions and so on. Many will do self-reported surveys and do a benchmark at the beginning of the year and the end of the year showing interesting differences in student attitudes and student understanding and ability to shape and manage their own learning. It's way too early to settle on any one set of strategies.
Have there been strategies that you have seen that definitely didn't work?
The answer to that is an emphatic yes. We would count all of those kinds of failures as learning points, and therefore as successes that are leading schools to improve their approaches. A number have commented on the trickiness of achieving comparability in performance-based assessment: making sure that when you're evaluating many hundreds of student exhibitions or presentations of knowledge, you're maintaining a consistent and comparably rigorous set of standards even though different people participate in the judgment process. That's been an issue in portfolio-style assessments for some time.
What will be the next big thing in personalized learning and the efforts of the schools you work with?
If there is one big thing, it's the recognition of the importance of a full, deep set of competencies for students to be successful, especially in the uncertain world that they are emerging into from high school. If there's anything we know about the jobs that are going to be created over their lifetimes, it's that they will be asking for higher-level skills of the kind that our schools generally have not focused on.
Vol. 36, Issue 09, Pages 16, 19. Published in Print: October 19, 2016, as "Educational Innovation and Evaluation."