This post originally appeared in the August issue of JSD.
Last year, I bought my parents a new gadget for Christmas called a Fitbit. Like a supercharged pedometer, the Fitbit is a small device that they wear during the day and even through the night. It collects vast amounts of data, including the number of steps they take, calories burned, and information on sleep patterns. The data syncs with an online profile so that my parents can see how active they have been over the last week, month, or year. Perhaps the most ingenious feature is an icon of a flower that appears to grow or shrink depending on how active my parents have been. It’s an elegant and simple feedback mechanism that not only provides a status report, but also presents information that motivates my parents to change their exercise behavior.
Could a similarly structured model of data-based feedback be designed to improve professional learning practices?
In schools and systems, we have no shortage of valuable data. Annual student performance data on state assessments are reported each fall. Formative assessments provide more regular benchmark data on student progress. Student attendance, behavior, and grade data are readily accessible. Similarly, data on teacher and administrator practice are collected more systematically. New educator evaluation systems and rubrics based on professional standards promise to generate more information on the state of our practice. Surveys on educator beliefs, school climate, and leadership provide important data on the views of teachers, administrators, parents, and students on school and district culture.
How do we make these data actionable for professional learning?
How can data, like the flower icon on the Fitbit, provide a simple snapshot of progress for professional learning practices? Here are two ideas for moving toward this type of feedback model:
1. Use implementation and outcome data to make professional learning decisions.
Evaluation of professional learning typically assesses participant reaction through questionnaires or feedback forms, but seldom applies deeper levels of information, including participant use of new knowledge and skills and the effect on student learning outcomes (Guskey). Mapping the connections among student data, evidence of educator practice, and professional learning is complex, but it can support a more robust and refined feedback model.
2. Identify metrics that are grounded in current research.
In a JSD article published last year, Douglas Reeves and Tony Flach wrote that they observed many schools where “the availability of data is inversely proportional to meaningful analysis.” Similarly, it is easy to become overwhelmed by feedback data from practitioners. Using research to identify the key metrics that should be collected can be helpful in streamlining analysis and generating actionable data.
These are not simple tasks, but ones that require collaborative thinking and problem solving by practitioners and researchers. By drawing on this expertise, we can create the simple and elegant model of feedback that helps practice blossom at all levels.
President, Learning Forward
The opinions expressed in Learning Forward’s PD Watch are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.