A recent study from a federally funded research center shows that small adjustments to how a state calculates the ratings of child-care providers can result in big changes in how those providers are ranked. More research is needed to determine which calculation approach accurately captures the differences among providers, the study says.
Michigan, a 2013 Race to the Top-Early Learning Challenge grant recipient, has been working with REL-Midwest to study the state’s rating system, which measures five domains of early-childhood quality. In June 2013, Michigan made changes to its rating-system calculations that dramatically redistributed the star ratings of the 11,000 participating programs, REL-Midwest found. Instead of having many one-star programs and many top-rated, or five-star, programs, providers were more evenly distributed across the two-, three-, and four-star categories, according to the research.
“These very small changes [to the rating system] completely changed the landscape of quality ratings across the state, even though nothing changed at the program level,” said Ann-Marie Faria, the report’s lead author.
Quality rating and improvement systems, also known as QRIS, have been a hot topic in early-childhood education for more than a decade. Recently, a huge boost came from the Race to the Top-Early Learning Challenge grants, which prioritized implementation and expansion of these rating systems. A QRIS allows a state to rank its child-care programs, usually using a system of one to five stars. Parents can then easily find the top-ranked programs near them, and child-care providers can use the system as a guide for improvement.
Self-Evaluation vs. Outside Assessment
The researchers also found that the evaluation scores child-care providers gave themselves tended to differ from the scores outside observers gave. Under the Michigan system, providers are asked to rate themselves on structural qualities, such as how well the program handles administration and management. Providers scored themselves highly on such measures.
But outside observers from the state were asked to evaluate a subset of providers on how teachers interacted with students, not structural measures. When providers were evaluated on those human interactions, they did not score as highly.
Those human interactions are considered the most important part of what child-care providers do. But in Michigan, as in many other states, not every early-childhood program goes through the expensive and time-consuming process of outside evaluation. The outside-observation process kicks in only for centers that request it and that are close to achieving the top rank.
These findings can help other states as they determine the right mix of factors—for example, teacher credentials, child-teacher ratios, and measures of child-teacher interaction—that should go into their rating systems. While the average person can generally tell the difference between a top-ranked program and one that meets only minimum standards, it’s harder to determine what separates a two-star program from a three-star program, Faria said.
And states are still working to determine whether a top-ranked program leads to better outcomes for the children enrolled there. Some research has found little connection between star rankings and learning outcomes.
A version of this news article first appeared in the Early Years blog.