A new study on child-care rating systems appears to bolster concerns among some in the early-learning field that the ratings generated by those systems are only tenuously connected to learning outcomes.
The researchers, who were from several universities, found that children attending highly rated pre-K programs did not have significantly better results in math, prereading, language, and social skills when they finished the programs, compared with the children attending lower-rated programs.
The findings, published last month in the journal Science, could have implications for states as they work to tie their ratings to real-world outcomes.
Researchers were studying “quality rating and improvement systems.” As a result of federal Race to the Top Early Learning Challenge grants, funding from states, and foundation support, nearly every state has or is creating such a system, known by the shorthand QRIS. About 13,000 child-care programs in 20 states have been rated through a QRIS. Most of the systems use symbols such as stars to represent levels of quality. But those systems fold in so many elements that a center’s rating may end up only loosely connected to teacher-child interactions, which are known to be a strong predictor of how well children do in preschool and afterward, said Terri J. Sabol, a postdoctoral fellow at the Institute for Policy Research at Northwestern University and the study’s lead author.
“My biggest take-away is that states need to simplify their rating systems,” Ms. Sabol said in an interview. “There’s something really appealing about having these five-star systems, but that comes at a cost because those stars don’t mean a lot for child outcomes.”
Gladys Wilson, the president and CEO of Qualistar Colorado, an organization that rates child-care centers in that state, said the rating system has had the benefit of providing a clear path to continuous improvement for care providers. The “improvement” aspect of a QRIS is as important as the ratings themselves, she said.
Qualistar has been intensively studied by the RAND Corp., which also found little connection between learning outcomes and ratings, though the study couldn’t draw strong conclusions because of difficulty in tracking children.

The new study uses data collected in two studies that provided detailed information on prekindergarten teachers, children, and classrooms in 11 states, collected between 2001 and 2004.
Using information on more than 2,400 children in 673 preschools, the authors of the new report plugged those numbers into scoring algorithms that they created for nine states. Each of those states rates programs on staff qualifications, staff-child ratio and group size, family partnerships, and learning environment. Those measures and others are combined to produce a rating.
The researchers also created an additional measure, teacher-child interactions, which was evaluated through the Classroom Assessment Scoring System, or CLASS. The CLASS measure has been adopted in the past few years by Head Start as a method of evaluating preschool quality.
After linking outcomes to the evaluation measures, the authors found that teacher interactions had the highest connection to student learning, followed by learning environment. Teacher qualifications, class size, and family partnerships had a weaker and sometimes inconsistent connection. Thus, rating systems that combined all those measures also had a weaker and less consistent connection to child outcomes, the study shows.
Study co-author Robert C. Pianta, the dean of the education school at the University of Virginia and the creator of the CLASS evaluation instrument, said that one way to simplify rating systems could be to make some elements non-negotiable.
“There shouldn’t be any variation in [teacher-child] ratio, or health or safety provisions, or whether the teacher has a certain level of training,” he said. Once those elements are removed, the systems can focus more closely on the most powerful measures, he said.

“The people doing this work are terrific, they’re very knowledgeable about the field,” Mr. Pianta said, but he added that the desire to include many different measurements is a challenge. “We’re really rolling out a big policy without knowing what the consequences of that policy might or might not be,” he said.
The new findings are similar to the results of the RAND study of the 14-year-old Qualistar program, which included evaluations of 1,300 children served by more than 100 child-care centers and in-home day-care providers. Only 7 percent of the children remained in the study for its entire duration, but the findings suggested more research was needed before ratings programs were implemented at scale.
Gail L. Zellman, the principal investigator on the Qualistar study for RAND, said that child-care rating systems have had the beneficial effect of driving a conversation about what is most important in a good day-care center or preschool. But, she added, “the field has not sufficiently determined how to evaluate quality and how to assess it in a valid way.”
A version of this article appeared in the September 11, 2013 edition of Education Week as “Child-Care Rating Systems Earn Few Stars in Study.”