How did they do it? That’s the question leaders in most states want answered as they confront an end-of-the-month deadline for telling the federal government how they will comply with the accountability provisions of the “No Child Left Behind” Act of 2001.
The five plans that won approval from the U.S. Department of Education this month—from Colorado, Indiana, Massachusetts, New York, and Ohio—are notable for their variety.
Those differences could help other states as they try to mesh their existing accountability systems with the federal law. The plans also suggest where the federal government will stand firm and where it may be willing to bend.
“The value of these early-approved states is it really shows you what is going to be seen as consistent with the law,” said Scott R. Palmer, a lawyer with Nixon Peabody, a Washington-based law firm, who served as a deputy assistant secretary for civil rights in the Education Department during the Clinton administration.
“It clearly provided what we wanted, and that was models for other states to look at and also an opportunity for states to use their ingenuity to figure out how this could be done,” said Patricia F. Sullivan, a deputy executive director of the Council of Chief State School Officers.
Testing and accountability directors from about 40 states met in Washington Jan. 9 and 10, at a CCSSO event underwritten by the Education Department, to review the five plans and share ideas.
‘A Willingness to Talk’
Under the law, a reauthorization of the Elementary and Secondary Education Act, schools must meet separate, annual targets for the percent of students who score at the “proficient” level on state reading and math tests, both for the student population as a whole and for specific demographic groups. States must raise those targets in steady increments so that all students score at state-defined proficiency levels by 2013-14.
Schools that receive federal Title I money and fail to make that “adequate yearly progress” for two or more consecutive years are subject to penalties.
One of the biggest challenges for states has been how to mesh the law’s prescriptive requirements for measuring progress with existing state accountability systems. The law requires states to come up with a single, unified plan for rating all schools.
Two of the five states have chosen to overlay AYP requirements on their core accountability systems. Indiana, for instance, rates schools based on the percent of students who pass state English and math tests and the improvement of a cohort of students on those same tests over time. Now, schools that fail to make their AYP targets for two consecutive years will not be able to earn the state’s highest two designations.
Ohio has taken a similar tack. The state will place schools in one of five performance categories, based on multiple measures. Those include whether schools are raising the test scores of all students on state tests; how schools perform on 22 indicators included on school report cards; and, eventually, the gains of individual students over time.
Schools may move up or down in the categories, depending on whether they meet their AYP objectives. They will not be able to earn Ohio’s highest designation of “excellent” without meeting their AYP goals.
That approach has been advocated by Sandy Kress, a former education adviser to President Bush. Mr. Kress has been working with Ohio and other states on their accountability plans as a consultant to the Business Roundtable, a Washington-based group of corporate leaders.
In contrast, both New York state and Massachusetts have modified their accountability systems to incorporate adequate yearly progress directly. New York rates schools based on separate performance indices in reading and mathematics that give schools credit for the percent of students who have achieved basic or full proficiency on state tests. The state will set AYP targets for schools based on those indices.
“High performing” schools will have to meet or exceed all state standards and achieve adequate progress for each subgroup of students. The state also plans to recognize “rapidly improving” schools, those that are below a state standard but meet their AYP goals for three consecutive years. Schools that fail to make enough progress will be subject to improvement and corrective action.
New York is establishing a similar index at the high school level. It will be based on the percent of 9th graders who pass the state Regents exams in math and English by grade 12. High school students may take the graduation exams multiple times.
State officials had to convince the federal government that only students’ first reported scores in grade 12, including those who had passed the tests earlier, should be counted for accountability purposes. “This was an extensive point of discussion,” said James A. Kadamus, New York’s deputy commissioner for elementary, middle, and secondary education. The federal government has agreed to revisit its rules on the issue, he added.
Massachusetts also has devised a performance index that gives schools full credit for every child who scores at the proficient or “advanced” levels on state tests, but also accords them partial credit for moving students closer to that bar. The state has plotted a trajectory for every school that shows how much it needs to improve each year to meet the 2013-14 goal for having all students at the proficient level.
As long as schools remain on that course, the state will consider them to have made adequate progress, even if they have not met their AYP targets.
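As a rough illustration of the index-and-trajectory arithmetic described above, the sketch below computes a performance index from counts of students at each scoring level and checks whether a school has closed a proportional share of the gap to the 100-point goal. The partial-credit weight, category names, and function names are assumptions for illustration, not Massachusetts’ actual formula.

```python
def performance_index(counts):
    """Illustrative index: full credit for students scoring proficient or
    advanced, partial credit (an assumed 0.5 weight) for students
    approaching proficiency, and none otherwise. Returns a 0-100 score."""
    total = sum(counts.values())
    points = (counts.get("advanced", 0) + counts.get("proficient", 0)
              + 0.5 * counts.get("approaching", 0))
    return 100.0 * points / total

def on_trajectory(baseline_index, current_index, years_elapsed, years_total):
    """Treat a school as on course if it has closed at least a proportional
    share of the gap between its baseline index and the 100-point goal
    set for 2013-14."""
    required = baseline_index + (100.0 - baseline_index) * years_elapsed / years_total
    return current_index >= required
```

Under this sketch, a school starting at an index of 50 would need to reach 60 after two of ten years to stay on its trajectory.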
Even with such flexibility, many schools around the country will find it a challenge to meet their initial AYP goals for some subgroups.
“The challenge, not only for Colorado but for all states, is how the initial starting points are required to be calculated and applied to all subgroups immediately,” said William E. Windler, Colorado’s assistant commissioner of education for special services. “It certainly points out, in a very dramatic way, where the achievement gaps currently exist.”
Minimum Group Size
Under the federal law, states can determine the minimum number of students per subgroup for the results to be deemed statistically reliable.
The five states that earned the first approvals have set different minimum group sizes. They also have chosen to vary the sizes for reporting and accountability purposes and to use additional statistical techniques to ensure that they do not erroneously identify schools for improvement.
New York, for instance, has decided that at least five students must be in a subgroup to report test results, and at least 40 to make AYP determinations. But it has also won approval from the Education Department to explore using a smaller minimum group size accompanied by a “confidence interval”: a statistical band within which a subgroup’s scores could fall and still be considered to have satisfied AYP, to allow for possible measurement error.
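The confidence-interval idea can be sketched as follows; the normal-approximation formula and the 95 percent width are assumptions here, since the actual statistics New York would use are not specified.

```python
import math

def satisfies_target(n, num_proficient, target_rate, z=1.96):
    """Give a subgroup the benefit of measurement error: it satisfies the
    target if the target falls at or below the upper end of a z-based
    confidence band around the observed proficiency rate.
    z=1.96 corresponds to a roughly 95 percent interval (an assumption)."""
    p = num_proficient / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p + half_width >= target_rate
```

Smaller subgroups get wider bands, which is why a confidence interval can pair with a lower minimum group size.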
Ohio requires at least 10 students per subgroup for reporting test scores, but 30 for calculating adequate yearly progress. And it has increased the subgroup size to 45 for students with disabilities. The state also has chosen to require at least 40 students per subgroup before schools must meet the law’s requirement that they test 95 percent of students in each subgroup.
To give schools a further benefit of the doubt, Ohio will average a school’s test scores across three years, compare that average with its current-year scores, and use whichever figure is higher to decide whether the school made adequate yearly progress.
“That’s going to address the confidence of the conclusions we draw about schools,” said Mitchell D. Chester, the state’s assistant superintendent for policy development.
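Ohio’s better-of-two comparison reduces the chance that a single bad year sinks a school. A minimal sketch of that check, assuming proficiency rates expressed as percentages:

```python
def meets_ayp(last_three_years, target):
    """last_three_years: percent-proficient figures, oldest first.
    Use the better of the current year's rate and the three-year
    average, as described for Ohio."""
    current = last_three_years[-1]
    average = sum(last_three_years) / len(last_three_years)
    return max(current, average) >= target
```

A school at 50 percent this year but averaging 60 percent over three years would be judged against the 60 figure.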
Federal officials appear to have embraced varied approaches as long as states were able to provide strong rationales and data for their policies.
“My advice to people would be make your decisions based on the history of what you’ve been doing in your state, and make your case using evidence and data,” said New York’s Mr. Kadamus.
But William J. Erpenbach, a consultant on standards and accountability, said that approach may pose a challenge for smaller states. “If you don’t have the data or the expertise on your staff to run the data,” he remarked, “that’s a problem.”
Under the law, schools also must show gains on one additional indicator besides test scores. At the high school level, states must use graduation rates.
At the elementary and middle school levels, three of the five states chose attendance rates as the additional measure.
New York state selected students’ performance on state science tests, until it can break down attendance data for each subgroup. Colorado chose the percent of students performing at the advanced level on state reading and math tests.
The federal Education Department also appears willing to provide some flexibility, for now, on how states count the performance of students who take alternative assessments under their state accountability systems.
Indiana and New York will count alternative-assessment scores as “not proficient” for now, pending further federal regulations. Colorado, Massachusetts, and Ohio plan to use indices that will align the results on such tests with performance on their regular exams. In Colorado, for example, students who receive a “developing” score on the alternative assessments will be considered proficient for AYP purposes.
Federal officials plan to issue a notice of proposed rulemaking on alternative assessments and other issues soon.
State officials are struggling with similar issues regarding testing for students with limited English proficiency. New York will use an English-as-a-second-language achievement test, aligned with its standards, for students who have been in the United States for less than three years. Indiana has proposed a portfolio assessment, aligned to its standards, for such students.