In reviewing state accountability plans under the “No Child Left Behind” Act of 2001, federal officials had to decide where to be flexible and where to draw the line. Below are some examples of what passed muster and what did not, based on Education Week’s review of approved plans from 49 states and Puerto Rico. (Approved plans from Alaska and the District of Columbia were not available late last week.)
Identifying Schools for Improvement: Originally, some states had proposed identifying only those schools that failed to meet the states’ annual targets in the same subject and the same subgroup for two years running. The Department of Education rejected that narrower definition, but permitted states to designate only those schools that miss their targets in the same subject for two consecutive years, regardless of the subgroup—the criterion chosen by almost every state.
Timelines: The law requires states to set intermediate goals that will bring all students to proficiency on state tests by 2013-14. While states were encouraged to adopt intermediate goals that require steady progress over time, federal officials gave the nod to plans that require less improvement in the early years and much steeper gains later on. In fact, that’s the strategy embraced by at least 20 states.
Indexes: Some state accountability systems use performance indexes that combine scores across grades, levels of student achievement, and subjects. Department officials approved the use of performance indexes to measure adequate progress in reading and math in states such as Minnesota, Oklahoma, Rhode Island, and Vermont. But states must establish a separate index for each subject.
States also cannot give extra weight to students at advanced levels of performance, because of concerns that the method could mask the results for lower-performing students.
Pennsylvania is still negotiating plans for a performance index that would reflect both absolute performance and growth.
Participation Rates: Schools and districts must test at least 95 percent of students in each subgroup and in the school as a whole to qualify as making adequate yearly progress. To calculate participation rates, states must count all students who are enrolled in a school in the tested grades on or near the testing dates.
Some states wanted to count students who do not take the test as participants but assign them a score of “0” or “not proficient” to encourage schools to test everyone. Federal officials said no, although states are free to score such students as “not proficient” in calculating adequate progress in reading and math.
Minimum Group Size: States don’t have to hold schools accountable for the performance of subgroups when the number of students in such a group is too small to yield “statistically reliable information.” The Education Department gave states broad leeway in interpreting what that term means. The dilemma, as the Illinois plan notes, is that all school-level test results are subject to variation because of measurement error and fluctuations in the year-to-year “supply” of students. The smaller the subgroup, the greater the likelihood of erroneous identification. Yet if states make the minimum subgroup size too large, most schools will escape subgroup accountability.
To balance that tension, a number of states are using statistical techniques, such as “confidence intervals,” to make decisions about subgroups with 95 percent or 99 percent certainty. Louisiana, South Dakota, and Utah, for example, will require annual progress for subgroups of 10 or more students, with a 99 percent “confidence interval.” Most states have set a larger minimum group size of 30 or 40 students.
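The confidence-interval approach described above can be sketched roughly as follows. This is a hypothetical illustration only, using a simple normal approximation for a proportion; the function name, the 60 percent target, and the exact formula are assumptions for the example, and states’ actual calculations vary.

```python
from math import sqrt

def misses_target(n_proficient, n_students, target, z=2.576):
    """Hypothetical sketch: flag a subgroup as missing its target only if
    its proficiency rate stays below the target even after widening by a
    99 percent confidence interval (z = 2.576; use z = 1.96 for 95 percent).
    """
    rate = n_proficient / n_students
    # Standard error of a proportion under the normal approximation.
    se = sqrt(rate * (1 - rate) / n_students)
    # Miss only if even the interval's upper bound falls short of the target.
    return rate + z * se < target

# A 10-student subgroup with 4 proficient against a 60 percent target:
# upper bound = 0.4 + 2.576 * 0.155 ≈ 0.799, above 0.60, so not flagged.
# The same 40 percent rate in a 100-student subgroup would be flagged.
```

The sketch shows why the technique matters for small subgroups: a 10-student group has so much sampling noise that a 40 percent proficiency rate cannot be distinguished, at 99 percent certainty, from one meeting a 60 percent target, while the identical rate in a 100-student group can.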
Texas will require each subgroup to represent at least 10 percent of all test-takers and at least 50 students. But it will waive the 10 percent rule for very large schools or districts that have 200 or more students in a subgroup.
The Education Department rejected Arizona’s proposal that schools be held accountable for a subgroup’s failure to meet its targets only if the subgroup represented at least half a school’s total student population. Federal officials also turned down Pennsylvania’s original plan to have a minimum group size of 75.
“Other” Indicators: Schools and districts also must make annual progress on at least one additional indicator chosen by the state. At the high school level, states must include graduation rates, although wide variations exist in how states are calculating that measure. At the elementary and middle school levels, most states have chosen to use attendance rates.
But some have chosen novel twists. Wyoming, for example, will require schools to decrease the percentage of students scoring in the “novice” category on the state’s reading tests. In Georgia, elementary and middle schools can select from a range of indicators, including the percent of students who score at the “proficient” level on state science and social studies exams.
Florida will not recognize schools as making adequate progress if they’ve earned a D or F rating under the state’s accountability system. Schools also must show progress on the state’s writing exams. Mississippi plans to look at the amount of academic growth that individual students in a school attain from year to year as an additional indicator.
Very Small Schools: Federal officials insisted that states hold all schools accountable for adequate progress, including very small schools. States have devised some interesting plans for doing so. In Oregon, small schools will be evaluated by their districts using state guidelines that include examining additional years of data and the results of local assessments. Vermont plans to conduct a “small-school review” for schools in which fewer than 30 students are tested over a two-year period. South Dakota will conduct a “desk audit” for very small schools that have failed to make adequate progress.
Research Associates Susan E. Ansell, Melissa McCabe, Jennifer Park, and Lisa N. Staresina helped collect states’ approved accountability plans for this story.