Politics K-12®

ESSA. Congress. State chiefs. School spending. Elections. Education Week reporters keep watch on education policy and politics in the nation’s capital and in the states.


New SIG Analysis Yields Same Old Conclusion: Mixed Results

By Alyson Klein — February 14, 2014 1 min read

A newly revamped analysis of the Obama administration’s controversial and costly School Improvement Grant program continues to show that billions in federal money—plus major federal strings—equals a mixed track record when it comes to one of the toughest challenges in education policy: turning around perennially foundering schools.

Just as in the analysis released by the U.S. Department of Education in November, roughly two-thirds of schools that participated in the program showed gains in the first year, while the remaining third slid backward.

If that analysis sounds familiar, that’s because it closely mirrors SIG data that was previously put out by the department—and then promptly pulled back after department officials realized its contractor, the American Institutes for Research, or AIR, had erroneously excluded too many schools from the mix. AIR crunched the student outcome data for the SIG program a second time, but the results didn’t change substantially.

And, like the original analysis, the revamped analysis showed that schools in the program’s first cohort—those that started in the 2010-11 school year—made greater progress overall than schools in the second cohort—those that started the program in the 2011-12 school year.
The revamped data, like the original data, showed that schools in small towns and rural areas are generally outpacing their urban and suburban counterparts, especially in math.

Perhaps the biggest change came in the overall averages. Under the revamped data, schools in the program’s first cohort showed about a 7 percentage point improvement in math, compared with 8 percentage points in the original analysis. And schools in the second cohort improved an average of 1 percentage point in the revamped data, as opposed to 2 percentage points in the original data.

Meanwhile, in reading, Cohort 1 schools went up about 3 percentage points, on average, while Cohort 2 schools went up about 2 percentage points. Under the original analysis, Cohort 1 schools looked better (improving at a rate of 5 percentage points), while Cohort 2 schools looked worse (edging up just 1 percentage point).

Check out the full data here.
