
Five Unanswered Questions in the SIG Data

By Alyson Klein — November 26, 2013 4 min read

So the U.S. Department of Education released summary data last week on the School Improvement Grant, or SIG, program. In a nutshell, the data showed that after $3 billion in stimulus funding, plus more than $1 billion in regular congressional appropriations, roughly two-thirds of SIG schools that were in the program for two years showed some improvement. But the other third of SIG schools stagnated, or even slid backward.

There are big differences of opinion over whether that constitutes “incremental progress” (U.S. Secretary of Education Arne Duncan’s view) or a total disaster for the already controversial, much-maligned SIG program.

But almost everyone agrees that the data left out a lot of things that could prove pivotal when trying to make claims about the efficacy of the program:

1. The summary data doesn’t tell us anything about how struggling schools that didn’t get SIG grants are doing, as Andy Smarick of Bellwether Education Partners wrote in this very smart blog post. So in research terms, we don’t really have a “treatment” group and a “control” group. One study by Thomas Dee of Stanford University, however, did find that SIG schools in California outperformed similar schools in the same districts. But, as researcher Robin Lake of the Center for Reinventing Public Education pointed out to me, we don’t know whether diverting resources (such as top-notch principals) to SIG schools led to dips in student achievement elsewhere in a district.

Luckily, there will be an opportunity to look at whether the money made a difference, because states that have waivers from the Elementary and Secondary Education Act (that’s almost all of them) have to use strategies that are really similar to the most popular SIG model to turn around their lowest-performing schools. That means we’ll have a sort of natural experiment ... what kind of a difference does the money really make?

2. The department gave us a summary of the increase in student achievement nationally, but we don’t know the scope of the individual gains. Did really stellar performance by a handful of schools push up the averages? We also don’t know what kind of trajectory these schools were on prior to SIG. A lot of states, particularly in SIG’s first year, chose schools that they thought would be able to make good use of the money, meaning they already had a team or strategy in place that would ensure the SIG dollars weren’t wasted. So without going back and looking at several years of student achievement data prior to SIG, we don’t know whether the money helped further homegrown turnaround efforts, or whether SIG and its models were responsible for the improvement.

3. There are a lot of lessons and information to be gleaned from the schools that slid backward. Which ones were they? Are they concentrated in particular districts or states? Did they consult with outside organizations that weren’t much help? (We know, for instance, that schools in Colorado that partnered with Global Partnership Schools ended up with some pretty iffy outcomes.) A lot of states required schools to choose “lead partners” to help in turnarounds, often outside for-profit and not-for-profit organizations. So which of those partners were most and least effective?

4. There weren’t any breakouts in the summary data for how subgroups, including students in special education and English-language learners, are doing. So it’s impossible to tell whether the program is meeting their needs, even in schools that made gains. Relatedly, the data also weren’t broken out by state or even region. So we have no idea how different proficiency standards in different states contributed to the overall (fairly modest) gains in math and reading.

5. How much of SIG’s success—or failure—can be attributed to the program’s design and funding, and how much to the actual implementation? We don’t know whether schools that made gains (or lost ground) implemented the models with fidelity, for instance. And the department’s overall implementation of SIG, at least in the first year, was also controversial. The department had to push back the deadline for schools to have new teacher evaluation systems in place (a key ingredient of the most popular of the four models). And, last year, the Government Accountability Office, Congress’ investigative arm, took the feds to task for their slapdash implementation of the program, particularly during its first year.

So what’s the status of the rest of the data? The U.S. Department of Education released school-level data for the first year of the program, the 2010-11 school year, but it has bedeviled even skilled analysts. And there’s a full-scale evaluation of the program due out in 2015, but researchers fear that may be too late for practitioners and policymakers to learn the lessons that could be gleaned from the program.
