The New Jersey Supreme Court issued a milestone ruling at the end of May in the Abbott v. Burke education finance case, capping 35 years of litigation by upholding the constitutionality of the state’s new education funding formula.
Holding that the new formula provides every New Jersey student a “thorough and efficient education,” the court lifted remedial requirements for special funding to the state’s 31 very-high-poverty urban school systems, the so-called Abbott districts. In doing so, the justices also noted that the litigation has resulted in “measurable educational improvement” for the thousands of low-income and minority students in those districts.
With similar legal challenges to historical inequities and inadequacies in the funding of public education having now been brought in 44 other states, and with the vast majority of those cases won by the plaintiffs, this decisive development in the New Jersey litigation provides an appropriate occasion for taking stock of what these cases, in general, have achieved.
Virtually all analysts agree that school finance litigation has substantially increased the overall level of spending on schools in the United States, and that it has also resulted in a clear reduction in funding disparities among school districts in most states. But there has been far more contention—supported by far less analysis—about the effect of these judicial decrees on student achievement, especially on the achievement of the low-income and minority students who historically have been shortchanged by state school finance systems.
What convinced the New Jersey justices that the additional funding provided to poorer urban districts resulted in better student outcomes? From 1999 to 2007, student scale scores shot up by 26 points on the statewide 4th grade mathematics assessment, with the greatest increases occurring in the Abbott districts. As a result, the achievement gap between those districts and the rest of the state declined by more than one-third. Analogous gains have been reported in Kansas, Kentucky, Massachusetts, New York, and other states where plaintiffs have won increased school funding in recent years.
Still, there are those who refuse to accept this compelling evidence. In a recent book, Schoolhouses, Courthouses, and Statehouses, and in an Education Week Commentary of June 10, 2009, Eric A. Hanushek and Alfred A. Lindseth conclude that reading and math scores on the National Assessment of Educational Progress did not significantly improve over a 15-year period in four states—Kentucky, Massachusetts, New Jersey, and Wyoming—that had court remedies requiring increases in school funding. Hanushek and Lindseth focused on NAEP because it provides a national benchmark that is not subject to the manipulations that occur in some state testing programs. The authors have, however, misused the NAEP data. A proper analysis of NAEP scores shows that considerable gains have in fact been made in those states.
Hanushek and Lindseth use 1992 as the base year in all of their NAEP analyses, contending that this is a good starting point for considering the impact of these litigations because it was “prior to the commencement of their respective remedies.” That may be true for Massachusetts, where the relevant case was decided in 1993. Kentucky, however, had instituted its reforms two years earlier, while Wyoming, where a major case was decided in 1995, did not fully implement the court-approved remedy until 2001. In New Jersey, the court did not order the critical program and funding remedies until 1998-2000.
Moreover, the range of years chosen by Hanushek and Lindseth is not appropriate for NAEP analysis because the extension of NAEP to state-level data was in a trial period from 1990 until 1996, and full implementation of state reading assessments did not occur until 1998. In addition, in 1996, NAEP changed its rules on permitting accommodations for students with special needs in the administration of its tests. For that very reason, officials who administer NAEP caution against making long-term-trend comparisons from before 1996 to after 1996.
We looked, therefore, at NAEP reading scores from 1998 to 2007 and math scores from 1996 to 2007. Our review indicates that, overall, gains for all students in these states exceeded gains for all students nationally in 12 of 13 instances, and that gains for students from backgrounds of poverty in these states exceeded gains for poverty students nationally in nine of 13 instances. (The analysis included 13, rather than 16, testing instances because, until 2002, New Jersey did not administer any NAEP tests except in 4th grade math.) In sum, focusing on the more appropriate years, the NAEP test results, like the available data on the state assessments, indicate that school finance litigation does, in fact, result in measurable gains in student performance.
Nevertheless, we are reluctant to conclude definitively, based on this limited testing data, that all or any of these litigations are a “success.” As many commentators have noted, there are substantial questions about the validity and reliability of many of the state tests, and also about the NAEP tests, which are not aligned with the state curricula students are learning and which are administered only to a small sample of students in each state.
Furthermore, studies have shown that the high-stakes concentration on English and math required by the federal No Child Left Behind Act actually reduces time, effort, and student accomplishment in science, social studies, the arts, and other subjects that essentially “don’t count.” This runs directly counter to the courts’ stipulation in many adequacy cases that states are constitutionally obliged to provide a “thorough and efficient” or a “sound basic” education in all content areas, and not just in English and math.
The New Jersey Supreme Court, for example, determined that students are constitutionally entitled to the educational opportunity needed in the contemporary setting to “equip a child for his role as a citizen and as a competitor in the labor market.” Although viable survey instruments and other mechanisms for assessing these broad-based skills are available, they are not often used, and the data they generate are rarely analyzed.
In short, judging the effect of these court decisions on student achievement is a complex business. The test statistics may be useful indicators of important trends, but to really understand the impact of the education finance litigations, one must also look to underlying patterns and take into account a wide range of educational, political, and economic variables that, over time, affect the outcomes of the reform process. For example, in the NAEP statistics we have cited, the smaller gains for poverty students in Wyoming come as no surprise: funding in that state was leveled up significantly, yet disparities between districts were largely retained. It also is significant that Kentucky’s relative scores have declined in recent years, a period when the state’s funding levels began to lag behind those of neighboring states.
Accordingly, to gauge accurately whether lasting reforms that prepare students to become capable citizens and productive workers have been achieved, we should look to discerning case studies of developments and broad-based outcomes in Kentucky, Massachusetts, New Jersey, Wyoming, and many other states. Studies of this sort, in contrast to the back-of-the-envelope “analyses” offered by Hanushek and Lindseth, would not only answer questions about how to define “success” and how much of it has been achieved, but would also provide important recommendations and guidance on how to further enhance meaningful student success in the future.