The recent flurry of attention to high school completion rates has revived interest in early warning systems designed to identify students at risk of dropping out of high school. The idea behind these systems is that, by analyzing administrative data, schools and districts can develop models of risk factors that predict a high probability of dropping out. If the models successfully distinguish probable dropouts from probable graduates, support resources can be focused on the students identified as at risk.
A good early warning system will have high sensitivity and high specificity. High sensitivity means that the early warning indicators will identify a very high percentage of the youth who will eventually drop out (i.e., a high percentage of “true positives”). High specificity means that the indicators will not flag many youth who are not destined to drop out (i.e., a low percentage of “false positives”). Phil Gleason and Mark Dynarski of Mathematica Policy Research showed in the federally-funded School Dropout Demonstration Assistance Program evaluation that most dropout prevention programs had disappointingly low sensitivity and specificity: they failed to serve youth who would eventually drop out, and they frequently served youth who would likely have graduated in the absence of the program.
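To make the two definitions concrete, here is a minimal sketch of how sensitivity and specificity could be computed for an indicator once outcomes are known. The function name and the toy cohort are illustrative assumptions, not anything from the Mathematica evaluation.

```python
# Hypothetical sketch: computing sensitivity and specificity for an
# early warning flag, given each student's eventual outcome.

def sensitivity_specificity(flagged, dropped_out):
    """Compute (sensitivity, specificity) from two parallel boolean lists.

    flagged[i]     -- True if the indicator flagged student i as at risk
    dropped_out[i] -- True if student i eventually dropped out
    """
    pairs = list(zip(flagged, dropped_out))
    true_pos = sum(f and d for f, d in pairs)            # dropouts we flagged
    false_neg = sum((not f) and d for f, d in pairs)     # dropouts we missed
    true_neg = sum((not f) and (not d) for f, d in pairs)  # graduates left alone
    false_pos = sum(f and (not d) for f, d in pairs)     # graduates we flagged

    sensitivity = true_pos / (true_pos + false_neg)  # share of dropouts caught
    specificity = true_neg / (true_neg + false_pos)  # share of graduates not flagged
    return sensitivity, specificity

# Toy cohort of six students: three eventual dropouts, four flagged.
flagged     = [True, True, False, True, False, True]
dropped_out = [True, False, False, True, False, True]
sens, spec = sensitivity_specificity(flagged, dropped_out)
# All three dropouts were flagged (sensitivity 1.0), but one of the three
# graduates was flagged too (specificity 2/3).
```

The trade-off the post describes falls directly out of these two ratios: a flag loose enough to catch every eventual dropout will usually sweep in students who would have graduated anyway.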
Early warning indicators have been developed in Chicago, by Elaine Allensworth and John Easton, and in Philadelphia, by Robert Balfanz and Ruth Curran Neild, as well as in other cities. The Chicago indicator measures whether a student is “on track” for high school graduation: a student is on track if, during the first full year of high school, he or she earns at least five full-year course credits and no more than one semester F in a core course. The Philadelphia measure relies on sixth-grade measures of academic performance and behavior. A student with at least one of the following four characteristics had at least a 75% chance of dropping out of high school: (a) a final grade of F in math; (b) a final grade of F in English; (c) attendance below 80% for the year; and (d) a final behavior mark of “unsatisfactory” in at least one class.
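Both indicators are simple decision rules, which is part of their appeal: they can be run directly against routine administrative records. The sketch below encodes the Philadelphia-style sixth-grade flag as described above; the function signature, field names, and example values are my own illustrative assumptions, not the researchers' actual data layout.

```python
# Hypothetical encoding of the Philadelphia sixth-grade risk flag:
# a student is flagged if any one of the four markers is present.

def philadelphia_flag(final_math_grade, final_english_grade,
                      attendance_rate, any_unsatisfactory_behavior):
    """Return True if a sixth grader shows at least one of the four markers."""
    return (final_math_grade == "F"            # (a) failed math
            or final_english_grade == "F"      # (b) failed English
            or attendance_rate < 0.80          # (c) attended < 80% of the year
            or any_unsatisfactory_behavior)    # (d) "unsatisfactory" behavior mark

# Example: passing grades and satisfactory behavior, but 72% attendance,
# is enough to trigger the flag on its own.
at_risk = philadelphia_flag("C", "B", 0.72, False)
```

Because the rule is a simple "any of the four" disjunction, each trigger also tells the school *which* marker fired, which matters for the point about differentiated responses below.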
It’s not exactly rocket science to show that students who fail courses and have low attendance have an elevated risk of dropping out of high school, but the architects of these systems argue that different indicators warrant different responses. Low attendance may stem from a different set of sources than poor behavior, for example. A key feature of these indicator systems is that they rely on administrative records of students’ behavior in the school and classroom (especially the incidence of failing a core course), rather than on more distal status measures that are less amenable to a programmatic response. Finding that low-SES youth are more likely to drop out, for example, would not give a school or district much to work with.
One issue to consider is how early warning indicators are used in medicine. They’ve become controversial when the indicators don’t prescribe a reliably successful course of treatment. In the absence of an effective treatment plan, critics argue, indicators of the heightened risk of conditions such as prostate cancer or breast cancer may simply upset patients without improving outcomes. In contrast, cholesterol tests are much more valuable as early warning indicators for heart disease because the use of statins to reduce cholesterol levels is recognized as an effective treatment that improves cardiovascular outcomes.
The question we might ask about dropout prevention is: If we knew that particular students had an elevated risk of dropping out of high school, what would we do differently? The problem here is that we do not have a dropout prevention wonder drug that has been shown to reliably lower dropout rates in multiple contexts. The history of dropout prevention research is littered with poorly-designed, small-scale studies that have failed to identify a set of program elements that consistently work. Moreover, even the best-designed of such studies have found only modest program effects on the probability of dropping out.
None of this is to say that local efforts to reduce dropping out are ineffective. Many talented and motivated people lead and staff such programs, and they may in fact reduce the risk of dropping out for some groups of youth. The problem is that we don’t really know if they work or not. And in the absence of such knowledge, skoolboy is just not sure that early warning systems to identify potential dropouts are all that useful.
The opinions expressed in eduwonkette are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.