Opinion

Does Head Start Fade Out?

By Steve Barnett, May 19, 1993

These new findings on the long-term benefits of the Perry Preschool program come at a time when increased funding for Head Start is being debated, and the ability to generalize the findings to Head Start and other public preschool programs has been questioned. Those opposed to increased funding argue that Head Start has not been found to produce the same benefits as the Perry Preschool program. More generally, they claim that the research literature as a whole indicates that the effects of Head Start and other preschool programs “fade out” after a few years. The Perry Preschool results are held to be an isolated exception, perhaps resulting from unusually high-quality preschool experiences that can’t be duplicated on a large scale by public programs.

To assess the evidence, I reviewed in detail 22 studies of preschool programs for children ages 3 and 4 that followed children at least through grade 3. This length of follow-up was required to provide time for fade-out to be observed. These studies include all the relevant studies from the Head Start Synthesis Study that is commonly cited as evidence for the fade-out. Comparison with the much larger number of short-term studies revealed that these 22 long-term studies were no different from others in their short-term results. In eight studies, the programs were researcher-initiated (the High/Scope Educational Research Foundation’s Perry Preschool study was one). The other 14 studied large-scale public programs, including 11 Head Start programs.

The results of these studies are remarkably consistent in at least two respects. First, preschool programs commonly raise children’s IQ-test scores initially, but the effects decline after program exit and eventually disappear. The Perry Preschool program is no exception in this regard--despite an initial gain of 12 IQ points, there is no significant effect on IQ left by 3rd grade. Second, many of the preschool programs were found to improve school outcomes such as grade retention, special education, and graduation rates.

The juxtaposition of these two findings has been considered something of a mystery. If the kids aren’t any smarter in the long run, why do they perform better in school? One response is that IQ is a poor measure of intelligence. Another is that intelligence is only one ability important for school success. Children who go to preschool really do perform better academically in the long run; IQ just doesn’t measure it. A less hopeful explanation is that schools treat children who went to preschool differently, but the children don’t really perform better academically.

A third type of outcome measure, achievement-test scores, provides some clues to solving this mystery. At first glance, the achievement-test results seem hardly more encouraging than the IQ-test results. Of five studies of researcher-initiated programs with achievement-test data, only the Perry study found persistent increases. Of 10 studies of large-scale public programs with achievement-test data, only two found persistent increases. In the other studies, the achievement effects appeared to fade out during the elementary years. I stress the words “appeared to,” because the fade-out in effects on achievement tests turns out to be an artifact of attrition and poor research design.

The reason that effects on achievement appear to fade out over time in most longitudinal studies is that these studies gradually excluded from their samples children retained in grade or placed in special-education classes. In some cases this was intentional. In others, it was inadvertent. Frequently, it resulted from the use of standardized achievement tests routinely administered by the schools. Routine test administrations are done by grade level, automatically excluding children retained in grade, and tend to exclude children in special-education classes. The percentage excluded from testing grows year by year, because grade retention and special-education placement become more frequent at higher grade levels and their effects are cumulative.

Thus, in most studies, grade retention and special education produce increasingly selective attrition in achievement-test scores over time that gradually equates the tested groups of preschool and no-preschool children on academic ability. As a result, initial differences in achievement-test scores appear to fade out over time. This apparent decline in effects on achievement-test scores is not only consistent with the finding of persistent effects on grade retention and special-education placement, it results from these effects. Long-term preschool studies show that as the differences in grade retention and special-education placements between preschool and no-preschool groups rise over time, the differences in test scores between children in the two groups remaining at grade level in regular classes decline.
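The mechanism is easy to demonstrate with a toy simulation. The sketch below is not drawn from any of the studies discussed here; the group sizes, score distribution, cutoff, and year-to-year noise are invented solely to show how removing the lowest scorers from testing each year shrinks the measured gap between two groups even when the true difference never changes.

```python
import random

random.seed(0)

N = 1000          # children per group (invented)
EFFECT = 5.0      # true, persistent preschool advantage in points (invented)
CUTOFF = 85.0     # score below which a child is retained or placed in special
                  # education and drops out of routine grade-level testing

def tested_means(boost):
    """Mean score of the still-tested sample in each of six follow-up years."""
    abilities = [random.gauss(100 + boost, 15) for _ in range(N)]
    tested = abilities[:]                     # everyone is tested at first
    means = []
    for _ in range(6):
        # Each year's test score is underlying ability plus test-to-test noise.
        scores = [(a, a + random.gauss(0, 8)) for a in tested]
        kept = [(a, s) for a, s in scores if s >= CUTOFF]
        # Children below the cutoff leave the tested sample for good,
        # just as retained and special-education children do.
        tested = [a for a, _ in kept]
        means.append(sum(s for _, s in kept) / len(kept))
    return means

preschool = tested_means(EFFECT)
comparison = tested_means(0.0)
for year, (p, c) in enumerate(zip(preschool, comparison), start=1):
    print(f"year {year}: gap among tested children = {p - c:4.1f} points")
```

Because more low scorers are excluded from the comparison group each year, its tested mean is pulled upward faster, and the measured gap narrows even though every child’s true advantage is unchanged.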

Three examples may help clarify the nature of the problem. The first is the Educational Testing Service’s longitudinal study of Head Start first reported in 1976 and recently reanalyzed. These analyses found that Head Start’s initial effects on achievement-test scores declined by 1st grade and continued to decline through grade 3. The ETS study differs from many in that the researchers administered their own tests. However, the ETS tested only children in classrooms where at least 50 percent of the children had been in the original study sample. This effectively eliminated children retained in grade and may well have eliminated those placed in special education.

The second example is the well-known Westinghouse evaluation of Head Start which (despite repeated reanalysis) found that initial effects on achievement decline and disappear. The Westinghouse sample was constructed by identifying children in grades 3, 2, and 1 who had attended Head Start and then selecting matches for these Head Start children within grade as a comparison group. This inadvertently equated the two groups on grade level (effects of special education on the sample are less certain). An interesting result of this design flaw is found when ages are compared by grade. At 1st grade, the Head Start and comparison groups are roughly the same age. At 2nd grade, the Head Start group is significantly younger, and the age gap widens at 3rd grade. Apparently, older children who had been retained in grade were included in the comparison group.

The third example can be thought of as an exception that proves the rule. This study of Head Start in Cincinnati found persistent effects on achievement even though it relied on the achievement tests routinely administered by schools. It was also the only long-term study of a large-scale public program that found no effects on grade retention or special-education placement. These two results are related. The lack of statistically significant school-success effects can be attributed to the unusually low base (comparison group) rates--only 12 percent retained and 11 percent in special education by grade 8--which left little room for preschool to have large effects. Even if preschool had cut these rates by 25 percent to 50 percent (the estimated effect sizes, though not statistically significant), the percentages involved are so small that this would have little effect on the estimated achievement-test effect.
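A simple back-of-the-envelope calculation shows why such low base rates matter. The 12 percent retention rate and the 25 percent to 50 percent reductions come from the paragraph above; everything else below is just arithmetic for illustration.

```python
# Illustrative arithmetic only: the 12% base rate and the 25%-50% reductions
# are the figures quoted above; no other data are assumed.
base_rate = 0.12   # comparison-group retention rate by grade 8

for cut in (0.25, 0.50):
    preschool_rate = base_rate * (1 - cut)
    gap = base_rate - preschool_rate   # difference in share excluded from testing
    print(f"{cut:.0%} reduction: preschool {preschool_rate:.1%} retained vs. "
          f"comparison {base_rate:.1%}; tested samples differ by {gap:.1%} of children")
```

With the tested samples differing in composition by only 3 to 6 percentage points of children, selective attrition cannot do much to the measured achievement gap, which is why a persistent achievement effect could surface here even with routinely administered tests.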

In sum, there is considerable evidence that preschool programs of many types--including Head Start--have persistent effects on academic ability and success. There is no convincing evidence that these effects decline over time. By contrast, effects on IQ fade out in every study, but this is irrelevant. The Perry Preschool’s results are not atypical, and Head Start programs have been found to produce the same kinds and magnitudes of results. What is atypical about the Perry Preschool study is the strength of its design, lack of attrition, length of follow-up, and broad range of real-life outcome measures.

None of this means that Head Start nationwide produces exactly the same results as the Perry Preschool. Results can be expected to vary with the characteristics of the children served, programs provided, and broader social environment. Head Start is funded at nowhere near the level required to replicate the Perry Preschool. The impact of this is unknown, but until more is known the most prudent and conservative strategy would be to replicate the more expensive program rather than risk losing benefits. In addition, most poor children remain unserved. This alone costs the nation $50 billion annually in lost benefits. Compare this to the $2.2 billion spent on Head Start in 1992, roughly 15 hundredths of 1 percent of federal spending--“chump change” for the federal government.

What should be done? First, federal funding for Head Start or similar programs should be increased to $14 billion, the amount required to provide Perry-quality programs to all poor children. Head Start need not be the only option: We could give poor families vouchers to spend at any quality-approved nonprofit preschool or child-care center of their choice. Second, Congress should create an office of early-childhood policy research to evaluate the results of current programs and fund experiments to investigate the long-term impacts of alternative program designs on costs and benefits so that program efficiency can be improved.

A version of this article appeared in the May 19, 1993 edition of Education Week as Does Head Start Fade Out?
