'Positive Science' or 'Normative Principles'?
As the popularity of effective-schools programs has increased in this decade, so, too, has skepticism over the concept's empirical underpinnings.
And though few in the research community question the validity of this approach to school improvement, many are beginning to identify weaknesses in the original studies and point out questions that the literature has yet to answer.
In a critical article for Phi Delta Kappan two years ago, in fact, two researchers charged that unfounded or meagerly supported claims for effective schools had been made with "disappointing regularity." John H. Ralph, associate professor of education at the University of Delaware, and James Fennessey, associate professor of sociology at the Johns Hopkins University, concluded that effective-schools research was "really a rhetoric of reform."
"In the guise of positive science," they wrote, "what we find is a set of normative principles."
Reviews of the research, they said, had failed to distinguish between studies conducted on a representative database, using specific measurement tools and incorporating control variables, and impressionistic studies with no statistical controls.
And, the two scholars added, "the self-congratulatory tone of these reviews sometimes approaches the intensity of evangelism."
That attack was followed in 1984 by a cautionary assessment in The Elementary School Journal by Stewart C. Purkey, assistant professor of education at Wisconsin's Lawrence University, and Marshall S. Smith, professor of education-policy studies at the University of Wisconsin at Madison.
Wrote those researchers: "The thinness of the research base on school effectiveness, the infancy of existing effective-schools projects, and the small, almost nonexistent scholarly literature analyzing those projects, make theoretical development risky if not premature."
Cause and Effect
Such "thinness" results, some observers say, from the nature of the early research. Based on case studies of already effective schools and correlational analyses of large numbers of variables, it merely identified certain characteristics that appeared to be present in schools with high test scores. It did not verify a cause-and-effect relationship that could assure schools that developing those characteristics would improve learning.
Nor did the original studies determine which variables were of greater or lesser importance, or which should be implemented first, second, or third.
At present, researchers in the field agree generally that all of the effective-schools characteristics probably interact in some fashion to produce positive school change. But as Larry Cuban, an associate professor of educational administration at Stanford University, explains, "No one knows how to grow effective schools. The fact remains that no studies have yet shown which policies, independently or in combination, produce the desired effects."
William J. Gauthier Jr., chief of the bureau of school and program development in the Connecticut Department of Education, says that the completion of "path analysis" studies, which trace the relationship between the various characteristics, and "empirical models," which suggest the order in which schools should implement them, would make such causative links clearer.
And, says Wilbur B. Brookover, professor emeritus at Michigan State University, longitudinal studies are also "desperately needed," to see whether changes in certain aspects of the school's social system do in fact lead to gains in achievement.
For now, however, the research findings seem to some to be clouded by a confusion of terms, procedures, and objectives.
One major problem, suggests Joseph D'Amico, a program developer with Research for Better Schools in Philadelphia, has been that different researchers have identified a differing number of effective-schools characteristics. In 1982, he reviewed four of the major studies on effective schools and found that "although these authors' conclusions about the characteristics of effectiveness seem similar, they do not match."
"Not only is the number of characteristics different in each study," says Mr. D'Amico, "but some characteristics seen as 'indispensable' by some authors are not included at all by the others."
He adds that the characteristics themselves are "vague and subject to interpretation," and that research provides little practical advice about "how to do these things."
Other critics have also noted that researchers define such terms as "effectiveness," "school climate," and "instructional leadership" in disparate ways, and that most fail to define them in behavioral terms that schools can use.
Says Mr. Gauthier, who chairs the American Educational Research Association's special-interest group on effective schools: "Someone has got to do an analysis of the major programs in the country and the characteristics they use and look at those in some kind of matrix to show that people are really looking at some of the same things."
Despite such criticism, however, most investigators argue that research findings to date are basically sound. "Most of the criticisms of the effective-schools research," says Matthew B. Miles, senior research associate at the Center for Policy Research in New York City, "come from properly skeptical researchers rather than practitioners."
"You can find fault with a lot of specific studies," he adds, "but there's enough common ground there to build something. Practically speaking, it doesn't make that much difference whether there are five factors or seven or nine. What matters is that there's reasonable concordance between them and a well-developed set of procedures for doing something."
Use of Test Scores
But even the process of identifying which schools are effective schools has been obscured by inconsistencies in the research. According to Brian Rowan, an analyst with the Far West Laboratory for Educational Research and Development in San Francisco, the various methods using test-score results to make that determination have a low correlation with each other.
In an article in Reaching for Excellence: An Effective Schools Sourcebook, published by the National Institute of Education last May, Mr. Rowan said the various identification techniques bear so little resemblance to one another that they tend to select different institutions as effective.
Test-score measures also are extremely unstable over time, he wrote. In his own research, Mr. Rowan and colleagues found that schools identified as instructionally effective one year had only a 50 percent chance of remaining effective the next.
Test-score difficulties are compounded, he noted, by studies that fail to control for student demographics and previous achievement. In addition, he said, most studies have based determinations of school effectiveness on test results from only one or two grades and one or two curriculum areas. Even within curriculum areas and at a single grade level, wrote the researcher, schools are often not uniformly effective for all types of students.
Analyses of school effectiveness, he concluded, need to do a better job of examining data across the entire range of curricula, grade levels, and types of students, and over long periods of time.
Joan Shoemaker, who headed the evaluation of Connecticut's effective-schools program, agrees that tests are not "as valid as we'd like them to be." Test scores, she says, "simply don't rise in nice straight lines the way we'd like to picture it."
"If we broke the scores apart for any one school in terms of grade level," she adds, "we would see something different happening in 3rd grade than in 4th grade than in 5th grade."
The observation is borne out by a 1985 evaluation of the Milwaukee school-effectiveness program, known as Project RISE. It found that over a period of seven years, RISE schools showed dramatic gains in reading and mathematics test scores. But in terms of test-score changes for individual RISE schools during one year--1982-83 to 1983-84--results were much less clearcut.
Individual schools showed sizable gains in reading or mathematics for some grades but sizable losses for other grades. Moreover, although some of the RISE schools exceeded citywide levels in reading or mathematics for grades 2 or 5, none of the schools exceeded citywide levels in both subjects and in both grades.
Parental participation is also a controversial factor in the effective-schools research. The initial research found little evidence that parent involvement contributed to school effectiveness. And, according to the Institute for Responsive Education's A Citizen's Notebook for Effective Schools, observations to date support the notion that "parents and citizens are not very much involved in most school-effectiveness projects."
But the Boston group's notebook also says that empirical evidence exists to show that such parental involvement makes a difference. The authors caution, however, that efforts to add parental participation to the list of effective-schools characteristics "no matter how intuitively plausible or politically attractive" would only weaken the research base.
Sometimes, the impetus for effective-schools programs has come directly from parents. Designs for Change, a nonprofit research and children's advocacy group in Chicago, last year began "Schoolwatch," a project specifically to train parents and others to assess schools based on the characteristics of school effectiveness and to work with principals and teachers toward change.
According to project officials, a number of positive changes in individual schools have been brought about largely through parent participation. Designs for Change has now published a book, All Our Kids Can Learn To Read, to help parents focus specifically on improving reading instruction in schools using effective-schools strategies.
But such researchers as Michigan State's Mr. Brookover still maintain that effective-schools studies have failed to verify that, in general, parent involvement makes a difference. Programs that include it as an effective-schools characteristic do so primarily for "political reasons," Mr. Brookover says.
How broadly the effective-schools findings can be applied is another area of dispute among researchers. The initial research focused on inner-city elementary schools, relying on statistical "outliers"--schools that were extremely effective or ineffective--and ignoring the vast majority of schools in between.
Whether or not effective-schools characteristics can be equally useful to "average" schools is not clear, say most investigators. Nor, they say, is it clear how findings from urban elementary schools translate to suburban and rural schools, middle schools, and high schools.
Connecticut's Mr. Gauthier notes that high schools are far more complex institutions than elementary schools. Not only are they bigger and more bureaucratic, he says, but their mission is harder to define. There is an assumption that students have already mastered the basic skills when they arrive.
Teachers in high schools often view themselves as subject-matter experts, he adds, and are less likely to modify instruction to suit identified needs. Staff also may be more divided along departmental lines, making it harder to develop a sense of trust and cooperation among them or a feeling of responsibility for the total school.
Basic organizational structures that draw sharp lines between college-bound and noncollege-bound students may have to be changed, he says, before high schools can even begin to become "effective."
"The effective-schools research, as difficult as it is to translate generally speaking," agreed Mr. D'Amico of RBS, "becomes immensely difficult in high schools."
Hindering better analysis of such problems, says the Center for Policy Research's Mr. Miles, is the fact that few of the existing effective-schools projects give careful attention to documenting and evaluating their own strengths and weaknesses.
Says the Connecticut education department's Ms. Shoemaker: "Over the last few years, people have been working very hard to put effective-schools characteristics in place. Now it's time that we began to monitor these programs a little more closely. The evaluation designs are not in place and people are just beginning to be concerned about that."
Those programs that have included careful evaluations have raised some serious questions about the nature of effective schools. Connecticut's evaluation, for example, noted that "[t]here is harmony in some schools where everything improves--achievement, the presence of the characteristics, and the internal capacity for self-renewal among the staff. But there is dissonance in other schools--achievement may go up for one group and down for another. The characteristics which improve may not be the ones which were given the most attention."
Such evaluations also raise questions about the longevity of effective-schools efforts.
A 1983-84 evaluation of New York City's School Improvement Project, for example, found that almost all of the schools that were in their fourth or fifth year of the project had institutionalized the program, making it part of their regular school structure. All but one of the third-year schools, however, had failed to institutionalize their programs because of inadequate support from their principals.
The study concluded that the major problem to be resolved is how to maintain the programs once intensive central-office support--in the form of liaison staff, resources, and technical assistance--is withdrawn.
In a federally funded 1983 survey of effective schools, Mr. Miles and his colleagues noted that "it often appeared that programs were tacitly expected to last for a school year or so, after which the school would direct its attention elsewhere." In only a handful of the programs, they found, did officials explicitly state their intention to institutionalize the process or build in continuing self-change and self-monitoring capacities in schools.
The study concluded that, in general, programs were too new to know what their long-term staying power would be.
"We need more research, we need better research, but we also need better practice," argues Dale Mann, professor of education at Teachers College, Columbia University.
"If you're satisfied with the current school system for children at risk," he says, "then you can wait for professors to get their act together; you can wait for research. But if you think there are important gains to be had, you will use what research is available to make improvements now."
Vol. 05, Issue 18