A Response to Our Critics
Over the past few months, the research reported in our recent book, Politics, Markets, and America's Schools, has been sharply criticized. Indeed, two prominent critics, Albert Shanker and Bella Rosenberg, both of the American Federation of Teachers, have all but dismissed our research as worthless. It is so flawed methodologically and weak statistically that it "proves" nothing, they write. It certainly cannot support our recommendation that public education be restructured around school autonomy and parental choice.
These criticisms, echoed by others in The Public Interest, The Atlantic, Education Week, and elsewhere, are based on an unpublished paper by the University of Wisconsin political scientist John Witte that evaluates a portion of our book's statistical analysis. (See Education Week, Nov. 14, 1990.) The paper is a very careful though seriously flawed attempt to examine many of the technical issues that we confronted in analyzing a very complicated set of data, including 400 schools and over 20,000 students, teachers, and principals. We welcome such scrutiny; it is a healthy and productive part of the process by which knowledge develops.
We are deeply troubled, however, by the way in which the issues raised by John Witte have been interpreted and sometimes exploited by critics of our work. Some critics are plainly looking to damage the credibility of our book by whatever means possible. When the book first appeared, critics charged that it promoted schools of astrology and Satanism and that it encouraged schools to offer parents free trips to Disney World. Now our critics have turned to the facts--which is encouraging. But they show the same disregard for getting the facts straight. For those who are interested in rational debate, we would like to offer a few key facts about the research in Politics, Markets, and America's Schools.
The first and most important fact is that our conclusions and recommendations are not primarily based--as Mr. Witte and his interpreters imply--on the already widely analyzed test scores from the federal longitudinal study High School and Beyond. Most of the data employed in our book are from the Administrator and Teacher Survey, or ATS, which we helped design. These data have never before been reported in any comprehensive fashion; they supplement the anemic High School and Beyond data with detailed information, from over 10,000 teachers and principals, on how schools actually operate. Our analysis of the integrated ATS-HSB data set considers some 200 different variables and includes three distinct but logically connected parts.
Only the first part of our analysis looks at test scores. It shows that schools are one of the three most important influences on student academic achievement--along with the initial ability and family background of the student--and that school organization is the major source of school influence. The second part of the analysis looks at school organization. It shows that among the many factors that influence the quality of school organization, the most powerful is bureaucracy: The more autonomous the school--from rules and regulations constraining vital personnel and policy decisions--the more effective the school organization.
The third and most crucial part of our analysis looks at bureaucracy. It shows that while bureaucracy is caused by many things--for example, poor student performance and weak parent involvement--the primary cause appears to be politics. The institutions that control public schools, directly and from the top down, are driven by the pressures of politics to create policies and programs that, despite the best intentions of everyone involved, tend to burden schools with excessive bureaucracy and, in turn, to undermine their organizational effectiveness. We document this tendency in statistically controlled comparisons of bureaucracy and organization in schools that are subject to direct political control--public schools--and schools that are not subject to political control but are controlled indirectly through markets, namely private schools.
Our most important and controversial conclusions hinge on the third part of our analysis and on the theory that supports it. It is this portion of our book that develops the logic of political and market control and that explores the empirical consequences of these methods of control for school autonomy and organization. It is also this portion of our book that leads us to recommend a restructuring of public education around principles of market control.
Yet even our severest critics have not raised a single question about this portion of our analysis. No one has suggested that politics and markets actually have similar consequences for school autonomy--that with different measurement techniques, statistical controls, or sampling methods we would have found that the public and private sectors do not have highly disparate propensities for bureaucracy. Our critics have also had little or no quarrel with our analysis of bureaucracy itself. Mr. Witte pays this section of our work a small amount of attention, but none of his promoters have suggested that we might even be wrong. No one has argued that a different analysis of the facts would reveal that bureaucracy actually has no effect on the quality of school organization. To the contrary, many of our critics--including Mr. Shanker, most prominently--agree that bureaucracy is a serious problem.
On what grounds, then, are the critics attempting to dismiss our entire analysis? The truth is, on the grounds that the first--and only the first--part of our analysis is methodologically weak. We claim in this part of the analysis that effective school organization is vital for successful student achievement. Our critics argue that our evidence for this claim is utterly unconvincing.
But what exactly are our critics saying? Are they saying that they do not believe that school organization is academically important? Or are they saying that while school organization is important, our particular analysis of it is not compelling? Either way, they are in trouble.
If they believe that school organization is generally of little importance, they are specifically denying the value of rigorous and focused school goals, strong educational leadership by principals, true professionalism among teachers, and ambitious academic programs for all students. These, after all, are the key components of our measure of effective school organization. These are what we find to be the main ingredients of academically successful schools. Are the leaders of the American Federation of Teachers arguing, then, that teacher professionalism is unimportant? We doubt it, but this is what their criticism implies.
Do they and other critics reject the large body of research on effective schools that the first part of our analysis simply reaffirms? If they do--and this is what their criticism plainly implies--it is they, not we, who have taken the controversial position.
Perhaps our critics do not really disagree that effective organization promotes student achievement, but merely dispute our demonstration of this. If so, they can hardly dismiss our entire book by discrediting its particular analysis of organization and achievement. If effective school organization is in fact important--whether we have demonstrated it or not--critics of our final conclusions must address the rest of our analysis if they are to have a case. They must show that bureaucracy is not a substantial impediment to effective school organization and that the control of schools through politics is no more conducive to bureaucracy than the control of schools through markets. Our critics have not even attempted to do either.
But what about our analysis of student achievement? Is it in fact as weak as our critics claim it to be? Let us consider the major charges that have been lodged against it.
The first is that our sample of schools is biased, that it overrepresents non-Catholic and elite private schools. This charge is simply groundless. All of the analyses reported in our book are conducted on weighted data sets, which are representative of high schools open in the United States from 1980 to 1984 and of the students attending them. In the weighted sample, the elite schools, which we are accused of grossly overrepresenting in our analysis, are virtually nonexistent.
One additional fact about the representation of private schools in our study: Although we include private schools and students in each part of our analysis in proportion to their actual numbers in the population, we repeat the first two parts of our analysis--which do not involve public-private comparisons--on a representative sample composed exclusively of public schools. We wanted to determine whether the presence of private schools in the sample had somehow distorted what we found about the determinants of student achievement and school organization.
As we report in an appendix of our book, however, excluding private schools from our sample has virtually no effect on our results. Regardless of the sample, school organization is important for student achievement, and autonomy from bureaucracy is important for effective school organization.
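The effect of weighting can be seen in a small illustrative calculation. This is our own sketch with made-up numbers--the school types, shares, and scores are hypothetical, not from the study: when elite schools make up 1 percent of the population but 10 percent of a raw sample, weighting each school by its population share divided by its sample share restores the population proportions.

```python
# Illustrative only: how sampling weights undo overrepresentation.
# Suppose elite private schools are 1% of the population but 10% of
# the raw sample. Weight each school by (population share / sample
# share) so weighted statistics match the population. All numbers
# here are hypothetical.
schools = [("elite", 72.0)] * 10 + [("other", 60.0)] * 90  # (type, mean score)
weights = {"elite": 0.01 / 0.10, "other": 0.99 / 0.90}

# Unweighted mean is pulled up by the overrepresented elite schools.
unweighted = sum(score for _, score in schools) / len(schools)

# Weighted mean recovers the population mean (0.01*72 + 0.99*60).
wsum = sum(weights[t] for t, _ in schools)
weighted = sum(weights[t] * score for t, score in schools) / wsum

print(f"unweighted mean: {unweighted:.2f}")  # inflated by elite schools
print(f"weighted mean:   {weighted:.2f}")    # matches population mean
```

In a weighted sample of this kind, a school type that is rare in the population contributes almost nothing to the estimates, however many such schools appear in the raw sample.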
Critics have also charged that our analysis is meaningless because it explains so little of the variation in student achievement-gain scores. This would be a valid criticism if the variation in gain scores that we do not explain were systematic or non-random--if we were failing to explain some of the variation in gain scores because we were omitting important explanatory variables from the analysis.
We do not believe this is the case, however. Our analysis of student-gain scores includes reliable and comprehensive measures of the factors that research has most often linked to student achievement: the initial ability of the student, the family background of the student, the socioeconomic composition of the student body, the economic resources of the school, and the organization of the school (which in our study includes not only the intangible qualities stressed in effective-schools research but concrete factors such as homework and coursework). If our analysis omits any variable that is likely to account for the variation in achievement-gain scores, we are not aware of it, and none of our critics have suggested what it might be.
Until we see evidence to the contrary, we are convinced that the unexplained variation in the High School and Beyond student-gain scores is random. The HSB tests we use in our analysis, though individually quite reliable, are rather short, including only 116 questions in all. The average sophomore answered 60 of these questions correctly; hence, the average senior had only 56 remaining questions to reflect all that he or she had learned during the final two years of high school. The average senior cannot be expected to answer all or even most of these remaining questions correctly, moreover, because the High School and Beyond tests must also discriminate among the achievement levels of all seniors. Only the very best seniors can be expected to score close to 116; average students must score much lower.
Unfortunately, the academic progress that most students make in high school must therefore be gauged with a very small number of items--a procedure that is subject to considerable error. Fortunately, this error does not appear to be systematic--overestimates and underestimates of academic progress are equally likely--and statistical analyses of these gain scores should not be biased. Such analyses will not, however, explain much of the variation in gain scores, because that variation is random.
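The statistical logic here--that random measurement error depresses explained variance without biasing the estimates--can be illustrated with a small simulation. This is our own sketch, not an analysis from the book: the variable names, effect size, and noise level are invented for illustration.

```python
import random
import statistics

# Illustrative simulation (hypothetical numbers): a short test adds
# large random error to each student's measured gain. A regression on
# the true explanatory factor still recovers the factor's effect
# without bias, yet explains only a small share of the variation.
random.seed(0)
N = 5000

org = [random.gauss(0, 1) for _ in range(N)]             # "school organization" score
true_gain = [2.0 * x + random.gauss(0, 1) for x in org]  # true effect = 2.0
noise = [random.gauss(0, 6) for _ in range(N)]           # random short-test error
observed = [g + e for g, e in zip(true_gain, noise)]     # gain score we actually see

# Ordinary least squares of observed gain on organization.
mx = statistics.fmean(org)
my = statistics.fmean(observed)
sxx = sum((x - mx) ** 2 for x in org)
sxy = sum((x - mx) * (y - my) for x, y in zip(org, observed))
slope = sxy / sxx

ss_tot = sum((y - my) ** 2 for y in observed)
ss_res = sum((y - (my + slope * (x - mx))) ** 2 for x, y in zip(org, observed))
r_squared = 1 - ss_res / ss_tot

print(f"estimated effect: {slope:.2f}")  # close to the true 2.0
print(f"R-squared: {r_squared:.2f}")     # small, because the noise is random
```

The estimated effect sits near its true value even though the regression explains only a modest fraction of the variance--which is precisely the situation a low R-squared with purely random unexplained variation describes.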
How important, then, is school organization for student achievement? Our estimates show that it is roughly as important as the initial ability and the family background of the student. Since virtually everyone--except perhaps our critics--would acknowledge that the preparation and background of the student are major influences on learning, we think that any factor that exerts an influence on achievement comparable in magnitude to these two widely recognized influences ought to be considered important itself. This is the case with school organization.
Let us be clear about one additional point. We are not claiming, as some critics charge, that our data analysis "proves" our case for educational choice. We stand behind our analysis, but we recognize that nothing is ever proven in social science and that the facts may ultimately show us to be wrong. We therefore welcome debate about the facts and our analysis of them. If recent criticism of our research is any indication, however, there is more interest in terminating debate about many of the fundamentals of American schooling than in engaging in such a debate rationally.
Vol. 10, Issue 22, Pages 24, 26