On Adding Religion To the Curriculum
To the Editor:
Taking Religion Seriously Across the Curriculum, by Warren A. Nord and Charles C. Haynes, has various shortcomings not mentioned in your recent article "Public Schools Should Treat Religion More Seriously, Book Argues," Sept. 30, 1998.
First, the authors fail to define what they mean by religion when, in the modern world, it is, as they write, "one aspect of life among many." Their three-page "What is Religion?" section is not helpful. After quoting various contradictory definitions, the authors say, "We will not attempt any further effort at defining religion here--other than to suggest three generalizations about these major world religions that will be relevant to our discussion." Their "generalizations" are that religion is important, that it "can't be compartmentalized," and that each major world religion "discerns a richer reality than does modern science."
The religions relevant to their discussion, they say, are Judaism, Christianity, Islam, Hinduism, Buddhism, and Taoism. However, Messrs. Nord and Haynes mostly discuss Bible usage in the classroom. There are whole sections devoted to it. They write about "The Bible as History," "The Bible as Literature," "Bible Courses," and "Moral Education and the Bible," and there is a chapter on "The Bible and World Religions." Other faiths are mentioned. "Buddhism" has two single-page notations in their index. The Bhagavad-Gita--the only Hindu holy book mentioned--has one, and the Koran isn't even listed.
Taking Religion Seriously Across the Curriculum is sprinkled with anti-bias platitudes such as "Public schools must be religiously neutral--neutral among religions, and neutral between religion and nonreligion." However, people generally want their own faith taken seriously--not religion in general, and especially not opposing belief systems. Judging by their book, Messrs. Nord and Haynes identify with Christianity.
The Teachers' Press
To the Editor:
Warren A. Nord and Charles C. Haynes are right that public schools could do more to teach appropriately about religion. However, their new book, Taking Religion Seriously Across the Curriculum, does not come close to proposing ways to guarantee that such instruction will be fair, balanced, objective, accurate, and comprehensive.
The main reason that religion is not taught adequately is that there is no broad agreement on precisely what to teach, at what grade levels, and how much. There is no "new consensus" pressing to have more about religion in the curriculum, as Messrs. Nord and Haynes assert, only a theoretical agreement by a dozen or so organizations that such instruction might be a good idea. There is no public clamor for it.
Messrs. Nord and Haynes claim that public school religious neutrality is really hostility. Not so. It reflects caution and a justifiable fear of likely problems.
The difficulty of teaching properly about religion is highlighted by a book of which Mr. Haynes is one of the authors, Living With Our Deepest Differences: Religious Liberty in a Pluralistic Society. As a former social studies teacher, I would not consider using such a text in a public school classroom.
If Messrs. Nord and Haynes had their way, new material about religion would add substantially to the curriculum in social studies, language arts, science, and elsewhere, but they do not suggest what should be shoved aside to make room for it.
Increasing the amount of teaching about religion in public schools should be done only if and when it can be done right and with adequate safeguards.
Americans for Religious Liberty
Silver Spring, Md.
'Marketing' Dollars Should Go to Reform
To the Editor:
I am following with great interest the development of private scholarship foundations that target poor-performing school districts in low-income areas. Your Sept. 30, 1998, issue reported that, in San Antonio's Edgewood district, officials now expect 580 student/parent defections from the public schools as a result of scholarship offers from the Children's Educational Opportunity Foundation of San Antonio--up from their initial projected loss of 300 to 350 students ("San Antonio Voucher Offer Costs District 580 Students," "News in Brief: A National Roundup," Sept. 30, 1998).
According to your report, "District leaders say they will respond by redoubling their community-outreach and marketing efforts." How telling. It goes to the heart of the problem with poor-performing public schools--a refusal to focus on substantive changes to meet the expectations of parents. Have Edgewood officials learned nothing from watching the empire crumble around them? Have they considered improving student academic achievement, school safety, and accountability so that they actually have something to tout through "community outreach" and marketing?
Maybe if they engage in substantive reform, their product will speak for itself and they can avoid wasting more tax dollars on marketing--if they are willing to accept that parents should determine the value of the product they sell.
When Textbook Glitz Ignores Effectiveness
To the Editor:
The sad fact about the textbook-adoption frenzy ("More Dollars for Textbooks Draws Sellers," Sept. 30, 1998) is not the dollars spent but the general irresponsibility of decisionmakers in failing to adopt curricula that have been tested and shown to be effective. Had California paid attention to the scientific research literature that has existed for over 20 years, it wouldn't have embarrassed itself by adopting the whole-language approach and the materials supporting it.
One of the most consistently supported empirical-research findings is that systematic, explicit phonics instruction is more effective than an embedded-phonics approach, yet school districts persist in ignoring this research and continue to adopt reading series with this key component missing. They cave in to effective marketing techniques and ignore efficacy issues. A Scholastic Inc. reading series was recently adopted by a large (26,000-student) K-8 district in my county, even though Scholastic uses an embedded-phonics approach and most of the district's students are of low socioeconomic status and need explicit phonics instruction.
The focus of this issue should be more on curricular effectiveness and less on cost or marketing issues. But this won't happen until districts pay more attention to the empirical research surrounding curricular effectiveness and less attention to how "pretty" a curriculum is or the perks that are proffered by the publishers.
How Will We Pay for Teacher Time?
To the Editor:
Harold W. Stevenson is absolutely correct in once again asserting the unquestionable need for more out-of-classroom time to be built into the daily schedules of American teachers ("Guarding Teachers' Time," Sept. 16, 1998). Teachers need extra time to acquire new knowledge and skills, collaborate with colleagues, and perfect their lessons. Teachers need time to visit students' homes and meet with parents, not only when their children have done something wrong, but also when they've done things well.
The big question, however, is this: How will the nation pay for this extra time? Would we consider slightly larger classes as a possible trade-off for an hour or two every day devoted to these critical activities, as is common practice in Japan? Should we use alternatively certified professionals for some key subject areas (science, art, music) to free up time for teachers? Another option would be to recruit large numbers of trained volunteers to help with larger classes, which could be combined for an hour under one teacher to free up other teachers' schedules.
Mr. Stevenson is right: Teachers need more time in order to work effectively. Now, let's start the challenging debate about how we are going to pay for it.
Steven H. Goldman
The Ball Foundation
Glen Ellyn, Ill.
Disputed Choice Data on World Wide Web
To the Editor:
In "More Advocate Than Scholar?" (Letters, Sept. 9, 1998), Gerald W. Bracey notes that a 1997 report by Paul E. Peterson, Jay P. Greene, and Jiangtao Du provides clearer evidence that students learned more in Milwaukee's private choice schools than the authors initially reported in 1996.
While Mr. Bracey says the initial finding was methodologically flawed, and implies that the authors intentionally erred when they adjusted the original data, he supplies no evidence whatsoever for his surmise.
The Milwaukee test-score data are available on the World Wide Web. The information needed to adjust scores can be obtained from the test's publisher. Mr. Bracey, or anyone else, can ascertain whether errors were made in adjusting the data. Despite widespread availability of the data, I am aware of no one who has found errors of the kind Mr. Bracey suggests.
George A. Mitchell
The Mitchell Co. Inc.
Parents May Want 'Educentric' Testing
To the Editor:
William G. Spady's Commentary concerning the many problems he has with tests in American schools is representative of the disdain so many educational reformers have for the expressed interests of the most important, but typically most maligned group involved in schooling--namely, parents ("Educentric Testing Undermines America's Future," Oct. 7, 1998).
Call it educentric. Call it a "horror." Call it what you will, but many parents in our nation are indeed very concerned about how well their schools perform, and how on earth are they (or anyone else) supposed to make an informed judgment about this if some measures are not employed? There is no doubt that some measures are better (defined as you wish) than others. I think most parents couldn't care less how educators handle this as long as the test is as fair as possible.
Also, they want the results of whatever test is used to be understandable to them. If they want to examine how their school fares compared to some other, that's really their business, isn't it? If they make unfair comparisons, it is (or should be) up to professionals to help them understand why their comparisons should not be made. But I think too much is being made of this comparison business. Parents want to know how their son or daughter is doing and how the school they send their child to is measuring up. That is hardly unreasonable or educentric.
James H. Quillen Chair of Excellence in Teaching & Learning
East Tennessee State University
Johnson City, Tenn.
Good Motives Aren't Sufficient
To the Editor:
The quality of educational research depends on the methodology employed, not the motives of the investigator. In his recent Commentary on the Cleveland voucher research, Kim K. Metcalf argues that motives are all-important ("Advocacy in the Guise of Science," Sept. 23, 1998). But if good motives are all that is required, then educational utopia would be at hand. The field hardly lacks for people of goodwill; what it desperately needs are scholars well trained in the scientific method.
The analogy with medical research is compelling. When Lewis and Clark, two soldiers of impeccable character, undertook their westward voyage, they made use of the best medical knowledge of their time. But the medical practices of that day were not tested by the rigorous scientific procedures now commonplace. As a result, Stephen Ambrose tells us in his compelling biography of Meriwether Lewis, the two captains bled the soldiers, poisoned them with mercury, and employed other procedures that very likely shortened the lives of their companions.
In the nearly two centuries since the Lewis and Clark expedition, medical understanding has advanced not because researchers were noble--many were greedy, passionate, and of strong conviction--but because the scientific method came to be rigorously employed. Of particular importance in recent decades has been the widespread use of randomized clinical trials, or RCTs, to test the efficacy of medical interventions. In an RCT, patients are randomly assigned to two groups, one of which receives the experimental medical intervention, the other the conventional treatment. Only if significant, positive differences are observed between the two groups can the experimental medical intervention be introduced to a broader population.
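The random-assignment logic described above can be sketched in a few lines. This is an illustrative toy simulation with made-up outcome numbers, not the procedure used in any of the studies discussed:

```python
import random
import statistics

def run_rct(patients, effect, seed=0):
    """Randomly split subjects into treatment and control groups,
    apply a hypothetical treatment effect to the treatment group,
    and return the observed difference in group means."""
    rng = random.Random(seed)
    shuffled = patients[:]
    rng.shuffle(shuffled)  # random assignment is the key step
    half = len(shuffled) // 2
    treatment = [x + effect for x in shuffled[:half]]  # experimental intervention
    control = shuffled[half:]                          # conventional treatment
    return statistics.mean(treatment) - statistics.mean(control)

# Hypothetical baseline outcomes for 100 subjects.
baseline = [50 + (i % 10) for i in range(100)]

# Because assignment is random, the observed difference is an
# unbiased estimate of the true effect (here, 5.0).
diff = run_rct(baseline, effect=5.0)
```

Without the shuffle--that is, if subjects self-select into groups--the difference in means would confound the treatment effect with pre-existing differences between the groups, which is precisely the problem with non-randomized voucher comparisons.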
Randomized experiments in the field of education are noticeably few in number. One notable exception is the Tennessee STAR class-size experiment, which found that smaller classes have a positive effect on student performance in the first year of school.
Unfortunately, the Cleveland school choice program proved not to be the occasion for a randomized experiment, despite the fact that scholarships were awarded by lot. No baseline data were collected, and the initial lottery gave preference to those with particularly low incomes.
As a result, research on the Cleveland scholarship program has been only indicative, not definitive. Our research found gains in reading and mathematics among 150 students in kindergarten through grade 3 attending two choice schools (the HOPE schools) with the largest number of students coming from public schools (about 25 percent of all such students). But, as we pointed out in our report, these results are only suggestive, because no randomly selected control group was available.
Mr. Metcalf and his colleagues at Indiana University subsequently reported that 94 3rd grade choice students attending schools other than the HOPE schools scored no better on reading and math tests than a group of public school students.
If one does not have a research design in which students are randomly assigned to treatment and control groups, it is especially important that the researchers find a closely matched control group and adjust carefully for prior student characteristics. In this regard, the Metcalf et al. study falls well short of contemporary scientific standards. Their evaluation is limited in the following ways:
- The control group does not closely match the test group. Nor is it a cross section of students in the Cleveland public schools. Instead, it consists of classmates of students applying for a tutorial program, a group that attended better-than-average Cleveland public schools.
- Mr. Metcalf's analysis depends heavily upon 2nd grade test-score data collected by the Cleveland public schools at a time when the system was under intense political pressure. Reported student scores were, on average, at or about grade level. When Mr. Metcalf and his colleagues administered tests to these same students one year later, average scores fell to the 40th percentile, an extraordinary decline in test performance. Either the performance of these students fell dramatically in one short year--regardless of whether they attended a public or a choice school--or else the 2nd grade test scores are simply not valid. Mr. Metcalf's findings are dependent on including these dubious scores; when they are removed from the analysis, the voucher students clearly outperform the control group.
- When we added the scores of the HOPE school students to the evaluation and analyzed the data using a conventional ordinary-least-squares analysis, we found positive results of school choice in two subject areas (but no detectable effects in three others)--even when the dubious 2nd grade test scores were included in the analysis. We think it is appropriate to include HOPE school scores, because the maker of the test says that the test may be administered either all at one time or sequentially without producing noticeably different results.
Contrary to Mr. Metcalf's assertion, these results are significant at standard levels of statistical confidence.
The analytical technique employed by the Indiana University team, known as regression on residuals or stepwise least squares, has been mathematically shown to produce biased results and has been discarded by statisticians. However, as Mr. Metcalf correctly points out, his results do not depend upon the use of this problematic technique.
Though the weight of the evidence suggests that the scholarship students in Cleveland are making academic gains in at least some subject areas, the most certain conclusion to be drawn from this research is that we need to conduct randomized experiments in order to collect definitive information about programmatic effects. Fortunately, school choice is being evaluated by means of randomized experiments now under way in New York City, Washington, and Dayton, Ohio.
We lament the fact that Mr. Metcalf tries to defend the scientific quality of his research by attacking the motives and personal integrity of others. When researchers attack the motives of others, it is often a signal that they are short on scientific procedures. That is the case here.
Paul E. Peterson
Program on Educational Policy and Governance
Jay P. Greene
Assistant Professor of Government
University of Texas at Austin
William G. Howell
Department of Political Science
Test Scores and Computer Use: Longitudinal Studies Needed
To the Editor:
The Educational Testing Service concludes in a new study that students who used computers mainly for drill and practice had poorer scores than students who used computers for other purposes, such as simulations and applications ("The Link to Higher Scores," Technology Counts '98, Oct. 1, 1998). However, these results, based on the National Assessment of Educational Progress, may have more to do with which students use computers for drill and practice than with instructional effectiveness. The underlying problem is that students who use computers for drill and practice are not likely to be as good at math as students who use computers for other math purposes.
The ETS recognizes this potential problem, but its adjustments for student background, such as for low income or race, are highly imperfect proxies of student academic needs. That is, many low-income or minority students do quite well in math, and many higher-income or nonminority students do not. If teachers direct computer-based drill-and-practice activities disproportionately to lower achievers, as suggested by educational research, some residual portion of low achievement would remain unadjusted by socioeconomic factors and would bias results.
This problem, termed selection bias, plagues inadequately designed evaluations. For example, it is frequently observed that students in elementary school who spend more time doing homework do less well in school. The conclusion that students should spend less time doing homework or the more common conclusion that time spent on homework doesn't contribute to student success is typically a pure statistical artifact. Better students do their homework more rapidly and thus spend less time at it.
The ETS finding that the pattern of technology use was more strongly associated with achievement for 8th graders compared with 4th graders is also consistent with the possibility of serious selection bias. Eighth graders who are in Title I, and presumably lower achievers, were three times more likely to receive drill-and-practice than simulations and applications. At the 4th grade level, however, there was virtually no difference in rates of Title I participation in either type of technology application and also no difference in test scores associated with each treatment.
Rigorously designed evaluations are needed to validate the potentially important ETS findings about effective classroom use of computers. Such evaluations would compare academic improvements (pre- and post-test scores) of similar performing students who receive different computer applications. These longitudinal designs are not possible using NAEP, which is administered to students only at a single point in time.
This study illustrates the need for investments in sound longitudinal evaluations.
Director of Planning and Evaluation
U.S. Department of Education
Vol. 18, Issue 8, Pages 37-38. Published in print: October 21, 1998, as Letters.