Published Online: January 27, 1999
Published in Print: January 27, 1999, as Letters

Letters


Drop the 'Campaign' on Class Size

To the Editor:

Education Week has been widely quoted in my state recently, giving grades (A through F) for such things as the level of state spending on education and whether there have been reductions in average class size (Quality Counts '99, Jan. 11, 1999).

The assumption you have successfully planted in the newspapers and broadcast news programs is that more spending and smaller classes deserve higher grades. Congratulations on a continuing successful and unchallenged propaganda campaign for the goals of teachers' unions.

Perhaps, in the interest of actual education, rather than a political agenda, you would direct your news consumers to any hard evidence that such assumptions should be universally accepted.

Ralph Bristol
Spartanburg, S.C.

On Teacher Exam: More Frustration

To the Editor:

Whitney Sterling's "What Is the Massachusetts Teacher Exam Really Testing?," Dec. 9, 1998, was an amazing Commentary. He presented the experience excellently. I also took the test on Oct. 3 of last year, and my experience--in West Springfield, Mass.--was much the same as Mr. Sterling's. I would like to share some of it with your readers.

In West Springfield, we started on time, but we were not allowed to enter the testing area until five minutes before 8 a.m. We had to walk through an auditorium, out the back door, up a flight of stairs, down a hall, up another flight of stairs, and then finally down the testing hallway. Throughout these corridors, staff members were stationed to make sure we did not stray from our paths.

Once we entered the classroom where the test was to be administered, we were not allowed to leave until after the test began. So if our nerves were bothering us, or we had to go to the bathroom and hadn't had the chance before entering the room, we had to wait until almost 8:45 a.m., when the instructions were finished. Then, we could use our own allotted test time for a bathroom visit.

I happen to be a fairly tall person and my desk was very small. I had to sit with my legs outside the desk, since they did not fit under it comfortably. (And mine was one of the biggest desks in the room.) Another tall gentleman was able to exchange desks. I don't think it helped.

Though the odds of knowing someone in my testing area were fairly slim, I found a compatriot and was confronted for breaking the instructions against talking--even though no test was under way during our conversation and we were merely enduring 20 minutes of anxious silence while waiting for latecomers.

I am currently employed in a private school and have been for about nine years. I did not have to take the test or become certified for any job requirement. I simply wanted to look into getting certified. My anxiety was low in comparison to others'. The anxious energy in the room was palpable. Some people came in with handfuls of pencils all sharpened and prepared. I myself came in with three.

In the afternoon, the subject-matter test was a disappointment. I took the mathematics subject test. Imagine being in the first test for four hours, taking a 30-minute break, and then trying to pass an intense subject test. Worse yet, try writing a 250-word essay on a mathematical problem. How does one count a variable in an essay? Is it a word or not? Ponder that one.

Mr. Sterling recalls counting his words for each of the essays. I did not. I have no idea if I made the count. I was merely answering the questions or making my points and that was that. I refused to allow anxiety to overtake me.

Mr. Sterling also says he paid only $70 for registration. I paid $200, a late registration fee that included the subject test. Was it worth it? No.

In answer to the question Mr. Sterling poses in the title of his essay, I believe the Massachusetts teachers' exam is a test of what not to do to students. Many people have asked me about the tests. I always say the same thing: Taking them was the most uneducational experience of my life.

We in Massachusetts need to find a way of making sense out of this process. I wonder whether Gov. Paul Cellucci or the rest of the state's education commission might be required to take the test, with something at risk should they do poorly. The anxiety component is absolutely crucial to understanding the test-taking experience.

Thanks to Mr. Sterling for so elegantly telling the story of his experience.

He was not alone.

Gregory Steinbach
Director of Recruitment
DeSisto School
Stockbridge, Mass.

No End to Literacy Debate: Coles Responds to Foorman

To the Editor:

Because of space limitations, my response to criticisms of my Commentary, "No End to the Reading Wars," Dec. 2, 1998, will be confined to a discussion of the chief literacy issues in Barbara Foorman's letter ("Letters," Jan. 13, 1999).

Ms. Foorman states that I offer no scholarly citations or evidence in my Commentary. Her description is right, but I assumed that since the Commentary was on the debate over literacy and the author's tag line noted my recently published book on the subject (Reading Lessons: The Debate Over Literacy, Hill & Wang, 1998), all readers would readily infer that the Commentary was drawn from the book and that fuller arguments and supportive documentation would be found there. Given the countless newspaper, journal, and magazine opinion pieces similar to mine in form and based on newly published books, this was not an unreasonable assumption. However, I can see now that for some, the connection should have been more explicit.

The remainder of this letter will include citations.

Ms. Foorman implies that I misrepresent her views: "[H]e suggests that I advocate for something he terms 'direct code.'..." Her complaint is puzzling because her publications and research include considerable study of "direct code" instruction and findings of its superiority over other forms of instruction. For example, in her report on her well-publicized "Houston study," one of the three "classroom reading programs" she and her colleagues studied is labeled "direct code," and they report superior reading improvement in "children receiving direct code" over those receiving "implicit code," a term supposedly representing whole-language instruction and its "implicit" teaching of skills (Journal of Educational Psychology, 1998; also, Preventing Reading Difficulties in Young Children, National Research Council). Elsewhere, Ms. Foorman writes about "the importance of explicit instruction in the alphabetic code ... if reading failure is to be avoided" (Learning Disabilities, 1997, Vol. 8). Is she claiming that her repeated conclusion about the superiority of this kind of instruction is merely scientific reporting, not advocacy?

Ms. Foorman's focus on this teaching approach is pertinent to her claims to support "balanced" reading instruction. I never disputed that this is indeed her claim; rather, a key point in my Commentary concerned the contrast between such a claim and its definition: The definition of "balanced" that she shares with other advocates of the direct, explicit instruction of skills in beginning reading is a stepwise progression of literacy learning in which the first stage consists of an extensive concentration on these skills. The importance of children's literature in beginning reading is always avowed, but the "balance" that overwhelmingly informs her publications and those of other advocates of similar "balance" is evident in her co-authored statement: "[A] child's level of phonemic awareness on entering school is widely held to be the strongest single determinant of the success that she or he will experience in learning to read--or conversely, the likelihood that she or he will fail" (American Educator, Spring/Summer 1998). This purported "single determinant" is at the forefront of the direction in which "balance" is skewed.

Ms. Foorman quotes my favorable remarks (from my 1987 book, The Learning Mystique) on the effects of phonemic-awareness training as though these comments undercut my current criticism of such training, instead of being proof that I had no agenda and came to this research with an open mind. I propose that my change in views demonstrates the need to be extremely cautious when writing supportively about early, tentative research claims.

My more recent examination of an additional dozen years of research on these training programs has led me to conclude that the initial claims about causal deficiencies and training effects have not held up (see my chapter "Alphabet Sounds and Learning to Read" in Reading Lessons). Even G. Reid Lyon, the head of the division of the National Institute of Child Health and Human Development that funds much of this research, has acknowledged the failure of the better-designed, more recent training studies to produce later advantages in reading outcomes: "In several [NICHD-funded] recent reading intervention studies, differential improvements in the development of phonological awareness and nonsense-word reading have occurred without similarly different automatic transfer to gains in textual reading accuracy and fluency" (Journal of Learning Disabilities, 1997).

On the whole, the phonological-awareness research repeatedly describes correlations between phonemic skills and reading achievement, but does not substantiate a causal connection between the two, and it rarely delves into the question of the preschool life and language experiences that are themselves causal to the claimed "causal" phonemic skills. (See the aforementioned chapter for a review of the research.)

An example of the failure of the research to support claims of the "accumulation and overwhelming convergence of evidence" is in the widely publicized "Houston study" that Ms. Foorman headed. After I obtained the original data and analyzed it more closely than was done for the published report on the study (Journal of Educational Psychology, March 1998), I found, for instance, that the purported superior reading comprehension shown in the overall averaged score for the "direct code" classes that had used the Open Court reading program was due to the inordinately high scores of a group of children in a single classroom.

Conversely, the relatively lower overall averaged score for the so-called "implicit code," whole-language classes was due to the inordinately low scores in a single classroom. These single classrooms skewed the overall average group score--and this overall averaging produced the modest statistically significant difference. Why these two sets of classroom scores differed so much from the others is not clear; perhaps they were a consequence of in-school tracking. Nor is it apparent why the researchers did not call attention to these relatively anomalous test scores.

Nonetheless, if we look at the six schools in which the two kinds of instruction were located, we find that except for the two extreme classes, the children in either "implicit code" or "direct code" classrooms had fairly comparable scores, most of which were below average with respect to the test norms. Thus, not only does the research fail to demonstrate the instructional superiority of "direct code [Open Court]," but the generally poor reading scores across the schools lend support to my view that any serious attempt at addressing the educational problems of poor children must be tied to comprehensive social and educational measures that include but extend beyond the classroom. (My re-evaluation of this study will be discussed at greater length in my forthcoming book, Misreading Reading: How Bad Science Can Hurt Children's Learning, Heinemann.)

These are some of the substantive issues. Personal attacks on me run through Ms. Foorman's letter (I am described as "parasitic," my ideas are said to be marked by "irrationality," et cetera). Here, I will leave Education Week readers to infer what these remarks suggest about their author and to consider the extent to which these comments advance the current debate over literacy.

Gerald Coles
Ithaca, N.Y.

Brain Surgeons and Podiatrists: On Cross-Domain Research Comparisons

To the Editor:

Diane Ravitch shares her proper appreciation for the good medical research and practice applied to her recent illness and praises it for its rigor and accuracy ("What if Research Really Mattered?," Dec. 16, 1998). She does this by contrasting her perceptions of medical research with her perceptions of educational research, a historian's use of analogy to make a point, something my history professors warned me about.

I can remember from my undergraduate years that analogies are helpful in analyzing arguments, but they are not logical proofs, and they certainly play loosely with rules of evidence. Following my history professors' good teachings, as a high school teacher I almost systematically asked any student who argued by analogy to show how the analogy was like or unlike the issue at hand and to consider the extent to which it helped or detracted from our understanding. Many times an analogy, or part of it, helps clarify matters, but it cannot stand as a criterion for deciding the character or worth of an argument.

Ms. Ravitch's analogy is interesting and perhaps a good debating piece for an educational-research or philosophy class. Here, I think she really had some other purpose in mind, and I must react.

She states that she was thankful for the fact that her medical treatment was based on medical, not educational, research. Well, I should certainly hope so! I would be disturbed as well if I were being medically treated on the basis of research from some unrelated field of study.

She praises medical research for its certainty, but there may be more of a problem here than she is aware of or cares to admit. Like Ms. Ravitch, I wish to offer my own experience with medical certainty, an experience neither unique nor unusual. A few years ago, I was under medical treatment for arthritis, which had become particularly painful and debilitating. My doctor, whose competence I did not question, prescribed significant doses of nonsteroidal, aspirin-like drugs, drugs systematically tested and sold with the full approval of the Food and Drug Administration. What doubts could I possibly have about certainty? The drugs helped me feel better; I resumed my tennis, golf, and skiing activities and generally felt as if a problem had been conquered.

On the day of my college of education convocation, in full academic regalia, I practically collapsed on the floor of the gymnasium and ended up in the hospital with severe internal bleeding--bleeding induced by these very certain and medically valid anti-inflammatory drugs. A month later, I entered the hospital once again for more transfusions, upper- and lower-gastrointestinal investigations, et cetera. Since that time, I have read in the Wall Street Journal that several thousand people die each year from taking these nonsteroidal anti-inflammatory drugs. Apparently, the drug companies never anticipated ulcers as an "iatrogenic" outcome.

I have since relied on some very nonmedical solutions, such as daily stretching and exercise, and I now have little pain, no bleeding, and play tennis and golf as much as I can. As a matter of interest, I have followed the arthritis issue and note that new, non-ulcer-producing drugs are now becoming available. But interestingly, while in China last summer, I obtained a medical diagnosis suggesting strongly that acupuncture could also help. I haven't followed this path, but here we have a treatment thousands of years old, followed by many in Western countries, a treatment for which there is no scientific explanation, at least within the framework of Western inquiry. My HMO now accepts the validity of such treatment and provides reimbursement, should my Western-trained doctor so recommend. Much the same seems to be true for chiropractic, another mysterious but apparently effective form of treatment for many problems.

Of course, Ms. Ravitch's Commentary really isn't about medical research; it is really about educational research. In one paragraph, she suggests that educational research is hampered by the bandwagon effect, in which the latest hot issue attracts attention. I could be off base on this, but I believe medical professionals do much the same. Not all medical specializations receive equal emphasis; the rewards for pursuing "this" can be substantially greater than rewards for pursuing "that." I play tennis with a medical researcher, and he reads federal and foundation research-funding protocols just as religiously as many of my colleagues in education and eagerly follows the money trail wherever it might lead. Our university research reports regularly tell us about the great successes of the medical school in garnering research funds.

Ms. Ravitch suggests there was strong consensus among her doctors regarding treatment, whereas educational researchers might debate endlessly about the problem at hand. Perhaps her case made debate unnecessary, a case with clear symptoms and well-honed protocols for treatment. This generalization misses the fact that many illnesses puzzle doctors, that many doctors "experiment" on their patients because they have no specific remedies at hand. My father's doctor quite explicitly experimented with my father as he sought to prolong his life, rather successfully, as it turned out.

Ms. Ravitch assumes that medicine, as opposed to education, consists of "men and women who have a common vocabulary, a common body of knowledge, a shared set of criteria, and clear standards for recognizing and treating illnesses." By implication, she suggests that educators do not have these things. By making this assertion, she ignores the immense specialization in medicine and education in which common vocabularies and bodies of knowledge barely exist. In the same sense that an elementary teacher may find dialogue with a high school mathematics teacher difficult, we can probably assume limited discourse between brain surgeons and podiatrists.

All professional realms specialize and in so doing develop specific languages and bodies of knowledge. Research paradigms follow and become highly specialized and, occasionally, very politicized. I suspect Ms. Ravitch was treated by a group of highly trained specialists, just as we would hope for in any area of need, be it medicine, law, education, or car repair. I can assure her that in education we can provide highly specialized researchers of appropriate skill and expertise.

Much of medical research, with its incredibly sophisticated technological base and massive political and economic support systems, attempts to determine causality as it relates to observable symptoms. In many medical sectors, causality can be determined and influenced by specific treatments, such as drug therapy, surgery, and, increasingly, gene therapy. But even medical knowledge has its black holes. In a vast array of situations, medical knowledge is such that only symptoms can be treated, and patients must live with the pathologies at hand.

Education, with its comparatively limited technological support system and resource base, must contend with symptoms for which causes remain primarily inferential and for which the larger educational system may not be particularly receptive to altered practice. If the educational profession had the equivalent of multinational drug companies and the political and economic autonomy of the medical community, its research might seem quite different.

I do not wish to make a case for educational research, nor do I wish to argue the speciousness of some medical research. What I do want to suggest is that research in all professional fields is complex and beset by confounding circumstances and conditions that make comparisons across fields virtually impossible. Invariably, some research will be better than other research, no matter what the field. The possibility that the research will lead to contradictory recommendations for practice, which seems to be Ms. Ravitch's chief concern with educational research, may suggest something far deeper and more significant about the domain under study than the quality of the research itself.

William A. Kline
Professor and Director
Division of Language, Literacy, and Sociocultural Studies
College of Education
University of New Mexico
Albuquerque, N.M.

Vol. 18, Issue 20, Pages 55-57

