Published Online: May 31, 2000
Published in Print: May 31, 2000, as Letters

Letters


Recertification Plans Retrace Old Mistakes

To the Editor:

Your front-page article "States Stiffening Recertification for Teachers" (May 3, 2000) tells how Wisconsin and other states are increasing the hours and specifying the courses required for recertification. While these policies begin to align professional development more closely with school needs, they still perpetuate an "input" model that focuses on seat time, instead of an "output" model that focuses on changes in practice and improvements in student learning.

For too long, our system of recertification simply asked educators to document hours of "sit and get" in university classes or workshops without ensuring that the content improved teaching and student learning. The lack of focus on results is a fundamental flaw in the system. Teachers would be more enthusiastic about professional development if it were directly linked to the results they seek for their students.

Recertification should not be viewed in isolation, but as part of a program of high-quality professional development that is results-driven, standards-based, and job-embedded.

Just because it is easy to count the number of hours spent in recertification courses does not mean that this is the best strategy for professional development, even filtered through an approval process. Instead, we must find a way to award "credit" for the kinds of professional development that enable teachers to successfully reach all their students with higher standards, even if this learning takes place in unconventional ways, such as study groups or mentoring.

Instead of having teachers take courses to be counted for recertification requirements, we should require those seeking recertification to demonstrate high levels of competence in standards-based teaching. To reach these high standards, teachers could voluntarily participate in professional development that meets their needs and those of their school.

In the meantime, we need to create recertification systems that support teachers in showing the relationship between better professional development and changes in practice that improve results for students.

Stephanie Hirsh
Dallas, Texas

The writer is the associate executive director of the National Staff Development Council in Oxford, Ohio, and a member of the Richardson, Texas, school board.


Balance Self-Esteem and Academic Rigor

To the Editor:

It is disturbing that certain educators find it problematic that schools try to develop healthy self-esteem in students, even if that may be somewhat detrimental to academic rigor ("The Burden of Faulty Attitudes," Commentary, May 10, 2000).

Low self-esteem, or a lack of self-worth, may be what caused the two Columbine High School students, Eric Harris and Dylan Klebold, to go on their shooting rampage in Jefferson County, Colo., last year. Perhaps Columbine is an academically rigorous school, but I doubt that would provide any consolation to the families and friends of those who were killed there.

Matthew Jones
Chandler, Ariz.

On Internships and Aspiring Leaders

To the Editor:

I appreciate the attention Education Week has been giving to the issues of school leadership and leadership preparation in recent months. I am prompted to write in response to "Building on Experience" (May 3, 2000). That article described, and perhaps promoted, the point of view that learning via internships and immersion in schools is preferable to "traditional" administration-preparation programs.

As noted in the article, experimenting with the theory-practice balance has been a continuous process over the years and across the country. I am concerned that your article implied a false dichotomy of radical school-based preparation on the one hand vs. out-of-touch university programs on the other. This fails to recognize—and offer recommendations for overcoming—some very real challenges that universities, schools, school boards, professional associations, legislators, and other policymakers need to engage if we are to seriously improve the quality of administrator preparation.

One challenge is the perception that school administration is a series of skills. Although researchers have developed list after list of behaviors attributed to successful leaders of schools (and of other organizations), teaching these behaviors alone (whether in the university classroom or in the field) is not enough. At its worst, this simply reproduces already unsatisfactory approaches to leadership.

Research on thinking, reflection, caring, trust, and other difficult-to-measure attributes has shown us that the exceptional leaders we all seek for our children's schools are more than a sum of behaviors. Thoughtful redesign of preparation programs, regardless of where those programs are based, includes rehearsal of current best practice, exploration of less common practices, and provision of critical perspectives on what schools and leaders do and should do.

A second challenge is the perception that university preparation programs are static. As is the case with schools, some programs are changing more than others. Certainly, the development of standards for the preparation and certification of school leaders has earned the attention of academic programs whose reason to be is administrator preparation.

Unfortunately, the standards movement has resulted in a plethora of administrator-related standards and reviewing and accrediting organizations. Like many K-12 schools, university programs find themselves repeatedly changing to address the next new set of standards (much to the consternation of students). Constantly changing expectations ignore the real time needed to make changes, and invite the attitude of "It's one thing after another."

Perhaps the most critical challenge, though, is the perception that universities do not want to make a significant investment in internships. As your article acknowledged, full-time internships are difficult for students in most preparation programs. These students are hard-working full-time teachers who cannot engage in full-time internships, due to the demands of their jobs. While a few districts invest in "grow your own" administrator programs (even at the risk of having those whom they prepared go to work for the district down the road), most will not or cannot release full-time teachers from the classroom so that they may "practice" being an administrator.

If educators believe that a major shortcoming of administrator preparation is the lack of meaningful full-time internships, then we need to work together to make something change. Until we do, universities will continue to accommodate reality, rather than require what cannot happen.

Professional associations invested in national standards for administrator preparation should work with Congress and their state affiliates to find fiscal support for full-time internships. State professional associations, universities, legislatures, districts, and school board associations should lobby for state financial support for internships. Philanthropic organizations that purport to be concerned for school leadership should work with state departments of education or collaboratives of professional associations and universities to provide long-term, full-time internship scholarships.

University programs find themselves in the same position as our K-12 system: under attack and struggling to change fast enough. Like the K-12 system, university-based preparation programs do not live in a vacuum; they, too, live in a community. If substantial and lasting changes are to be made, the educational community has to work together, rather than point fingers at one another.

If we are serious about the importance of school leader preparation, we need to work toward a clear set of standards. And we need to get serious about making full-time internships possible.

Diane Ashby
Professor and Chair
Department of Educational
Administration and Foundations
Illinois State University
Normal, Ill.


Demographic Divide, or Freedom of Choice?

To the Editor:

So the experts can't figure out why Anglo students in Safford, Ariz., went to Triumphant Learning Center, and Hispanics to Los Milagros Academy ("Charter Schools: Choice, Diversity May Be At Odds," May 10, 2000)? Strip these experts of their titles.

It is not "troubling" that charter schools may result in "demographic divides," as your article suggests. Society results in demographic divides. Work environments result in demographic divides. Churches and social clubs result in demographic divides. Politics results in demographic divides.

We call this choice. We call it freedom of association. We call it our right. It is not for government schooling to try to change it.

Michael E. Tomlin
Professor of Adult Education
University of Idaho
Boise, Idaho


Reading Panel: A Member Responds to a Critic

To the Editor:

In Stephen Krashen's recent letter about the National Reading Panel report, he writes, "Despite the panel's repeated claims of rigor and completeness, the report contains numerous errors and omissions" ("Reading Report: One Researcher's 'Errors and Omissions,'" Letters, May 10, 2000). However, nowhere in the report—nor in any of our public presentations—has the panel claimed "completeness." The report as published and delivered to Congress is candid about the topics that were not addressed and the inclusion criteria used to identify studies for analysis.

The rigor of the analysis can be judged only by an examination of the research procedures themselves. However, we did not have the luxury of being able to arrange the research evidence to support a predetermined outcome—by disparaging or ignoring studies that contradicted a position and championing those that supported it. There has been far too much selective use of research to sell books, methods, and programs to schools, and that was why Congress asked that this disinterested analysis take place.

More specifically, Mr. Krashen cites the following six "errors":

(1) "Of the 14 studies of silent reading that the NRP said met its criteria, two were actually studies of the effectiveness of the accelerated-reader program, and should not have been included."

The National Reading Panel did not study the effectiveness of silent reading, nor did it study the effectiveness of sustained silent reading, or any other particular method for encouraging reading, per se. The NRP report analyzed any efforts to encourage kids to read more and the impact of these upon achievement. The Accelerated Reader program is such an initiative and, indeed, belongs in this set of studies.

(2) "Eight of the remaining 12 studies of sustained silent reading, SSR, had very short treatments."

Most studies that we examined were brief. That is one of the problems with the literature in this area: There are not a lot of published studies on this issue, and what is there is not of sufficient rigor or quality to justify conclusions as to whether such programs work. That is why the NRP had no findings in this area. We used systematic procedures for identifying studies for this analysis and for ensuring that quality research was the basis of our findings. We could not deviate from this in order to shop for findings that we might have preferred to report.

(3) "Ronald Carver and Robert Liebert's study (Reading Research Quarterly, 1995) should not have been cited as evidence for or against SSR."

As with the first claim, the National Reading Panel did not study SSR, though clearly the majority of studies on encouraging more reading are SSR studies. The Carver and Liebert study is quite appropriate for the analysis that the panel set out to do. It provided students with a large amount of additional self-selected reading time, and this was not time taken away from regular classroom instruction or other reading activities.

(4) "In Judith Langford and Elizabeth Allen's research (Reading Horizons, 1983), the NRP claims that while the SSR group did better, the difference was small in terms of educational importance. Not so. ... The NRP also claims that the researchers did not report the duration of the study. They did."

When it was evident that the quantity and quality of published research did not justify a panel determination of findings on this topic, no attempt was made to conduct a meta-analysis or to thoroughly analyze each study. Instead, each study was briefly characterized in terms of some of the major findings, design features, or strengths or weaknesses. The suggestion that the outcome was of limited educational value was based not on a calculation of an effect size, but on a consideration of the overall quality of research.

In this case, the nature of the measure used (a word reading test with 5th and 6th graders) and the inappropriateness and unreliability of the statistical treatment of the data led us to be cautious in our interpretation of the educational value of the findings. Mr. Krashen is correct that we erred concerning the duration of this treatment. The study clearly indicates that it continued for six months.

(5) "The NRP claims that the advantage shown by readers in JoAnne Burley's study (Negro Educational Review, 1980) was small. ... It is not clear how the panel concluded that the difference was small. ... "

The problem here was not with the statistics, but with the design of the study. Each of the four treatments was offered by a different teacher, and students were not randomly assigned to the groups. It is impossible to unambiguously attribute the treatment differences to the methods.

(6) "In Zephaniah Davis' study (The High School Review, 1988), according to the national panel, sustained silent reading helped medium- level readers but not better readers. This is exactly what one would expect. ... Gains for medium-level readers were quite impressive. ... "

The Davis study examined the reading of 49 students across two classes for an entire school year. It is impossible to tell how many of these students were in the control group and how many in the SSR group. The study found no overall difference for SSR, though it provided none of the statistics for that analysis. It did divide students into low, medium, and high levels of ability by some unreported criterion and did find a difference in one of the three comparisons (this one with only 19 students, roughly half of these in the treatment group, one presumes). Given the very small sample size, Mr. Davis was justifiably cautious in his claims about the generalizability of the findings, more so, apparently, than Mr. Krashen.

Mr. Krashen's letter also lists the following "omissions":

"The NRP report missed a number of important studies. ... Some spectacular omissions include the Fiji study by Warwick Elley and Francis Mangubhai, published in the Reading Research Quarterly (1983), and Mr. Elley's Singapore study, in Language Learning (1991)."

Mr. Krashen is correct that we did not address either of the Elley studies. These studies were identified in the literature search but were omitted because they were of foreign-language and second-language learning. The panel did not attempt to address second-language issues—as the report clearly states. It is quite possible that encouraging students to read more in a language that they are not yet proficient in would have a different impact than having students do this in their native language, but we did not do such an analysis.

Mr. Krashen is correct that there are many studies of SSR with native English-speakers that were not included in the NRP analysis. Most of these studies are unpublished doctoral dissertations that have not gone through a scientific peer review and, therefore, could not be used by us. It is true that many of the omitted studies concluded that sustained silent reading worked, though few actually found higher reading achievement due to the practice. Most of the unpublished studies have found no statistical differences between SSR and what is often labeled as "traditional instruction," such as having students complete random workbook pages, a practice for which there is no research support.

In other words, those unpublished studies have concluded that SSR works no better than poor or unsubstantiated instructional practices. That, however, neither proves nor disproves that SSR—or other approaches for encouraging more reading—works.

"Finally, it is of interest that the National Reading Panel's report devotes only six pages to pleasure reading. In contrast, 66 pages are devoted to phonemic awareness and nearly as many to phonics."

Nothing more should be inferred from the different lengths of these reports than that there were different amounts of research to be summarized. For phonemic awareness, we identified 57 studies, representing nearly 900 pages of original research, but we found only 14 studies of encouraging kids to read, and these took only about 100 pages to report. The differences in the lengths of reports in these two areas illustrate the differences in quality of research design and thoroughness of reporting evident in these two research areas. Under the circumstances, we were able to make determinations of what worked with regard to phonemic-awareness instruction, but we could only note the need for research in the area of encouraging students to read more.

Timothy Shanahan
College of Education
University of Illinois at Chicago
Chicago, Ill.

The writer is a member of the National Reading Panel.

Vol. 19, Issue 38, Pages 38-39
