
Letters to the Editor

April 29, 1998

Slavin Responds to Essay’s ‘Ad Hominem’ Critique

To the Editor:

In an April 8, 1998, Commentary, Herbert J. Walberg and Rebecca C. Greenberg criticize research on our “Success for All” program (“The Diogenes Factor”). They claim that it is biased because it has been carried out primarily by ourselves, the developers. They cite two “independent” evaluations that were less positive toward the program.

The critique by Mr. Walberg and Ms. Greenberg is itself selective in its citations and misleading in the extreme. Success for All is without any doubt the most extensively researched whole-school reform model in existence. Longitudinal studies in 12 school districts involving about 75 Success for All and 75 control schools have been published in the most rigorous journals in education, including three articles in the American Educational Research Journal. Dozens of American Educational Research Association papers and technical reports have been made available to anyone who wants to examine the evidence. Most of this research has been done by researchers who are not at Johns Hopkins University, although it was funded by us; they include Steven Ross and his colleagues at the University of Memphis and Marcie Dianda and her colleagues at WestEd. We arranged to have these evaluations done by external researchers precisely because we wanted objective, independent evidence. A number of school districts have done their own evaluations as well.

The two reports cited that were critical of Success for All are indeed worthy of mention. One, carried out by Gary and Denise Gottfredson, researchers who were at the time our colleagues at Johns Hopkins, followed a single elementary school in Charleston, S.C. The school never implemented the program adequately. The article itself notes that the school did not follow the prescribed buy-in procedure, never implemented the family-support program, and experienced considerable internal turmoil. Then Hurricane Hugo ripped the roof off the school and forced it to close for several months. Even with these disasters, the students did well in some grades on some measures.

The other critical report was a Baltimore evaluation by Richard Venezky. As far as outcome data are concerned, this was not really an independent evaluation, as the paper primarily reported on data we had made public previously (and have always included in our data summaries). Mr. Venezky did criticize the outcomes because the Success for All students were not, on average, at grade level by 5th grade, but he also notes that they were substantially ahead of matched controls. Mr. Walberg and Ms. Greenberg ignore this essential fact.

Beyond the content of their critique, its ad hominem nature is outside the bounds of professional ethics. The authors assume that because we are building a large organization to serve the many schools that want to implement Success for All, we are automatically suspect in our research on the program. This would be a scurrilous charge even if we did profit personally from the program, but since we do not, it is outrageous.

At the end of their Commentary, Mr. Walberg and Ms. Greenberg make their purpose clear. Their intention is not only to discredit Success for All, but to destroy Title I. Title I is in need of reform, but it is essential to thousands of high-poverty schools. Mr. Walberg and Ms. Greenberg attack Success for All precisely because it offers hope to Title I schools.

Success for All is not magic. It does not work in every school, and it does not get every child to grade level. In particular, it does not work if it is not implemented. However, it does work in hundreds of schools, among them some of the most impoverished in the nation, and it does, on average, make a substantial difference in students’ performance. We have been honest and forthright about our findings, and have made them widely available for anyone to examine. I fervently wish there were more third-party evaluations in education, but as Mr. Walberg and Ms. Greenberg know, most educational programs lack even first-party evaluations of the most limited quality. Let’s work on that problem before tearing down one of the few programs that is willing, over and over again, to subject itself to rigorous evaluations in comparison to control groups.

Robert Slavin
Co-Director
Center for Research on the Education of Students Placed At Risk
Baltimore, Md.

Disputing ‘Lenient’ Charge on Grading State Standards

To the Editor:

I take exception to the statement in your front-page article on state rankings that the evaluations done by the Council for Basic Education were the most “lenient” of the evaluations of state standards (“An ‘A’ or a ‘D': State Rankings Differ Widely,” April 15, 1998).

While the American Federation of Teachers did not grade states, 17 states were deemed by the AFT to have standards that are clear and specific. By contrast, when grades were assigned, the Council for Basic Education gave four states an A in math and one state an A in English. The Thomas B. Fordham Foundation report gives three states an A in math and one state an A in English. I do not believe that this makes the CBE more lenient.

In addition, readers need to understand that each organization assigned grades using different criteria and with different objectives. The CBE is passionately committed to the importance of strong standards. Our work in this area is meant to advance efforts to raise expectations for, and performance by, all students, and a thorough examination of the value of standards should be welcomed by all who share that goal.

Christopher T. Cross
President
Council for Basic Education
Washington, D.C.

Remember Those Data Gaps When Criticizing Schools

To the Editor:

Your article “Trends in Urban Achievement Tricky To Prove,” April 1, 1998, states, “Owing to a dearth of testing data that are comparable across districts in different states, establishing that students in America’s cities are in fact performing better is a tricky proposition.”

Please remember that this same “dearth of data” has been used for years to “prove” that urban schools are not performing. You can’t have it both ways.

Linda Leddick
Director of Research, Evaluation, and Assessment
Detroit Public Schools
Detroit, Mich.

Test Numbers: It’s Not OK To Exaggerate With Caution

To the Editor:

My recent Commentary described how outspoken critics of standardized testing miscount the number of tests in order to exaggerate the student test-taking “burden” (“Test-Basher Arithmetic,” March 11, 1998). Test-bashers count up all parts of tests, such as subject-area subtests, and call them “separate tests”; they double-count state tests (at the state level and again at the district level); and so on. They arrive at an estimate of between 140 million and 400 million standardized student tests administered annually. It’s not really a secret that they do this. In my essay, I included a lengthy quote from a test-basher monograph in which the parts-as-wholes counting method was explained.

I also asserted that public television’s “Merrow Report,” among other media sources, was “duped” by the test-bashers’ exaggerated numbers, believing them to represent the actual numbers of complete individual standardized student tests administered annually. With this assertion, I was giving the show’s producer, John D. Tulenko, the benefit of the doubt. He had told me over the telephone that it was “ridiculous” that his estimates for the annual amount of standardized student testing were based on counts of parts of tests rather than complete tests. So, it seemed, he didn’t know, and he made an honest mistake. We all make mistakes.

In an April 8, 1998, letter, however, Mr. Tulenko claims that “we were aware of the counting methods used and opted for the more conservative number: 140 million” (“‘Merrow Report’ Misquoted in Test-Basher Analysis,” Letters). He now seems to admit to complicity in misleading the public. That number, 140 million, is only the more conservative of two test-basher estimates, both of which were gross exaggerations. It is more than three times the actual number of standardized student tests given annually. It is 100 million tests too large. The “conservative” decision of Mr. Tulenko was to use an estimate three times the actual number rather than one 10 times the actual number.

Mr. Tulenko divides his “conservative” estimate of 140 million “by the student population--45 million--and you get roughly three standardized tests per year per child.” He says, “Three tests ... is just an average [and] ... some schools and some children are tested far more often than others.”

Mr. Tulenko is wrong to suggest in his letter that it is OK to use an exaggerated average so long as it is combined with a caution about the nature of averages. The problem at issue is not the variation of individuals around a group average; it is the group average itself. An average of three is not a very conservative estimate for an actual average of one. The number 140 million is not a very conservative estimate for an actual number around 40 million.

There are several ways one can calculate an accurate estimate of the total annual number of individual, whole standardized student tests. One can tease out the actual number of whole tests from the calculations made in the test-basher study, without using their parts-as-wholes conversions. Or, one can divide the test-basher estimate for the total amount of time students spend taking standardized tests (20 million school days) by the number of students (45 million) to get an average of less than half a day per student per year. Given that only one-quarter of standardized student tests have high stakes, this hardly seems like much of a burden.
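
The division involved is simple enough to check directly. The following is a minimal sketch in Python; the variable names are illustrative, and every figure is one quoted in this exchange, not new data:

    # Back-of-envelope check using only figures quoted in this exchange.
    students = 45_000_000        # U.S. public school students
    tulenko_tests = 140_000_000  # the "conservative" test-basher estimate
    gao_tests = 42_000_000       # GAO-based count of whole standardized tests
    testing_days = 20_000_000    # test-basher estimate of total testing time

    print(tulenko_tests / students)  # about 3.1 tests per student per year
    print(gao_tests / students)      # about 0.9, roughly one test per student
    print(testing_days / students)   # about 0.44, less than half a school day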

Other ways of getting an accurate estimate include looking at studies other than test-basher studies; I mentioned several in the Commentary. Finally, one can count up the number oneself. One could spend less than a day and telephone five test-development firms and the Council of Chief State School Officers (which conducts an annual survey of state tests) and calculate a rough estimate. The answer would be about one standardized test per year, or a half-day per year, per student, and most of that testing would be low- or no-stakes.

I was personally involved in one of the aforementioned studies. While I was working at the U.S. General Accounting Office, we conducted a survey of state and local education agencies regarding the amount of student testing administered systemwide. Contrary to what Mr. Tulenko writes, we estimated an annual average of 36 million tests. As we cautioned in the passage of the GAO report that Mr. Tulenko reproduces in his letter, however, “systemwide tests” represent only a subset of all tests, even of all standardized student tests.

I arrived at the estimate of 42 million standardized student tests annually by adding the numbers for nonsystemwide tests. Most of these numbers, for SATs, ACTs, Title I exams, and so on, are known and do not need to be estimated.

With this addition, which is presented in an appendix of the GAO report, we have an estimate based on the GAO study for the exact same set of tests to which Mr. Tulenko was referring--(to use his words) “the number of standardized tests given in public schools [in a] year”--only the GAO-study-based estimate is 42 million and Mr. Tulenko’s “conservative” estimate is 140 million.

Richard P. Phelps
Washington, D.C.

Tougher Tests Shouldn’t Discourage New Teachers

To the Editor:

After reading your article “States Raising Bar for Teachers Despite Pending Shortage,” March 25, 1998, I must say that I agree with the setting of higher standards for teachers. Score requirements for the Praxis tests should be raised.

As an education major, I too will have to take the Praxis exams eventually. But it would not bother me if the states in which I am interested in teaching were to raise their required scores. This would only make me strive to do better in school in order to pass the exams and become a successful teacher.

I also feel, not only as an educator but as a future parent, that I would want my children to be taught by teachers who are dedicated to teaching children to the best of their ability, regardless of the licensure tests they are required to take. Raising test-score requirements will only result in better-prepared teachers and, as a result, better-educated children.

Megan Trumpler
Elon College, N.C.

Best ‘Sick Building’ Detector Is Nature’s Own Equipment

To the Editor:

The message in your article “Sniffing Out School Illness: Is It in the Air?” March 18, 1998, was excellent and very timely. As I see it, one of the major crises education faces is the degradation of indoor air caused by poor maintenance, lack of knowledge about potential problems, and lack of a proactive policy of dealing with air quality before problems ever arise.

But when you quoted Kenneth Green, the director of environmental studies at the Reason Public Policy Institute, you immediately lost my attention. Even a cursory reading of that group’s primary publication, Reason magazine, lets you know that they are not going to be on the side of “reason” or scientific evidence if it conflicts with their agenda.

The truth is that we can build safe, healthy buildings if we choose to do so. We also have the technology to diagnose a “sick building” regardless of the population.

When we get ill while spending time in a building, there is a reason. Nature, in its infinite wisdom, gave us the ability to protect ourselves by equipping us with senses that detect when there is a problem. The best voice of “reason” is right between your own ears--use it.

Barbara Herskovitz
Tallahassee, Fla.

Security Is the Issue, Not Who Delivers It

To the Editor:

Regarding your article “Plan To Put NYPD in Charge of School Force Is Revived,” April 1, 1998: The issue is security that is appropriate for the location, and to a lesser extent, who provides it.

Police officers, properly trained and attired, represent one course of action. Security officers, properly supervised, are another. Perhaps a new branch of police work, “school security,” could be created, with special training and supervision, under the purview of the city’s police department.

It is important for students and faculty to feel safe and not threatened by those responsible for protecting them. The issue is still security, no matter who is delivering the service.

Marjorie Rush
Voorheesville, N.Y.
