On NAEP Results, Other Questions

To the Editor:

The recently released 1998 National Assessment of Educational Progress reading results pose, once again, questions of whether test-driven schooling is effective in improving student learning. While NAEP is, like all exams, limited in what it can measure and should not be taken as a sufficient assessment of reading, it does provide some insight about improvements or the lack thereof. You claim in a recent article that states with standards are the ones that improved ("States Committed to Standards Reforms Reap NAEP Gains," March 10, 1999). But other interpretations of the results should be considered, and other questions should be asked.

By and large, the states with the heaviest emphasis on testing, for example, those that have had high school exit exams in place for some time, neither showed significant improvement nor were among the high scorers. Many states with a heavy emphasis on testing did not show gains. If test-focused schooling produces real learning gains demonstrable as NAEP-score increases, why have we not seen gains more consistently in those states? Why, after years of test-driven schooling, do so many of those states still do poorly, and why are such states taken as models? Should we be more impressed that Mississippi rose 5 points since 1992, or that Maine, also a very low-income state, consistently scores high even though its scores have been statistically unchanged since 1992?

Value-added assessment is now making a big splash. But Tennessee, the one state that has had this approach, showed no gain from 1992 to 1998. Is it because the state test that is the measure of "value" does not have much value?

Another question is whether states that are showing gains have made particular efforts to align their state exams to NAEP. Kentucky openly expressed its intention to do so, and it showed significant gain. Or did its use of portfolios and similar approaches induce the gains? Academics in Texas have reported that Texas also has taken pains to align its tests to NAEP. If gains are more common in states that have tried to align with NAEP, can those gains be viewed as real or as just another case of score inflation?

While some of the state leaders who discussed their states' efforts at improvement acknowledged that the standards are too new to have made any real difference, others were clearly trying to give credit to standards that could not feasibly have contributed to score gains. Your reporting seemed to be grasping to reinforce a conclusion--the positive effect of the standards-and-tests approach--that cannot be proven from the NAEP results.

The actual NAEP report once again showed that reading practices commonly recommended in progressive or whole-language approaches are associated with higher scores. These findings include correlations between higher NAEP scores and students reading more, explaining their understanding and discussing their interpretations of reading, writing long answers to questions about their reading, and (at grade 4) having time in class to read books of their own choosing. As the report cautions, correlation does not show causation. But given repeated NAEP reports showing these same correlations, there may well be more to this finding than to claims that recently introduced standards and tests (that are in any event often poorly aligned to those standards) are to be credited with score gains.

This is an area NAEP should investigate in more depth. However, the National Assessment Governing Board seems intent on reducing, not expanding, the sorts of data-gathering that could better illuminate the question of what really does lead to improved reading.

Monty Neill
Executive Director
National Center for Fair & Open Testing (FairTest)
Cambridge, Mass.

Study Middle East, Reader Suggests

To the Editor:

In your article "Guardians of the Faith," Jan. 20, 1999, you refer to areas of Asia and the Middle East from which highly educated immigrant families choose to send their children to Crescent Academy International in Canton, Mich. You cite "Palestine" as one of the areas from which the highly educated immigrants have immigrated.

No longer is there a country called Palestine. Please review your information about the Middle East, especially about the establishment of the State of Israel in 1948. If I were to use this article for students to read, I would be disseminating inaccurate information.

Mickey Metviner
Adjunct Professor
SUNY-Nassau Community College
Merrick, N.Y.

Teacher Recruitment Won't Solve Shortage

To the Editor:

Policymakers have proposed a variety of solutions to the teacher shortage, many of which are designed to recruit additional teachers. But as your series about the supply of and the demand for teachers has illustrated, attrition contributes significantly to the shortage ("New Teachers Abandon Field at High Rate," March 17, 1999).

At least 30 percent of beginning teachers leave the profession within five years. Clearly, recruiting more teachers will have a limited impact on the shortage if we continue to lose large numbers of teachers through attrition.

A primary reason for the shortage is that the teaching profession is seen as increasingly unattractive by potential and practicing teachers. If, in addition to recruiting new teachers, policymakers would concentrate on improving the working conditions of teachers, we would be able to attract and retain enough teachers. If we do not make the profession more attractive, the shortage may be perpetual.

Carl O. Olson
Cary, N.C.

Children as Machines To Be Programmed

To the Editor:

Your feature on Direct Instruction ("A Direct Challenge," March 17, 1999) encourages readers to conclude that while this rigid, scripted method for drilling children in basic skills may seem unpalatable, there's no denying its effectiveness. It's the educational equivalent of Listerine: You may hate it, but it works.

The relevant question, though, is "Works to do what?" A careful review of the relevant research suggests that Direct Instruction and other classroom applications of behaviorism are effective mostly at increasing temporary retention of facts and low-level skills. Such tactics are "effective" only if we don't care about three other goals: long-term retention of those facts and skills; real understanding of ideas, along with critical thinking, creativity, the capacity to apply skills to different kinds of tasks, and other more sophisticated intellectual outcomes; and students' interest in what they're doing, representing the likelihood that they will come away with a continuing motivation to learn.

Consider the Follow Through study that is cited as the strongest evidence to support Direct Instruction. Never mind that outside evaluators found so many problems with its basic design and analysis as to cast doubt on any conclusions that emerged. (See Ernest House et al., Harvard Educational Review, Vol. 48, 1978.) Never mind that the variation in results from one location to the next of a given model of instruction was greater than the variation between one model and the next. The study's authors conceded that their main finding, rather unsurprisingly, was that when you methodically prepare kids to take a low-level test of skills rather than helping them to explore ideas, you'll sometimes get better scores on a low-level test of skills. But is this a vindication of Direct Instruction or an indictment of the measures on which claims of its success are based?

Omitted from your article (and from the summaries offered by Direct Instruction proponents) is a raft of research demonstrating that the most effective learning takes place when students are able to make choices about what they're doing, when they're able to learn with and from one another, when they're able to play an active role in making sense of ideas, and so on--all features that are decidedly absent from a program that seems to regard children as machines to be programmed or pets to be trained.

You quote one teacher as saying that Direct Instruction is good for "this group of kids." In practice, "this group" disproportionately consists of children of color from low-income households. That fact is offensive on its face, but the premise that such students need or benefit from low-level, mechanical instruction is also contradicted by good data. Among many other relevant research projects is a three-year, in-depth study of 140 1st through 6th grade classrooms with high concentrations of poor children. When teaching that emphasized "meaning and understanding" was compared with basic-skills instruction, students in the former condition not only acquired advanced skills more effectively but also did better on tests of basic skills.

The researchers noted that even though "schooling for the children of poverty ... emphasizes basic skills, sequential curricula, and tight control of instruction by the teacher," their research actually demonstrates that "alternative practices work at least as well for low-performing as [for] high-performing students in all three subject areas"--that is, reading, writing, and mathematics. (See Michael S. Knapp et al., Academic Challenge for the Children of Poverty, Summary Report of the Study of Academic Instruction for Disadvantaged Students, U.S. Department of Education, 1992.)

In short, we can say more than that Direct Instruction and its ilk seem disagreeable or disturbing. By meaningful measures, behavioral training that passes for teaching is as unproductive as it is unappealing.

Alfie Kohn
Belmont, Mass.

Vol. 18, Issue 30, Page 42

Published in Print: April 7, 1999, as Letters
