Student Achievement: What Is The Problem?
Five different views of the _student achievement_ problem, each suggesting a different course of action.
Recent National Assessment of Educational Progress scores confirm what most people believe: Despite enormous expense and effort, the country has not yet solved its student-achievement problem. There is less agreement, however, about the real problem or its true symptoms. In fact, the symptom on which our current strategies are based—a faltering economy attributed to poor training in science and math—was last seen a decade ago. Given this circumstance, it is hardly surprising that our strategies are not working.
To solve the country's educational problems, we need to employ a basic problem-solving framework: Identify the symptoms, seek their underlying causes, develop strategies that directly address these causes, monitor progress, and modify our strategies if the symptoms don't improve or others appear. This may sound fundamental, but current practice is to lump a range of symptoms into a single "problem," and to address this problem with one main improvement strategy—high standards and rigorous testing. As results have disappointed, state and national leaders have been unwilling to re-examine the symptoms or review their methods. Instead, when rigorous testing has not led to higher scores, teachers and students are accused of not taking the test seriously. And the most often proposed solution is more rigorous testing. Little attention is paid to the possibility that our strategies may not address the real issues. Perhaps this explains why so much effort and money have yielded so few improvements.
Defining problems based on symptoms and causes is still the best starting point for solving those problems. What follows are five different views of the "student achievement" problem. They are all connected, but each problem definition suggests a different course of action.
- Absolute Student Achievement. One common understanding of student achievement is absolute achievement on a defined scale, as when all students of a certain age are ranked according to their scores on a state test. This approach has the virtue of simplicity, but tends to produce predictable results that track demographically.
In this definition, the problem is poor overall performance, either nationally or statewide, against a set of standards. This definition helped launch "world-class standards" and was rooted in two primary symptoms. First, in 1983, the business-oriented federal report, A Nation at Risk, famously described the "rising tide of mediocrity" that was washing over our schools and destroying our economic competitiveness. At that time, the U.S. economy was in the doldrums, while Japan's was soaring. The argument that our economic woes were linked to poor training in math and science made sense (and gave business someone else to blame for the economy).
This analysis was bolstered by the second symptom, when the Third International Mathematics and Science Study showed American students comparing poorly with their international peers in math and science. While it was recognized that some American students do better than others, the main problem was defined as the woeful underperformance of our students in relation to those from countries with which we compete economically. To solve this problem, much higher standards were proposed for all students, with tests to enforce them. Not surprisingly, many students fared poorly on these harder tests. A crisis was born.
As the 1980s became the 1990s, however, these symptoms faded. Most importantly, the economy improved dramatically. In addition, the comparability of TIMSS results among countries was questioned, with some analysts reporting that other countries test students who are older or exclude their lower-performing students. More recent analyses show that white and middle-class American students excel on TIMSS, but that urban and poor students, and students of color, tend to fare worse. Thus, while lessons may still be learned from comparisons with other countries (such as in curriculum focus and textbook quality), the justification for much higher national standards has largely disappeared.
- Relative Achievement. The national economy no longer figures in most education discussions, for obvious reasons, and few still argue that all American students are behind. The new problem is relative achievement: the gap between students who do well and others who do not. White students, middle- and upper-class students, and those in the suburbs score well on TIMSS, excel on standardized tests, and get into the best colleges. The concern is that some students, particularly in low-income areas, do poorly on these tests.
There are many potential causes for this gap, possibly including low standards, the quality of instruction, family background, student aspirations and effort, the curriculum, and testing bias. But the point is that when new symptoms are identified or a problem is redefined, possible causes should be re-examined and strategies revisited. Unfortunately, no such policy review has taken place.
If the problem is international competitiveness, raising standards for all students may help. But if the goal is to close an achievement gap among students, a very different problem, raising the bar for all students may be counterproductive. My own observation is that students in high-performing areas are working harder than ever to meet the new standards, taking advanced courses earlier, and doing much more homework. But students who did not meet the old standards are held back by the same conditions—whether at home, in school, or of their own making—that held them back before. In raising the bar, we have placed it further from the reach of urban students and schools than before, creating the perverse effect of increasing the achievement gap—driving high-achieving students to do more, while leaving lower-performing students further behind. Recent NAEP results appear to support this theory.
- Student Progress. A better way to consider achievement is to examine student progress. Many state tests look at test scores in certain grades, say 4th, 8th, and 10th grade, but few consider the students' starting places or measure their progress. Most compare one class to another, either this year's 4th graders to last year's or to 4th grade classes across schools. These are measures of absolute or relative achievement, but despite frequent references to school improvement or decline, they do not describe the progress of actual students.
Take two 4th grade students, for example: one suburban, one urban. The suburban 4th grader left 2nd grade reading at the 4th grade level and now reads at the 6th grade level (two years' progress in two years). The urban 4th grader left 2nd grade reading at the 1st grade level and now reads at the 4th grade level (three years' progress in two years). The suburban child has attained a higher score, but the urban child has shown more progress in the same amount of time. Which is doing better?
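The comparison above can be reduced to a simple rate calculation — grade-level equivalents gained per year of instruction. The following sketch is purely illustrative (the function name and values are hypothetical, not drawn from any actual assessment), but it makes the two students' trajectories explicit:

```python
def progress_rate(start_level: float, end_level: float, years: float) -> float:
    """Grade-level equivalents gained per year of instruction."""
    return (end_level - start_level) / years

# Suburban student: left 2nd grade reading at the 4th-grade level,
# reads at the 6th-grade level two years later.
suburban = progress_rate(start_level=4, end_level=6, years=2)

# Urban student: left 2nd grade reading at the 1st-grade level,
# reads at the 4th-grade level two years later.
urban = progress_rate(start_level=1, end_level=4, years=2)

# The suburban student finishes with the higher absolute score (6 vs. 4),
# but the urban student shows the faster progress (1.5 vs. 1.0 years per year).
print(suburban, urban)
```

A test that ranks students only by `end_level` sees just the suburban student's advantage; tracking `start_level` as well is what reveals which child gained more ground.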
I have compared this approach to a race without a starting line. If we know the finish line of a race without knowing the starting line, we know who finishes first. But we don't know how fast or how far the racers have run. Under such conditions, we do not know which runner is fastest or has the greatest endurance, or which running coach is most effective.
- School Effectiveness. Relative student achievement is most often used to indicate school effectiveness. But, as in the last example, we can't tell an excellent school from a poor one unless we measure the growth of students from their individual starting places. If one school is taking students from a low ability level and helping them make large strides, is it a worse school than one that takes students who start with many advantages and advances them only a minimum amount?
In a famous 1986 example in London, the school that was the second best in overall performance out of a field of 18 was the worst in terms of progress. Its "success" came from enrolling children from middle- and upper-class families. We are repeating this kind of "success" in every state that measures schools on absolute or relative student achievement.
To know whether a school is effective, we need to know how far it advances different students on the measures we consider important. While some schools and districts attempt to track this kind of progress, states rarely measure school effectiveness by looking at student progress. Thus, states condemn some schools that excel, and reward others that do poorly, largely based on the students they enroll. We may have ineffective schools in this country, but we won't know which they are until we know how far they advance individual students.
- School Purposes. Finally, in considering school effectiveness, we have to ask: effective at what? Because most people think of learning in core subjects as the main function of schools, and because it is easier to measure, we tend to use narrow academic assessments to judge school effectiveness. But such a formulation denies the complexity of the human experience and ignores the clear truth that students, parents, and society want more from their schools than a narrow band of facts.
The drive towards higher student "performance" even at young ages has led to a significant backlash among parents, most of whom do not want their children pressured to perform at the expense of physical, social, or emotional growth, or even, in some cases, at the expense of actual learning.
My local school system, for example, is attempting to learn what its constituents see as indicators of quality. In describing the competencies a high school graduate should have, more than 100 people in 11 focus groups, including some closely connected with the schools (teachers, parents, students) and others not closely connected (citizens, community leaders, realtors), produced essentially the same priority list, in this order:
1. Personal competence: problem-solving, social interaction, decisionmaking, respect for others and themselves.
2. Metacognitive skills: critical thinking, learning how to learn, research and study skills.
3. Oral and written communication skills.
4. Traditional academic skills.
Parents and citizens want children to accomplish more than high test scores. We want good citizens and clear thinkers. We want young graduates able to use their abilities to their own advantage and the benefit of the community. We don't want all doctors or lawyers; we also want journalists, teachers, librarians, plumbers, salespeople, artists, and mechanics. When considering the problems in schools, therefore, we should consider what parents, students, and citizens want our schools to achieve.
Though schools cannot be responsible for everything, I still look back to the days when we spoke of helping each child reach his or her fullest potential. This would include solid basic skills, but not necessarily world-class standards, for every student. If we could achieve this goal, not all students would excel on state tests, but all would be productive members of society.
In a recent speech, U.S. Secretary of Education Rod Paige recalled an observation by H.L. Mencken that "for every complex problem, there is a solution that is simple, neat, and wrong." Unfortunately, the problem of student achievement is enormously complex. Simplistic solutions, particularly those based on ideology rather than evidence, are likely to consume money without helping anyone.
One of the biggest problems in education, in my view, is that policymakers align themselves with particular problems and strategies, and then interpret all results according to these beliefs. Thus, when "high stakes" testing fails to produce the desired result, the problem is that teachers and kids aren't trying, and the solution is more of the same. In other fields, continued failure suggests the need for new strategies. Why not education?
In response to the recent NAEP reports that higher-performing students are gaining ground in reading while lower-performing students are falling further behind, Kati Haycock of the Education Trust was quoted as saying: "It would appear that in a deeply misguided response to demands for higher achievement, schools are focusing their efforts and resources on those students most likely to succeed, while neglecting the students who most need help."
In other words, it is not the strategy but those who are implementing the strategy who are at fault. If schools are doing that, it is indeed misguided. But without measuring student progress, I doubt there is evidence for such a conclusion. Is it not just as likely that this increase in the achievement gap is an unintended consequence of increasing standards and high-stakes tests for all students, including those who have excelled in the past?
If we truly want to solve our "student-achievement problem," we will have to clearly articulate our goals and identify problems related to these goals. I fully support educational accountability, and have seen how thoughtfully implemented standards can help both high- and low-performing schools. But without a clear definition of its educational problems, and the willingness to change course if its strategies aren't working, the country's call to "leave no child behind" while it raises the bar further from their reach is hypocritical—as likely to harm some students as to help others.
We should not be surprised, under the circumstances, if the "student-achievement problem" does not seem to go away.
Donald B. Gratz is a senior associate and the coordinator for national school reform of the Community Training & Assistance Center in Boston. He also serves as the vice chairman of the Needham, Mass., school board.
Vol. 21, Issue 1, Pages 62, 80. Published in Print: September 5, 2001, as Student Achievement: What Is The Problem?