The “No Child Left Behind” Act of 2001 requires that states annually test students in reading and mathematics in grades 3-8 and once in high school, beginning no later than the 2005-06 school year. The law requires that states use tests aligned with their academic-content standards, either by building assessments specifically designed to reflect those standards or by modifying commercially produced, off-the-shelf tests.
According to Education Week’s Quality Counts 2004, all 50 states and the District of Columbia now have some form of statewide testing in place. The report also found that in the 2003-04 school year, 20 states and the District of Columbia will test students in both English and math in all the required grades. Of those 21 jurisdictions, 15 use the same standards-based testing program (tests aligned with state standards) in each grade, permitting results to be compared across grades.
The logic behind such assessment systems, one of the centerpieces of the push for standards-based school improvement, has been to find a more accurate way to measure both student and school progress, as well as to establish measures against which to hold schools accountable for results. Testing policies, however, have caused considerable controversy. Testing proponents see statewide assessments as a way to raise expectations and help guarantee that all children are held to the same high standards (Gandal and McGiffert, 2003). But critics say that such testing programs narrow student learning to what is tested, and that what is tested is only a sample of what children should know (Merrow, 2001). Furthermore, tests often focus on what’s easiest to measure, not on the critical-thinking skills students most need to develop (Webb, 1999).
For example, many reform-minded testing experts advocate what are called performance-based assessments, designed to elicit critical-thinking, problem-solving, and communication skills. These typically are more open-ended tests, on which teachers judge students on written essays, on the process they use to solve a math problem (rather than on the result alone, for instance), or even on portfolios of their work over the school year. In reality, most state assessments may be better at gauging rote learning than at evaluating thinking skills. Concerns with the costs of performance-based tests, the reliability of scorers’ judgments, and the difficulties of covering the breadth of standards all may be reasons why performance-based assessments have been in short supply among state testing programs. Forty-nine states and the District of Columbia include multiple-choice questions on their state exams. Only about half administer performance-based assessments in subjects other than writing, and just two states use portfolios (compilations of student work) to judge student performance (Quality Counts, 2004).
A major concern related to holding students, teachers, or schools accountable for student performance on tests is the extent to which tests are actually “aligned” with state standards (Olson, 2003). For the 2002-03 school year, just 15 states reported to Education Week that they were assessing students with tests aligned to state standards in all of the core subject areas (math, English, social studies/history, and science) at the elementary, middle, and high school levels (Quality Counts, 2003).
Some states have invested in devising tests to match their state standards. Others have opted for partial alignment or “hybrid” tests that combine elements of standards-based tests and norm-referenced tests (tests that judge student performance in relation to other students’ performance rather than to set standards). Still others have decided to assess students using only national off-the-shelf tests that do not necessarily reflect state standards. According to Quality Counts 2004, for the 2003-04 school year, 42 states are using tests custom-developed to match standards; 12 states have adopted augmented or hybrid tests; and 21 states will employ a norm-referenced test as part of their state assessment systems.
Some parents, teachers, and other critics worry that schools may be spending too much instructional time preparing for tests. An Education Week survey in 2000 showed that 66 percent of teachers thought state tests were forcing them to concentrate too much on what was tested to the detriment of other important topics, and nearly half said they spent “a great deal of time” helping students prepare for tests (Quality Counts, 2001).
The Cambridge, Mass.-based National Center for Fair and Open Testing (FairTest) suggests that assessment practices are fraught with other contentious issues. While African-Americans and students from most other minority groups have shown both relative and absolute gains in standardized-test scores over the past several decades, they still score much lower than white students as a group. Some educators believe that many standardized tests are culturally biased, drawing primarily upon the experiences of middle-class white students.
Critics also question the “high stakes” many states attach to tests. State assessments are being used not only to hold schools accountable for results, but also, increasingly, to determine whether students should advance to the next grade, attend summer school, or earn a high school diploma. By 2004, nine states are expected to base grade-to-grade promotion decisions on test results (Quality Counts, 2004). Twenty states now require students to pass a test in order to earn diplomas. But several states, including Arizona and California, have delayed attaching consequences to such “exit exams” after complaints about the fairness of the tests.
Detractors are concerned that decisions about students’ graduation or promotion from grade to grade are now being made based solely on performance on one multiple-choice test (Heubert and Hauser, 1999). Some argue that high-stakes testing undermines learning and hurts struggling students (Amrein and Berliner, 2002). Critics also question whether tests should be used to hold individual students accountable when it is not clear whether schools are providing students with the tools they need: high-quality teachers, strong curricula, and extra time to master what’s expected on the tests.
But two recent studies find that high-stakes testing may, in fact, bring about academic gains, particularly for minority students. The first study, by Martin Carnoy and Susanna Loeb, finds that students in states where high stakes are attached to tests performed better on nationwide tests (Winter, 2003). The second study concludes that students in states with student accountability systems in place score higher on the National Assessment of Educational Progress in math than students in states without such consequences (Raymond and Hanushek, 2003).
Testing advocates agree that making sure students and teachers have the resources and tools they need to meet the expectations of state standards and tests is a priority. But they also argue there ought to be a way to assure graduates, future employers, institutions of higher education, and the public that a high school diploma means students have the skills they need to succeed.
Despite continuing debate, solid reasons for testing remain. With public schools under major pressure to show results, testing may be helping to raise the expectations for schools—especially for the lowest-performing ones. Many schools, districts, and states that have seen achievement levels rise in recent years attribute their success to higher expectations for students, as embodied in state tests, and how test results have been used to improve classroom practice. Tests can provide data that show what students are lacking, and give educators the information necessary to tailor classes to student needs.
Education Week, Quality Counts 2004: Count Me In, Jan. 8, 2004.
Education Week, Quality Counts 2003: “If I Can’t Learn from You ...,” Jan. 9, 2003.
Gandal, M., and McGiffert, L., “The Power of Testing.” Educational Leadership, 60 (5), 2003.
Merrow, J., Interview with William Schmidt, “Frontline,” PBS, April 26, 2001.
Webb, N., “Alignment of Science and Mathematics Standards in Four States” (Research Monograph No. 5), University of Wisconsin-Madison, 1999.
Olson, L., “Standards and Tests: Keeping Them Aligned,” Research Points, 1 (1), 2003.
Education Week, Quality Counts 2001: A Better Balance, Jan. 11, 2001.
Heubert, J.P., and Hauser, R.M. (Eds.), High Stakes: Testing for Tracking, Promotion, and Graduation, National Research Council, Washington, D.C.: National Academy Press, 1999.
Amrein, A.L., and Berliner, D.C., “High-stakes Testing, Uncertainty, and Student Learning,” Education Policy Analysis Archives, 10 (18), 2002.
Raymond, M., and Hanushek, E., “High-Stakes Research,” Education Next, Summer 2003.
Winter, G., “New Ammunition for Backers of Do-or-Die Exams,” The New York Times, April 23, 2003, p. B9.
How to Cite This Article
Editorial Projects in Education Research Center. (2004, September 21). Assessment. Education Week. Retrieved Month Day, Year from