Teachers Found Skeptical About Revamped Tests
In two of the states that have most dramatically altered the way they measure student performance, teachers and principals say they support the new tests but question whether they really gauge what students have learned.
The findings come from two surveys of educators in Kentucky and Maryland conducted last year by the RAND Corp., which released the results here last week at the American Educational Research Association's annual conference.
The two states were among a small vanguard that broke ground earlier this decade when they replaced their traditional fill-in-the-bubble student-achievement tests with performance-based testing programs. The new tests require students to solve problems, participate in group activities, answer open-ended questions, and write more--in other words, to show what they can do with what they know.
Researchers at RAND, a Santa Monica, Calif.-based think tank, surveyed a total of 692 principals and teachers in Maryland and Kentucky in the spring of last year.
In both states, a majority of principals and teachers said the new assessments had improved instruction in their schools. In Maryland, for example, 81 percent of the principals surveyed said the state's testing program, known as the Maryland School Performance Assessment Program, had helped encourage resistant teachers to change their approaches.
But most teachers--87 percent in Kentucky and 73 percent in Maryland--also said they believed some schools had found ways to improve test scores without improving the quality of teaching.
Moreover, when asked to pinpoint reasons for test-score gains in their own schools, only 15 percent of teachers said scores were higher because students were learning more. Most said their schools' scores had improved because teachers and students were more familiar with the test, students had worked with practice tests and other test-preparation materials in class, and students' test-taking skills had improved.
"If they're right, then it's very bad news for these testing programs because it means the gains reported probably don't represent generalizable gains in student learning," said Daniel Koretz, the senior social scientist at RAND who led the studies. "It also suggests gains should be reported differently to the public."
But state testing officials said they were not discouraged by the findings.
"These are suggestive; they're not conclusive, and much of itis teachers' perceptions," said Steven Ferrara, who is Maryland's state assessment director. "I say, 'Well, we made some progress but we've still got some problems. But we're about where we should be right now.'"
Ed Reidy, Kentucky's assessment director, agreed. "This is a continuous-progress effort," he said, "and the only way we can continue to make progress is to participate in surveys of this sort."
The RAND researchers conducted the Maryland survey under the auspices of the National Center for Research on Evaluation, Standards, and Student Testing at the University of California at Los Angeles and with funding from the Pew Charitable Trusts. Pew and the Ford Foundation provided funds for the Kentucky study.
The survey sample included 5th-grade teachers in Maryland, 4th-grade teachers in Kentucky, and 8th-grade mathematics teachers and principals in both states.
On the bright side, the researchers found that in both states large majorities of principals said the assessments were worth the extra burden they imposed on schools.
And in Maryland, the survey also revealed an abundance of professional-development opportunities to help teachers cope with the changes called for in the tests. Almost every teacher surveyed in that state had participated in at least one such activity.
But most of the teachers in both states--70 percent or more--also said the new testing programs had caused them to put more emphasis on subjects and topics that were tested and less emphasis on those that were not. For example, most teachers said students were given more instruction in math and writing--sometimes too much writing.
In elementary schools, though there was a shift away from other subjects, relatively few teachers said they had substantially reduced the amount of time devoted to specific subjects, such as music, art, and physical education.
More than one in four teachers in both states reported cutting down on recess and free-choice activities to make more time to prepare students for tests.
In writing and math, the new tests seemed to be achieving the goals of the reformers who designed them. Teachers said they had shifted away from teaching "mechanics" such as grammar and spelling, for example, to emphasize communication, analysis, and other higher-order skills.
'Cause for Concern'
However, contrary to the commitment of both programs to the belief that all children can learn at high levels, the survey clearly suggests that many teachers still believe some students can learn more than others.
In Maryland, for example, only 18 percent of teachers said they had increased their expectations for special-education students, and only 15 percent had done so for low-achieving students.
But 36 percent said they expected more of their high-achieving students. Kentucky teachers expressed similar views.
"This is a cause for concern," Mr. Reidy of Kentucky said. "On the other hand, over three years, we've also seen a drop in the percentages of students in the novice category," which is the lowest of the achievement levels set by the state. That, he said, shows that low-achieving students have improved their performance.
Twenty-eight percent of Maryland teachers and 17 percent of Kentucky teachers said they believed the testing programs had been harmful to low-achieving students. But much larger proportions--more than half the teachers surveyed in each state--deemed them helpful for those groups.
The researchers also discovered that some of what replaced the lost subject-matter instruction in classrooms might have been test preparation.
The median Kentucky 4th-grade teacher spent about 15 hours during the school year on specific types of test-preparation activities. These included giving students practice on old test items, administering practice tests, and reviewing student work.
The median 8th-grade teacher spent about 6.5 hours--or eight class periods--on such activities.
Their counterparts in Maryland reported spending a little more than eight hours on the same sorts of activities.
Mr. Koretz questioned the value of devoting so much time to such activities, but added that "not all test preparation is necessarily bad."
Mr. Reidy said the time teachers devoted to test preparation--and the fact that they cited practice as a cause for test-score gains--was not unexpected.
"I don't think that's very different from what teachers do with any tests," he said. Also, "If you think, 'What did I do to prepare for the exam?' you're going to think of things that are the most proximate or immediate. And practicing for the exam is one of those things."
Kentucky fielded its new tests for the first time during the 1991-92 school year. Ultimately, the state plans to use the tests as a basis for financially rewarding schools that show improvement, and imposing sanctions on low-performing schools.
The state also includes portfolios of student work in its testing program. Teachers, however, were somewhat evenly divided on the value of that practice.
Maryland's testing program has brought less dramatic change. The state does, however, use the test results in deciding which schools should be taken over, or "reconstituted."
Researchers said the higher stakes attached to the Kentucky tests may account for slightly higher numbers on some of the negative findings.
More teachers in Kentucky than in Maryland, for example, said they knew of instances when teachers gave too much coaching while their students were taking the tests. They either rephrased questions, told students to revise their answers, or gave helpful hints.
But the overall percentages of those occurrences were relatively small in both states.
Small percentages of principals in both states also admitted that they had shifted good teachers to targeted testing grades.
Researchers said it is not unusual for test scores in states with new testing programs to improve in each of the first few years, as test-takers grow more familiar with the tests.
"The real test," said Mr. Koretz of RAND, "is whether there will continue to be gains from here on out."
Vol. 15, Issue 30