National Research Panel Tepid Over Tests For Licensing Teachers
A report on teacher testing issued last week by a panel of the National Research Council throws cold water on a new federal law requiring colleges and universities to report their students' passing scores on licensing tests.
("Tests and Teaching Quality: Interim Report" can be read online at the National Academy Press. Hard copies can be ordered for $12, plus $4.50 shipping, from NAP, 2101 Constitution Ave. NW., Washington, DC 20418, or by calling (202) 334-3313.)
In general, the report is lukewarm toward teacher testing, pointing out the many limitations inherent in the current system and cautioning that differences in passing rates for members of minority groups "pose problems for schools that seek to have a diverse teaching force."
"Well-designed licensure tests provide information that states consider necessary, but the information is not sufficient to know whether a teacher will be successful in the classroom," David Z. Robinson, a former executive vice president of the Carnegie Corporation of New York and the chairman of the committee, said in releasing the report. "And when it comes to reporting passing rates to the federal government, the system is too fragmented to allow for meaningful state-by-state comparisons."
The research council convened the 17-member Committee on Assessment and Teacher Quality at the request of the Department of Education. The interim report covers the first nine months of the 20-month study, which may ultimately recommend model systems for licensing beginning teachers.
The interim report also concludes that there is little evidence about the extent to which widely used tests distinguish between those who are minimally competent and those who are not.
Policymakers and advocates were quick to complain last week that the report's generalizations offered little useful guidance.
"I thought there would be a closer examination of the tests themselves," said Terry Knecht Dozier, the senior adviser on teaching to Secretary of Education Richard W. Riley. "Where's the meat to this?"
Advocates of teacher testing and of the new federal reporting requirements, contained in the 1998 Higher Education Act, also expressed dissatisfaction.
Rep. George Miller, a California Democrat who co-wrote the accountability provisions, said in a statement that high failure rates "should be a red flag that a school is falling far short of what it needs to do to prepare prospective teachers. But we will never know unless we collect and publish the results."
Mr. Miller noted that many of the panelists hail from education schools. "I applaud those who contributed to the report for being so frank about the weaknesses and flaws in the system they helped create," he chided.
Stephen P. Klein, a senior social scientist at the RAND Corp. in Santa Monica, Calif., and a member of the panel, said he was disappointed that the committee had not actually examined teacher tests.
By definition, Mr. Klein said, licensing tests are designed to see whether candidates possess certain skills and knowledge, but not to say whether they would be good teachers—just as drivers' tests don't predict whether people will be safe drivers.
Therefore, the conclusion that the tests don't assure that people will be good teachers answers the wrong question, argued Mr. Klein, one of two testing experts on the panel. "I am one voice off to the side here. This is not a professional look at [the issue]. These folks don't have a clue what a licensing test is all about."
Michael J. Feuer, director of the NRC's Board on Testing and Assessment, called the criticism "overstated," noting that the panel is in the early stages of the $1.08 million study.
White candidates pass teacher-licensure tests at higher rates than do members of minority groups—a phenomenon also found in tests for other professions, the report says. But the panel sees the gap as creating particular problems for schools because of the emphasis on attracting more diverse staffs.
The Education Trust, a Washington advocacy group that issued its own report on teacher testing last year, took strong exception to what it said was the report's "disturbing implication" that minority students couldn't pass the tests.
"What we ought to be doing is to ensure that all candidates have the education to pass what would ideally be a much more rigorous test," said Amy Wilkins, a policy analyst for the organization.
While she agreed that comparing passing rates on the tests by different states would not be very useful, she argued that in-state comparisons are "absolutely critical" because the vast majority of teachers teach in the states in which they are educated.
On the issue of the federal reporting requirements, which have caused considerable concern in higher education circles, the report notes that states give different tests to prospective teachers. They also set different passing scores, use the tests in different ways, and serve different student populations, it adds.
Those variations, it concludes, mean that comparisons of states' passing rates "are not useful for policy purposes." Even within a state, institutions' different missions, admissions criteria, and testing policies are likely to make comparisons problematic, it adds.
The report's conclusions are similar to arguments higher education groups made as the federal government has tried to develop the mandated "report card" to hold teacher education programs more accountable for the quality of their graduates. ("Teacher Colleges, States Granted Report Card Extensions," Feb. 2, 2000.)
The committee itself, Mr. Klein said, was split on whether comparisons of education schools within a state would be valuable.
Milton D. Hakel, a psychologist at Bowling Green State University in Ohio and a panelist, said each university in his state has a distinct mission that could make comparing the pass rates on teacher tests "easily misleading."
The report "reinforces what we have tried to tell the Congress when they were drafting the legislation—that this is going to be difficult," said Diane Hampton, a legislative analyst for the American Council on Education.
Vol. 19, Issue 27, Pages 1,16