Hundreds of colleges and universities around the country, compelled by Congress to submit report cards profiling their teacher-preparation programs and the students who complete them, are scrambling to pull together data before the April 9 deadline.
Many officials say that efforts to comply with Title II of the Higher Education Act of 1998, the law that takes effect this spring and is aimed at making teacher-preparation programs more accountable, have been hampered by the very people and technology that should be helping them. Moreover, they charge that the reporting problems ultimately stem from a federal accountability system that has turned out to be far more complex than its architects intended.
The information Congress is requiring is difficult to obtain for many institutions, said David P. Wright, the associate director of teacher education for the California State University system.
Many states had to convert the data they had on hand to fit the Department of Education's format, or they began crunching numbers only to find the data weren't reliable. Worse yet, many institutions had to collect information they had never before sought, a costly and time-consuming project.
“We’re flying by the seat of our pants to meet this deadline,” Mr. Wright said. “There is a tremendous amount of data being assembled ... for the very first time throughout the state.”
The goal of the accountability law is to illuminate passing rates on state teacher-licensure exams and other information deemed vital for public inspection. The mandate came after years of complaints that teacher-training programs were unwilling to make the changes necessary to prepare greater numbers of highly competent graduates to staff American schools.
Data gathered for the 1999-2000 academic year will be placed into two types of reports. The first, to be completed by the nation's 1,300 colleges of education and alternative teacher-preparation programs and due for submission to their states next week, is to contain students' passing rates on state tests, the number of students in each program, and faculty-to-student ratios. Those report cards will also document whether a program has been approved by the state or been labeled "low performing."
Schools will hand over their report cards to states, which, in turn, will write second report cards ranking teacher-preparation programs statewide. All programs will be placed into one of four quartiles based on their passing rates.
States will then submit the data to the federal Education Department, which will forward them to Congress next year.
The federal government may levy a fine of $25,000 on institutions that do not comply with the mandate, but the department does not expect that situation to occur in any state or in the District of Columbia, said Maureen A. McLaughlin, a deputy assistant secretary for the department.
Ambiguity in Regulations
At the outset, it appeared that colleges and universities had been given plenty of time to construct their report cards.
But when the National Center for Education Statistics, the arm of the Education Department charged with implementing the 1998 accountability law, missed a deadline to complete report card regulations, NCES extended the institutions’ deadline from April 2000 to April 2001. The states’ report card deadline of October 2000 was also extended a year to October 2001. (“Teacher Ed. Riled Over Federal Plan,” Aug. 4, 1999.)
Despite the extension, many schools started later than they had hoped because of further delays, this time at the state level. State officials, they said, took longer than anticipated to set the criteria that colleges use to judge their teacher-preparation programs.
To give states some flexibility, the federal government intentionally left terminology in the regulations for carrying out the requirement ambiguous.
For example, the NCES mandated that states provide passing rates on “program completers,” but it permitted the states to spell out what that term meant. To some state officials, such students had finished their coursework but had yet to do their student teaching. To others, it meant they had already passed state certification requirements.
College administrators in Arizona, meanwhile, won’t submit passing rates this year on any state tests or data related to them because the state board of education has yet to define the minimum scores prospective teachers need in order to qualify.
Arizona was in the process of overhauling its teacher-licensure regulations when Title II was put into place, and now, the state board is refusing to rush its decisions simply because of the federal mandate, said Corinne Velasquez, the board’s executive director.
No institution in Arizona will be penalized for the state's failure to provide the information necessary to write the report cards, Ms. McLaughlin said. According to the law, no college is responsible for providing data that are unavailable to it.
“In one sense, it really simplified our lives because we don’t have any real data to collect,” said Nicholas Appleton, the associate dean for teacher preparation at Arizona State University in Tempe.
Schools in Maryland, Ohio, and Virginia are in similar situations, said Mari Pearlman, the vice president for the division of teaching and learning for the Educational Testing Service.
The Princeton, N.J.-based test-maker produces many teacher-licensing exams and was hired by 24 states to help coordinate their Title II report cards.
National Evaluation Systems, a company based in Amherst, Mass., that designs many teacher-licensing exams, also collated data for Title II. It did not provide a detailed account of such projects to Education Week.
The information needed to complete the testing data in Maryland, Ohio, and Virginia is being withheld by the ETS because information submitted to the testing company was inaccurate, misclassified, or omitted by mistake, Ms. Pearlman said. Such flaws contaminated the data results and changed the passing rates.
For example, the score of a student who took most of his or her education courses at one university but took the final class at a second institution might have been attributed to the second school—even though professors there didn’t provide the lion’s share of the training for that individual, Ms. Pearlman said.
Maryland, Ohio, and Virginia weren’t alone in making errors, Ms. Pearlman said. Such problems were common in the data received from most of the 24 states the testing company aided.
Administrators at institutions in Arkansas, Missouri, and North Carolina all stated that their reporting took longer than anticipated due to a backlog of data withheld by the ETS.
“This process is so complicated, it is Byzantine,” Ms. Pearlman said. “One of the things that Title II has taught us is that institutions of higher education need absolutely pure data. This is wholly unusual for schools and state regulatory agencies.”
Cleansing the data of inaccuracies is only one of the obstacles. Once the testing company returns the information to state officials, they review the data, then ship it to the colleges and universities. The institutions, in turn, are responsible for verifying that the information is correct, a labor-intensive effort that costs staff members hundreds of hours.
“It requires that we go through each individual entry,” William I. Burke, the senior associate dean for the school of education at the University of North Carolina at Chapel Hill, said. “We verify that each person was admitted.”
UNC administrators have found errors in about 10 percent of the information to date, he said. Fortunately, the institution has a data bank set up that simplifies the verification process. It could have easily become an unwieldy task, Mr. Burke said, especially because the ETS analysis did not surface in campus mail until March 16—only three weeks before the deadline.
The reporting process in New Jersey, meanwhile, went smoothly until an enormous technical error posted on the ETS-designed Web site was discovered, said Ana Maria Schuhmann, the dean of the school of education at Kean University in Union.
“The Web site showed that everyone who took the general-knowledge test for early-childhood and elementary education passed the test,” Ms. Schuhmann said.
It turned out, however, that the passing rates had been incorrectly posted. Ms. Schuhmann said she spent eight full days recalculating the scores for her students.
But Is It Meaningful?
Some critics, though, maintain that the report cards—even when accurate—won’t provide meaningful information to Congress, the public, or teacher-preparation programs themselves. They also worry that the news media will distort the state rankings when they are announced next fall, and that higher education officials will become so preoccupied with the pecking order that they will make it hard for many students to enter their programs or will require exit exams to graduate.
National comparisons will be impossible, given that states use different types of licensing exams, contends C. Emily Feistritzer, the president of the National Center for Education Information, a private research group based in Washington. Nor will state-to-state comparisons be feasible, she argues, because even those states that use the same exams set different cutoff scores to determine passing rates.
“If the intent is to demand greater accountability or to punish colleges of education, I think [the federal government] could have found a better way to do it,” Ms. Feistritzer said. “The report is going to be relatively meaningless.”
Ms. McLaughlin of the Education Department pointed out, however, that Congress never intended the report cards to be used to compare states. Instead, the report cards are supposed to be a means to compare teacher-preparation programs statewide.
But comparisons within states will be difficult as well, because passing rates don’t tell people if individual graduates from particular colleges excelled on their exams, said Bryan McCoy, an assistant professor of education and the Title II coordinator for Southern Arkansas University in Magnolia. They simply note what percentage of a school’s graduates meet the minimum standard.
“The qualitative data are the most meaningful part of the report cards, but they’re not sufficient indicators that a person is ready to be a teacher,” Mr. McCoy said. “You have to look at the overall program and individual competencies.”
The report cards will become even less meaningful in Arkansas, critics say, when the state phases in a mandate over the next few years requiring prospective teachers to pass exit exams before graduating from their teacher-preparation programs. Such a change will effectively guarantee a 100 percent passing rate for the state.
In no way is the state deliberately attempting to make its programs look good on the report cards, said Suzanne Mitchell, a spokeswoman for the Arkansas Department of Higher Education. Giving exit exams prior to graduation simply ensures that students are competent before they reach the classroom, she said. Already, Ms. Mitchell said, six of the 18 Arkansas colleges that produce teachers have such a system in place.
Ms. McLaughlin emphasized that the report cards will include several categories of information above and beyond passing rates—such as faculty-to-student ratios—that are intended to be useful to students shopping for schools or administrators seeking to hire teachers.
Away From Finger-Pointing
Despite the complaints and concerns surrounding the report cards, many observers say the federal mandate is working to open the lines of communication among the various institutional players concerned with teacher training. Though it was not the intent of the law, that achievement is one that few other statutes or programs have been able to accomplish so successfully, supporters of the mandate say.
For years, the various education constituencies blamed one another for the perceived shortcomings of the teaching force. Colleges accused K-12 schools of producing low-performing graduates. Deans of education schools claimed liberal-arts professors failed to take responsibility for the subject-matter knowledge taught to prospective teachers. Schools of arts and sciences pointed a finger at schools of education that, they said, failed to provide a tough curriculum. Institutions felt states were insensitive to their needs; states felt institutions were afraid of criticism and change.
That’s changing in Michigan, said Jerry Robbins, the dean of the college of education at Eastern Michigan University in Ypsilanti.
All around the state, deans of both colleges of education and colleges of arts and sciences have begun meeting to discuss teacher training, he said. Such a session at Eastern Michigan on low teacher-certification test scores has already produced substantial curriculum revisions that are making their way through the approval process, Mr. Robbins said.
Such dialogues are also taking place in Massachusetts.
“I will call up [the dean] at Boston College or [the dean] at Lesley College, and I’ll say, ‘How are you handling this?’ and then, we’ll think through it together,” said James W. Fraser, the dean of the school of education at Northeastern University in Boston.
Such collaborations are more likely a by-product of years of quiet facilitation than of Title II itself, said Penelope M. Earley, the senior director of the American Association of Colleges for Teacher Education, a membership organization based in Washington that represents 735 teacher-preparation programs.
“What Title II has done is made the stakes for doing so really, really high,” Ms. Earley said. “Collaboration was moving along gently, a really fragile system that was not fully developed. What Title II has done is force the collaboration process to move faster and, in some cases, to be cemented around one thing—data collection.”
Ms. Earley fears that once the news of the report cards and their rankings becomes public, the blame game might begin anew.
“That will test collaborations,” she said. “It may destroy them.”
A version of this article appeared in the March 28, 2001 edition of Education Week as Ed. Schools Strain To File Report Cards