Md. Test Shaping Classroom Practice, Studies Find
Maryland's new performance assessment is helping to shape instruction in its classrooms, recent studies of the program suggest.
The studies, presented last month to the American Educational Research Association, provide some of the first evidence that the new generation of performance assessments now being used across the country can have a positive effect on classroom practice.
Maryland first administered the statewide test in grades 3, 5, and 8 in 1991. That year, the test measured performance in reading/language arts and mathematics. The following year, social studies and science were added.
The new assessment consists of open-ended tasks that take between 20 minutes and 90 minutes to complete. The entire test is administered at each school for nine hours over a five-day period, but no one student takes every part.
The results are reported at the school, district, and state levels, based on the percentage of students who meet standards for "satisfactory" and "excellent" performance.
The studies were conducted by researchers affiliated with the federally funded National Reading Research Center, based at the University of Maryland at College Park.
Researchers in one study interviewed district-level administrators in reading and language arts in 1992, a year after the test was first administered. Representatives from 21 of the state's 24 districts participated in the study.
According to the respondents, the test had led to an increased emphasis on five areas at the district level: the integration of reading into other content areas, the use of general-trade books and literature as a basis for reading instruction, a broader curriculum that included such goals as reading to perform tasks, the use of cooperative learning, and the development of new countywide assessments.
Such changes, said John T. Guthrie, the director of the center, brought the districts' instructional goals and materials into a closer alignment with the state's desired learner outcomes. But he cautioned that the "extent of these changes was not remarkable ... at this early stage."
Researchers also interviewed teachers and administrators from five schools that district supervisors had identified as making positive instructional changes in response to the assessments. In those schools, they found, "the assessment program had substantial impacts on instructional practice."
In particular, all five schools reported increased writing by students, a greater emphasis on personal responses to reading, and more student choice in both reading and writing assignments.
All of the schools reported a greater use of authentic literature rather than basal textbooks.
But the authors cautioned against generalizing about the test's impact on instruction because the study was limited to a handful of exemplary schools.
Huge Time Commitment
Two other studies raised some concerns about the assessment.
One asked educators from five districts to identify barriers to implementing the test. Many of those surveyed said the assessment required huge commitments of time, both to observe and evaluate students and to change instructional practices. Yet the test results were not reported for individual students.
"Given the amounts of class and individual time and effort that are invested in teaching to, preparing for, and actually administering'' the test, the study says, "the return on investment is slight and distal.''
But Steven Ferrara, the state director of student assessment, said the "nine hours of testing represents less than 1 percent of the total instructional hours of the school year."
"We think the benefits outweigh the difficulties that this poses to schools and individual teachers,'' Mr. Ferrara said.
He also noted that test scores for individual students are given to districts but that "school systems have been reluctant to release them to parents."
"We've sort of been on the fence on this issue,'' he added. "We've told school systems that they should be releasing the scores, but we're not mandating it.''
The researchers also suggested spending more money on staff development and communications to help people better understand the test's rationale and uses.
The second study brought together panels of experts to review the reading portion of the 1991 assessment. They concluded that the test measured a clearly defined domain of content and skills and that it could be used to make judgments about individual schools.
But they also expressed concern that the test would fail to distinguish between students' reading and writing ability, since writing was the sole means of obtaining evidence about reading performance.