Published Online: May 14, 1997


When Performance Assessment Hits Home


The meeting was like many that have taken place over the past few years. About a dozen parents gathered in an elementary school library to discuss their children's results on a reading test. They were curious about this test, which differed considerably from the tests they were familiar with. And they were anxious about the results: Their children, whom they had thought of as high achievers, had scored at the "below basic" or at the "basic" level.

As a former staff writer for this newspaper and the author of a book on assessment, I had written many times about meetings like this and the issues the parents raised at them. This time, though, I was not attending as an observer. I was a participant. My son, a 2nd grader, had taken the test, and I, too, wanted to learn more about the results.

Much of our anxiety as parents reflected the fact that this new test came about with little advance warning. For many years, the district had used the same test, a traditional multiple-choice, basic-skills test in reading, mathematics, science, and social studies. The old test was administered in this school only at the 3rd grade level. And the school's students had done very well on it: Last year, the school's reading scores were the third highest in the district.

What to ask when your 2nd grader scores 'basic.'

This year, though, the district was moving to implement a reform program around high standards for student performance, just as many districts around the country are doing. As part of that effort, the district was looking to change its testing program to reflect its goals for students. The administration also wanted to implement a new teacher-evaluation program that would be based, at least in part, on student performance, so the leadership wanted a new testing program for that purpose as well. The administration elected to start with a reading test, since reading was the major area of concern.

As often happens, things did not go as planned. The district was slow in choosing a new testing program, so it was unable to administer the test in September as a pretest for the teacher-evaluation program. That idea was scrapped. Meanwhile, principals and teachers could not prepare for the new test, since they did not know for weeks what it would be. Then, once the district selected a test, things moved quickly. The district held workshops to explain how to administer the exam, schools sent home to parents a notice that students would be tested, and all students, from grade 1 on, took the 90-minute reading test in November.

In part because the test was new, and schools were uncertain about how to handle the test materials, the results took far longer than expected to come back--four months. Then in March, after the school sent score reports home to parents, the school counselor held a meeting to explain those reports and answer parents' questions.

We had many questions. For a lot of us, this was the first set of test scores we had seen for our children. Unlike in the past, when only 3rd graders at the school were tested, every child had taken this test. The scores were often the first piece of information, other than teachers' comments, about how our children were performing.

What did the reports say? The first page showed the results of the multiple-choice section. It showed the number of questions on each topic (word attack, reading comprehension, and so forth), the number the student attempted to answer, and the number the student answered correctly. That was it--no percentile scores, no stanines, no grade-level equivalents, no comparison with "norm groups."

The second page showed the results of the open-ended section. It rated each response, and the overall results of the section, on a 0-to-3 scale, in which a 3 represented a complete answer, a 2 represented a partially correct answer, a 1 represented an incomplete answer, and a 0 indicated no response. For each item, the report suggested what the student was able to do in his or her response.

The third page showed the overall results--on the multiple-choice section, the open-ended section, and the two combined--in terms of performance standards. That is, the testing company converted the raw score into a scale that indicated whether students performed at the basic, proficient, or advanced level (or below basic), the same levels used on the National Assessment of Educational Progress.

For many of us at the meeting, these reports seemed easy to read. We knew what the number correct meant. And most of us had little trouble with the idea of performance standards; we understood that the goal was proficiency, and that when our children were at the "basic" level it was cause for concern.

Nevertheless, several parents wanted some comparative information. They wanted to know how many students in the school performed at each level. They also wanted to know about students at neighboring schools, and in the district as a whole. In large part, they needed reassurance; they sighed audibly when I told them that students everywhere performed relatively poorly the first time they took a performance-based assessment. They wanted to know if their children's scores were aberrant.

We reserved most of our questions for the discussion of the open-ended section of the test. For most of the parents, this was their first exposure to performance assessment, and they wanted to know exactly what this section tested. We were especially concerned because this was the section on which our children had registered their lowest scores. We wanted to know how our children--my son included--could do so well on the multiple-choice section but end up at the "basic" level on the open-ended items.

The counselor did a good job of explaining this section. Using materials supplied by the testing company, she showed a sample item, which included a reading passage and three open-ended questions: one that asked students to explain the main idea of the passage, one that asked them to draw inferences, and one that asked them to critique the passage. She also showed how the test included types of writing, such as informational writing, not often included on standardized tests. With my background, I was able to help supplement her presentation.

But a few of the parents remained unconvinced that the test results reflected a true picture of their children's abilities. They said their children had excellent teachers and were quite able to answer the questions on the test--only not under the timed conditions the test imposed. Other parents said they thought the open-ended section was subjective, and that the scores reflected the judgments of the test scorers, not their children's abilities.

Fortunately, most of us didn't feel this way. Several parents, including a college teacher, were convinced of the objectivity of scoring essay questions. And others said quite strongly that the test results revealed something about their children's education. These parents started asking some hard questions: What were teachers teaching? And what could the school do to make sure that teachers began to teach their children to answer these kinds of questions, so that the results would improve the next time?

I found this part of the discussion heartening. For years, advocates of performance assessment--and I am among them--have been arguing that such assessments would encourage teachers to teach skills, like the ability to use knowledge to solve real problems, that are seldom tapped on conventional tests. Here, my fellow parents not only agreed with that argument, they were demanding that the school make instructional changes to teach the abilities emphasized on the test. And, they said, they wanted these changes not so that teachers would "teach to the test," but because they thought such abilities were worth having. More than one of the parents at the meeting pointed out that, throughout their lives, their children would have to write about what they know.

Will the school change its instructional program? We will see. In large part, whether or not it does depends on whether the teachers agree that the test is a valid measure of student abilities, and that it has exposed a gap in their teaching. The school will be conducting an analysis of test scores in the coming weeks, and the principal plans to hold a meeting to discuss the results with the faculty.

But this and other schools, and the districts they are part of, can also do more to inform professionals, parents, and the public about performance tests and their results. The information is extremely powerful, yet my child's district had done little to prepare teachers and administrators for it. And, while the school did hold this parents' meeting, only about a dozen parents showed up (there are slightly more than 200 pupils in the school). I don't know whether other schools have held similar meetings, and I don't know how many parents are asking their principals the kinds of questions we did at our meeting.

This is an opportunity they should not miss. For a century and a half, tests have served as the most important source of information about student and school performance. They have not always communicated as effectively as they can, and schools have not always used the information as well as they could. But as journalists are fond of saying, the public has a "right to know" about student performance, especially when the test provides the kind of information my son's test provided. Staying in the dark does no one any good.


Robert Rothman is a senior associate at the National Alliance for Restructuring Education, in Washington, and the author of Measuring Up: Standards, Assessment, and School Reform.
