
Commentary

Our Tests, Ourselves


Nothing would better serve educational reform in the United States than for teachers to accept that the results of their tests measure their own performance (including themselves as test-makers) as well as their students'. Thoughtful analysis of classroom tests would make us far more self-conscious about issues of teaching and learning than we tend to be at present--whether we are learner-centered or subject-centered. I will illustrate my contention about "our tests, ourselves" first with an example from my own teaching, then with a study of nearly 500 10th graders in a public school.

Last summer, in an undergraduate business-writing course, I taught uses of colons and semicolons. After instruction, I gave a quiz. Results were miserable: Only about a quarter of the class seemed to have grasped the material.

Before entering the low grades in my roll book, I analyzed the results and my test. I made three troubling discoveries.

The first was that most students had not studied for the quiz. Such a (common?) result may be a great comfort to those who wish to assign all fault for failure to students. But the more I reflected on the matter, the more I wondered if the fault were entirely theirs.

It was near the end of an intensive six-week course, and most of the students were absorbed in a final project that counted 20 times as much as the punctuation quiz. In addition, the students' performance on the project could affect the grades of their co-workers. Further, every student had a full-time job during the day. Some were working 50 or more hours a week. What is a reasonable expectation for quiz-study time under such circumstances?

My second discovery was that many items in the test were not reliable measures of whether students had learned. Before the fact, the test had seemed a wonderful measure. From well-written essays, I had excerpted sentences containing colons or semicolons and I had omitted the punctuation. Surely, students who knew the difference between the marks would be able to insert the correct one.

What I didn't anticipate was that the correct mark of punctuation is far more difficult to determine for an out-of-context passage. Consider these examples from my quiz:

  • The statistics are startling __ the number of youth activists and volunteers has increased steadily in the last decade.
  • The first thing is to admit your condition __ because of some mood or event or whatever, your mind is incapable of anything like thought.
  • Thus it happened to me __ only when I was able to think of myself as an American could I see the rights and opportunities necessary for full public individuality.

Could you say for certain whether a semicolon or a colon was correct in each case?


My third conclusion was that I had not taught the concepts very effectively. This is always a painful discovery. Usually we know--or think we know--perfectly well the material under study. Why then don't we get it across to our students?

Obviously, issues of student readiness and motivation are central here. Nevertheless, there comes a time when certain things have to be learned by people who need to know them, and my college students had reached that point. I think the fault here was mainly mine.

The image of teachers as archers with only one arrow in their quiver helps to explain my failure. We shoot it, and if it does not hit the mark, we do not know what else to do. Often, too, we lack the patience to reinforce difficult concepts. If we teach an item like the semicolon, for example, how frequently thereafter do we observe whether students are using it correctly in their written work? And we retest only in exceptional circumstances.

In summary, I had taken insufficient account of the circumstances of the students' lives; I had given a bad test; and I had not taught the concept well. Who failed that quiz? I did. (And I threw out the results.)

Is this an isolated instance? Not at all. I know that I often fail to teach new information adequately on my first attempt. And ask conscientious teachers of remedial writing whether they have taught students how to distinguish "they're" from "their" from "there," or how effectively they have taught--really taught--the possessive apostrophe, or how successful they have been in helping students to avoid fragments and run-ons. We are haunted by our failures.

I know, too, from regularly analyzing my own tests, that they are less than models of perfection. Yet I have had special training in test design--which is distinctly not the norm for most elementary or secondary school teachers. Next, let's examine the high school study.

In a district where I worked as language arts supervisor, the objective, "to learn the difference between active and passive sentences," was an item to be "mastered" in grade 10. As part of a larger experiment, I administered a test to all 10th grade classes (a total of 541 students) in September. This multiple-choice test contained 60 questions, four of which were designed to discover whether students knew the difference between passive and active sentences. (Teachers did not know the nature of this research project.) The same four questions appeared in the final examination in June.

In September, the students scored 50 percent correct. (They would have scored 25 percent by chance, since each question contained four choices.)

In June, the students scored 51 percent correct--a result the teachers themselves found devastating when we revealed it. Say what you will about student apathy and about teachers not teaching the curriculum, even when (as was the case here) they wrote it: Our students failed to learn mainly because of a massive failure to teach.


I do not mean to be severely critical of the six English teachers involved in this research. All were certified, all were average or above-average teachers, and all but one were conscientious in their work at a good suburban public school. Anyone who thinks the results would be much better in her or his school district should replicate the experiment.

Some teachers might argue that the active/passive objective is not appropriate for 10th grade or that it is a very difficult item to teach. But it was there in the 10th grade curriculum guide, and no one forced the teachers to select it. They believed it could and should be taught.

And it can be taught. I have had--with somewhat older students--a 96 percent success rate, with the same four questions used in this research. However, I used a quiver full of arrows rather than presenting the material through the "frontal teaching" model that John I. Goodlad found in 88 percent of the high school classes he and his colleagues visited.

Certainly, teachers and everyone else need to be more careful about choosing objectives, and this is still another benefit that can result from adopting the philosophy of "our tests, ourselves." If testing and refined teaching reveal that an objective cannot be taught to students of a certain age, then for the love of humankind, let's kill it and choose another.

Careful choice of objectives is especially important for concrete, measurable objectives like the ones I am discussing. But if we don't also look critically to determine whether we are hitting the target rather than merely "covering" it, we might as well adopt fuzzy stuff like the 1996 English "standards" endorsed by the National Council of Teachers of English and the International Reading Association.

Do not misunderstand me. I heartily agree with Theodore R. Sizer that students have to take responsibility for learning. And I've been around long enough to know that they typically do not. My freshman students are always amazed to discover that I actually expect them to learn something in my class.

But it's a two-way street. Just as the students must accept responsibility for learning, so must we accept responsibility for teaching. And, sad to say, like our students, we often do not. We shoot our single arrow, and move on to the next objective.

Is there a way around this impasse? No one should underestimate the difficulty of more amply filling teachers' quivers--a far more important and difficult goal than imposing state or national curricula or tests--but I urge that a good beginning would be for teachers routinely to analyze the results of their tests as a check against their powers both as test-makers and as teachers, in the deepest sense of that word.


Edgar H. Schuster teaches English at the Abington College of Pennsylvania State University. He was a K-12 language arts supervisor for 20 years and currently serves on Pennsylvania's state Writing Advisory Committee.
