Assessment Opinion

Performance Assessment 2.0, Part 2: Lessons from Littleton

By Robert Rothman — June 11, 2015 4 min read

In my last post I highlighted a new report from the Stanford Center on Assessment, Learning, and Equity that examined performance assessment programs from the 1990s to draw lessons for current efforts in that area. I was familiar with many of those programs (the report’s authors asked me to review an earlier draft of their report). I had written about them as a reporter for Education Week and later wrote a book about them.

Rereading the twenty-year-old book, I was struck by how contemporary the events seem. The educators who developed the performance assessments said they wanted students to be able to use their knowledge to solve problems, reason, and communicate effectively. (Sound familiar?) They wanted assessments that would be indistinguishable from instruction and that would engage students in real-world tasks. They wanted to transform teaching so that students would take responsibility for their own learning.

But the leaders of the assessment-reform efforts also faced the challenges the SCALE report discusses that ultimately doomed the assessments. Those challenges came to the surface in a dramatic way in Littleton, Colorado, in 1993.

The reform effort in that Denver suburb began in 1987, when district leaders authorized each school to restructure itself to focus on what now might be called deeper learning. Although the district had high graduation and college-going rates and scored well on traditional standardized tests, that was not enough, according to Monte Moses, a former elementary school principal:

Our test scores didn’t mean as much when we looked at national statistics and statistics from our own building that showed that kids weren’t good problem-solvers. Those scores also didn’t mean much when we sat together at meetings and bemoaned the fact that the character and civility of children seemed to be eroding, and that when you sat down with children individually and asked them to tell you something they had accomplished that they were proud of, they weren’t able to answer.

In response to the district’s charge, schools developed instructional changes designed around new assessments that asked students to demonstrate their knowledge and skills through performances. At Mark Twain Elementary School, for example, fifth grade students had to complete a research assessment, in which they had to study a topic of their choosing (in consultation with their teacher), write a report, develop a visual representation of their findings, and prepare an oral presentation, all of which were evaluated by teachers using a common scoring guide. The assessment was intended to be a culminating event for the students, much like the graduation portfolio at Envision schools, although the stakes were not as high for the Littleton youngsters.

At Littleton High School, meanwhile, the school staff put in place what would now be called a competency-based system. The staff identified 19 skills students would be required to master--such as the ability to write articulately and effectively, to apply mathematical principles to solve a range of problems, and to use research to make decisions--and developed “demonstrations,” or performance assessments, to determine whether students had mastered them. Students had to demonstrate “proficiency” on all 19 skills, and “excellence” in two, in order to graduate.

Soon after the assessment systems were put in place, they attracted opposition from a group of parents who considered them misguided and potentially harmful. Led by William Cisney, a fabric-store owner whose son attended Heritage High School (which implemented a program similar to Littleton’s), three parents formed a slate of candidates for school board and pledged to scrap the new assessments and go “back to basics.”

After a contentious race that attracted national attention--People for the American Way distributed a questionnaire to ascertain whether Cisney and his allies were aligned with the religious right--the critics won a majority on the board and followed through on their pledge to end the assessment programs, at least in the high schools. A pioneering effort at creating a performance assessment system died.

What does this say about contemporary efforts? The three factors the SCALE researchers examined--technical quality, implementation challenges, and political support--were in evidence in Littleton. The assessments were new and had little evidence of their technical soundness, which allowed critics to question them as “unproven.” The schools in some cases implemented the assessments in a cumbersome manner, such as redesigning report cards in ways that some parents could not understand. And the political support, as shown by the election results, was not there.

But as the example shows, the three factors are interrelated. Technical quality and ease of implementation can shore up political support. But public engagement is key. Although the Littleton schools made efforts to keep parents informed and supportive of the changes, the vast majority of voters did not have children in the schools and were persuaded by the critics. Changes of this magnitude need broad support.

As the SCALE report points out, the state of the art of performance assessment in 2015 is well ahead of where it was twenty years ago, and conditions are more favorable for successful implementation now. But a look back is always worthwhile.

The opinions expressed in Learning Deeply are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.