Published in Print: August 10, 2011, as Salvaging Race to the Top Assessment

Commentary

Salvaging RTT Assessment


On numerous occasions, I have heard the Race to the Top-funded assessment initiative compared to the Apollo program, one of the most ambitious and successful projects of the 20th century. The federally financed effort to create new assessments tied to the common-core state standards differs, however, from the Apollo program in two key respects. First, Apollo had a clearly defined outcome (i.e., landing a man on the moon and returning him safely to Earth) and was largely self-contained. By contrast, it is not clear that RTT assessment has such a neatly defined outcome, and it is definitely not self-contained.

The success of the assessment program now being developed by two multistate consortia with Race to the Top money will require extensive infrastructure changes in schools; the active support and involvement of states and local educators across the country; and the active support and involvement of students, parents, and perhaps communities. Additionally, the project will require significant behavior changes and resources for long-term, sustained implementation. In this regard, the War on Poverty may be the more appropriate 1960s comparison than the Apollo project in terms of RTT assessment’s scope and complexity.

Developing an assessment system, even a next-generation one, should be a much simpler task than traveling to and from the moon. After all, it’s not rocket science. However, developing an assessment system is far from the sole, or perhaps even the primary, goal of the RTT assessment program or the two consortia. Rather, the large-scale assessments being developed are simply supporting pieces in a large tool chest of assessments, support materials, professional-development materials, training programs, and data systems being created under the auspices of the Race to the Top.


Along with large-scale assessments, these additional components are critical pieces in a program not designed simply to measure college and career readiness, but to improve instruction and student learning to ensure college and career readiness. And, even if there were an agreed-upon definition of “college and career readiness” (which there is not), the amorphous goal of improving instruction and student learning and assessment’s role in that process is much less well-defined than a moon landing.

Beyond that lack of a clear goal and the massive complexity of the charge at hand, the most serious threat to the success of the RTT assessment effort may be the danger of viewing the entire program through the lens of large-scale assessment.

This becomes evident in the struggles of both federally funded consortia to determine how to integrate rich performance tasks into a large-scale summative assessment. It is also evident in the fundamentally flawed premise that the role of the state assessment should be to provide teachers with real-time, actionable data. The problem is not with administering multiple, rich, performance-based assessments throughout the year. Nor is the problem with the state or consortia supporting that effort by developing and implementing high-quality performance tasks to be administered throughout the year. The problems arise when one attempts to administer such assessments within the constraints of a large-scale testing program. Assessments that are already successfully implemented at the school level by local educators can be improved with appropriate state support, but they implode when burdened with the external requirements of large-scale assessment, such as security and standardization.

In his speech announcing the winners of the Race to the Top assessment competition last fall, U.S. Secretary of Education Arne Duncan stated: “We must stop lying, and we must start telling the truth.” If there is any chance for the RTT assessment program to accomplish its goals and not simply produce a few “pilot projects” or “discrete tests, cobbled together,” as Mr. Duncan put it, all involved must face the truths about how extensive those goals are and what will be necessary to accomplish them. At this point, there are four clear steps that states and the federal government can take to salvage Race to the Top assessment and avoid an epic fail.

STEP 1. Establish an independent board to oversee components of the assessment process that must be applied uniformly by the two consortia. Comparability of results across all states has been the one clearly and consistently stated outcome associated with the common-core state standards and Race to the Top testing. Producing comparable results, however, requires agreement on key issues outside the purview of the consortia. These issues include defining college and career readiness, establishing achievement standards, and arriving at a common interpretation of the common standards to inform assessment development. We cannot afford to repeat past mistakes and allow assessments to define those critical areas.


STEP 2. Clarify the role of the federal government in the RTT assessment program and state assessments. The reauthorization of the Elementary and Secondary Education Act must be resolved immediately, and the new version must address state concerns about the federal approval process for assessments and accountability under the No Child Left Behind Act. The consortia and their member states cannot function effectively in a state of uncertainty regarding high-stakes accountability.

STEP 3. Clearly delineate the functions of each component of a comprehensive assessment system. The various pieces of a system serve different functions and consequently have different technical, operational, and policy-based requirements. For instance, the components designed to provide formative information at the school level do not require the same level of standardization and security as those designed to provide comparable summative information to the state. Identifying the purpose of each of the components in the system avoids placing unnecessary constraints on all components of the system.

STEP 4. Establish a clear vision for the future of large-scale assessment. Where do we want state assessment to be in five, 10, or 20 years? There are two distinct paths to follow. Path A, which we have been on for 20 years, leads to an ever-increasing role of state assessment and inevitably to an increase in the direct role of the state (or federal government) in K-12 curriculum and instruction. The central question along this path is: In what ways can large-scale assessment be enhanced to provide more information to and about students, teachers, schools, and districts? Path B leads to the development of local capacity and the enhancement of local instruction to the point where external, large-scale state assessment becomes largely superfluous, primarily functioning as an auditing tool. Along this path the central questions are: What information is needed to support effective instruction and student learning, and what is the best way to deliver that information?

There is no question that we are at a crossroads in K-12 assessment, and that this is an unprecedented opportunity to advance the field. True advancement, however, is not possible without a clear sense of which of the paths outlined above we intend to follow. Although some may argue that any improvement over the current state of large-scale testing is a step in the right direction, incremental movement down the wrong path ultimately will lead us further and further from our final destination.

Vol. 30, Issue 37, Pages 25, 32
