Published Online: August 5, 2009

Experts Hope Federal Funds Lead to Better Tests

No matter where teachers, state officials, and testing experts stand on the debate about school accountability, they generally agree that the United States’ current multiple-choice-dominated K-12 tests are, to use language borrowed from the No Child Left Behind Act, in need of improvement.

Now, federal officials are signaling that they expect the caliber of testing to change.

U.S. Secretary of Education Arne Duncan recently announced that he will set aside $350 million of the $4.35 billion in discretionary aid in the Race to the Top Fund to improve assessments.

Testing experts say that money could serve as a down payment for scaling up tests that would better measure students’ critical-thinking skills and improve teacher and student engagement in the assessment process. The catch, they warn, is that truly achieving that goal may force federal officials to rethink the current parameters around assessment and accountability in the NCLB law.

“Accountability testing is seen as a necessary evil to be minimized. It’s like going to the dentist. You have to do it, but it hurts,” said Randy E. Bennett, a distinguished presidential scholar at the Educational Testing Service, a nonprofit testing and research organization based in Princeton, N.J.

Sample Assessment

The proposed application requirements for the Race to the Top Fund define a “high quality” assessment as one that uses “a variety of item types, formats, and assessment conditions,” including performance-based tasks, to measure student achievement.

College and Work Readiness Assessment:
You advise Pat Williams, the president of DynaTech, a company that makes precision electronic instruments and navigational equipment. Sally Evans, a member of DynaTech’s sales force, recommended that DynaTech buy a small private plane (a SwiftAir 235) that she and other members of the sales force could use to visit customers. Pat was about to approve the purchase when there was an accident involving a SwiftAir 235. Your document library contains the following materials:

• Newspaper article about the accident

• Federal Accident Report on in-flight breakups in single-engine planes

• Internal correspondence (Pat’s e-mail to you and Sally’s e-mail to Pat)

• Charts relating to SwiftAir’s performance characteristics

• Excerpt from magazine article comparing SwiftAir 235 with similar planes

• Pictures and descriptions of SwiftAir Models 180 and 235

Sample Questions:

• Do the available data tend to support or refute the claim that the type of wing on the SwiftAir 235 leads to more in-flight breakups? What is the basis for your conclusion?

• What other factors might have contributed to the accident and should be taken into account?

• What is your preliminary recommendation about whether or not DynaTech should buy the plane, and what is the basis for this recommendation?

“The goal,” he said, “should be to make even the test as much of a learning experience as possible, so the student actually benefits from taking it, and teachers are given some important information for the purposes of instruction.”

Measuring Critical Thinking

The image of fill-in-the-bubble multiple-choice items has become all but inseparable from the NCLB law, which more than doubled the amount of federally mandated testing by requiring annual assessments in grades 3-8 and once in high school.

Multiple-choice items can efficiently determine whether a student has assembled discrete pieces of knowledge across a subject. The results are also typically highly reliable, meaning the error associated with them is low—a desirable quality for high-stakes tests. And they are easy and cheap to score.

Such tests, though, are not ideal for identifying whether students can take multiple pieces of domain-specific knowledge and analyze, integrate, and apply them in unfamiliar contexts, Mr. Bennett said. And researchers familiar with international benchmarking argue that those critical-thinking skills are precisely the type that will be in demand as the global economy becomes increasingly knowledge-oriented.

“I think the tragedy is that things that are easy to test and teach lose relevance,” said Andreas Schleicher, the head of the indicators and analysis division for the Paris-based Organization for Economic Cooperation and Development. The OECD sponsors the Program for International Student Assessment, or PISA, which includes performance-based items.

“The feature that is central to PISA is that we’re not that interested in whether students can reproduce content knowledge,” Mr. Schleicher said, “but whether they can extrapolate what they know and apply it in novel situations.”

Performance-based tests designed to measure those abilities are common in specialized fields such as medicine, where licensing exams require examinees to diagnose and treat simulated patients. But such exams typically require scoring by humans, and for that reason are costlier than those that use exclusively multiple-choice questions. They also produce results that paint a deeper picture of students’ understanding, but are less mathematically reliable than multiple-choice tests.

Issues of both cost and reliability, testing experts say, explain why extended performance-based tasks have not penetrated K-12 assessment under the NCLB law.

Technology as Mediator?

What now seems an intractable trade-off between richer tasks and reliable data, though, could be eased by advances in technology that improve the access, cost, and reliability of performance-based testing, some experts argue.

And the federal funding, they say, could be the lever to support that work.

“It’s expensive to put [new item formats] into practice, and to the extent that infusion can help create not only prototypes of promising assessment but support some of the infrastructure needed to deliver them efficiently [it] will be an important legacy,” Mr. Bennett said.


Federal officials have not yet revealed the details of the funding, which will be awarded to states as part of the Race to the Top Fund. But Secretary Duncan has intimated in public appearances that the funding will support assessments aligned to the common core of standards now in development.

Some standardized performance-based examples already exist, such as the College and Work Readiness Assessment, a computer-based test that is given primarily to high school freshmen and seniors in private schools.

The exam, run by the Council for Aid to Education, a New York City-based nonprofit group that works to improve access to higher education, includes a task that requires students to sift through various texts and sources of data and draw conclusions from them to support an argument.

“By and large, the real world doesn’t present itself as nice little abstract tasks with four options that you choose from,” said Richard J. Shavelson, a professor of education at Stanford University who helped design the assessment.

A typical College and Work Readiness Assessment question might present examinees with a dossier of materials relating to a child who had a roller-skating accident at school. The materials could include newspaper articles, technical reports about the skates, data about competitors’ products, sales figures, medical reports, and the number of documented accidents. Then, the student would be asked to analyze those materials and write a memo about whether the skates are truly dangerous, and to justify his or her conclusions drawing from the information.

Mr. Shavelson said he and other researchers have been investigating ways of reducing the complexity of such items for younger students.

The high costs of scoring such a complicated assessment with an almost unlimited number of answers, he added, could be mitigated by advances in natural-language-processing software—essentially programming that proponents claim can judge written essays as accurately as human readers and reduce, though not eliminate, the need for costly human evaluation.

In addition, experts say, technology offers the ability to measure student understanding of concepts and processes involving critical thinking that have been notoriously difficult to assess using only multiple-choice items.

For the 2009 National Assessment of Educational Progress in science, officials assessed a subset of students using “interactive computer tasks.” Those items require students to engage in the entire process of scientific inquiry, in which they must participate in a simulated experiment, record data, and defend or critique a hypothesis.

One of the benefits of the computer-based tasks, said Mary Crovo, the deputy staff director of the National Assessment Governing Board, which sets policy for NAEP, is that computers can simulate tools that would be dangerous or impractical to replicate in an assessment context, or processes such as evolution that occur over long expanses of time.

The results, she added, will provide data not only on student aptitude but also on how students approached the tasks—such as whether they were able to deploy the appropriate tools and how many “test runs” they performed in their experiments.

Improving Instruction

Experts add that the infusion of federal cash could also provide more opportunities to devise tests that better engage teachers with the cognitive science of how knowledge develops over time.

“We know that it’s not only the amount of knowledge that’s important, but the way it’s organized, and we don’t test knowledge organization at all, at least not directly,” Mr. Bennett said. “That’s a significant omission in the way we design our current assessments.”

One potential prototype for such a system is the ETS’ Cognitively Based Assessment of, for, and as Learning. Its reading, writing, and mathematics tests are not built around a single analytical, performance-based item; rather, they also incorporate the prerequisite knowledge and skills that students must master to succeed at the more-complex tasks.

An assessment on fictional reading, for instance, might ask students to diagram the various structures of the plot, such as the conflict, rising action, and conclusion, before moving on to an analytical open-ended question. A nonfiction unit, in contrast, would ask students to weigh the reliability of different sources of information before asking them to integrate information across a series of related texts.

The ETS assessment also will include subunits that teachers can use in a non-high-stakes setting to help students home in on prerequisite content and skills. In Portland, Maine, where the ETS has developed and field-tested the system in collaboration with teachers in three middle schools, officials praised the level of teacher involvement in its design.

“The landmark piece of this whole project is how much teachers have helped design these assessments,” said Tom Lafavore, the district’s director of educational planning. “We are breaking down the bigger skills into smaller ones that we can check along the way.”

Purposeful Approach?

Still, assessment experts express some wariness about the new federal funding, saying it might not improve test design unless U.S. officials also consider the context in which such new assessments might be used.

If measures of higher-order, critical-thinking skills are to be part of an accountability system, for instance, federal officials will probably need to reconsider aspects of the No Child Left Behind law, they said. The law, the 2002 reauthorization of the Elementary and Secondary Education Act, is overdue for reauthorization by Congress.

“If I told you to develop a much more energy-efficient car but you can’t change the materials, the engine, and the fuel it uses, you’re not going to get very far,” said Bill Tucker, the chief operating officer of Education Sector, a Washington-based think tank that has released a series of reports on advanced testing techniques.

“It is an incredible opportunity,” he said of the federal aid, “but we could spend $350 million on the current state of the art and marginally make that better, or we could spend $350 million moving to the next generation of testing.”

Psychometricians point in particular to the constraints on testing placed by the federal law, which requires 95 percent of all students in each grade and each ethnic subgroup to be assessed. For efficiency, cost, and security reasons, each state typically conducts all its testing on the same day, in a narrow time frame.

“I think one thing that’s got to give is the idea of a short test,” said Mr. Bennett of the ETS. “You can’t cover a domain broadly, or enough of a domain deeply, if you give a short test, and you can’t give back information that’s going to be valuable to the teacher or student in terms of what to do.”

It might be possible to administer assessments in parts over the course of the year and to aggregate the results, rather than simply create longer tests, he suggested.

Another possible solution, experts say, would be to move to a system that samples student performance, rather than giving every student the same test form. Each student would take only a part of the exam, with results aggregated at a higher level.

Such a system, already used by NAEP and PISA, could keep costs down, mitigate schools’ technological limitations, and reduce overall testing time. But it has not been used for school accountability purposes; it would contravene the NCLB requirement that all students in a state take the same test, and it would complicate efforts to break out schools’ test-score results by racial, ethnic, and income-level categories, among other areas.

“It’s a question of what your purpose is,” said Brian Stecher, the associate director of education at the RAND Corp., a Santa Monica, Calif.-based research and analysis group. “If you’re monitoring how well the system is performing, you don’t need a score on every kid. I think there is a way to strike a better balance.”

Ultimately, experts say, the federal agenda for the funding will likely determine its utility.

“Unless they’re very clear about the uses—accountability, instruction, evaluation—it’s very easy for this to get corrupted,” said Scott Marion, the associate director of the Dover, N.H.-based Center for Assessment, a test-consulting group. “I think you can easily waste this money if you’re not really careful about it.”

Vol. 28, Issue 37
