
Let’s Mend, Not End, Educational Testing

By Madhabi Chatterji — March 11, 2014

The Common Core State Standards and accompanying K-12 assessments have recently sparked a fierce national backlash against testing. Yet sound educational testing and assessment are integral to good teaching and learning in classrooms and necessary for evaluating school performance and assuring quality in education. Rather than throw the baby out with the bathwater, I propose a more considered, “mend, not end” approach to testing, assessment, and accountability in America’s schools, with validity at the forefront of the conversation.

Mending begins with understanding that most commercial standardized tests are designed to serve particular purposes well, for particular populations, and can support only particular decisions at best. To uphold validity principles in practice, it is worthwhile to ask: Are we using the test for the originally intended purpose, or for another purpose that taxes the tool beyond its technical limits? Multi-purposing a test indiscriminately is not a good idea from a validity standpoint, despite its efficiency.

Validity deals with the meaningfulness of test scores and reports. Technically, validity is determined by the built-in features of a test, including its overall content, the quality of test questions, the suitability of metrics for the domains tested, and the reliability of scores. In addition, how and where a test’s results are applied, and the defensibility of inferences drawn, or actions taken, with test-based information affect the levels of validity we can claim from the scores and reports.


According to testing standards published by the American Educational Research Association, the National Council on Measurement in Education, and the American Psychological Association, once a validated test is taken out of its originally intended context, we may no longer be able to claim as much validity for a new population, purpose, or decisionmaking context, nor with as much certainty.

New proposed uses call for more tests of a test—a process called “validation.” New evidence must be secured to support a new or different action. Too often, this basic guideline is overlooked, particularly under high-stakes accountability policies like the federal No Child Left Behind Act or the common core. Validity oversights also happen with relatively low-stakes international-assessment programs like the Program for International Student Assessment, or PISA.

No Child Left Behind, signed into law in 2002, mandated annual achievement testing of all students in grades 3-8 to measure the progress of schools. Variable state-set standards aimed at manifestly unattainable growth targets of “adequate yearly progress” and “universal proficiency” by 2014 stretched many school evaluation systems beyond their technical capabilities. NCLB’s public rewards and sanctions based on school performance led to “teaching to the test,” spuriously raising student test scores without lasting or replicable learning gains. This repercussion, in and of itself, undermined the validity of inferences from test scores, which no longer clearly indicated what students actually knew in the tested domains.

Ripple effects of NCLB took hold in other school evaluation contexts, too, threatening validity in additional ways. Even the most enlightened and progressive of districts were pressured into missteps by high-stakes-testing requirements. In 2005, for example, Montgomery County, Md., sought to ratchet up performance and close achievement gaps districtwide by identifying its own model schools and school practices—a laudable goal. However, the county’s selected measure of student achievement, aggregated to serve as an indicator of school performance in “value added” evaluation models, was the combined math and verbal SAT score of high school students.

Recent efforts have sought to align the SAT more closely with college-readiness and common-core standards, but at the time of the 2005 report, “Value-Added Models in Education: Theory and Applications,” the validity of the SAT as an indicator of school-level outcomes was questionable. A college-entrance exam, the SAT is designed to predict how well students will perform as college freshmen, with limited validity as a curriculum-based achievement test. Variability in the levels and kinds of coursework taken by students could significantly affect the meaning of the scores, weakening inferences about student achievement in K-12 scholastic programs.


Further, because students opt to take the SAT, test-takers are likelier to be academically stronger and inclined toward college, to come from wealthier families, or to have had stronger schooling experiences. This self-selection biases schools’ aggregate SAT scores, complicating interpretations of what caused them to rise or fall.

Neither the school district nor the SAT is at fault. Rather, it is the punitive accountability measures tied to test results, in the larger context of reforms, that may be called into question. The power of such accountability mandates influences the decisions of even trained analysts, regardless of the stakes tied to local actions.

In the current context of the common core, a parallel drama is playing out. The common-core tests now being developed have been criticized as too long, superficial or overly narrow, and out of alignment with the curriculum and the common-core standards themselves. Educators, parents, and local officials reasonably fear that, yet again, tests are serving as blunt policy instruments to drive top-down reforms, with inadequate time and resources for designing deeper curricula and matching assessments, little or no professional development for teachers and school leaders, and neglect of the critical supports that schools need to succeed.

With ill-prepared schools and students, what will the test results really tell us about student learning and the quality of schooling?

Yet, were the same tests implemented after standards were refined, teachers and schools readied, parents and students oriented, tests validated to better measure what students actually learned, and results freed from external rewards and sanctions, the results might be more meaningful. Further, the anti-testing backlash might well disappear.

No one was celebrating the recently released results of the 2012 PISA, which ranked American 15-year-olds below their peers in many other industrialized countries, particularly in math and science. But how meaningful and defensible are the intercountry comparative averages, given the differences in culture, educational opportunity, and backgrounds of the 15-year-olds tested in different nations?

Despite popular claims, these sample survey statistics also cannot tell us much about whether particular regional reforms failed or succeeded. Interpreted carefully, PISA results yield useful benchmarks within particular nations, opening opportunities for education systems to improve.

Misinterpretation of PISA’s intercountry rankings, however, reflects a larger syndrome of misuse of educational assessment results and hand-wringing about public education that could easily be avoided.

Most standardized instruments rest on a solid base of scientific knowledge that dates back to the first half of the 20th century. These tools have reliably documented achievement gaps among ethnic, gender, and socioeconomic groups, furnishing policymakers, educators, and our society at large with evidence for improving conditions.

But misuse and misinterpretation of standardized-test results are pervasive problems in educational assessment that threaten validity, especially in high-stakes testing contexts. Here’s an area where scholars and practitioners; test-makers and test users; educators, parents, and students; and the media could work together to make a difference.

These and other issues will be open for debate and discussion in a time-limited blog hosted by edweek.org, to be launched next week and facilitated by James Harvey of the National Superintendents Roundtable and me. Assessing the Assessments: K-12 Measurement and Accountability in the 21st Century will feature expert commentary from scholars and practitioners, offering a variety of perspectives on today’s critical assessment challenges.


A version of this article appeared in the March 12, 2014 edition of Education Week as Validity Counts: Let’s Mend, Not End, Educational Testing

