Teaching Profession Opinion

Probing the Science of Value-Added Evaluation

By R. Barker Bausell — January 15, 2013 6 min read

Value-added teacher evaluation has been extensively criticized and strongly defended, but less frequently examined from a dispassionate scientific perspective. Among the value-added movement’s most fervent advocates is a respected scientific school of thought that believes reliable causal conclusions can be teased out of huge data sets by economists or statisticians using sophisticated statistical models that control for extraneous factors.

Another scientific school of thought, especially prevalent in medical research, holds that the most reliable method for arriving at defensible causal conclusions involves conducting randomized controlled trials, or RCTs, in which (a) individuals are premeasured on an outcome, (b) randomly assigned to receive different treatments, and (c) measured again to ascertain if changes in the outcome differed based upon the treatments received.

The purpose of this brief essay is not to argue the pros and cons of the two approaches, but to frame value-added teacher evaluation from the latter, experimental perspective. For conceptually, what else is an evaluation of perhaps 500 4th grade teachers in a moderate-size urban school district but 500 high-stakes individual experiments? Are not students premeasured, assigned to receive a particular intervention (the teacher), and measured again to see which teachers were more (or less) efficacious?


Granted, a number of structural differences exist between a medical randomized controlled trial and a districtwide value-added teacher evaluation. Medical trials normally employ only one intervention instead of 500, but the basic logic is the same. Each medical RCT is also privy to its own comparison group, while individual teachers share a common one (consisting of the entire district’s average 4th grade results).

From a methodological perspective, however, both medical and teacher-evaluation trials are designed to generate causal conclusions: namely, that the intervention was statistically superior to the comparison group, statistically inferior, or just the same. But a degree in statistics shouldn’t be required to recognize that an individual medical experiment is designed to produce a more defensible causal conclusion than the collected assortment of 500 teacher-evaluation experiments.

How? Let us count the ways:

• Random assignment is considered the gold standard in medical research because it helps to ensure that the participants in different experimental groups are initially equivalent and therefore have the same propensity to change relative to a specified variable. In controlled clinical trials, the process involves a rigidly prescribed computerized procedure whereby every participant is afforded an equal chance of receiving any given treatment. Public school students cannot be randomly assigned to teachers between schools for logistical reasons, and they are seldom if ever truly randomly assigned within schools because of (a) individual parent requests for a given teacher; (b) professional judgments regarding which teachers might benefit certain types of students; (c) grouping of classrooms by ability level; and (d) other, often unknown, possibly idiosyncratic reasons. Suffice it to say that no medical trial that assigned its patients in the haphazard manner in which students are assigned to teachers at the beginning of a school year would ever be published in a reputable journal (or covered by a reputable newspaper).

• Medical experiments are designed to purposefully minimize the occurrence of extraneous events that might potentially influence changes on the outcome variable. (In drug trials, for example, it is customary to ensure that only the experimental drug is received by the intervention group, only the placebo is received by the comparison group, and no auxiliary treatments are received by either.) However, no comparable procedural control is attempted in a value-added teacher-evaluation experiment (either for the current year or for prior student performance), so any student assigned to any teacher can receive auxiliary tutoring, be helped at home, be team-taught, or be subjected to any number of naturally occurring positive or disruptive learning experiences.

• When medical trials are reported in the scientific literature, the statistical analysis involves only the patients assigned to an intervention and its comparison group (which could quite conceivably constitute a comparison between two groups of 30 individuals). This means that statistical significance is computed to support a single causal conclusion based upon a total of 60 observations. The statistical analyses for a teacher evaluation, on the other hand, are reported in terms of all 500 combined experiments, which in this example would constitute a total of 15,000 observations (30 students times 500 teachers). Yet the 500 causal conclusions published in the newspaper (or on a school district website) rest upon separate contrasts of 500 “treatment groups” (each composed of the changes in outcomes for a single teacher’s 30 students) against essentially the same “comparison group.”

• Explicit guidelines exist for the reporting of medical experiments, such as (a) specifying how many observations were lost between the beginning and the end of the experiment (which is seldom done in value-added experiments, but would entail reporting student transfers, dropouts, missing test data, scoring errors, improperly marked test sheets, clerical errors resulting in incorrect class lists, and so forth for each teacher); and (b) reporting whether statistical significance was obtained, which is impractical for each teacher in a value-added experiment since the reporting of so many individual results would violate multiple statistical principles.
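The arithmetic behind the last two points can be made concrete with a short simulation (a sketch using the hypothetical numbers from the example above, not a model of any actual district or value-added formula). Even when every one of 500 teachers is exactly equally effective, contrasting each class of 30 students against the shared districtwide mean at the conventional 5 percent significance level will, on average, flag roughly 25 teachers as significantly better or worse by chance alone:

```python
import random
import statistics

random.seed(42)

N_TEACHERS = 500   # hypothetical district, as in the example above
CLASS_SIZE = 30
Z_CUTOFF = 1.96    # two-sided 5 percent significance threshold for a z-statistic

# Simulate gain scores under a "null" scenario: every teacher is equally
# effective, so each student's score change is pure noise (mean 0, sd 1).
classes = [[random.gauss(0, 1) for _ in range(CLASS_SIZE)]
           for _ in range(N_TEACHERS)]

# The shared comparison group: the districtwide average gain,
# pooled across all 15,000 observations.
district_mean = statistics.mean(s for cls in classes for s in cls)

# Contrast each teacher's class mean against the district mean.
se = 1 / CLASS_SIZE ** 0.5   # standard error of a class mean (sd taken as known = 1)
flagged = sum(
    abs(statistics.mean(cls) - district_mean) / se > Z_CUTOFF
    for cls in classes
)

print(flagged, "of 500 teachers flagged despite zero true differences")
```

Each individual contrast is honest at the 5 percent level; it is publishing all 500 verdicts side by side that guarantees a crop of false causal conclusions, which is why reporting so many separate results runs afoul of standard statistical practice.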

Of course, a value-added economist or statistician would claim that these problems can be mitigated through sophisticated analyses that control for extraneous variables such as (a) poverty; (b) school resources; (c) class size; (d) supplemental assistance provided to some students by remedial and special educators (not to mention parents); and (e) a plethora of other confounding factors.

Such assurances do not change the fact, however, that a value-added analysis constitutes a series of personal, high-stakes experiments conducted under extremely uncontrolled conditions and reported quite cavalierly.

One hopes that most experimentally oriented professionals would consequently argue that experiments such as these, whose results could cost individuals their livelihoods, should meet certain methodological standards and be reported with a scientifically acceptable degree of transparency.

And some groups (perhaps even teachers or their representatives) might suggest that the individual objects of these experiments have an absolute right to demand a full accounting of the extent to which these standards were met. They might insist that students at least be randomly assigned to teachers within schools, or that detailed data on extraneous events clearly related to student achievement (such as extra instruction received from all sources other than the classroom teacher, individual mitigating circumstances like student illnesses or disruptive family events, and the number of student test scores available for each teacher) be collected for each student, entered into all resulting value-added analyses, and reported in a transparent manner.

A version of this article appeared in the January 16, 2013 edition of Education Week as Putting Value-Added Evaluation to the (Scientific) Test
