The William and Flora Hewlett Foundation
is sponsoring the Automated Student Assessment Prize (ASAP) to improve the quality of student assessment in America
and, as a result, increase the quantity and quality of student writing.
The prize competition was designed to address the problem of expensive and slow hand scoring of constructed-response tests. Many states have opted for
tests that feature less expensive multiple-choice items, which are less able to assess students’ critical reasoning and writing skills. ASAP is testing the
extent to which software systems can grade long and short written responses as quickly and accurately as expert graders.
Critical reasoning and effective communication are important college- and career-ready skills. The Hewlett Foundation makes grants to support development of
these skills, which it calls “deeper learning.” By sponsoring ASAP, Hewlett
is appealing to data scientists worldwide to help focus and accelerate innovation in student assessment.
ASAP was launched with a vendor demonstration in February. Eight existing vendors and a university team scored more than 16,000 essays in eight data sets
that varied in length, type, and grading protocols. In April, Dr. Mark Shermis, author of Classroom Assessment in Action and ASAP academic advisor, released a study summarizing the results. In a press release I stated, “The demonstration showed
conclusively that automated essay scoring systems are fast, accurate, and cost effective.”
Following the vendor demonstration, an open competition drew more than 250 participating teams from around the world. In May, $100,000 in prizes was awarded to
the top essay scoring team. “I am thrilled to win this contest because it gave me an opportunity to think creatively about how we can use technology to
improve scoring software that instantly and inexpensively predicts how an educator would hand-grade an essay,” said Jason Tigg, the British particle
physicist turned high frequency trader who was a member of the first place team.
Last month, Open Education Solutions (where I’m CEO) launched ASAP Phase Two--another $100,000 prize focused
on short-answer responses. Like Phase One, it is sponsored by the Hewlett Foundation, hosted on the Kaggle
platform, and designed by The Common Pool. The competition includes a variety of prompts from several subjects. Answers average about 50
words--a more difficult scoring challenge than full essays.
The competition closes September 5 and the $100,000 prize purse will be awarded to five winners by October 1. To date, 105 competitors have made 605
entries. Winners will make their scoring engines available to the public, along with a paper describing their methodology for aligning scores with expert graders.
Both phases of ASAP have been developed with the support of, and with the intent to benefit, PARCC and SMARTER Balanced, the Race to the Top state testing consortia that will roll out new online tests in 2014-15.
Both consortia aim to offer much higher quality assessments than are common today but at much lower prices. To accomplish these objectives, the consortia
will incorporate software scoring solutions similar to those used for years to license doctors (United States Medical Licensing Examination) and admit students into graduate schools.
Next up for ASAP is a competition for scoring symbolic mathematical and logic reasoning and a series of classroom trials of online literacy tools. The ASAP
sponsors and partners believe that new tools will help teachers promote more writing, better problem solving skills, and deeper learning.
The opinions expressed in Vander Ark on Innovation are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.