The William and Flora Hewlett Foundation announced the winners of its Automated Student Assessment Prize, or ASAP, essay-grading competition on Wednesday, after issuing a report last month that found automated essay-graders capable of matching the scores given by their human counterparts.
Three teams split the $100,000 in prize money from the Hewlett Foundation, with the trio of Jason Tigg (United Kingdom), Stefan Henß (Germany), and Momchil Georgiev (United States) collecting the $60,000 top award. The 11 contestants comprising the first-, second-, and third-place teams have backgrounds in particle physics, computer science, data analysis, and even foreign service work. None of them, however, has a background in education, a departure from the earlier study, whose participants were companies or nonprofits with experience in the educational market.
The Hewlett Foundation, which is underwriting the competition as part of its work to improve assessment methods alongside the implementation of the Common Core State Standards in English/language arts and mathematics, also funds Education Week's coverage of deeper learning.
The concept of using artificial intelligence to grade student writing has long been a polarizing one, even though the practice remains quite limited in K-12 education. Supporters of automated essay-graders say that, when incorporated correctly, they can allow students significantly more writing practice by scoring essays far more quickly than a human evaluator. Opponents contend such tools are still very weak at evaluating the validity of students' arguments and that they grade essays based mainly on structure and grammar.
Tom Vander Ark, an educational consultant who is the co-director of the study and competition, said an important distinction of the three winning teams is that they used a combination of predictive-analytic strategies to drive their software, rather than relying solely on natural language processing, the field of computer science that studies the interaction between computers and human language.
“We think this is important because it’s an advancement in the field, [and] it is further demonstration of smart scoring to contribute to state tests that incorporate lots of writing,” Vander Ark said in an email.
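The general idea Vander Ark describes, scoring essays by feeding text-derived features into a trained predictive model rather than relying on natural language processing alone, can be sketched roughly as follows. This is a hypothetical illustration, not the winners' actual method: the features (word count, average word length, vocabulary ratio) and the use of ordinary least squares are assumptions chosen for simplicity.

```python
# Hypothetical sketch: predict an essay score from simple surface features
# using ordinary least squares. Features and model choice are illustrative
# assumptions, not the competition winners' approach.

def extract_features(essay: str) -> list[float]:
    """Turn an essay into a numeric feature vector."""
    words = essay.split()
    n = len(words)
    avg_word_len = sum(len(w) for w in words) / n if n else 0.0
    vocab_ratio = len({w.lower() for w in words}) / n if n else 0.0
    return [1.0, float(n), avg_word_len, vocab_ratio]  # leading 1.0 = intercept

def fit_least_squares(X: list[list[float]], y: list[float]) -> list[float]:
    """Solve the normal equations (X^T X) w = X^T y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for col in range(k):  # forward elimination with partial pivoting
        pivot = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * k
    for r in range(k - 1, -1, -1):  # back substitution
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, k))) / A[r][r]
    return w

def predict(weights: list[float], essay: str) -> float:
    """Score a new essay with the trained weights."""
    return sum(w * f for w, f in zip(weights, extract_features(essay)))
```

In practice a scorer would be trained on thousands of human-graded essays with far richer features; the point of the sketch is only the two-stage shape of the approach, text analysis feeding a statistical predictor.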
Developers of the tools themselves at times appear divided over how today's automated essay-graders should be used. For example, British particle physicist Tigg said in a press release that the technology could potentially transform educational delivery methods. But the same press materials also indicated that the winning trio believes the technology is still at an early stage of its development.
Hewlett is also planning a competition for automated graders of short-answer questions this summer, according to a press release. Like the essay-grading competition, it will be hosted by Kaggle, a platform for data-prediction competitions that allows for transparency and discussion of competitors' work.
A version of this news article first appeared in the Digital Education blog.