A Plausible Near Future
One winter morning, a 5th grader will be awakened earlier than usual by Maestra, a commercially available virtual mentor that curates her comprehensive educational environment. Having monitored the child’s cognitive and emotional development since shortly after her conception, the artificial-intelligence program will accurately anticipate that the morning’s snowfall will add 10 minutes to the child’s typical walk to school. She adores fresh snow. During their morning dialogue over breakfast and the walk, the AI will reference The Snowy Day, a favorite storybook of the child’s, having determined the intervention will induce an optimal psychological state for the school day’s lessons.
A district supervisor’s predawn jog will have just ended when her retina-draping augmented-reality device scribbles adjusted teacher and student attendance rates (-1.5 percent and -2 percent, respectively), modifications to the day’s projected energy consumption (an additional 200 kWh/school), recommended dietary adjustments for seven high-risk student populations scattered across 10 schools (reduced sugars for most, compensating for likely increases in morning stimulants), and last-minute wardrobe tips and talking points for a mid-morning video conference with principals (a blue-centric palette; bullish, data-driven forecasts for next fall’s funding). Attention split, she will almost slip on an ice patch, grumbling, “I hate snowy days.”
In this special collection of Commentary essays, professors, advocates, and futurists challenge us all to deeply consider how schooling must change—and change soon—to meet the needs of a future we cannot yet envision.
This special section is supported by a grant from the Noyce Foundation. Education Week retained sole editorial control over the content of this package; however, the opinions expressed are the authors’ own.
The student and the district supervisor in these fictional vignettes offer two possible scenarios of how the education community could soon be regulated by artificial-intelligence systems and devices. As a society, we must get used to the concept of “technological legislation,” the notion that widely distributed technological systems and devices often govern our lives more effectively than local, state, or federal laws. To cope with the daunting societal implications of accelerating artificial-intelligence adoption, education leaders and policymakers must begin to grapple with the questions these new technologies will raise for education.
As with the dissemination of so many new technologies in the modern era, the pace of societal uptake of AI is already quick and will likely quicken further. And even if there were time to deliberate rationally about the most desirable approaches to integrating AI into primary and secondary education in the United States, we would still likely be encumbered by a major problem: We have a historical tendency to assess most technological innovations as politically neutral, equally capable of being used for “good” or “bad” ends. In most instances, however, bias-free technological development seems impossible.
We can already see the limitations of this perspective in many discussions of the future significance of driverless automobiles, genetic engineering, or advanced manufacturing robotics—discussions in which a misplaced faith in the neutrality of these technologies has led to many erroneous predictions. Instead of aiming for perfect neutrality, we should find ways to incorporate biases of egalitarianism, robust freedom, and dignity into AI design principles.
Historically, there have been several competing positions toward emergent technologies, which educators will have to confront in the coming years with the rise of AI. There is an eternally returning optimistic camp that can only see a New Jerusalem springing up in the wake of a vigorous adoption of new technology, while equally omnipresent doomsayers imagine that adoption spells apocalypse. Realists representing a third vociferous position often make the case for the inevitability of adoption, usually on pain of losing a technological arms race to a foreign competitor-adversary.
These simplistic viewpoints all offer little insight into new technologies themselves or into their proper forms and places in society. Our subsequent bafflement is compounded by widespread ignorance of—even indifference about—the ways that most of our most essential technical systems actually work. Educators will need to resist succumbing to these faulty perspectives that reduce new technologies to “good” or “bad,” and instead grapple deeply with AI.
Artificial intelligence itself presents K-12 educators, managers of educational institutions, and education policymakers with a range of complex and often paradoxical problems. In the classroom, the questions will be challenging: What must curricula contain to educate students for AI-related work? How can students be taught the social and political implications of a world deeply penetrated by artificial minds?
Additionally, many forms of artificial intelligence based on what AI programmers call “machine learning”—that is, a computer’s ability to learn from experience rather than merely following explicit programming—must be literally taught how to make sense of incoming data. The artificial-intelligence systems that AI researchers and corporations are building today would therefore benefit from an opportunity to tap the wisdom and expertise of our society’s most effective teachers to facilitate this learning. And, at the same time, AI could contribute to the better management of education institutions. Today’s teachers are failing to take advantage of this still-early moment in AI development to determine how best to teach these artificial pupils and to augment their classroom strategies with them. To effectively educate human students in a future where AI has become ubiquitous, it may be that educators need to start teaching artificially intelligent “students” in the present.
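For readers unfamiliar with the distinction the paragraph above draws, a minimal sketch may help: in machine learning, the program is not given a rule, it infers one from labeled examples. The toy classifier below (a hypothetical illustration, not any real system) learns a decision threshold from training data rather than having it hard-coded.

```python
# A minimal illustration of "machine learning": the program infers a rule
# from labeled examples instead of following one written by a programmer.
# Illustrative sketch only; real systems use far richer models.

def train_threshold_classifier(examples):
    """Learn a decision threshold from (value, label) pairs, labels 0 or 1."""
    zeros = [value for value, label in examples if label == 0]
    ones = [value for value, label in examples if label == 1]
    # Place the boundary midway between the two classes' nearest members.
    return (max(zeros) + min(ones)) / 2

def predict(threshold, value):
    """Classify a new value using the learned threshold."""
    return 1 if value > threshold else 0

# Hypothetical training data: hours of study -> passed exam (1) or not (0).
training = [(1, 0), (2, 0), (3, 0), (6, 1), (7, 1), (9, 1)]
threshold = train_threshold_classifier(training)  # midway between 3 and 6: 4.5

print(predict(threshold, 2))  # resembles the 0-labeled examples -> prints 0
print(predict(threshold, 8))  # resembles the 1-labeled examples -> prints 1
```

Changing the training data changes the learned rule without touching the code, which is the sense in which such systems must be “taught” rather than programmed.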
Education policymakers and administrators at the national, state, and municipal levels also need to collectively determine how best to integrate virtual assistants—a form of AI very likely to be widely distributed throughout the education system. In the opening vignette, for example, the district supervisor’s AI retina device arguably lacks sufficient privacy protections. And the scenario raises related questions: How can administrators, teachers, students, and parents effectively engage with interactive AIs to achieve mutually agreed upon outcomes? How can these stakeholders be assured that design biases are aligned with their desired political and social goals, as opposed to surreptitiously undermining them? What would it mean for professional educators to rigorously engage in the education of AI systems in ways consistent with our society’s deepest held commitments to liberty, equality, and human dignity?
Ultimately, there remains a disturbing irony for the American education system: The longer the country muddles along accepting overly simplistic descriptions of complex technological systems, the more difficult it becomes to have informed, democratic deliberations about AI. At the same time, well-deployed AI could materially assist us in jump-starting such deliberations by augmenting individuals’ and communities’ decision-making capacities. Students, teachers, and administrators can and should play major roles in solving these thorny puzzles of the AI era.
A version of this article appeared in the December 13, 2017 edition of Education Week as It’s Time to Start Taking AI Seriously