School & District Management

If the Impact of Artificial Intelligence on Work is Unclear, What Can Schools Do?

By Benjamin Herold — December 07, 2017 4 min read

Artificial intelligence is already reshaping the labor market. Its impact will likely become even more disruptive. But experts have historically been bad at predicting which jobs and tasks will be lost to automation, and public officials have historically been slow to respond to technological advances with smart, effective regulations.

That’s the gist of a RAND Corporation report, “The Risks of Artificial Intelligence to Security and the Future of Work,” released earlier this week.

What can K-12 educators and policymakers take away from the work?

Based on my read, there’s plenty of reason to be worried, but it’s much less clear what can be done.

The RAND report is the result of an internal, cross-disciplinary brainstorming and literature-review process. In other words, it’s smart people distilling the ideas of other smart people, across fields as diverse as economics, psychology, political science, math, and design. But it’s not original research. And while part of the report focuses on domestic and global security, I’ll limit my focus here to the parts about the future of work.

Schools have long been under pressure to prepare their students for the workforce. It’s no secret that job has gotten considerably tougher with the advent of new technologies such as artificial intelligence. (RAND uses the term “artificial agents,” which it describes as “devices and decision-making aids that rely on automated, data-driven, or algorithmic learning procedures.”) The growth of AI has made it increasingly difficult to predict what jobs will be available, and what skills will be most valuable, when today’s students hit the job market.

One thing that can be helpful to the K-12 sector: RAND is toying with a new framework for helping policymakers and others think about those questions. There are two basic ideas:

  • The more chaos workers must contend with (judged by how many tasks they must juggle, how often they must switch between different scenarios, and how complex those tasks and scenarios are), the less susceptible their occupation is to automation.
  • Response time matters, at least in high-chaos environments. Occupations that require juggling lots of tasks and scenarios, and also require quick decisions and actions, are going to be hard to automate.

The good news for K-12? RAND suggests teaching is one of those “high-chaos, short response-time” occupations in which humans won’t be replaced by artificial agents any time soon. Politicians, Navy SEALs, and surgeons fit the same bill, according to RAND.

But lots of other jobs, from accountants to factory workers to truck drivers, could be vulnerable.

Many economists and technologists have tried to predict who will be hardest hit. (Education Week reviewed this literature for our recent special report on Schools and the Future of Work.) But RAND researchers Osonde A. Osoba and William Welser IV, who authored the new report, suggest that such analyses be taken with a grain of salt.

“Our historical track record on forecasting susceptible jobs has been pretty abysmal,” they write, and “we are poor at objective estimations of cognitive and processing difficulty,” making any such projections “error-prone.”

The Road Ahead for K-12

So if K-12 educators and policymakers can’t count on reliable predictions of what jobs they should be preparing students for, what can they do?

Across the board, the RAND researchers suggest, we need to recognize the macro-level shifts that are taking place, and we need to consider a broad policy framework for responding and adapting.

It’s undeniable that in this new economy, workers are getting a “lower share of returns from economic growth and increased national productivity” than is capital, Osoba and Welser write. The middle-skill, middle-class jobs that were once the bedrock of the economy are eroding, and the benefits of AI are accruing to “a very restricted group of ‘superstar firms,’” such as Google and Facebook.

At the same time, fundamental notions of privacy, intellectual property, and individual autonomy are being reshaped. As a society, we have become more reliant on algorithms and artificial agents to make decisions in our lives, even as those technologies introduce new dangers and risks that we often fail to see and appreciate.

Given the context RAND lays out, my read is that preparing students to help mold the future can’t just be about making sure they have a specific set of labor-market skills that may or may not be valuable 20 years into a very uncertain future.

Instead, part of educators’ responsibility is about making sure tomorrow’s citizens understand how the new data-and-digital world around them works; making sure they’re not “overly trusting or insufficiently critical” of the role those technologies play in our lives; and preparing them to think well outside of the box when it comes to the social, political, and cultural responses that will be needed to ensure a future of work that actually works for everyone.

Or, as Osoba and Welser put it in their report, “an insufficient response to AI’s socioeconomic impacts can inequitably disenfranchise significant portions of the population and pose risks to national stability...There is a case to be made for designing robust adaptive regulatory schemes to match the pace of technological progress.”


A version of this news article first appeared in the Digital Education blog.