Browsing in a used book store a couple of years ago, I picked up a pristine copy of Steven Pinker’s How the Mind Works. When I got home, I found a place for it on my bookshelves, where it rested undisturbed until a week or so ago, when I picked it up and started reading. If you haven’t read it, do it.
This is not a new book. It was published in 1997, a very long time ago for a field that is moving very quickly. I sat up straight when Pinker told his readers that he was typing his manuscript on a computer equipped with WordPerfect software (my all-time favorite, but buried long ago!).
I could not put the book down. I have been doing a lot of reading lately on artificial intelligence, neural networks, intelligent agents, robotics, and natural language processing, and almost every paragraph on every page raised questions I could not answer. The authors of all these books seemed to feel that it was unnecessary to explain the answers to these questions, because any fool should know them. It is a daunting feeling to be at the wrong end of that assumption.
And then I started reading Pinker. His book is not a report on his own research. It is a grand synthesis of research done on many subjects in many fields that he draws together in a masterful way to address his subject. He set out to do something very difficult: to advance the field for the experts in it while at the same time writing a book that the layman or laywoman can understand. And he succeeds. Though the book is twenty years old, it not only contains the answers to many of my questions but also provides a foundation for understanding fast-moving developments in virtually every corner of our lives that will profoundly affect the way we find a mate, the kind of work we do (if we can find any work!), how we learn, what we do with our time, how we get from place to place, perhaps even whether humanity makes it through the next century.
Let me give you a feel for the kinds of questions I could not answer. How could it be that a six-month-old baby can do things the most advanced artificial intelligence machines could not dream of doing, while those same machines can beat grandmasters at chess and the Chinese game of Go? What is ‘intelligent’ behavior anyway? What does it mean to ‘understand’ something? Is it true that these machines can do only what they are programmed to do? Or is it possible that they will decide to do something they were not programmed to do? Can these machines learn to do things they were not programmed to do, and does the word ‘learn’ in that formulation mean the same thing it means when humans learn? What are the limitations and possibilities of the kind of learning these machines might be able to do? Can machines have intentions? What would that mean? Do machines have to think the way humans think in order to think as well as or better than humans? How can it be that machines can think so much faster than human beings and still have almost no ‘common sense’? What is common sense? Are there some intellectual tasks that machines will never be able to do in principle, or is it only a matter of time before they can do everything humans can do, only better?
These are questions about how to think about thinking and how to think about learning. Pinker answers some of them straight out and he provides better tools for thinking about the answers to others than I have seen anywhere else.
These questions are on my mind because I have been asked to write a paper about how educators should be responding to advances in information technology and the likely implications of these advances for the work that will be required of people in, say, 30 or 40 years. I could, of course, have just Googled any one of many lists of 21st century skills and written my own commentary on what education policy makers need to do to foster them.
But this is just a little too easy. There are some very smart people who think that it will not be long before intelligent agents will decimate the job market, leaving ever-larger segments of the population unemployed and unemployable. Those people are now starting a widening conversation about the need for governments to provide a “universal basic income” for the unemployable. In that world, educators might want to think about what it might mean to educate people for a life of leisure, or perhaps for a life of rebellion and revolution against the few rich overlords who don’t need a universal basic income because they own all the capital and run the government.
And there are others who dismiss this scenario, pointing out that, whenever new technologies have displaced workers in the past, new and better jobs were eventually created. This time, they say, not only will there be plenty of good jobs, but the new technologies will enable humanity to avert the impending environmental disaster, feed everyone in abundance, and bring an end to the most pervasive and pernicious diseases that bring people to our hospitals today.
And then there are those who say that both worlds will come about as these technologies unfold, that is, that incomes and welfare will become bimodal, with a growing number doing very well and a growing number just barely making it and not much in between, as the middle vanishes.
And then there are those who say that we are approaching the “singularity,” the point at which the machines become smarter and more capable than human beings at virtually everything that matters...and proceed to take over.
These are not the same futures. To put it mildly, they appear to require different educational responses. We cannot, of course, predict the future with much confidence. But we owe it to ourselves and to our grandchildren to think hard about these possible futures and to do our best to create forms of education for young people that will enable them not just to survive the future, but to create the future they want for themselves.
Which is why those questions I asked above are so important. Educators need to have some idea what these machines can do supremely well and what they are no good at. They need to know why that is the case and whether it is likely to change. They need to have a sense of what it might take to be able to partner with these machines rather than be put out to pasture by them. Educators need to be part of the discussion of what it means to be human in an era when more and more people are trying to program the essence of what it means to be human into machines (including not only the ability to recognize emotions but also to have them).
That’s why I’m reading all these books and articles. When I’m done, I’ll let you know what I think. But, in the meantime, find a copy of Pinker’s book. It may be a few years old and the technology it describes is almost prehistoric by today’s standards, but it is nonetheless one of the best books you can find to help you understand the rise of intelligent agents and the implications of that rise for us, our children and our grandchildren.