I’ve been writing about memory, learning, and expertise. Today, I’m going to wrap things up by talking about what the nature of expertise means for the limits of expertise (to catch up with what’s come before, see here, here, here, and here).
We’re all fond of citing “experts” in order to win arguments or promote policies. But we tend to be remarkably casual about what expertise actually means, or whether and when an expert’s opinion ought to carry outsized weight.
In learning science, expertise denotes something very particular: enough deliberate practice to move key modules of knowledge or skill into long-term memory, where they become intuitive and fluid. This means that expertise—at least in the sense of mastering skills or knowledge—is very specific in its content and application.
For instance, we don’t think of Yo-Yo Ma as an expert “musician”—we think of him as an expert cellist. I’d guess he can play lots of instruments quite well, but not with the instinctive muscle memory that he’s mastered on cello. Michael Jordan was an expert basketball player and a phenomenal athlete—but he turned out to be far from expert, pretty mediocre actually, when he tried his hand at baseball.
Experts are most effective when they are applying their reflexive mastery under controlled circumstances to very specific situations. Dentists are expert at identifying cavities. Pediatricians are expert at diagnosing ear infections. When situations get murkier and more complex, however, expertise is harder to come by—and becomes less reliable.
Expertise is not readily transferable or transportable. In fact, the world is rife with evidence that experts are pretty limited when it comes to prognosticating future outcomes or applying their skills beyond their narrow field of mastery. For instance, nearly 90% of professional mutual fund managers consistently underperform the stock market. Yep, generally speaking, you’re better off letting a kindergartener pick your stocks than letting an expert stock picker do so.
In his book Expert Political Judgment, University of Pennsylvania professor Philip Tetlock asked 284 political and economic experts to make thousands of predictions over an extended period of time. In each case, he asked whether they expected the status quo, more of X, or less of X. The result? As journalist Louis Menand put it, “The experts performed worse than they would have if they had simply assigned an equal probability to all three outcomes . . . Human beings who spend their lives studying the state of the world, in other words, are poorer forecasters than dart-throwing monkeys.”
Why might that be? There are a couple things going on. One is that experts take for granted most of the expertise they’ve committed to long-term memory. This is why an expert poet or athlete is often not the best writing coach—they’ve forgotten many of the crucial skills that they mastered long ago. This unconscious mastery is a huge boon to being an expert, but it’s a big problem when it comes to explaining expertise or applying it in new circumstances.
Experts also tend to get overly confident in their judgment and discount contrary evidence. Hard-wired expectations built on long experience are very useful when you’re dealing with specific tasks over and over. But those same strengths limit our field of vision and color our assumptions when we tackle something new.
The problem is less with the limits of expertise than with how it can distort our judgment. Economist Noreena Hertz has noted, “We’ve become addicted to experts. We’ve become addicted to their certainty, their assuredness, their definitiveness, and in the process, we have ceded our responsibility.” Hertz tells of an experiment in which participants received an MRI scan while making investment decisions. When investors were asked to think for themselves, their brains employed the circuits that calculate potential gains and weigh risk. However, Hertz explains, when “they listened to the experts’ voices, the independent decision-making parts of their brains switched off. [They] literally flat-lined.” As the Wall Street Journal’s Jason Zweig puts it, the “circuits stayed quiet even when the expert’s advice was bad. . . . In the presence of a financial adviser, your brain can empty out like a dump truck.”
So, what’s all this mean for schooling, education, and education policy? At least three things. (By the way, if you’re interested in all this, keep an eye out for my forthcoming book Letters to a Young Education Reformer).
First, it means that “classic” expertise is quite real in education, and it’s most likely to be found in places where educators are engaged in the concrete work of teaching and learning. There is such a thing as classic expertise in building phonemic awareness, teaching multiplication of fractions, devising lesson plans, designing assessments, and crafting educational software. This kind of expertise is much rarer than it should be—in large part because teacher preparation and professional training don’t provide the deliberate practice, feedback, and working-memory training needed to cultivate expertise. But it’s there. And it deserves far more respect than it usually gets.
Second, it means that many people who get presented as experts in education reform and policy are not really “experts” in any substantive sense. Advocates, attorneys, professors, and educators who champion macro changes to policy or systems are not actually expert in these things—even when introduced as such. It’s fairer and more accurate to say that “they know a lot about the issue” or that “they have a strong point of view.” But the habit of conflating “knows something” with “expertise” has left policymakers and media credulous and too often swept along by feckless faddism masquerading as reputable expertise.
Third, when it comes to system change, expertise may apply to only a tiny sliver of what’s involved. It’s usually a mistake to defer to experts on anything that extends beyond that sliver. Expertise in the mechanics of site-based governance, for instance, can yield blind enthusiasm for state policies mandating new site-based governance councils, even if those are ill-conceived or poorly designed. The big problem is when this shortchanges common sense or makes non-experts fearful of asking simple questions like, “Umm, will this work?”
As long as we respect its limits, of course, expertise has enormous value. But let’s also keep in mind that it was “expert” Thomas Watson, the chairman of IBM, who predicted in 1943, “I think there is a world market for maybe five computers.”
Expertise isn’t always all that it’s cracked up to be.
The opinions expressed in Rick Hess Straight Up are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.