Potty training is an amazing process. In order for a human being to effectively use a toilet, she needs to develop a marvelous array of skills: an internal awareness of pressure on her bladder, a geo-spatial sense of where the nearest toilet is and how to navigate there, the fine motor skills to manipulate doors and zippers, the executive function to identify the obstacles that stand between her and the successful deposition of waste in an approved location, and the linguistic and social skills to summon adult assistance when those obstacles are insurmountable.
How on earth could we possibly measure those complex neurological and physiological processes that span the entire central and peripheral nervous system, from the most basic awareness of sensory input to the most complex problem-solving that young people do?
It’s pretty easy in my house, actually. I just measure the number of times per day there is pee-pee on my floor.
It turns out that the number of uric acid depositions, the milliliters of uric acid deposited on my floor, and the distance in millimeters of that uric acid from an approved toileting facility prove to be, in combination, a pretty good proxy for the development of the complex cognitive processes happening inside the noggins of the little girls who live in my house.
Now when someone says, “Can we *really* measure learning?” perhaps there is a case to be made that, strictly speaking, the marvelous things that happen inside human brains are resistant to quantification. But to say that we cannot precisely measure every facet of learning isn’t the same as saying we can’t measure anything useful in helping to understand and guide learning.
The potty training measures of learning that I propose are useful in all kinds of real-world circumstances. Some of these proxies can guide remediation efforts. For instance, if depositions occur in extremely close proximity to an approved toileting facility but not actually within a bathroom-defined area, then it can be useful to explain how to ask for help with opening closed doors.
My proposed measures are also reasonable guides to policy-making, such as decisions on M&M distribution and the flow rate of diapering products to third-party childcare providers. My measures have proven reasonably resistant to corruption through incentives, a common concern. At least in my household, M&M and sticker distribution led to increased use of approved facilities without any regrettable behaviors involving gaming the system. We paid our young people to learn the constellation of potty training skills, and our incentive system, anecdotally, appeared to increase the rate of learning and ultimately of potty-practices adoption.
Those who are interested in learning analytics, educational data mining, and other related efforts at using data science to improve learning are engaged in an analogous process of developing measures for learning. Of course the clickstream logs of online learning environments can’t possibly capture all the wondrous complexity of learning that happens inside young people’s heads. But that doesn’t mean that we can’t “measure learning,” or at least it doesn’t mean that we can’t measure things that might help our students learn. Some of the learning proxies that we will develop for use in these systems might seem strange or ambiguous: the length of a “pause after receiving feedback for a known misconception” is a proposed indicator of learning reported by Ryan Baker in a recent article. Certainly this pause is no more a direct measure of online learning than the milliliters of uric acid on my floor are a direct measure of potty learning. As the case of potty training shows, however, even unusual metrics can prove useful in guiding complex learning processes.
As we explore new indicators of learning behaviors in online spaces, I think the question “can we really measure learning?” will prove less useful than “of the things we can measure, which can we use to guide and support learning?”
The opinions expressed in EdTech Researcher are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.