In my previous post, I looked at the recently released results from the 2013 National Assessment of Educational Progress. While much of the press attention to the results focused on the fact that scores have stagnated for twelfth graders, a deeper look at the results showed that relatively few students--students who are about to go on to college or the workplace--could answer questions that asked them to interpret texts or devise a formula for a mathematics problem.
The NAEP results also offer some clues that might suggest reasons for the performance results. Keep in mind that, because of the way NAEP is designed, it cannot assign causality. It does not follow students over time or compare students who have received an instructional treatment with those who haven't. It is simply a snapshot in time, and can point out factors that appear to be associated with results. But these associations are interesting and warrant further study.
First, in mathematics, there is a clear correlation between engaging school work and performance. Students who never or hardly ever found their math work engaging scored on average 145 (on a 300-point scale), while those who found the work engaging "sometimes" scored on average 155, and those who found the work engaging "often" scored on average 165. (Those who always found the work engaging scored 166.)
Again, this does not mean engaging work produces high performance. The causation might run in the other direction: students who do well in math might be more likely to find the work engaging. But it does suggest the possibility that classroom practices that engage students might result in higher levels of learning.
In reading as well, classroom practices are associated with results. There is a clear correlation between discussing interpretations and NAEP results. Students who never or hardly ever discussed interpretations scored on average 275 (out of 500); those who discussed interpretations once or twice a month scored on average 283; those who discussed interpretations once or twice a week scored on average 289; and those who discussed interpretations daily or almost every day scored on average 297.
Here, too, these results do not prove anything, and these are averages--a third of all students said they discussed interpretations daily, and some of those students performed relatively poorly. But the findings do suggest that these kinds of classroom practices might make a difference in student performance.
Whether these differences show up in test scores, of course, depends on the test. Tests can only show whether students can solve complex problems or interpret texts if they measure those abilities. NAEP, because of its design, can include measures of such abilities as well as measures of basic comprehension and number facts. Not all state assessments currently in use measure these competencies. If schools are to use these practices, the assessments must change.
The opinions expressed in Learning Deeply are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.