CORRECTED
Middle school students in the U.S. are terrible at critically evaluating online information about science. The problem is particularly pronounced for boys and students from poor backgrounds.
Those are some of the early findings from University of Connecticut researcher Elena Forzani, presented here at the annual meeting of the American Educational Research Association.
In a study of 1,429 7th grade students from 40 districts across two northeastern states, Forzani found that fewer than 4 percent of students could correctly identify the author of an online information source, evaluate that author’s expertise and point of view, and make informed judgments about the overall reliability of the site they were reading.
Forty-four percent of students in the study were able to do at least one of those things correctly.
Girls performed better than boys across the board, and by statistically significant margins when it came to identifying an author and evaluating an author’s point of view.
More affluent students (who were not eligible for free or reduced-price lunch) performed significantly better than their peers on three of the four dimensions of “critical evaluation.”
The results are alarming, said Forzani, a doctoral student in the Storrs-based university’s New Literacies Research Lab.
The ability to critically evaluate the expertise and trustworthiness of source material is essential when reading online, she said. That's especially true given the new emphasis on such skills in the Next Generation Science Standards.
“If students want to come away with accurate [understanding] of scientific concepts, they need to be able to evaluate information for themselves,” Forzani said in an interview.
And reading online “is not like a textbook, where you know the information has already been vetted,” she said. “Without good evaluation skills, students develop misconceptions from unreliable and inaccurate texts.”
The New Literacies Research Lab is headed by professor Donald Leu, who recently published a groundbreaking paper on the achievement gap between poor students and their more affluent peers when reading online. That study found that both upper- and lower-middle-income students generally do a poor job of locating online information, critically evaluating and synthesizing that information, and communicating online. The gap between more- and less-affluent students in that study amounted to about a year's worth of learning during the middle school years.
Forzani is working both to broaden and to sharpen those results, exploring a larger, more diverse sample of students and focusing on the specific reading skill of critical evaluation. She is also seeking to determine how differences in income level and gender relate to students' abilities.
The deficiencies of U.S. students in scientific literacy have been well established by comparative international exams such as TIMSS, Forzani said. Those problems begin early and get worse over time.
Previous research has generally found that girls do worse than boys in science, but outperform boys in reading. Boys have been found to prefer online reading, while girls prefer reading in print.
Well-off students generally outperform poor students across the board.
In the study Forzani presented at AERA, students were asked to engage in a "collaborative online research task" set in a simulated, Internet-like environment. Avatars helped guide the students through their tasks, which involved receiving an email message from a fictional school board president outlining a student-health problem, researching that problem, and summarizing their findings in an email message or wiki post sent back to the board president.
Just 14 percent of the students involved in the study were able to correctly evaluate the overall credibility of the source materials they found, Forzani said.
The lesson for educators?
“We need to focus instruction on critical evaluation, since students are significantly lacking in these skills,” Forzani said. “And we need to support boys and [lower-income] students in particular.”
An earlier version of this post incorrectly characterized how many students scored poorly on the measure used in the study. Forty-four percent of students were able to successfully do at least one of the following: identify the author of an online information source, evaluate that author’s expertise and point of view, and make informed judgments about the overall reliability of the site they were reading.