There are times when I feel that we are on the same wavelength, and times when I know we are not. Right now, my frustration is multiplied because in the course of your last mini-essay, I found myself alternately agreeing and disagreeing with your assertions.
I said that many people who have spoken out about the recent round of NAEP scores seem not to have read the report in which the scores were embedded. I expressed the wish that the commentators would take the trouble to read the report itself instead of relying on what they read in the newspapers, which is third-hand at best. This observation sent you into musing about how the original sources themselves are “an interpretation of data,” and how we all rely on the writers that we trust—or happen to agree with.
But that was not my point. The NAEP data are an original source for those who wish to discuss the latest round of national tests. They are not an “interpretation of data.” They are the data. I assume that you mean to say that you are unimpressed by NAEP, that you do not like the content of the NAEP frameworks or the methodology of the NAEP assessments. That is fair enough. But that is a different discussion from the one I raised.
Policymakers in Washington and the state capitols are influenced by the every-other-year reports from NAEP about state and national progress. It is your right to dismiss NAEP out of hand, but the people making important decisions about education policy are on a different trajectory. They look at the numbers and they see a reality that you dismiss as trivial and unimportant. Maybe you are right and they are wrong.
My point is that if public policy is going to be affected by NAEP—and I believe it is (and should be)—then at least the people who write about the NAEP scores should read the data and not rely on second-hand or third-hand accounts. Like the tests or hate them, they are the best measure we have right now. As the recent report from the Thomas B. Fordham Institute (“The Proficiency Illusion”) showed, the state tests vary widely and randomly in terms of their expectations and standards.
As I said in my last post, the progress on NAEP in most areas has been slight or insignificant from 2003 to 2007. I take this to mean that NCLB has had trivial effects on student achievement in reading and math, the subjects tested every other year. Now that the president and the U.S. Department of Education have made it their business to show that federal legislation can and will raise test scores, every release of NAEP data is accompanied by a press statement from the U.S. Secretary of Education that magnifies slight gains into huge achievements.
This is troublesome. It is troublesome because the federal government’s role as the honest, impartial collector and distributor of information gets corrupted when it acts as a cheerleader. And it is troublesome because it is unrealistic to expect test scores to make major leaps in a few years. When they do, one should suspect chicanery of some kind.
NAEP shines a light on state testing practices, as the Fordham report shows. Many states are reporting unrealistic leaps in achievement and high levels of proficiency to satisfy the absurd demand of NCLB for a trajectory that will bring every child to “proficiency” by the year 2014. NAEP shows how unlikely it is that any state will meet that goal and how inflated most of the states’ claims of achievement are.
You make a transition from national testing to the dangers of a national curriculum. We have discussed this often. Like you, I would like to see schools where children have time to build, to create, to explore, to experiment, to play. I would like to see kids in the primary grades building castles and fortresses and stores with blocks. But unlike you, I don’t think this kind of playful learning is at odds with a national curriculum.
What is really frightening today—due in large measure to NCLB—is that we have a national testing mania without any curriculum at all. So now our schools are obsessed with preparing to take tests, getting good scores on tests, and then starting the test prep all over again. Out the window goes any thoughtful or playful engagement with history, literature, or the arts, as well as time for physical education (in many New York City schools, children are lucky to have one period a week for physical education). This is outrageous. This is not good education.
So here is where we find our differences and we find our agreements. Unlike you, I am not frightened by a national curriculum and national testing; I believe we already have both, supplied by commercial publishers of textbooks and tests. And what we have is low-level and antithetical to good education. Where we agree is that we have a vision of what good education is and should be. Even if we don’t agree on every detail, we do agree that what we have now is far from good education.
The opinions expressed in Bridging Differences are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.