Baby Einstein videos were supposed to make babies smarter by playing them classical music for hours on end. But guess what the research showed? The videos actually DECREASED the rate at which the little ones learned words! The Disney Corporation is actually offering refunds to the parents who wasted $15.95 on this miseducative junk. The hours and IQ points the children lost will never be recovered.
But at least Disney is minimally accountable for the dollars wasted on its product. Our government should be so scrupulous. Eight years ago we heard about the “Houston Miracle,” the amazing test score gains that supposedly resulted from the high expectations set by school leaders there. Houston Superintendent Rod Paige was appointed by GW Bush to be Secretary of Education on the strength of these lofty results.
These results were used to justify the NCLB policies that hold schools accountable for test scores, under the belief that this pressure would force them to improve.
But a few years later the Houston Miracle was debunked. It turned out that schools there systematically manipulated the system to generate better numbers. Ninth graders were held back to avoid lowering the average scores on tenth-grade tests. Thousands of dropouts were hidden. And principals received $5000 bonuses for their great statistics.
Now here we are and it is déjà vu all over again. Secretary Duncan is continuing and in many ways intensifying the practices of NCLB, based on the supposed successes he presided over as CEO of Chicago Public Schools.
As former President Bush once said, “Fool me once... shame on you... you can’t get fooled again.” A report came out today that reveals that the ambitious program of school closures initiated under Duncan did not work. The report states: “there was almost no difference in achievement for students whose elementary schools were closed from 2001 to 2006, mostly because the schools they later went to were among the city’s worst.”
Nonetheless, Duncan has called for the “turnaround” of 5000 of the nation’s worst schools, citing his success in Chicago as justification.
Other reports show that during Duncan’s tenure in Chicago test scores improved very little, and the achievement gap actually widened.
To be fair, I do not believe standardized tests should be used as the only measure of success. However, Secretary Duncan has made it clear that test scores will continue to be the primary drivers of reform, so it is only fair to apply that yardstick to his own system.
This week another big blow came to the credibility of Duncan’s Race to the Top when the National Academy of Sciences released a strongly worded report questioning the research base of its reform strategies.
The NAS rarely takes such a public stand. The Board on Testing and Assessment (BOTA) made a number of sharp points responding to key elements of Race to the Top:
They warned against using the National Assessment of Educational Progress as a means of checking achievement data for specific initiatives, because it is not designed to reveal performance at the local school or district level. They also warned that NAEP’s validity flows from the low stakes attached to it: if it becomes high stakes, schools will “teach to the test,” and that will invalidate the results.
They are also very clear about the weakness of systems that rely on a single set of tests to measure achievement:
We encourage the Department to pursue vigorously the use of multiple indicators of what students know and can do. A single test should not be relied on as the sole indicator of program effectiveness. This caveat applies as well to other targets of measurement, such as teacher quality and effectiveness and school progress in closing achievement gaps. Development of an appropriate system of multiple indicators involves thinking about the objectives of the system and the nature of the different information that different indicators can provide. Such a system should be constructed from a careful consideration of the complementary information that is provided by different measures.
The use of value-added models (VAM) was also questioned, and the BOTA pointed out numerous specific problems with this approach:
1. Estimates of value added by a teacher can vary greatly from year to year, with many teachers moving between high and low performance categories in successive years (McCaffrey, Sass, and Lockwood, 2008).
2. Estimates of value added by a teacher may vary depending on the method used to calculate the value added, which may make it difficult to defend the choice of a particular method (e.g., Briggs, Weeks, and Wiley, 2008).
3. VAM cannot be used to evaluate educators for untested grades and subjects.
4. Most data bases used to support value-added analyses still face fundamental challenges related to their ability to correctly link students with teachers by subject.
5. Students often receive instruction from multiple teachers, making it difficult to attribute learning gains to a specific teacher, even if the data bases were to correctly record the contributions of all teachers.
6. There are considerable limitations to the transparency of VAM approaches for educators, parents and policy makers, among others, given the sophisticated statistical methods they employ.
They conclude,
Even in pilot projects, VAM estimates of teacher effectiveness that are based on data for a single class of students should not be used to make operational decisions because such estimates are far too unstable to be considered fair or reliable.
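To see why that instability matters, here is a minimal simulation sketch of my own (not from the BOTA report). It assumes each teacher has a stable “true” effect on test scores and that a single year’s value-added estimate layers classroom-level noise on top of it, then checks how often teachers land in a different performance quintile from one year to the next. All of the numbers here (500 teachers, 25 students per class, the effect and noise sizes) are illustrative assumptions, not empirical values.

```python
# Illustrative sketch of single-year value-added instability.
# Assumptions (hypothetical): stable true teacher effects, one class of
# 25 students per teacher per year, independent student-level noise.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, class_size = 500, 25

# Stable "true" teacher effect, in test-score standard deviation units.
true_effect = rng.normal(0, 0.10, n_teachers)

def one_year_estimate():
    # A year's estimate = true effect + the average of student-level noise.
    noise = rng.normal(0, 0.80, (n_teachers, class_size)).mean(axis=1)
    return true_effect + noise

year1, year2 = one_year_estimate(), one_year_estimate()

# Assign each teacher to a performance quintile in each year.
q1 = np.digitize(year1, np.quantile(year1, [0.2, 0.4, 0.6, 0.8]))
q2 = np.digitize(year2, np.quantile(year2, [0.2, 0.4, 0.6, 0.8]))

print("year-to-year correlation:", round(np.corrcoef(year1, year2)[0, 1], 2))
print("share of teachers changing quintile:", round(np.mean(q1 != q2), 2))
```

Under these assumptions the two years’ estimates correlate only weakly, and most teachers change quintiles from one year to the next, which is exactly the kind of year-to-year movement the first point in the list describes.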
They point out that the large-scale tests currently used for accountability purposes are very different from the sorts of tests educators should use for more frequent checks on student understanding, and that the Department of Education should be careful not to promote the inappropriate use of such tests. This sentence jumped out at me:
Assessment of complex reasoning and problem-solving skills typically demands assessment formats that require students to generate their own extended responses rather than selecting a word or phrase from a short list of options.
It appears that Secretary Duncan is preparing to spend more than $4 billion of our money on reforms that are unsupported by solid research and that concrete experience has, in some cases, shown to be worthless. The hucksters who sold us Baby Einstein videos are giving refunds for their product. But Rod Paige and George W. Bush have not offered us a refund of the billions spent on NCLB. And we are getting ready to spend even more billions on the next surefire cures for our schools.
I do not know what combination of solid research, legal pressure and conscience prompted the Disney Company to offer refunds on Baby Einstein videos. But I think we need to figure it out, and apply the same combination to Arne Duncan and the Department of Education, because it looks as if we have another boondoggle in the making.
What do you think? Should we be demanding a refund for NCLB? How about Race to the Top?
Creative Commons image by eedrummer.