In his book, The Life of Reason: The Phases of Human Progress, philosopher George Santayana wrote, "Those who cannot remember the past are condemned to repeat it" (p. 115).
A confluence of readings and conversations drove me to find that quote. First, I started reading Dr. Yong Zhao's new book, What Works May Hurt: Side Effects in Education. Though I am admittedly only one chapter in, I have started to wonder how misunderstanding -- or perhaps not even knowing -- the history of American education could be an underlying factor contributing to the development of policy such as No Child Left Behind (NCLB).
Next, I stumbled on a 2017 report from Henrick, Cobb, Penuel, Jackson, and Clark regarding Research-Practice Partnerships. They assert that to bridge the gap between research and practice, the entire process should become more collaborative. Instead of academics designing a study in isolation and then implementing it within a different context, researchers and practitioners should form one of several kinds of partnership: research alliances that focus on the needs of specific schools or districts; design research partnerships that collaboratively create, study, and improve new innovations; Networked Improvement Communities that support the development of collaborative networks intent on tackling specific problems of practice; or some hybrid of the three. And yet, these partnerships rarely occur. Historically, academic researchers have adhered to traditional methods, and Institutional Review Boards drive what is often considered possible to implement.
Finally, over the past few weeks, I have engaged in two conversations about finding, using, and valuing empirical studies that “prove the effectiveness” of various technologies or strategies. In both cases, a desire for empirical evidence -- defined in these instances as student achievement -- served as the impetus for the conversation. However, in both situations, I questioned the logic of seeking a correlation between the strategy or technology and student reading or math scores.
In their discussion of Research-Practice Partnerships, Henrick et al. (2017) also address this logical disconnect. They recognize the challenge of communicating that a particular intervention, strategy, or innovation may not directly correlate with student test scores. For example, a program to improve principal instructional leadership may lead to increased teacher collective efficacy, but it would be a far reach to then connect that improvement with standardized assessments, which have come to be viewed as the traditional measure of educational success.
Returning to Santayana's quote, I started to wonder what we have not remembered (or not learned) that brought us to this point. Dr. Zhao might argue that these conversations about the need for empirical data connected to math and reading scores stem from the side effects of the NCLB legislation. He explains that the accountability movement associated with NCLB created several unintended consequences. First, it resulted in a limited view of the purpose of school, as the policy predominantly focused on increasing reading and math scores. Next, it fueled a national obsession with testing and standards. Finally, it led to a stifling of innovation, as it required decisions to be made based on “scientific evidence,” which translated into the establishment of Randomized Controlled Trials (RCTs) as the gold standard of education research.
I would like to address this last point further in a future post. The relatively recent emergence of Research-Practice Partnerships and user-centered, problem-specific research processes seems to present an opportunity to reignite innovation while also contributing to the academic literature and informing both policy and decision making. For the moment, though, I want to remain focused on history.
Since the publication of A Nation at Risk in 1983 under President Ronald Reagan, education has incrementally become more centralized and standardized despite rhetoric about local control (Fusarelli & Fusarelli, 2015). Continuing Reagan's back-to-basics emphasis, President George H. W. Bush introduced the idea of national standards and endorsed the notion of performance-based accountability when he launched America 2000. Though President Clinton did not force states to adopt a national policy, his Goals 2000 initiative perpetuated the movement toward student achievement measures, setting the stage for George W. Bush's signature policy, No Child Left Behind, and then Obama's Every Student Succeeds Act (Fusarelli & Fusarelli, 2015).
Modern policy may appear to be logically situated within a recognizable pattern based on historical precedent. However, I believe that these policymakers seem to have forgotten three critical events from the 1960s. First, on March 22, 1964, President Lyndon Johnson received a memo from the Ad Hoc Committee on the Triple Revolution warning that the rise of cybernation would ultimately restructure the economy and society (Levy & Murnane, 2013). Next, in 1965, Johnson authorized the first Elementary and Secondary Education Act as part of his war on poverty. Rather than view education as solely math and reading, he considered it a civil rights issue.
Finally, the third event seems to be largely forgotten in modern discussions of education policy outside of academia. In 1966, the Equality of Educational Opportunity report, more commonly referred to as the Coleman Report, first documented the presence of the achievement gap. This large-scale sociological study identified differences in performance based on race, gender, and socioeconomic status, yet attributed only 8-9% of the variation in achievement to schools themselves. Instead, the authors found that students' environmental and societal circumstances outside of school contributed far more heavily to the differences in educational attainment (Coleman et al., 1966).
Dr. Zhao attributes the design of NCLB to an inherent belief that the problem of the achievement gap lay solely with teachers and schools. Had the policymakers under not only George W. Bush, but also his predecessors and successor, addressed the effects of cybernation, the civil rights issue of poverty, and the 91-92% of factors driving the achievement gap from outside of school, perhaps we would have a very different education system.
Coleman, J. S., Campbell, E. Q., Weinfeld, F. D., Hobson, C. J., McPartland, J., & Mood, A. M. (1966). Equality of educational opportunity. Washington, DC: U.S. Government Printing Office.
Fusarelli, L. D., & Fusarelli, B. C. (2015). Federal education policy from Reagan to Obama. In Handbook of education politics and policy (pp. 1-24). New York: Routledge.
Henrick, E. C., Cobb, P., Penuel, W. R., Jackson, K., & Clark, T. (2017). Assessing research-practice partnerships. Retrieved from the William T. Grant Foundation: http://wtgrantfoundation.org/library/uploads/2017/10/Assessing-Research-Practice-Partnerships.pdf
Levy, F., & Murnane, R. J. (2013). Dancing with robots. Retrieved from Third Way NEXT: http://s3.amazonaws.com/content.thirdway.org/publishing/attachments/files/000/000/056/Dancing-With-Robots.pdf?1412360045
Santayana, G. (1924). The life of reason: Or, the phases of human progress [HTML version]. Retrieved August 16, 2018, from http://www.gutenberg.org/catalog/world/readfile?fk_files=169068&pageno=115
Zhao, Y. (2018). What works may hurt: Side effects in education. New York: Teachers College Press.
The opinions expressed in EdTech Researcher are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.