Bush vs. Clinton
The current administration's priorities differ from its predecessor's.
Is technology a tool to open new educational horizons, or to connect the dots efficiently on today’s educational map?
States and school districts turning to the federal government for help will find that the Bush administration has decided technology should be used to connect the dots—as tightly defined by the accountability mandates of the No Child Left Behind Act.
Administration officials argue that a decade of federal investment in educational technology has demonstrated too little impact on academic achievement. As a corrective, they say, when budget decisions are made, the value of educational technology needs to be weighed alongside other efforts to improve education.
That is partly why the administration has proposed major cuts in federal spending on educational technology for fiscal 2006. One notable exception in the budget, however, is money for data-management technologies, which the administration wants to maintain. (See "Federal Role Seen Shifting.")
“The idea that technology needs to remain distinct and separate [from other funding streams for education] has become somewhat counterproductive,” says C. Todd Jones, the U.S. Department of Education’s associate deputy secretary for the budget. “The administration’s position is that [educational technology] is really no different from various other attempts to generate improvements in the curriculum.”
Those developments have left some educators looking back wistfully at the years of the Clinton presidency, when, they say, a broader view held sway in Washington.
The years 1993 to 2000 could be called the heyday of educational technology. The attention of the media, industry, and government alike was fixed on the promise of the World Wide Web for business and educational innovations, the power of establishing universal Internet access, and the new ways in which interactive and multimedia software could be used to enhance learning and build businesses.
Observers of the field point out that the main federal funding sources for technology in schools were designed and set in motion under that philosophy.
In the 1990s, the cause of school technology—and related causes such as bridging the “digital divide” between the poor and the rich—were sounded from the highest reaches of the Clinton administration. Technology spending was justified according to social and educational progress, equity in schools, and the expected demand for technology skills in the future workplace.
Specific academic objectives for educational technology were not the goal in those pre-No Child Left Behind Act days.
The Education Department, led by then-Secretary Richard W. Riley, settled on four main themes for its technology programs, based on several years of study, which were written into the nation’s first education technology plan, released in 1996. The “four pillars” held that schools and children needed better access to computers, classrooms needed Internet connections, teachers needed professional development in the use of technology, and schools needed access to better digital academic content.
Those themes influenced a raft of federal grant programs launched in that decade, accounting for billions of dollars spent by Washington on educational technology. Awards were made competitively to school districts or under formula funding administered by the states. Consistently, states and local districts had broad latitude in the curriculum goals and methods they would pursue with the federal money.
The second national education technology plan—released in the final days of President Clinton’s administration and quickly shelved by the incoming Bush administration—expanded the idea of access to technology by calling for programs that would increase Internet access beyond the school day, in homes and community centers. The 2000 plan also sought to enlarge technology’s role from being principally a student tool to something teachers could use to help them raise academic achievement. Another goal, prefiguring a theme of the third and latest plan, was to cultivate technology literacy in children as a lifetime skill.
Christopher J. Dede, a Harvard University professor of education who specializes in the development and use of technology in learning, says states have benefited over the past decade from the federal government’s role in helping establish more technological equity and investing in learning innovations.
Inequities in school technology tend to arise across different regions of the country in the absence of a national commitment, says Dede.
Without federal leadership, technological innovation in education also stalls, he contends. “Particularly in research,” he says, “innovation tends to be a ‘tragedy of the commons’—where anyone could benefit from the investment in it, but nobody wants to assume the burden of that investment and let everybody else get a free ride.”
For example, he says, many educators and technology experts believe that hand-held-computing devices have promise as learning tools. “But the state of Arkansas won’t say to itself, ‘Let’s do really terrific, large-scale investment in research on hand-helds, and all other 49 states get to piggyback for free,’ ” he says.
In contrast, a federal government commitment to such a study essentially “aggregates” the costs, spreading the financial risk among the states, he says.
Others echo those concerns.
Linda G. Roberts, who was the technology adviser to Secretary Riley during the Clinton administration, says the way the No Child Left Behind Act was written could provide a useful academic focus for technology efforts.
But, in the same breath, she says the act “has been implemented in a way that has restricted innovation.”
Vol. 24, Issue 35, Page 15. Published in print: May 5, 2005, as “Bush vs. Clinton.”