Special Report

Taking Technology’s Measure

By Andrew Trotter — November 10, 1997

Technology’s stock is flying as high in the nation’s schools as it is on Wall Street.

In poll after poll, parents say technology is essential to a child’s education. Many educators believe it’s the missing linchpin of school reform. Business leaders consider it a mandatory part of a student’s preparation for the workplace. And policymakers at every level of government are spending more money on it each year.

With support for technology so strong, people might assume its value for schools has been proven beyond question. In fact, the dividends that educators can expect from this investment are not yet clear.

There is no guarantee that technology improves student achievement. Research in this area has produced little hard evidence, and few studies have yet examined the kinds of technology use that experts believe are most valuable to learning.

Nor is there assurance that policymakers have spent enough time on questions of how the new technology will be used or maintained. Although all the states and many school districts have technology plans, the quality of those plans varies widely. And precious few resources have been spent on ensuring that teachers make effective use of the new tools.

Many states and districts can’t even say with confidence how much technology they already own. Some state officials have no other source for this data than commercial firms whose surveying methods are flawed.

Nevertheless, spending on educational technology this school year could top $5 billion.

Proponents of technology acknowledge the many questions that surround its use in schools. But they say technology offers so many potential benefits that it would be even riskier not to invest in it.

“It is impossible for me to imagine how school leaders who are focused on more authentic ways of doing math and science, who are developing rich environments for learning, can achieve that without technology,” says Linda Roberts, the adviser on technology to Secretary of Education Richard W. Riley.

“The data may not be perfect,” adds Mary Fulton, an analyst at the Education Commission of the States. “But if we don’t start somewhere and have something to build on, we’re never going to get anywhere.”

In this report, the staff of Education Week has compiled the best data available about the level of technology in the nation’s schools and the latest thinking about how technology should be implemented. Because both are in a state of flux, Technology Counts will serve as the foundation for a series of annual reports in the years to come.

The growing amount of money spent on school technology is not the only factor warranting a closer look at its effectiveness. Technology is also a hot topic in the policy arena.

President Clinton, Vice President Al Gore, and a procession of state governors from both political parties have recently endorsed technology as a necessary tool for education. At last year’s national education summit in Palisades, N.Y., the governors and business leaders who attended made improving education technology one of two main goals for school change.

Among the most important pending federal initiatives are the discounts for telecommunications services that are scheduled to go into effect in January under the Telecommunications Act of 1996. The “E-rate” discounts, which will be administered by the Federal Communications Commission, will be worth $10 billion to schools over the next four years.

Political support for technology has grown, in part, because many policymakers view it as a critical part of broad-based education reform, says Barbara Means, an education researcher at SRI International in Menlo Park, Calif.

“Over the course of five years, a lot of states have come around to seeing technology as an important part of trying to support teachers, trying to get across the new standards,” Means says.

Advocates point to several such connections between technology and reform:

  • In the classroom, technologies that support exploration, such as the Internet and desktop presentation tools, can create a synergy with “constructivist” teaching methods.
  • Providing teachers with high-tech training, personal computers, telephones, and Internet connections is a step toward increasing their professionalism. Technology also can reduce their isolation and lead to professional communities of educators in cyberspace.
  • Technology can help measure academic achievement as assessments of student performance grow more complex.
  • The movement to restructure schools—to decentralize decisionmaking and share it with teachers or school-level councils—can be aided by networks that collect and distribute information. Databases of student information can help teachers target resources and appropriate teaching methods toward individual children.
  • Computer tools that streamline administrative tasks can improve school accountability and free up more resources for the classroom.

But reform has many currents—sometimes hard to distinguish as they swirl and eddy, separate and rejoin—and many education reformers consider technology a side issue.

“In school reform, there are so many challenges,” Means says. “It is so difficult and complex, technology seems another layer of complexity and expense.”

Henry Levin, the director of the Accelerated Schools Project, a reform network that seeks to bring at-risk children into the educational mainstream, agrees. “It’s very, very expensive to maintain a [high-tech] facility and someone to watch over it. So that competes with other possibilities.”

Yet, he calls technology “a wonderful tool” when integrated into the instructional program.

“My concern is that the technology ought to basically be the servant of a very strong program that challenges students and gets them involved, as opposed to simply being an entity in itself which somehow is going to have an impact because it exists,” says Levin, an education professor at Stanford University who is on a teaching sabbatical at Teachers College, Columbia University.

Other educators worry that schools are spending money on technology without a clear plan.

“I’m very concerned over the rush to purchase hardware when we do not have enough evidence on how best to use computers to help youngsters achieve in reading, mathematics, writing, et cetera,” Samuel G. Sava, the executive director of the National Association of Elementary School Principals, says.

“My second concern is that a number of school systems, in order to purchase the hardware, have begun to eliminate such key programs as the arts,” he adds.

No one admits to sacrificing other programs to technology purchases, Sava says. “But when you talk to music educators, or when you talk to our principals, you begin to get a feel that within the limits of the resources, computers are getting a higher priority. Computers are supposed to support the curriculum, not write the curriculum.”

Strong evidence of technology’s effectiveness undoubtedly would further strengthen public and political support for it. But research is slow in coming.

One difficulty in assembling evidence is that technology’s capabilities have changed faster than education researchers can complete the large-scale, controlled studies that lead to firm conclusions.

Studies of that sort were done on computer-assisted instruction by the early 1990s, and showed that “drill and practice” can effectively reinforce basic skills. But by then, many computer-using teachers had moved on to completely different approaches. Those approaches are supported only by small studies or by studies limited to a narrow range of technology.

Some say no one should be surprised. “In most areas of society, we’ve gone ahead on the basis of instinct, not on evidence,” notes Donald M. Feuerstein, a senior adviser in the U.S. Department of Education.

Where proponents of technology lack scientific proof about its benefits, they cite common sense and educators’ classroom observations.

It is often forgotten that the use of technology in the schools has evolved partly through the grassroots efforts of a handful of teachers and administrators who were dissatisfied with traditional teaching methods and who experimented with their classes and schools, sometimes spending their own time and money on training.

They gradually won over more colleagues and formed idea-sharing networks and then coalitions of educators—prodded by technology vendors, to be sure—who toiled to win political support and funding for technology.

Because of that history, many educators believe policymakers are playing catch-up.

Yet, most in education’s own ranks are still more comfortable with chalkboards than with a computer mouse. Only one out of five teachers uses a computer regularly for teaching, according to the National Center for Education Statistics.

Educators and policymakers agree that it is folly to introduce technology without adequate teacher training—both at the university and professional levels. But budgets and funding often don’t reflect that belief.

“Technology is a special case. It isn’t something teachers got in their preparation,” says Kathleen Fulton, the associate director of the University of Maryland’s Center for Learning and Educational Technology. “We assume teachers know about content and pedagogy, and we expect them to stay up to date in those areas. But they have not been prepared to think about how technology can enhance their teaching.”

In addition to a lack of research, the nation also has a dim picture of the amount of technology that is already in the schools and the ways it is used.

Solid data can make a real difference in public perceptions and in policymaking, experts say. For example, a 1995 NCES survey showed the nation that only 9 percent of school classrooms were wired to the Internet. Coming at a time when the public was becoming intrigued by the World Wide Web, the figure provided a rallying point for the NetDay movement to wire classrooms. The number of wired classrooms rose to 14 percent in the 1996 NCES survey and is expected to jump again this year.

As the states exert more leadership in technology and school reform efforts, they have become especially hungry for data that compare their schools with those in other states.

“Legislators ask that question; also, communities and business interests do,” says John Cradler, a technology consultant and a former technology director of the Council of Chief State School Officers in Washington. “It’s like how your state compares on SAT scores.”

Two of the main sources for state data are Quality Education Data and Market Data Retrieval, commercial marketing firms that collect information on technology and sell it to technology manufacturers. QED, which has its headquarters in Denver, is a division of Peterson’s Guides Inc. MDR, based in Shelton, Conn., is owned by Dun & Bradstreet Corp.

Both companies conduct mail and telephone surveys of schools throughout the nation, asking for laundry lists of their high-tech equipment and purchasing plans. Their results are supplemented by information from the Education Department and other surveys.

The two companies try to collect data for every school—a method called a “census survey”—because their main business is assembling marketing lists and reports to help other companies target sales efforts.

In the absence of official statistics, the companies’ data have generally come to be treated as an unofficial state-by-state scorecard on technology. That is especially true of data provided by QED, which is better known than MDR.

The problem, say many experts, is that the data are flawed.

Henry Jay Becker, a University of California-Irvine education researcher, notes that a census survey is prone to undercounting. He estimates that QED consistently low-balls school technology counts by about 25 percent.

A census survey is “just not a good way of gathering representative information,” he says. “Sample surveys are much better.”

A number of state technology directors and researchers have pointed out other problems with QED’s survey.

Schools generally are under no compulsion to submit data to the company, and in any given year, many don’t bother.

For example, just 6 percent of schools in Rhode Island responded to a survey conducted by QED last spring. The company fills such gaps with responses collected from Rhode Island schools in earlier surveys, but that practice raises another issue.

As recently as this past summer, QED was still using some data collected in 1994 or earlier—a long time ago, given the fast pace of change in technology. The company tries to minimize this problem, however, by giving more weight to recent data.

For the nation as a whole, QED says it receives data for 67 percent of schools annually, but the response rate varies greatly from state to state.

One way the company boosts response rates is by accepting school data provided by states or districts—even though that practice runs a risk of lowering accuracy.

“A lot of times, districts don’t take [the surveys] seriously,” Cradler says. The wrong person might complete the survey; he might count computers in the warehouse rather than in schools; or he might misinterpret the questions, Cradler says.

In addition, some district technology coordinators say they receive questionnaires at a point in the budget cycle when their estimate of the year’s purchases is hazy at best.

These problems make QED’s state-by-state rankings unreliable, say some state directors.

In its 1997-98 catalog, for example, QED reports Ohio’s student-to-computer ratio as 14-to-1, for a ranking of 46th in the nation. The ratio of students to multimedia computers is reported at 50-to-1, for a ranking of 49th.

But the data are plain wrong, according to Tim Best, a consultant who serves as Ohio’s technology director. He says that QED did not account for 87,664 multimedia computers purchased by the state over the past two years. The multimedia-computer ratio should be close to 14-to-1, he says.

“Whenever QED figures are released, we get hammered in the press,” Best says. “We spend our time in firefights rather than doing the work we’re supposed to be doing.”

MDR, which heard from 63 percent of Ohio’s schools in its 1997 survey, reports the state’s ratio of students to multimedia computers as 19-to-1, or 34th in its list of states.

(Education Week is relying primarily on MDR’s 1997 survey for state-by-state information in this report because the company accepted data only from individual schools; it disclosed the response rates for every state; and those response rates were comparatively high—averaging 64 percent, with no state below 48 percent.)

Jeanne Hayes, the president of QED, acknowledges the validity of states’ concerns. “It’s not an acceptable situation,” she says.

But she also points out that state-collected data often have similar flaws. And she says the company recently retooled itself to work with states to improve data collection.

QED has been trying to enlist states to adopt a common survey of current-year technology use and inventories in their schools. To date, 23 states have joined QED’s Project EdTech and lent their clout to encourage schools to return QED’s surveys.

Seventeen more states have shared data they have collected with QED, Hayes says, although she is dissatisfied with the timeliness and consistency of some of it. The company is planning to release the first compilation of EdTech data later this month.

Response rates from the first year of Project EdTech are relatively high, ranging from 41 percent of schools in Arizona to 100 percent in Louisiana, a state that required schools to complete the survey to qualify for a grant.

Some state technology directors wonder why they need to rely on a for-profit company when the states have data-gathering mechanisms in place. They want the states to jointly develop their own national survey without the intervention of data marketing firms.

“Districts tend to take it more seriously if it’s a government survey,” Cradler says.

But Hayes suggests that an independent company is more likely to have fair and consistent survey methodology than 50 different states.

Some researchers, including Becker, downplay the importance of state-by-state comparisons altogether. Becker thinks researchers need to be asking more sophisticated questions that do not lend themselves to the bureaucratic collection procedures of a school census survey—no matter who conducts it.

Data on the numbers of computers, videodisc players, satellite dishes, or wired classrooms in schools can obscure crucial questions—including whether they’re actually being used, experts say.

Momentum is building, meanwhile, for a national research and data-gathering effort on technology.

A report released last spring by the President’s Committee of Advisers on Science and Technology calls for spending at least $1.5 billion on educational technology research. The CEO Forum, a group of corporate and education leaders, endorsed improved collection of data on schools’ use of technology in a report released last month.

The federal government is already developing indicators to track how schools are using grants from the Technology Literacy Challenge Fund, a program that distributed $200 million to the states to award to district technology projects in the 1997-98 school year. And some officials are proposing that the FCC require schools, as a condition of their E-rate discounts, to submit a technology inventory, their equipment purchase plans, and an assessment of their needs.

The Education Department’s Feuerstein says the government wants to create a database of the nation’s school technology that could be built up over time and analyzed. Feuerstein says the FCC inventory could be joined to a massive NCES effort—in concert with the data efforts of the states—to settle on standards for information collected about school inventories.

Once the indicators are established, states could incorporate them in their technology surveys to produce one set of standardized data.

“It has to be a very inclusive effort,” Feuerstein says.

Sen. Jeff Bingaman, D-N.M., has proposed the creation of a national organization that would connect efforts to study content, software development, and Internet use, in light of academic standards and curriculum. He suggests that a consortium of higher education, business concerns, research institutes, and government would fund and direct the group.

“We’re talking about leveraging new research across agencies, and other [private] agencies where they’re doing cutting-edge research,” says Cradler, who has contributed ideas to the proposal. “The idea is to pull together all data sources, to elevate this to a national initiative.”

Some of the demand for national data is driven by the need to persuade legislators in Congress and the statehouses that their investments in technology have been worth the money.

Jonathan Sallet, the chief policy counsel at MCI Communications Corp., warned a group of educators in Los Angeles last spring that the public will want to see the payback.

“Now, there’s goodwill and optimism,” he said. “Three years from now, we’ll need to show results—or in the future [the E-rate] will be seen as a good idea that failed.”

The sections that follow in this report will look at school technology from a variety of perspectives.

The first section, on technology in the classroom, discusses what research has revealed about the effectiveness of technology in raising student achievement.

The second section, on teacher training, examines how well teachers are prepared to apply technology in their classrooms and their professional lives.

The next section, on administrators, addresses school leaders’ readiness to make decisions that take advantage of the full capabilities of technology.

The fourth section looks at the ways in which technology has changed the relationship between schools and the public—from parents to corporations.

The final section examines how federal and state policies are driving or hindering the use of school technology.

The report concludes with a summary of technology efforts in all 50 states.

A version of this article appeared in the November 10, 1997 edition of Education Week as Taking Technology’s Measure