Many school leaders see pilot tests of educational technology products as important opportunities to try out digital tools before their districts leap ahead with costly purchases.
But a new study suggests that school officials and tech developers often fail to set clear standards for judging the success of those trial runs, and that there isn’t an effective process for gathering feedback from teachers and students.
And even if products perform impressively during pilots, district budget cycles tend to prevent administrators from buying the tech tools they’re trying out anytime soon.
Those were among the findings of an analysis put forward by Digital Promise, a congressionally authorized nonprofit that has sought to shine a light on how districts buy educational technology and why the process seems to frustrate and confuse both companies and educators.
Districts typically agree to allow companies to test products in their classrooms in the hope that the technologies will help meet specific educational needs or forge new paths for teaching and learning. Ed-tech companies, in turn, use pilots to showcase their goods, refine them, and nurture relationships with potential clients who could buy their products for use across an entire district or set of schools.
Yet the Digital Promise study, which focused on the experiences of six districts around the country, lays bare the factors that can stymie both parties’ ambitions.
Many K-12 officials would like to be able to judge the value of ed-tech products by looking at their impact on student test scores—no surprise, given the pressure schools face to raise achievement. And some of the districts that participated in the study sought to track gains through test scores, too.
But the participating districts also said that when state assessment results were used to measure success, the results didn’t come in until after the academic year had ended, interfering with their decisions about how, or whether, to use the products the following school year. A few districts in the study sidestepped that problem by using locally crafted tests, which could be administered on their own schedules.
Yet the study’s authors warn against attributing test-score gains to a single product, since exam results can be swayed by many other factors.
For an ed-tech developer, the study found, staging a successful pilot in a district in no way guarantees the company will land a contract to do work there. Districts’ tests of products for the study were held in the second semester and weren’t complete until around the end of the school year. That generally did not leave districts nearly enough time to set aside money and put in motion a procurement process to buy those goods for the next school year, even if that’s what K-12 leaders wanted.
“There’s an asynchronous calendar” between pilots and district purchases, said Valerie Adams-Bass, the lead author of the study, in an interview. And “many districts felt like, ‘We need to spend more time to try this product before we invest in it.’ ”
‘Mature’ Student Feedback
The research on pilots builds on a study commissioned last year by Digital Promise and the Education Industry Association, as part of an analysis led by researchers at Johns Hopkins University.
That study found that companies have only a vague sense of districts’ buying needs and of how to interest them in their products. K-12 leaders, meanwhile, told researchers they were overwhelmed by product pitches and lacked the time and ability to evaluate them thoroughly.
The earlier study also revealed that districts rely heavily on pilots to test ed-tech products, but that their expectations for those trials can vary enormously.
To probe the topic in more depth, Digital Promise recruited six districts of varying sizes to participate in its new study: the District of Columbia; Fulton County, Ga.; Piedmont City, Ala.; South Fayette Township, Pa.; Vista (Calif.) Unified; and West Ada, Idaho. In some cases, district plans to pilot a product were already underway before the study began. Fulton County was the biggest district, with more than 95,000 students; Piedmont was the smallest, with about 1,200.
The districts tested a variety of products developed for different academic subjects and school needs. The District of Columbia schools used Newsela, a Web-based platform focused on literacy; Fulton County tested BrainPOP and IXL, producers of animated curricular content and of math and language arts content, respectively; Piedmont used Achieve 3000, in an effort to provide digital science and social studies content; South Fayette piloted Vex IQ, a robotics platform; Vista Unified piloted the Web-based math program ST Math; and West Ada tested ALEKS, another Web-based math program.
District officials who took part in the study said they valued teacher and student feedback about products but rarely collected it in a formal way, noted Adams-Bass, a postdoctoral fellow in education at the University of California, Davis, the institution that led the study. Information about pilot tests tended to trickle up from students to teachers and from teachers to principals, the study found.
The lack of a systematic way of capturing what students think of ed-tech products is a lost opportunity for developers, the study suggests.
Students’ comments about such products were “surprisingly mature, and they had particularly insightful comments about advice for education technology developers,” the authors found. “The student voice is vital to consider throughout a pilot process, as they are the true end-users.”
The insights offered by students helped shape the thinking of both district officials and developers working on the South Fayette Township district’s pilot, recalled Aileen Owens, the system’s director of technology and innovation. The district partnered with Carnegie Mellon University, the University of Pittsburgh, and Vex IQ on the pilot, which aimed to build students’ skills in computational thinking and robotics programming. The district and its partners are adjusting how the product is used this academic year, based on what students liked and what they didn’t, Owens said.
Winning Over Teachers
Among students’ initial observations from the South Fayette pilot: They wanted the platforms they used to challenge them more, and they wanted more real-world simulations, Owens recalled.
“That was eye-opening,” she said, “to learn about students’ discoveries and hear their voice.”
Rusty Greiff, the managing director of education ventures and a general partner at 1776, a Washington-based incubator and seed fund that supports ed-tech companies, agreed with the study’s authors on the importance of developers and K-12 officials working together to set clear goals for implementing pilots and evaluating products. He also agreed such coordination is often lacking.
Communication breaks down for several reasons, he said. Districts sometimes pilot digital products without thinking about whether the technology is aligned to their goals and objectives, and without making sure their teachers are on board, said Greiff, who has helped launch digital products and currently advises and serves on the boards of ed-tech companies.
“There are a lot of players and constituencies who have to be aligned for it to be effective,” Greiff said. “Pilots are hard to execute,” a fact that districts and companies “sometimes figure out quite late in the process.”
Companies and districts also overlook the amount of training and ongoing support educators need to make sure ed-tech products are used as intended during pilots, said Karen Billings, a vice president and the manager of the education technology industry network for the Software & Information Industry Association.
Assigning staff members to help teachers work through technical glitches and implement the product effectively “drives up costs for vendors,” Billings said. Yet while that training can bring “high costs for a company,” she said, “there’s a high reward.”
Tech developers also tend to overlook the day-to-day pressures teachers face—from managing a classroom to juggling myriad digital platforms—and how those factors keep pilots from occurring under ideal circumstances, Greiff added.
Companies tend to think they’re going to “have pilots [occur on] their own island,” Greiff said. “Teachers don’t work on an island. They work in the classroom.”