Federal Projects’ Impact on STEM Remains Unclear
Few U.S. programs have studies evaluating their effectiveness.
This much is known about the federal investment in math and science education: The government spends more than $3 billion a year on those programs.
What is unknown: Do those efforts actually work?
That uncertainty, as spelled out in a recent federal report, has prompted calls from inside and outside government for a better way of judging the effectiveness of the federal role in STEM—or science, technology, engineering, and math education—and for more coordination among those programs.
The report of the Academic Competitiveness Council, mandated by Congress and released last year, identifies 105 STEM-related programs across 11 federal agencies, with a combined budget of $3.12 billion in the year they were studied, fiscal 2006. Of 115 evaluations of those programs, only four were found to be scientifically rigorous and to show a “meaningful positive impact.”
Of that money, $574 million was devoted specifically to K-12 STEM, with the U.S. Department of Education and the National Science Foundation accounting for 85 percent of the spending. Another $137 million was spent on informal education and outreach, such as programs aimed at sparking students’ interest in STEM topics through schools or museums.
The federal review did not count much of the government’s spending on school technology, such as through the Enhancing Education Through Technology grant program. Funded at $267 million in fiscal 2008, the program is highly regarded by school technology advocates but has been targeted for elimination in recent years by the Bush administration.
Another federal initiative, the E-rate program, uses revenues from telecommunications fees to provide discounts to the nation’s schools—valued at about $2 billion a year—to pay for basic technology infrastructure.
The point about federal STEM programs is not that they are ineffective, many observers say, but something more ambiguous.
“It’s that we don’t know,” says Grover J. “Russ” Whitehurst, the director of the Institute of Education Sciences, the research arm of the Department of Education. “The biggest part of the universe we’re dealing with are question marks.”
Federal STEM education programs are popular among policy experts and elected officials, who see them as tools to raise student achievement and build a stronger workforce in technical fields.
Last year, Congress overwhelmingly approved the America Competes Act, a bipartisan measure that authorized the creation and expansion of STEM programs in teacher training, instruction, educational research, and other areas.
Overall, that law called for about $840 million in new spending on K-12 and undergraduate education in fiscal 2008, according to an estimate from the House Committee on Science and Technology, though Congress chose not to fund many of those programs during that fiscal year. Advocates say they will push for more money to be added in future years.
Yet even those who back a hefty federal commitment to STEM education say a more consistent and reliable method of judging those programs is needed.
“You want to have a means to measure what’s working,” says James Brown, a co-chair of the STEM-ED Coalition, a Washington organization representing educators and scientists who want a strong federal role in that area. While policymakers may disagree on what standards should be used, he adds, “it has to be a systematic approach.”
Today, the measures being used to judge many STEM education programs don’t say much about their impact in the classroom, the report by the Academic Competitiveness Council says. Specifically, federal agencies tend to judge those programs on “inputs,” such as the number of teachers participating, or on changes in participants’ attitudes, rather than on measures such as whether student achievement improves, it found.
The report suggests a “hierarchy” for studying programs, with the preferred method being experimental studies such as randomized controlled trials. In those studies, a group of individuals, such as students or teachers, is evaluated while participating in a certain program or curriculum, and compared against a group that is not taking part. When the conditions are appropriate, such studies allow researchers to isolate a particular classroom strategy or program and judge its effect.
|Chart: On the National Assessment of Educational Progress, gaps between math scores for 8th graders from low-income families and those from higher-income families narrowed in 37 states and grew wider in 14 from 2003 to 2007. Overall, gaps narrowed by an average of 2.4 scale-score points.|
But, the report acknowledges, trials do not work in all circumstances, and can be expensive and difficult to implement in schools. Parents, for instance, may be reluctant to have their children participate in a study, or they may not want to be a part of the group that is not assigned to be tested under an innovative teaching method, or with a new technology.
In situations where randomized studies won’t work, the report recommends comparisons of closely matched groups, and as a third alternative, other types of comparisons with “nonrigorous” designs. The primary purpose of those studies should be to refine understanding of a program for more rigorous study later, it says.
But the education community is often too quick to dismiss randomized controlled trials over cost concerns and other worries, Whitehurst contends. Much of the expense of those studies is associated with the collection of data, he notes, and that information has become more plentiful and reliable with the reporting requirements on schools and states under the 6-year-old No Child Left Behind Act.
The reluctance to conduct randomized controlled trials “is often a knee-jerk reaction,” Whitehurst maintains. “It is often an absence of will.”
Federal officials are following up on the report of the competitiveness council by asking various STEM agencies to provide details of their plans to improve the evaluation of their programs. An education subcommittee of the National Science and Technology Council, a White House advisory group, is collecting that information and plans to release a follow-up report this spring, says Whitehurst, who is helping direct that process.
The National Science Foundation is making a number of changes to its STEM education programs that will address, but also go beyond, the issues raised in the report, says Joan Ferrini-Mundy, the director of the agency’s Division of Research on Learning in Formal and Informal Settings. The agency already requires evaluation of all its programs within the division of education and human resources, which pays for much of the NSF’s STEM education research and related activities, and individual projects within those programs are increasingly being put through independent studies, she says.
But the NSF is also attempting to improve its evaluations so that such research will show not only whether programs and projects work, but also why they work, Ferrini-Mundy says. The agency wants to take a lead role in crafting new ways of studying the effectiveness of STEM efforts and increasing the number of experts conducting those studies, she adds.
The Academic Competitiveness Council capped a yearlong study of federal programs to improve STEM performance with a report released in May 2007 that recommended six steps to improve U.S. competitiveness in those subjects.
• The ACC program inventory and goals and metrics should be living resources, updated regularly and used to facilitate stronger interagency coordination.
• Agencies and the federal government at large should foster knowledge of effective practices through improved evaluation and/or implementation of proven, effective, research-based instructional materials and methods.
• Federal agencies should improve the coordination of their K-12 STEM education programs with states and local school systems.
• Federal agencies should adjust program designs and operations so that programs can be assessed and measurable results can be achieved, consistent with the programs’ goals.
• Funding for federal STEM education programs designed to improve STEM education outcomes should not increase unless a plan for rigorous, independent evaluation is in place, appropriate to the types of activities funded.
• Agencies with STEM education programs should collaborate on implementing ACC recommendations under the auspices of the National Science and Technology Council (NSTC).
“We treat [evaluations] as integral to our sense that STEM education can be advanced, and that all available tools for doing so deserve attention,” explains Ferrini-Mundy, writing in an e-mail.
An independent advisory panel to the NSF, the National Science Board, echoes some of the competitiveness council’s findings in a report released last year. The board points to a lack of coordination among federal agencies involved in STEM education and recommends the creation of a national council on STEM education to help coordinate local, state, and federal efforts.
Science board member Jo Anne Vasquez says she and others not only came across duplication in STEM efforts, but also numerous examples of “great” programs that exist in relative anonymity.
“School district people get out and talk about the things that work in their schools,” Vasquez says, and federal agencies should do the same. The government, she says, should make a greater effort to connect “what’s known and proven with [efforts] that are just starting up.”
One popular source of STEM-related funding for states and local schools is the Enhancing Education Through Technology program, which provides money to states to distribute to districts.
Since it was established through the No Child Left Behind Act in 2002, the program has seen its funding drop steadily, from $700 million in fiscal 2002 to $267 million today. Bush administration officials, who have argued that improvements in school technology have made the program unnecessary, call for zeroing out the program in fiscal 2009.
But others, such as technology advocate Mary Ann Wolf, say that money has paid for valuable services, such as interactive classroom and assessment technology, training for teachers on how to use it, and online professional development for educators in rural and disadvantaged schools.
Evidence that those efforts are paying off can be found in test scores from individual schools and districts where various technology initiatives have been implemented, says Wolf, the executive director of the State Educational Technology Directors Association, which collects information from states on the program.
“States see the improvement,” Wolf says. “States that are doing well—a lot of the time, they say, ‘The federal government was the catalyst.’ ”
Vol. 27, Issue 30, Pages 20-21