Foundations Seen Increasing Efforts To Evaluate Impact of Grants
Communication problems, power struggles, and the red tape created by government regulations and school administrators are the major barriers to evaluating the effectiveness of grants to education, foundation officials and educators said at a conference here this month.
Grant evaluation is often viewed as a complicated, time-consuming, and politically charged endeavor that can cost as much as the project itself, according to many participants at the two-day workshop sponsored by the Bruner Foundation.
The workshop drew some 60 funders, project directors, and evaluators from mainly small to mid-size foundations in the New York area.
Only a small number of foundations conduct full-scale evaluations of their grant projects, said Janet Carter, the executive director of the New York-based Bruner Foundation, a philanthropy that focuses on urban revitalization and evaluation.
According to Ms. Carter, among the 30,000 charitable foundations in the United States, there are only 20 foundation employees whose major responsibility is evaluating projects.
However, the topic of evaluation is gaining momentum in the philanthropy world, observers say. Foundations are beginning to take "a much more activist stance around this whole issue," Ms. Carter said.
An increasing number of foundations are "realizing it's part of their responsibility not to just 'hit and run,' but to understand the factors that led to the success or failure" of their grants, according to Mary Leonard, the director of precollegiate programs at the Council on Foundations.
The recession has catalyzed a growing concern about evaluation, Ms. Carter wrote recently, placing foundations "under increasing pressure to use their resources wisely to meet increasing demands, to know what programs are effective and why."
Ms. Leonard also attributed the burgeoning interest in evaluation to a growing candor on the part of foundation directors, who have become more willing to admit that not all of their grants have been successful, and to recognize that projects need not be perfect to be considered worthwhile.
"People are willing to say out loud, 'Well, it didn't do everything we wanted it to do. Some things worked, some things didn't, and here's the lessons learned,'" Ms. Leonard said.
The Bruner Foundation has played a major role as well, Ms. Leonard added, noting that Ms. Carter is considered one of the pre-eminent sources of guidance for foundations preparing an evaluation.
Evaluation is a continual process of "negotiating power and ethics," said the conference's keynote speaker, Michelle Fine, a consultant to the Philadelphia Schools Collaborative, a year-old business-school partnership.
No evaluation of a reform project is completely objective or "value free," Ms. Fine said, because local government officials and district administrators generally stand to benefit politically from positive evaluations, and to suffer from critical ones.
As a result of the divergent agendas of grantmakers, educators, administrators, and politicians, she said, many evaluators encounter difficulty obtaining accurate, unbiased statistical data for their reports.
Political Pressures Cited
In addition, it is not unusual to encounter "political pressure to fudge the results" of an evaluation, according to Hope Hartman, a former evaluator and currently director of the City College of New York's tutoring and cooperative-learning program.
Evaluators sometimes are directed by politicians or school officials to emphasize positive results and to downplay negative elements of a project evaluation, she said.
Evaluators may also feel compelled to minimize a project's negative aspects because they do not want to give their employer--the foundation--bad news about how its money was spent, said Janet Price, a program officer for the Fund for New York City Public Education.
On the other hand, schools may have an "enormous mistrust" of evaluators, said Heather Lewis, the executive director of the Center for Collaborative Education, an affiliate of the Coalition of Essential Schools. "Many evaluators come in and do a report and don't even show it to the schools."
As a result, educators and foundations alike are becoming increasingly skeptical about the use of only such quantitative measures as standardized-test scores, attendance figures, and dropout rates to gauge a project's impact, according to many conference participants.
"It's easy to lie with statistics," said Ms. Price, "and show that great improvement has been made in testing ... when in fact there was negligible improvement." Similarly, she said, progress does not always translate immediately into higher scores.
"We rely so much on the reading score," Ms. Price observed. "It's the one thing [New York] schools are judged on more than anything else." Calling this measure "specious," she said that "it means nothing, [yet] often [scores] are the only thing a school is judged on."
In addition, Ms. Leonard of the Council on Foundations said that foundation program officers who are "generalists" may have difficulty understanding and interpreting complex statistical data. "It may take a trained social scientist to understand the difference between a rigorous evaluation and one that has holes in it," she noted.
Paralleling the movement in schools toward using portfolios to assess student performance, foundations are now developing more comprehensive methods for assessing the impact of their grants. A growing contingent now uses ethnographers' observations, case studies, and surveys of parents' and students' attitudes to supplement or replace quantitative data.
One problem with relying primarily on qualitative evidence, however, is that it is difficult to translate into a "15-second sound bite or headline" that generates positive media exposure for a school program, Ms. Price said.
Ultimately, what makes an evaluation more credible, Ms. Fine of the Philadelphia project said, is asking the right questions of the right people, and including multiple layers of evaluators and a variety of assessment techniques.
Among the questions that conference participants said should be asked during the design of both the project and its evaluation are: Who selects the evaluator? What are the political agendas of all involved parties? Are the project's goals realistic? Who will have access to various drafts of the report? What impact will the final report have on a project's continued funding, and its future existence?
What often gets lost in the evaluation process is the issue of what characteristics make a school good, according to Anne Cook, a co-director of the Urban Academy, an alternative public high school in New York.
Ms. Cook said she believes foundations can have a greater impact if they focus not on raising scores and "quick fixes," but rather on trying to foster a sense of community within a school, to create an environment "where it's O.K. to be intellectual, it's O.K. to read a book, it is important to be respectful, ... and adults and kids can work together."
Often, Ms. Cook said, the voices of the fundamental players in education--students, teachers, and parents--go unheard in both reform projects and their evaluations.
Donald Murphy, a former high-school teacher in Red Hook, N.Y., and currently an editor of School Voices newspaper, criticized the conference itself for not including teachers, parents, and community members.
Often, Mr. Murphy said, the predominantly white, middle-class foundation representatives and project directors do not reach out adequately to the predominantly black and Hispanic parents and students in urban communities, so they often "end up in a conflict with the community" over what each values in education.
"There are literally millions of dollars being poured into these efforts, but to the extent it has an effect in Red Hook and Crown Heights and East New York ... it has none at all," he said.
Vol. 11, Issue 26, Page 5. Published in print: March 18, 1992, as "Foundations Seen Increasing Efforts To Evaluate Impact of Grants."