Getting the Most Out of Evaluation Takes Both Money and Time
This post originally appeared on The Center for Effective Philanthropy blog.
By: Amy Arbreton
Amy Arbreton is the evaluation officer in the William and Flora Hewlett Foundation’s Effective Philanthropy Group.
Does spending more on evaluation provide “more meaningful” insights?
Last year I completed a survey for the Center for Effective Philanthropy (CEP) and the Center for Evaluation Innovation (CEI) on evaluation practices at the Hewlett Foundation, where I support staff across all program areas to use and learn from evaluations. I eagerly awaited the lessons from (what turned out to be) survey responses from staff at 127 foundations about their evaluation practices, and the resulting report, Benchmarking Foundation Evaluation Practices, provides plenty of data to chew on.
Two issues covered in the survey data are of particular interest to the Hewlett Foundation right now: 1) increasing spending on evaluation; and 2) ensuring the practicality and usefulness of those evaluations. We are especially curious about whether, and to what extent, spending and use are related. And while the CEP/CEI report doesn’t examine these questions in relation to each other, it provides valuable data against which we can compare our own spending and use of evaluation.
Providing a solid estimate of spending on evaluation is a challenge. Only a little over one-third of survey respondents were quite or extremely confident in the dollar estimate they provided. That’s pretty telling. A few years ago, in a small-sample survey of foundation spending on evaluation that we reported on for our Board and then shared, we and others noted a similar challenge: foundations struggle to estimate their evaluation spending with much rigor.
The Hewlett Foundation is among the 50 percent of survey respondents whose funding for evaluation has increased relative to the size of the program budget over the past two years. We know this because we have put practices in place to track our spending more rigorously. But while we have been taking specific steps to increase our overall spending on evaluation, our goal in doing so is also to see an overall increase in the quality, practicality, and usefulness of the evaluations our program teams commission.
On the one hand, the CEP/CEI data appear positive, indicating that program officers are likely to use evaluations for a variety of reasons, top among them: to understand what the foundation has accomplished (80 percent); to decide whether to expand into new program areas or exit areas (76 percent); to adjust strategies (74 percent); or to renew grantees’ funding (71 percent). On the other hand, a concerning finding is that “having evaluations result in meaningful insights for the foundation” was marked as at least somewhat challenging by 76 percent of respondents.
So, why aren’t evaluations resulting in meaningful insights? Is spending more on evaluation the answer?
We are beginning to try to answer this question at the Hewlett Foundation, based on a review of our own evaluations. Early analysis suggests that it is not just spending money that makes a difference, but spending time, too.
When program officers spend sufficient time with the evaluator — preparing meaningful evaluation questions, sharing what they already know and what would be helpful to learn, and working through the findings and what they mean — they are rewarded with more useful evaluations and insights. The “time” challenge is clearly shared across foundations: 91 percent of respondents said that program staff’s time is a challenge to their use of information collected through, or resulting from, evaluation work.
While we see the importance of increasing our spending on evaluation, when it comes to making evaluation useful and used, it’s not just that time is money — it takes both time and money.