By Rachel Mosher-Williams, Mariam Mansury and Raffie Parke
All too often, the phrase “measurement and evaluation” elicits fear in the social sector. Evaluation is often done only when stakes are high, such as when a grantmaker uses results to decide whether to continue funding an initiative or when a funder requires “evidence of impact” before making or increasing grant funding. As a result, organizations often focus evaluations on binary measures of success (“Was this initiative successful or not?”) rather than on opportunities to understand the gaps between what’s working and what’s not (“What have we learned?”) in order to course-correct and build capacity for greater impact. This monitoring-over-learning orientation affects foundations and nonprofits alike: 82 percent of foundations report struggling to generate useful lessons for grantees through evaluation, and nonprofits, beholden to funders’ requirements, are particularly vulnerable.
Treating evaluation exclusively as a yes-or-no question, rather than also using evaluation to create space for open-ended inquiry, can backfire in several ways. First, doing so incentivizes a focus on the most measurable outputs and outcomes rather than the most important ones. For example, counting students in a bullying reduction program is more straightforward than measuring shifts in students’ mindsets or behaviors in bullying situations. This might lead a program to prioritize maximum enrollment rather than ensuring the curriculum has long-term cognitive benefits, losing sight of the desired outcome.
Infrequent, high-stakes evaluations can also perpetuate a cycle of anxiety in the lead-up and reactive decision-making once results arrive. When results demonstrate that a program is “successful” (i.e., it produced the anticipated outcomes), a team might breathe a sigh of relief and continue work as usual. A team unaccustomed to regular learning, reflection and adaptation likely won’t know how to use results to improve a program or shift direction, and may not view “failure” as a natural part of learning and social change. Whether or not the results demonstrate “success,” the team will likely disregard the detailed data and lessons behind them, missing the opportunity to understand unexpected outcomes or other pathways to greater impact.
Finally, and perhaps most harmfully, a black-and-white approach to evaluation can degrade an organization’s culture. A win-lose atmosphere with little tolerance for making mistakes and learning from them can quickly erode transparency and trust, both within the organization and with external stakeholders. This limits an organization’s ability to innovate and, ultimately, to achieve its mission.
In contrast, when learning and adaptation are part of an organization’s operating model, its work is likely to have more impact and the organization is likely to be more resilient over time. In recent years, we’ve seen several clients make great strides in this regard. For example, in 2015, NeighborWorks America hired us to conduct a formative evaluation of its Sustainable Homeownership Project (SHP). Importantly, NeighborWorks requested not only a verdict on SHP’s success but also specific recommendations on how the program could be stronger. Our suggestions spanned program implementation, cultural shifts, branding, knowledge sharing and more. NeighborWorks embraced the ideas and even shared them with its board. This year, we partnered with NeighborWorks again to support a new phase of SHP and were delighted to see many of those learnings in action. We are humbled and inspired by partners who demonstrate the tremendous benefits of learning-driven evaluation.
To nudge change forward, all social sector organizations should more fully adopt an evaluation culture, a culture in which learning is a companion to doing good work. In doing so, formal evaluation becomes a natural culmination and enhancement of the continuous learning embedded in the way an organization operates.
Given the privilege funders hold in setting grant requirements and in allocating resources specifically for learning and evaluation, we call on grantmakers not only to tolerate but to embrace and incentivize continual learning among their grantees. Nonprofits and foundations alike can also benefit from shifting how they evaluate their work internally. These three questions are a start:
- Do our key metrics and targets directly support our mission and vision?
- How can we more regularly measure progress toward these outcomes?
- What processes do we need to implement (or remove) to adapt quickly to what we learn?
In 2015, Community Wealth Partners created a Learning and Impact team to better understand our impact and help our partners do the same. If the social sector stands a chance of solving vast problems like hunger and poverty, we all must embrace evaluation with humility and a hunger to learn.
About the Authors
Rachel is the first Senior Director of Learning and Impact at Community Wealth Partners. She leads the design and testing of learning and performance measurement systems so the Community Wealth Partners team and our clients and partners better understand what it takes to solve social problems. Rachel brings over 20 years of experience developing program, research and network development strategy for social sector organizations.
As a Consultant, Mariam leads client engagements, advises and coaches leaders, and spearheads internal teams to develop solutions and deliverables through research and analysis. Mariam has extensive global experience, working with high-level government officials and leading NGOs to design, implement, and evaluate national security and development strategies. She has deep expertise in strategic planning, coalition building, and monitoring and evaluation.
As Data Analytics Manager, Raffie spearheads the advancement of data systems and infrastructure, develops user-friendly models to promote a culture of data sharing, partners with client teams to plan and execute actionable analysis, and manages data needs across the organization. Her background is in primary and secondary research and analysis: translating qualitative and quantitative findings into strategic insights.