Learning from Evaluation: Summary of Eval 2017 Discussion Between Ipsos, Vital Voices, Lutheran World Relief, and Coca-Cola

In response to the theme of learning from evaluation, Ipsos convened a panel of its clients for the 2017 American Evaluation Association Conference in Washington DC. The panel explored what happens to impact measurement work after we deliver it and how we can better tailor our process and deliverables to clients’ organizational learning needs.

Alejandra Garcia, Director of Monitoring and Evaluation at Vital Voices, presented on her organization’s work to build and pilot an M&E framework to test the hypothesis that deep, individualized investment in women leaders is a successful model for catalyzing broader societal and cultural progress. Her presentation discussed how Vital Voices, through a collaborative partnership with Ipsos, has begun to address the challenges of evaluating a program founded on the principle of the hard-to-measure “ripple effect”.

Alejandra discussed the process of developing common goals and indicators across Vital Voices’ multiple and diverse programs. She explained how developing program-level theories of change has helped to bring teams together and to align their work with the organizational theory of change. While consultants can offer additional skills and resources that may be limited within an organization, she felt that they must understand the nuances of internal dynamics and work hard to be perceived as team members rather than auditors in order to enable effective collaboration.

Garrett Schiche, Director of Program Quality at Lutheran World Relief, offered a perspective on how to respond to internal knowledge gaps and produce impact assessments that balance rigor and resources. He discussed the limited utility of traditional capacity building (one-off group training sessions). He found that one good option for delivering impact evaluations while meeting learning needs in this environment is to pair staff with technical experts who coach them through the evaluation process, as his organization did with Ipsos on Lutheran World Relief’s CORE resilience project in the Sahel.

This approach does come with challenges, however. At times the distance between staff capacity and technical expertise was too great to bridge, a problem frequently exacerbated by language barriers and by staff spread across remote areas and working in highly complex contexts. While the project achieved higher-quality evaluation deliverables than in the past, the learning component stalled. One of the primary lessons to take into the next project was the need for better communication from headquarters about the role of the technical experts and the learning objective of the evaluation.

Angie Rozas, Senior Director for Social Impact at Coca-Cola, shared Coca-Cola’s experience of communicating the findings of the long-term evaluation of its 5by20 initiative. One of the most pressing challenges quickly became how to offer early feedback to stakeholders with confidence that early trends would continue and that the outcome narrative would not be dramatically different at each time point. This was addressed by focusing on the findings that could be reported with confidence at each stage: early outcomes, for example, centered on perceptions, learning, and intent to apply learning.

The statistical models required to understand change in such longitudinal studies are highly technical, and they necessitate an intensive, collaborative process around how best to interpret, use, and communicate the findings, internally to gain support for the program and externally to stakeholders. It was important to consider carefully how to display complex findings in a simplified way, and to engage fully with Ipsos on the technical details so that they could be credibly translated for audiences unable to engage with that detail.

There were several aspects of this rich discussion that I will reflect on in future evaluation engagements to ensure that the impact work Ipsos delivers can be readily integrated into organizational learning. In particular:

  • Theories of change are critical tools for learning about a program and organization, and for getting everyone “on the same page” about the goals of an intervention and which metrics should be tracked. This isn’t new information – all of our studies have a theory of change component – but it is worth reinforcing the benefits of theory of change approaches here and with all of our clients.
  • We must work hard to understand an organization’s learning goals as distinct from an evaluation’s objectives, and consider the potential barriers to learning (language, geography, politics, etc.) from the outset. To do this we must ensure that our clients perceive us as advisors and collaborators, not judges of their work, thus fostering openness and candor.
  • Technical rigor is only as valuable as the way it is communicated. However rigorous the findings, if clients cannot understand them, communicate them internally, and see what they mean for their program, then we have failed in our job as evaluators.