Track 4: Getting and Using the Right Kind of Evidence
There are three main challenges regarding evidence: How do we generate the different types of evidence that are relevant to multiple audiences, such as donors, the private sector, and government?
How do programs get the right kind of internal data to iteratively improve their work? And how do we make sure we use data effectively?
Continued learning and experimentation are therefore important for developing insights and for enhancing our capacity to combine the collection and analysis of new sources of data (quantitative
and qualitative) with our time-tested research and M&E capabilities. These processes are even more critical in multi-stakeholder collaborations, where the use and interpretation of
evidence can influence strategies, as well as the way we assess change, behaviors, and relationships in market systems.
This track, first presented at the 2016 SEEP Annual Conference, will explore effective methods for collecting both “new” and “traditional” data and then transforming
it into useful evidence to understand real-time needs, improve decision-making at multiple levels, promote learning, influence policy, and increase overall development impact.
Key questions this track will explore include: Can we use evidence to incentivize collaboration? How can collaborative learning approaches be used to support innovative M&E practices or adaptive management?
How do we identify the most relevant data for our stakeholders, and then effectively balance the need to provide data to different stakeholders with M&E efficiency and cost?
What does it mean to be flexible and rigorous with data at the same time? How does this affect the resulting evidence?
How do we ensure that data, knowledge, and evidence can be used in real time for iterative programming and collaboration?
How do we assess behavior change within market systems?
How do we assess market systems development in financial inclusion?
How can we support policy makers to iteratively use new information in decision-making, and to use evidence that goes beyond “traditional” numbers?
How can we be more deliberate and collaborative in contributing to the evidence base for market systems development?
Session topics might include:
Illustrations of participatory methods for market systems research and evaluation
Exploration of successful organizational strategies to incentivize information-sharing and data-driven decision-making
Practical examples of how MEL systems evolve in a project using adaptive management approaches, and the cost of adaptation
Alternative methods to assess project impact beyond traditional surveys (midline, endline, etc.), such as Most Significant Change or Outcome Mapping
Examples of using adaptive management or “CLA” (collaborating, learning, and adapting) approaches to contribute to the evidence base for market systems development
Examples of measuring systemic change and how collaboration reinforces or undermines identified changes
Lessons learned on helping stakeholders to focus on the right data at the right time