This practice paper from IDS captures lessons from recent experiences on using ‘theories of change’ amongst organisations involved in the research–policy interface.
Monitoring and evaluation
It’s not easy to measure the impact of development research in bringing about positive change. It’s even harder to show how communications efforts, and the expenditure behind them, help to achieve both research objectives and development outcomes. This section offers key resources and insights to support better monitoring and evaluation of research uptake activities.
A team from the University of Exeter’s (UK) DESCRIBE project, funded by JISC, has authored a report giving guidance on the definitions, evidence and structures required to capture research impacts and benefits.
This working paper offers guidelines that nonprofit organisations can use when designing evaluations to learn about their investments in communications strategies and the impacts of those investments.
This Background Note describes a case study of one attempt to assess the impact of a knowledge product: The Vietnam…
This Background Note outlines key lessons on impact evaluations, utilisation-focused evaluations and evidence-based policy. While methodological pluralism is seen as…
An important first point when defining socio-economic impact is the distinction between ‘dissemination’ and ‘impact’. One of the report’s main conclusions is that while disseminating to a wide range of audiences is positive, impact has to be evidenced by the application of the research outcomes by the user or community.
Vanesa Weyrauch and Gala Diaz Langou have developed a conceptual framework to identify factors that help or hinder impact evaluation…
This paper lays out the reasons why we might want to examine the difference that research can make. It then explores different ways of approaching this problem, outlining the core issues and choices that arise when seeking to assess research impact.