Last month the Research to Action Roundtable series brought together a group of evaluators from the DFID-funded What Works to Prevent Violence Against Women and Girls programme. The panellists included Professor Tamsin Bradley (Evaluation Research Lead), Dr Sheena Crawford (Team Lead on Performance Evaluation), Katherine Liakos from IMC (project managers of the evaluation), and Megan Lloyd-Laney (Research Uptake Lead). The Roundtable covered the objectives and approaches of the evaluation process, insights into challenges unique to a programme of this type, and the broader lessons that could be shared with the wider evaluation community.
Katherine Liakos kicked things off, outlining the structure and objectives of the What Works programme and describing the three components that make up the global programme:
- Ten innovation grants and six impact evaluations;
- What Works to prevent violence in conflict and humanitarian crises; and
- The economic and social cost of violence against women.
The main aim of the programme is to build a bank of knowledge on What Works to prevent Violence Against Women and Girls (VAWG), both for DFID and for other implementers more widely, using primary prevention strategies and programmes as well as research and innovation. Katherine Liakos pointed out the value of the performance evaluation: while such a wide-ranging evaluation, covering multiple countries and components, is a large undertaking, it aims to support and drive forward the objectives of the programme as a whole, providing implementing partners with independent third-party recommendations and guidance.
Professor Tamsin Bradley talked of the challenges of evaluating research itself:
‘There is no magic relationship between high-quality research and then changes in the world, but fundamentally…a critical mass of evidence [is essential].’
Bradley explained that the evaluation team have adapted the Research Excellence Framework (REF) used by UK Higher Education Institutions to assess outputs. The REF has four categories for assessment, and she outlined how these are being applied to the What Works (WW) evaluation:
- Significance: looking for evidence that could trigger a paradigm shift in how VAWG is researched and responded to.
- Reach: looking at whether there is enough data available to leverage policy and implementation commitment.
- Rigour: ethical considerations, methodology, how data is handled, kept, coded, etc.
- Impact: for a particular output, assessing the likelihood that it will bring about the hoped-for change.
Bradley was keen to point out the need for both sympathy and understanding of context when dealing with a sensitive topic such as VAWG. A politicised subject matter in a challenging environment can limit data collection and she stressed the importance of attributing value to a range of capacity-building activities. She also emphasised the need to recognise the different values and contributions of different types of output.
Sheena Crawford, who leads the team on the evaluation of innovation in the programme, provided frank insights into the expectations and surprises faced in the evaluation process so far. She admitted that they had initially anticipated looking at only one aspect of innovation: the grants given to support implementation projects in developing new ways of looking at ending VAWG. Instead, they found that, in order to recognise different types of impact across the whole programme, they must look at innovative methods in every aspect of the programme design.
The significance of identifying synergies was a key message in her presentation: examining outputs and learning from different people, organisations and sectors, and establishing how these can build impact beyond the sum of their parts.
Megan Lloyd-Laney’s presentation looked at research uptake (RU), highlighting the slim body of evidence around RU of VAWG research in particular. She pointed to specific challenges, such as distinguishing attribution from contribution in such a politicised and active field: with many of the leading researchers, thinkers and activists being part of the WW programme, how is their influence accounted for, and how much can be credited to the programme itself?
She described the three broad principles that framed the evaluation process:
- Aiming to be a critical friend to researchers and NGO partners, ensuring that the process is neither extractive nor judgmental.
- Looking not only at what works, but also at why and under what circumstances.
- Drawing on literature on best practice and principles around RU, building strategies, implementing them, building ownership and capacity.
Acknowledging that research uptake might be hard to evaluate at this early stage, she explained that the evaluation was looking instead for recognition within the programme of the importance of RU capacity, not just at an individual level but at an institutional one, ensuring that enabling environments are in place to support uptake activities now and in the future.
She also pointed out the need to communicate the evaluation findings, and stressed that the evaluation team were attempting to practise what they preached by identifying key audiences for their findings (primary stakeholders, the programme itself, as well as the wider evaluation community) and repurposing and repackaging the findings accordingly.
The panellists then turned to questions from attendees, which ranged from discussions of Theory of Change (ToC) in the evaluation design – Sheena Crawford revealed that the team had worked with the three components to bring in one unifying ToC across the programme – to the delicate balance of arbitrating between NGOs and research institutions, and the ethical considerations of working with vulnerable groups in applied research.
A full recording of the roundtable is available here.
Check the Research to Action website for more information on upcoming Roundtables in the series or follow #R2ARoundtable on Twitter.