Monitoring and evaluation

Webinar summary: A cup of tea with John Young

15/06/2017

On Thursday 25 May, R2A held the final webinar in our ‘Cup of Tea’ series: an interview between Megan Lloyd-Laney and John Young.

If you missed the webinar, you can watch the recording on R2A’s Vimeo and YouTube channels (coming soon). You can also view the webinar slides and the suggested further resources on SlideShare.

Below we provide a summary of the discussion and the questions covered, along with the results of the poll on what participants find trickiest about monitoring and evaluating research uptake.

Intro

Megan Lloyd-Laney, Director of CommsConsult, began by introducing John Young, Head of the Research and Policy in Development (RAPID) programme at ODI. Megan gave a brief overview of the webinar topic, monitoring and evaluating research uptake, noting that there would never be enough time to do justice to such a complex subject.

John began by explaining the purpose of the RAPID programme, which exists to increase the use of research-based evidence in development policy and practice. He defined both policy and practice broadly, noting that RAPID’s work ranges from getting ideas into practice, to ensuring research evidence is used within policy documents.

Definitions

Next, John defined the three levels of research use. The first is research uptake, which is essentially about making sure that the audience you want to be aware of your research is indeed aware of it. The second is impact, when the research audience does something differently as a result of the research. The third is outcomes, when there are changes in people’s livelihoods as a result of the research.

John also defined monitoring and evaluation (M&E), highlighting that they are overlapping processes. Monitoring checks on the expected outputs and immediate results of the research during its lifecycle, whereas evaluation checks on the outcomes and causality of the research, normally at the midpoint and end of the project. John mentioned that we tend to talk more now about ‘monitoring, evaluation and learning’, which encompasses implementing the findings of M&E, to really see what works and how we can make things work better.

M&E mistakes

Megan launched the discussion by asking about the most common mistakes people make when going about M&E. John replied that most people aim too high, thinking of uptake too narrowly as change in policy and legislation. Most people also start monitoring too late and make it much too complicated.

John stressed that it is important to ask ourselves what we mean by uptake, and which dimensions of policy we are really talking about. Policy is not just regulations, legislation and strategies; there are many other levels of policy before you get to those. Researchers need more clarity about which level of policy they are trying to target, which in turn makes M&E easier. John outlined five different levels of policy:

  1. Discursive: concepts, discussions and how ideas evolve.
  2. Attitudinal: the minds of stakeholders across the policy spectrum need to change before attitudes, and ultimately policy, can change.
  3. Procedural: different ways in which issues are discussed.
  4. Content: regulations and legislation.
  5. Behavioural: approaches have to be applied in practice; the behaviours of those involved in the process and in policy need to change.

Projects seeking to use research-based evidence to influence policy probably need to work at all of these levels.

A more strategic approach to M&E

John outlined a more strategic approach which, through careful planning and monitoring, can give confidence that your research is having an impact on policy. He described six levels at which to assess progress towards policy impact:

  1. Strategy and direction: the basic plan for the research and how it intends to meet its goals; in smaller projects this might simply be the contract or proposal.
  2. Management: the systems and processes for implementing the strategy, effectively tracking whether you are doing what you said you would do. This area might include peer or user review and quality assurance.
  3. Outputs: the goods and services produced by the research, such as papers, policy briefs or events.
  4. Uptake: direct responses to the research, including policy mentions or newspaper pick-up.
  5. Outcomes and impacts: changes in behaviour, knowledge, policies, capacities and/or practices that the research has contributed to, directly or indirectly.
  6. Monitor the context: what else might be influencing the changes you observe? This is important so you can tell how much influence your research is having.

John gave an example to illustrate this final level: in Indonesia, where he had been working on a research initiative, sudden demands for decentralisation meant that the dramatic change in political context had far more influence on policy than the research project itself.

Tools

A number of different tools were discussed to help assess the six levels of progress:

  1. Strategy and direction: Log frames, Theories of Change, Impact Pathways
  2. Management: quality audits, horizontal evaluation, after action reviews
  3. Outputs: peer review, evaluating websites, evaluating networks
  4. Uptake: impact logs, citation analysis, user surveys
  5. Outcomes and impacts: stories of change, most significant change, episode studies, performance stories
  6. Monitor the context: bellwether surveys, media monitoring, timelines
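As a loose illustration of how these levels and tools might be turned into a working checklist, here is a minimal sketch in Python. The pairing of levels and tools simply restates the list above; the function name and the idea of an automated ‘coverage gap’ check are illustrative assumptions, not anything prescribed by RAPID or ODI.

```python
# A sketch of the six progress levels paired with example tools from the list
# above, arranged as a checklist a project could adapt. Purely illustrative.
MNE_CHECKLIST = {
    "strategy and direction": ["logframe", "theory of change", "impact pathway"],
    "management": ["quality audit", "horizontal evaluation", "after action review"],
    "outputs": ["peer review", "website evaluation", "network evaluation"],
    "uptake": ["impact log", "citation analysis", "user survey"],
    "outcomes and impacts": ["stories of change", "most significant change",
                             "episode study", "performance story"],
    "context": ["bellwether survey", "media monitoring", "timeline"],
}

def coverage_gaps(tools_in_use: set) -> list:
    """Return the progress levels for which none of the listed tools is in use."""
    return [level for level, tools in MNE_CHECKLIST.items()
            if not tools_in_use.intersection(tools)]

if __name__ == "__main__":
    # e.g. a small project relying only on a logframe and an impact log
    print(coverage_gaps({"logframe", "impact log"}))
    # -> ['management', 'outputs', 'outcomes and impacts', 'context']
```

Used this way, the structure makes a later point concrete: even a small project can check whether it is monitoring something at each of the six levels.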

John gave examples of M&E tools used at ODI, favouring Theories of Change, performance stories (which aim to describe what the project is trying to do) and after action reviews (where you sit down with the research team and ask a series of four questions, concluding with what you would do differently next time). John noted that ODI uses impact logs a lot; in one innovative and efficient example, any email received by staff that references ODI’s work is forwarded to a central mailbox, where it is sifted, categorised and coded to record the impact it relates to. Stories of change are also used often; a recent project in Indonesia commissioned participants, from researchers to policymakers, to submit their stories of change.

Systematic approaches in organisations

The variety in the size and scale of development research means that not every tool is applicable to every project, programme or organisation, although it is important to try to monitor and evaluate something at each of the six progress levels. Even the smallest projects have a proposal and a contract, and the proposal should have a clear purpose and a sound strategy, which can be monitored. Medium-sized projects might use PRINCE2 as a project management tool. At programme level there might be the resources for user surveys to check the quality of outputs, whilst evaluations tend to come at organisational level, where the funding allows.

Megan asked where the learning comes in, and how well it works, particularly in smaller projects. John replied that PRINCE2 breaks projects down into stages, meaning that you should never assume you can simply plan your way to outcomes; you have to review each project stage and its plan. The management tool enables you to revisit strategies and results iteratively. You can then bring in evaluative activities later down the line, feeding the results of ongoing M&E into the project cycle. John stated that assimilating the evidence is part of the learning.

What tools do you use to keep effective impact logs?

John replied that you can simply use a notebook and a piece of paper. Another option is an email-based system that you forward emails to; these can then be hand-coded into an Excel spreadsheet to collect the changes found. John also mentioned that in Indonesia they are using timelines, effectively a big, shared Google spreadsheet that is populated participatively: each quarter a new column is added and the project team annotates what has happened inside and outside the project. Finally, a more complex alternative is to build yourself a database to do it.
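To make the email-based option concrete, here is a minimal sketch, assuming the forwarded emails have been exported to a local mbox file and that the hand-coding then happens in the resulting CSV opened as a spreadsheet. The file names, column headings and level labels are illustrative assumptions, not ODI’s actual system.

```python
# A minimal impact-log sketch: dump forwarded emails into a CSV for hand-coding,
# then tally the coded entries by progress level. File names and labels are
# illustrative assumptions only.
import csv
import mailbox

LEVELS = ["strategy", "management", "outputs", "uptake", "outcomes", "context"]

def export_for_coding(mbox_path: str, csv_path: str) -> None:
    """Write one row per forwarded email, leaving 'level' and 'notes' blank for hand-coding."""
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["date", "from", "subject", "level", "notes"])
        for msg in mailbox.mbox(mbox_path):
            writer.writerow([msg.get("Date", ""), msg.get("From", ""),
                             msg.get("Subject", ""), "", ""])

def tally_by_level(csv_path: str) -> dict:
    """Count hand-coded entries per progress level once coding is complete."""
    counts = {level: 0 for level in LEVELS}
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            level = (row.get("level") or "").strip().lower()
            if level in counts:
                counts[level] += 1
    return counts

if __name__ == "__main__":
    export_for_coding("impact_inbox.mbox", "impact_log.csv")
    print(tally_by_level("impact_log.csv"))
```

The same hand-coded CSV could just as easily feed the shared timeline spreadsheet described above; the point is that the capture step is cheap and the judgement stays with the person doing the coding.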

How can we do M&E on return on investment?

John responded that this was a very interesting question and that we should make more use of Redstone Strategy’s approach, which assesses a policy issue where an organisation can claim to have made a contribution. They carry out a political economy analysis and then a contribution analysis; the numbers are normally small, as in reality there are many contributing factors to policy change. They then look at the money spent on achieving that contribution and generate a cost-benefit analysis.
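As a rough illustration of the arithmetic behind such a figure, here is a small sketch. The numbers are invented for the example, and the single ‘contribution share’ parameter is a simplification of the political economy and contribution analysis described above, not a substitute for it.

```python
# A back-of-the-envelope contribution-weighted benefit-cost calculation.
# All figures below are invented for illustration.

def benefit_cost_ratio(policy_benefit: float,
                       contribution_share: float,
                       research_spend: float) -> float:
    """Benefit attributed to the research divided by what the research cost."""
    attributed_benefit = policy_benefit * contribution_share
    return attributed_benefit / research_spend

if __name__ == "__main__":
    # e.g. a policy change valued at $50m, of which the research is judged to
    # have contributed 2%, achieved with $400k of research spending.
    print(f"Benefit-cost ratio: {benefit_cost_ratio(50_000_000, 0.02, 400_000):.1f}")  # 2.5
```

The contribution share is the contentious part: as noted above, it is normally small because many factors contribute to any policy change.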

John stressed that we should be doing more of this kind of analysis, even if it means making assumptions within these large economic assessments, because the numbers will convince the people who fund development research that more funding is worthwhile.

Poll Results

Conclusion

John made two final statements as parting gifts of guidance:

1) We have a moral obligation to work together to generate convincing evidence (qualitative researchers may be nervous of this) that demonstrates the value of research and makes the case for continued investment.

2) We must educate donors that not all research will have impact; we need pure as well as applied research. Knowing what not to invest in is also important: without funding the 100% you will not find the 10% that works.


Join us for the next R2A webinar series, Research Uptake Roundtables, starting on Thursday 29 June. Sign up for free using GoToWebinar.