There’s no one-size-fits-all framework for monitoring and evaluating research projects – but here’s a start
It was once enough to gauge the ‘success’ of a research project – even one aiming to influence policy – by counting the number of articles published in peer-reviewed journals, and perhaps citations or downloads.
But the scope and scale of policy research projects have broadened – moving away from single research studies towards multi-component, multi-partner and multi-sector endeavours. These projects often have numerous aims beyond producing high-quality publications: to have impact beyond academia, to engage different external stakeholders and/or to build partners’ capacity, to name just a few examples.
Evaluating outputs is no longer enough to assess a project’s impact: it captures only a small proportion of what these policy research projects aim to achieve.
Designing a monitoring and evaluation (M&E) framework for these large-scale policy research projects can often be a challenge – and it’s this vital planning that the Research and Policy in Development (RAPID) team of the Overseas Development Institute is often asked to help with.
Based on years of experience working with NGOs, policy-makers and other research institutes, and building on previous RAPID work, the Methods Lab has put together new guidance on how to design M&E frameworks for complicated and complex projects.
Here, we share four key things to bear in mind as you embark on designing an M&E framework for your policy research project:
1. Go beyond outputs and uptake
Outputs – and to some extent uptake – are usually well captured in research M&E frameworks. But thinking through diverse ways that uptake, outcomes and ‘impact beyond academia’ can take place and how context can be monitored is often more challenging.
The framework RAPID has been using for almost a decade (based on a five-level framework, with the addition of a sixth level – ‘context’) tracks changes more closely and pays attention to the often neglected elements of strategy and management.
Though often overlooked, these two levels are hugely important and influential – especially in big, multi-year, multi-million research programmes. Strategy and management are usually assessed intuitively by managers, but it is worth systematically and regularly (e.g. annually) reflecting on and assessing whether the project direction is still valid and whether the decision-making and governance systems in place are functioning and fit for purpose.
2. Start with questions and prioritise
As with any good evaluation plan, the design of an M&E framework should start with purposes and questions, not with methods, indicators or logframes. Beginning with indicators can lock you into what will be measured (indicators) before thinking through what you want to know (M&E questions) and why (purposes).
Questions have the power to help direct ‘sense-making’ and can support programme design. It is also better to prioritise and focus on one or two main M&E questions, and then support these with secondary questions that don’t all necessarily need to be answered every year or at each assessment point.
3. There’s no one-size-fits-all solution (sorry)
Though many of the policy research projects and programmes we work with have similar characteristics, their nature and the context in which they operate differ significantly. Even within RAPID we all use the M&E framework in diverse ways, highlighting different elements and steps.
Our guidance note doesn’t attempt to provide comprehensive instruction on all aspects of developing an M&E system (for example, how to collect, manage, analyse and use data); instead it should be viewed not as set in stone, but as flexible guidance to start and structure thinking.
4. Be realistic with M&E plans and activities
Ultimately, many of the choices about the scope, intensity and timing of M&E activities will depend largely on the resources available – personnel, time and funds, as well as the capacity, experience and skills of those dedicated to, and involved in, the M&E work.
It is better to be realistic and practical about what can be done and how much time people can truly spend on M&E activities, rather than trying to do everything possible but in a hasty or unsystematic manner.
This post was originally published here.