By Lindsey Jones, Research Officer, Overseas Development Institute (ODI)
How can research feed into development policy and support positive change? This question remains critical to the development research community, and attempts to answer it often rest on understanding the roles of, interactions between, and incentives facing the many different actors in the research-to-policy process.
The challenge of answering it was recently put to a group of influential researchers, practitioners and policy-makers at a ‘sandpit’ event in Nairobi (6th-8th September 2012). Similar to CDKN’s Action Lab, the event took a speed-dating approach and had a specific focus on climate change and conflict. Participants were tasked with developing group proposals that combined cutting-edge scientific research with innovative capacity building tools and evidence of policy uptake.
The hope was to share knowledge and ideas, develop new networks, and (above all) encourage healthy competition in applying for the upcoming Conflict and Cooperation in the Management of Climate Change (CCMCC) call. The emphasis was on linking different skill sets and ensuring that research has maximum impact on the ground (i.e. more than just resulting in academic papers and other research outputs).
Numerous consortia quickly took shape. A wide range of interesting issues were discussed and proposed, such as: what are the impacts of climate change (and climate policies) on conflict affected areas? Are there tipping points between cooperation and conflict? And what can we do to effectively build the resilience of poor communities in fragile states (is resilience in fragile states even the same as in other contexts)?
Brilliant minds, influential people, and cutting-edge questions: surely a quick-win recipe for meaningful change? Unfortunately, it isn’t that easy. It quickly became clear that what was being asked was more ambitious than many participants had previously envisaged.
Research to policy to practice: a logical chain?
The development sector revolves around the principle that better use of research and evidence in policy and practice can help save lives, reduce poverty and improve quality of life. Yet, applying research to guide and inform the development sector’s many actors (from donors and researchers to NGOs and policy makers) and shaping the policy agenda is a difficult task – for more see the work of ODI’s Research and Policy in Development programme.
In practice, academic research typically has little influence on policy. When it does, it’s far more complex than the linear model of research-informing-policy-leading-to-change-on-the-ground. Why? One reason emerged above all others in discussions at the CCMCC sandpit:
Perverse incentives
It may not always be evident, but traditional systems for incentivising and evaluating academics (i.e. measuring their success) are largely incompatible with practitioners’ needs and often discourage the integration of research into policy and practice. For example, the success of a university academic – say a professor – is primarily measured against the number of peer-reviewed journal articles he/she produces, and the number of times those papers are cited by their peers (typically other university academics). Yet academic articles rarely feature on a policy-maker’s radar. If they do, they are often found behind an expensive paywall.
What’s more, research often comes in a language that is incomprehensible to many policy-makers. Technical research frequently needs to be translated into practical and engaging recommendations that communicate uncertainties and relate strongly to a policy-maker’s decision-making environment. Yet shorter non-technical outputs like policy briefs, blogs and other forms of grey literature are rarely recognised or rewarded from an academic’s perspective.
Blame does not lie solely with the system for producing and communicating research. From a policy perspective, research is supposed to feed into decision-making and planning. However, few developing countries have been able to equip their technical officers with the resources and networks needed to translate new research into policy-relevant actions for key decision-makers (say, a district government head).
New approaches to capacity building
And what about using research to support capacity building of policy-makers? Well, one thing is clear: creating better links between research and policy will require innovation and thinking outside of the box. The standard model for communicating development research to policy makers has long been to host a workshop. As part of this model, decision-makers are introduced to countless conceptual frameworks and receive guidance on the latest development interests (whether it’s climate change, water and sanitation, or maternal health care). It’s typically a one-way knowledge sharing process, and rarely does it result in practical change (unless there are clear financial incentives). ‘Workshop fatigue’ is a real issue. Researchers looking to support capacity building and influence policy are therefore required to try new approaches if they want to demonstrate real change. One example is to trial other forms of experiential learning, like ‘policy-gaming’. Other innovations are desperately needed.
An ambitious future for development research
Thankfully, there are signs of change. The CoCoon call is a brilliant example of this: bringing together different members of the development community that rarely interact directly and asking them to design (and document) a research process that will build capacity and translate into policy change. CDKN has similar objectives with its research calls, requiring applicants to provide evidence of demand for their research from policy-makers themselves.
As members of the CoCoon event were quick to point out, asking researchers to do all of this together as one package is an ambitious task. It’s a task that rarely receives sufficient financial resources to do all components effectively – from research design, to innovative capacity building, to M&E of impact. However, until the right incentive structures are created, and entry points for collaboration strengthened, change will not happen. The lead taken by CoCoon, CDKN and others aims to incentivise this change and instil new ways of translating research into policy and practice.
An interesting article. I agree with your point about funding not covering the whole process of research, action and uptake. Within the DRUSSA project (a DFID-funded programme working in Sub-Saharan Africa) this issue has been discussed. There is a hint that funding agencies are moving towards recognising the issue, but I am not sure whether some have understood just what is involved.
The problem is quite straightforward: as an academic you go through the usual cycle of research. You spend months writing proposals, and hopefully one gets funded. Then you get the research going. Just when you are getting your results you find that the money is running out, and the university is demanding that you write papers. So how do you spend your time? After delivering the final report, the priorities are maintaining your research team, students, and papers. So you seek an injection of money for your team by writing more proposals. Unfortunately, no one will fund the development phase of the research you have just done (funding agencies generally seem to want to fund only new projects with novel ideas, and they do not seem to want to pick up and run someone else’s project). Therefore you have to create something different and new. If you are successful you now have to throw your time and effort into that, so the old project gets forgotten.
In addition, the skills needed to implement research findings are different from the skills required to do the research, so the researcher may not be the best person for this task. The problem is perhaps better addressed institutionally, which is where having a research uptake management scheme in place is important (hence the DRUSSA project).
The third big issue is that research uptake can take a lot of time and money. The time between patenting an idea and achieving a positive return on a product that has gone through prototype development is, on average, around 10 years! Not many organisations think on that time scale.
None of these problems are insoluble, and the fact that some people still patent items, and others achieve significant uptake of their research either through influencing policy change or implementation at the grassroots level of society shows that we still have a large number of hard working and dedicated researchers. We probably do not give enough recognition and credit to these wonderful people!
There are some great points in this blog, and the points on perverse incentives are of course true, but most academics wouldn’t recognise them as ‘perverse’ – they only appear so when viewed from the perspective of someone trying to extrapolate meaning from the research to address their own knowledge requirements; hence it’s more of a conflict of goals (my PhD in part considers this area – I think it causes a LOT of problems across governments and public bodies).
In my role (I work for a Science branch of Government, but unfortunately not in the area of International Development) I find a lot of the same issues, and I wonder whether we really need to rethink HOW the role of science and research can help. This might make uncomfortable reading for some, but I think quite often we really don’t do this in the right way, and we rely on some fairly heroic assumptions about the ability and willingness of policy-makers to exploit research outputs.
I think it is best to try to bridge the gap between ‘demanders’ (exploiters – if indeed there is a demand for it) and ‘suppliers’ (researchers) – this gap to me is one of the greatest problems in research uptake. I actually think it requires a third role, and I am not too precious about what we call this role. It is in reality the role I try to do now as an embedded scientist in a government department.
I propose developing an engagement/partnership model that takes all the relevant principles and activities from a combination of some of the applied social sciences (applied occupational psychology, applied sport psychology, clinical psychology) with business analysis, soft systems methods and models of ‘consultancy’ (if one can suspend any dislike for this term). If this could be done, then we could develop quite a potent bridge between the research and policy-making communities.
This would provide a better framework for helping to understand the true root causes (rather than just symptoms) and to understand all the intervening variables and ‘blockers’ (such as conflicting goals, perverse incentives, and cultural idiosyncrasies) that could limit any potential utility of the research. It would also address my other key worry about our common areas. Quite often research projects deliver an ‘output’ (in P3M speak), which is usually some form of articulated knowledge captured in a written artefact – be it a journal paper, a policy brief, etc. It might contain superb information, but we then rely on a leap of faith that the ‘reader’ / potential ‘exploiter’ will be able to:
– find the document
– understand its potential utility for their problem set
– have time to read it
– be able to understand it
– be able to extract the appropriate meaning for THEIR problems
– be able to understand the implications for action with respect to their problem
– understand how the implications for action might interact with those from other programmes
– have the ability, willingness, time and resources to implement the actions
– understand how to measure the effects (outcomes), and have the tools to do so
– have the ability, willingness, time and resources to measure the outcomes
– have the ability to understand the outcomes as they emerge
– have the ability to understand the implications for follow-on research or actions (e.g. policy)
This doesn’t take just a bit of optimism; it requires a LOT, and often we just ignore it. The model I call for above won’t just help to identify, refine and articulate the problem and the intervention required to address it; it will also make scientists/researchers available who can help with all of the above steps in PARTNERSHIP. I do acknowledge that there will be substantial individual differences in policy-makers’ ability, and that they might therefore require more or less support accordingly.
These are just my thoughts from my experiences in my area. Do they hold true for International Development?
Thanks for the post and the comments that followed. I work with policy makers in developing countries to stimulate demand for research, and I recognise some of the problems articulated by C Studman and Dr G. I was at a meeting last week where a researcher said: ‘policy makers don’t read my policy briefs or give me feedback’ – as if, just because you produce a brief, the policy maker is a) interested and b) has the skills to understand your research.
These two things – the motivation of your end user to use research and the capacity of policy makers to understand research – are very important factors that are sometimes ignored when discussing research uptake. It is easier to believe that policy makers do not use research because of complex and sometimes perverse political reasons, but sometimes the problem is simply a lack of (individual and institutional) capacity to access, interpret and apply research. When thinking about research uptake, it is also worth thinking about the motivation and skills of your end user.
Finally, for any of this to matter in the long term, research uptake should be about more than just supplying your research or your programme’s work; it should also be about stimulating demand for research more broadly, so that policy makers themselves are motivated to seek out research. Otherwise we end up with ever more sophisticated models of supply and still limited demand.
It is also good to involve research beneficiaries in the conversation. We are doing this with indigenous communities in Borneo, Malaysia, through the eBorneo Knowledge Fair; see http://www.ebkf.org