Assessing the strength of evidence: Introducing DFID’s new ‘how to’ note

25 April 2013

By Mark Robinson, William Evans and Kirsty Newman, Research and Evidence Division, DFID

In February 2013 DFID launched the “Assessing the Strength of Evidence” How to Note. The Note aims to help all DFID staff better appreciate the strength of the evidence that they are using to inform their policy and programming choices.

The Note builds on DFID’s commitment to use the best available research evidence to inform its spending decisions. Speaking last year, the Secretary of State noted DFID’s ambition to “…invest in what works. Where we don’t know [what works], I want to find out. It will make sure we are clearer about where we should focus our resources so we know that what we’re funding will actually work as we intend. There’s a huge amount of high quality research done into development. And I will be making the most of it to allow us to be even more targeted with our investment.”

Purpose of the How to Note

Identifying and using high quality research studies isn’t straightforward. The How to Note offers some rules of thumb, enabling DFID staff to:

  • Understand different types of empirical research evidence;
  • Appreciate the principles of high quality evidence;
  • Consider how the context of research findings affects the way that staff might use them;
  • Understand how to make sense of inconsistent or conflicting evidence.

Developing this knowledge and these skills will enable staff to make more informed decisions about how DFID spends taxpayers' money on viable development projects.

The Note is an integral part of DFID’s commitment to equipping staff with the skills to help them improve their use of evidence. It forms part of several initiatives being launched by DFID in an effort to boost the analytical skills of its staff. These include an online guide to research designs and methods; a handbook explaining a variety of research and evaluation methods; and a guide to using statistics responsibly. There are also plans to strengthen staff capacity on using evidence and research.

We hope this How to Note will be helpful to researchers and policy makers in research institutes and funding bodies outside DFID. We welcome feedback, although we are not able to respond to every comment or query.

What does this How to Note do?

This How to Note provides a thorough introduction to (a) appraising the quality of individual studies and (b) assessing the strength of bodies of evidence. Specifically, the Note will help DFID staff to understand, in broad terms, the distinctions between different data collection and analytical methods and what each can and cannot conclude, and it establishes a common language for discussing the strength of evidence. Assessing the strength of evidence is a challenging task which requires a combination of technical knowledge and individual judgement. It may also require consultation with research specialists within and outside DFID. Proper assessment of evidence will help staff use evidence responsibly and judiciously for the benefit of better policy.

The guidance is applicable to all categories of research and evaluation evidence used by DFID staff, especially in the social sciences. It applies to evidence generated through both quantitative and qualitative research methods. It recognises that some academic disciplines, such as medicine, and the methodologies associated with them have a stronger tradition of assessing the quality of research than the social science disciplines, and the Note references alternative evidence grading frameworks accordingly. To ensure consistency, it also draws on approaches developed in other parts of Government to assess research and evaluation evidence.

Facing up to some critiques

During the development of this How to Note, the team behind it received a range of comments on the approach from staff and from a number of external reviewers. While the bar was deliberately set high, initial feedback has been broadly positive: staff have welcomed the clarity of the Note and of its expectations regarding the quality of evidence in DFID. Below we respond to three of the main concerns that people have raised:

Critique 1: ‘DFID is only interested in experimental research’

DFID’s appreciation of ‘what constitutes good evidence’ and the approach used to generate the evidence is heavily dependent upon the question being asked. If you are trying to establish the existence (and magnitude) of a causal relationship (e.g. intervention ‘x’ leads to outcome ‘y’ in environment ‘z’), a randomised controlled trial (RCT) will offer you the best possible evidence of that as it allows you to isolate the impact of an intervention by comparing treatment groups with control groups. Where good RCTs are available they will form an important part of the evidence base on causality. However, there will be some questions about causal relationships where it’s simply impractical to adopt an RCT approach: after all, they’re expensive and time-consuming. In such cases, quasi-experimental research designs can provide solid evidence of likely causal relationships.

In addition, there are many important development questions that are not best answered by an RCT or quasi-experimental design. Questions relating to how a process or intervention actually works (or does not work), whether it really matters to people, whether it’s socially or culturally acceptable, whether it provides a satisfactory service to recipients, and, crucially, the socio-political context in which an intervention will be deployed are enquiries that simply cannot be answered by RCTs. Instead, a set of observational (sometimes called ‘non-experimental’) research designs, many of them applying robust and valid qualitative methods, will be more appropriate.

A great deal of DFID research aims to understand context, politics and local realities through comparative case studies. The How to Note enables us to better evaluate research which examines multiple dimensions of a particular question using a range of approaches. DFID recognises that RCTs can suffer from ‘external validity’ issues (i.e. we can’t be sure that similar causal relationships would play out in multiple contexts). That’s partly why a wider range of research is commissioned.

When it writes or commissions evidence synthesis products, DFID typically differentiates research on the basis of 'quality' and 'appropriateness to the question' rather than on the basis of design and method alone. With the launch of the How to Note, we are strongly encouraging both our staff and our partners to be clear about the type of research that they are citing in order to support particular claims. That way, the reader of any evidence synthesis (be it a literature review or a business case) can form their own view about how appropriate the evidence is for the question at hand.

Critique 2: ‘By getting DFID staff to grade the quality of evidence, you are devaluing the peer review process’

DFID uses research evidence in a very specific way: to inform its decisions on policy and practice in order to achieve poverty reduction. While peer review is an important way of assessing research quality, the standards by which a group of academic peers judges a paper will not always be the same as the criteria that a DFID adviser applies. For example, a peer reviewer may judge a theoretical discussion which introduces a new way of thinking about a certain type of intervention to be of extremely high value, and it may be accepted for publication in a good journal. However, DFID aims to base its decisions on empirical evidence with practical application. A theoretical paper will not provide robust evidence about whether that intervention is effective, whether it is appropriate, how it leads to observable effects, and so on. Thus a paper judged to be of high academic value, while valuable in its own right, will not necessarily be judged as strong evidence for DFID.

Critique 3: ‘By getting non-professionals to grade evidence, you may make inaccurate judgements about its strength or quality’

DFID staff have variable levels of familiarity with research and with the designs and methods used to carry it out. Some advisory staff come from academic backgrounds (a number hold PhDs, and most have post-graduate degrees) and are highly skilled at this kind of activity. Others have expertise focused more on programme implementation and management. So, notwithstanding the efforts DFID is making to boost the analytical skills of its staff, some DFID advisers may not always be as well positioned as research design and method specialists to assess the strength and quality of evidence. Even if the assessments that staff make about the strength of evidence are not always perfect, they are still likely to start a discussion (with our internal experts and our external advisory partners) that will, over time, help us build more effective development programmes on a more robust evidence base.

Our responses to these critiques do not mean that DFID is not open to further discussion and debate on this subject. We welcome further comment and consideration of our approach, as we seek to reduce poverty through the proper application of strong research evidence.

Title: DFID How to Note: Assessing the Strength of Evidence
Author: DFID
Year: 2013

Kirsty Newman

Kirsty Newman is Research Uptake Manager with the Research and Evidence Division of the UK Department for International Development (DFID). She has a PhD in molecular virology and worked as a research fellow in the immunology department at LSHTM from 2004 to 2008. Since that time she has been working on capacity building programmes at the Wellcome Trust, the UK Parliament and the International Network for the Availability of Scientific Publications.