This 195-page report by Sandy Oliver, Kusha Anand and Mukdarut Bangpan looks at the impact and use of 86 systematic reviews funded by the UK’s Department for International Development (DFID – now part of the Foreign, Commonwealth & Development Office). It examines their impacts both in and beyond academia.
The study finds that the greatest impact of systematic reviews is achieved within academia, with most reviews appearing as academic papers or being cited in works by other authors. Overall, 81% of the studied systematic reviews have achieved academic impact.
Additionally, 55% of the reviews have achieved impact beyond the academic sphere: several reviews have informed specific decisions or policies, some have been embedded in new or existing procedures for decision-making, while others have been cited as improving understanding or knowledge accessibility. Often, the systematic reviews are cited alongside other evidence.
Thirty-nine reviews have not achieved any impact beyond academia, and of these, 16 have had no impact at all.
The document classifies the use of systematic reviews outside academia into three categories:
- a) transparent use of evidence: increased understanding and accessibility
- b) embedded use of evidence: no direct action taken, but evidence use becomes embedded in processes, systems and working culture
- c) instrumental use of evidence: evidence is used directly to inform a policy or programme
Of these three categories, the studied systematic reviews have mostly been used to improve understanding (for example, organisations can use them to stimulate thinking and debate, advocate for change, inform a political forum or encourage evidence use).
The study uses three models of knowledge exchange to explain how the reviews achieve non-academic impact, specifically the linear model, the relationship model and the dynamic systems model. These models are described in detail in the report.
Based on these models, the authors conclude that reviews whose policy implications are communicated clearly have a better chance of achieving impact. Similarly, systematic reviews are more likely to be used if policy input is included during their preparation.
Lastly, the document highlights the importance of investing in the capacity of individuals, teams and institutions, in their knowledge-brokering skills, and in creating a global support system for systematic reviews.
The report closes with a few general recommendations. It advises strengthening the clarity and relevance of systematic reviews by spelling out implications for policy and practice and by seeking policy input during preparation. It also encourages collaborative working and engagement between policy and review teams, and calls for a better understanding of systematic reviews and their methods.
This article is part of our initiative, R2A Impact Practitioners.