Impact Practitioners

Evaluating impact from research

25/05/2023

This 14-page academic paper by a group of UK-based researchers presents a methodological framework for selecting an impact evaluation method in any academic discipline. The paper also features a typology of research impact evaluation designs and provides new definitions of research impact and impact evaluation.

Overall, the paper identifies five common evaluation designs:

a) experimental and statistical methods

These are typically used to show that impact can be directly attributed to research, usually by comparing two groups.

b) textual, oral and arts-based methods

These methods attempt to build a case that the research was necessary to cause the impact, drawing on multiple sources of evidence to attribute the impact to the research. They are often participatory, engaging stakeholders and beneficiaries in the evaluation.

c) systems analysis methods

These methods analyse whether research is necessary to cause the impact. They usually involve a combination of qualitative and quantitative methods to describe complex cause-and-effect relationships. They are particularly useful for understanding complex, non-linear and unpredictable outcomes.

d) indicator-based approaches

These approaches often involve developing a theory of change at the beginning of the research project and identifying indicators that serve as milestones and targets, which are later evaluated to see whether the intended impacts have been achieved.

e) evidence synthesis approaches 

Evidence synthesis (for example a systematic review) typically takes place at the programme level, drawing on work from multiple projects. It is a review of existing data, literature and other forms of evidence with the goal of providing a rigorous and objective assessment.

As impact practitioners like to say, "impact is in the eye of the beholder".

For each category, the paper gives examples of commonly used approaches and methods, the types of evidence you would likely need, and the types of impact usually evaluated with each approach. The different designs and their key advantages and disadvantages are then discussed in detail.

In the final section of the paper, the authors present a methodological framework to help with selecting the most suitable evaluation design and methods for a given research project. In short, two key factors inform the choice between the different evaluation designs.

Firstly, the chosen design must suit the context in which it will be used. The authors advise considering:

a) the resources available (some evaluation designs are time-consuming and resource-intensive),

b) the scope of the evaluation, and

c) the types of impact being evaluated (some designs are better suited to evaluating certain types of impact).

Secondly, assessors should consider the aims of the evaluation, for example whether the goal is to establish research as the sole or a contributing cause of impact, and whether summative or formative feedback is wanted.

In summary, the paper and its framework are great resources that can help you select suitable evaluation methods for your research project. The typology of assessment approaches is helpful for understanding the key differences and advantages of a number of commonly used methods. Additionally, the information in the paper is generalised, meaning you can tailor it to your own discipline and context.

This article is part of our initiative, R2A Impact Practitioners.
