Impact Practitioners

Reviewing indicators used in research capacity strengthening

04/05/2023

This 12-page academic paper by a research team at the Liverpool School of Tropical Medicine reviews research capacity strengthening indicators drawn from 32 publications, examining their range, type and quality.

Research capacity strengthening (RCS) leads to a greater ability to perform useful research and up-skills both individuals and institutions, an important step in decolonising research. While South Asian countries account for 23% of the world’s population, they produced less than 5% of the global output of scientific publications in 2013. Similarly, sub-Saharan Africa, which accounts for 13% of the global population, holds only 0.1% of global patents.

The paper finds an almost equal spread between individual, institutional and systemic-level indicators, implying a need for RCS metrics across all levels of the research system. In total, the authors study 668 indicators, of which 40% measure outputs (activities directly controllable by the RCS initiative, such as the number of people undergoing academic writing training), 59.5% measure outcomes (changes in behaviour or performance in the short to medium term, such as the number of manuscripts published after an academic writing course) and 0.5% measure impact (longer-term change relating to the overarching aims of the RCS initiative, such as a reduction in infectious disease mortality).

Outcome indicators are the most common, and the majority of them cluster in four areas: a) research management and support, b) the attainment and application of new research skills and knowledge, c) research collaboration and d) knowledge transfer.

The near absence of impact indicators in the sample shows that there is a lack of long-term evaluation of RCS interventions. The authors therefore highlight the need for developing evaluation frameworks and methodologies.

The review also looks at the indicators’ quality. It judges the indicators by four quality criteria:

1) Does the indicator imply a measurement focus?

2) Is it clearly defined? 

3) Is the measure sensitive to change?

4) Is it time-bound? 

Unfortunately, the review shows that the quality of the studied indicators is uniformly poor. Only 1% of outcome indicators, and none of the impact indicators, meet all four criteria. Quality ratings are highest among indicators that measure research funding and bibliometrics, and lowest among indicators of research management and support and of collaboration activities.

"The near absence of impact indicators is a finding of significant note, highlighting a lack of long-term evaluation."

In conclusion, the academic paper shows that a large number of outcome indicators exist across the published and grey literature, but they are spread across a limited range, resulting in overlaps and duplications. Very few impact indicators are currently being used, and the quality of all indicators is poor. There is therefore a great need to develop RCS indicators, especially in the following four areas: research management and support, the attainment and application of new skills and knowledge, research collaboration and knowledge transfer.

If you are interested in what RCS indicators are currently available, you can study the review’s data here. The academic paper also offers a number of data visualisations from the study and informative boxes detailing the different indicators by their categories, variants and practical examples. Overall, it is a useful resource for RCS funders, managers and evaluators.

This article is part of our initiative, R2A Impact Practitioners. To find out more, please click here.
