
How to standardise quantitative indicators of impact 

26/01/2023

This 40-page guide by RAND Europe, a not-for-profit policy research organisation, presents recommendations for standardising quantitative indicators of impact. Originally produced in 2018 for case studies intended for the UK’s Research Excellence Framework 2021, the guide remains relevant to researchers around the world who want a style guide to follow when writing about quantitative indicators. 

The guide has two major sections. The first, called ‘Style guide’, looks at the way numerical data are written and presented in impact case studies. It covers general stylistic items (such as percentages and numbers) that can be standardised to make numerical indicators of impact more discoverable in the case studies. The authors identify six areas that can be standardised: numbers; percentages; currency and rates; measures of change; time periods; and units.

Each of these categories is described in the resource, with guidance on how to standardise it. For numbers, for example, the authors recommend that writers “use numerals when referring to quantitative indicators of impact (e.g. 4, 1,567, 2,000,000)”.

The second section, called ‘Specific guidance’, looks at how the most commonly used quantitative indicators could be standardised. For each indicator, the document gives an explanation followed by a suggested approach to standardising it. It covers five areas associated with quantitative indicators: engagement, mentions in non-academic documents and the media, employment, financial figures and emissions. For instance, for mentions in the media, the suggested format is: “… referenced [X] times in … (e.g. referenced 50 times in the media across 10 countries)”.

As the resource puts it, “research conducted by universities can lead to impact beyond academia”.

The authors also explain the reasoning behind creating a standardised style guide for indicators. In the past, analyses across REF case studies were challenging because data were not presented in a standardised format. Standardisation would enable more effective analysis of submitted data across a large body of case studies, for example by making impact indicators easier to discover. As a result, it would be easier to demonstrate evidence of the wider impact of academic research to stakeholders. 

Beyond the advice on standardisation, the document also serves as a guide to writing cohesively about indicators in impact case studies. 

If you would like to read more about the rationale behind the standardisation and a description of the process, see the appendix of the resource.

This article is part of our initiative, R2A Impact Practitioners. To find out more, please click here.
