This 16-page narrative review, published in BMC Medicine, examines the strengths and limitations of different methods for measuring research impact. It provides readers with an overview of six established methods (specifically the Payback model, Research Impact Framework, CAHS Framework, monetisation framework, societal impact assessment, and UK Research Excellence Framework) and an analysis of more recent and less-tested approaches (such as electronic databases, contribution mapping, the SPIRIT Action Framework, and the participatory research model).
From their analysis of these methods, the authors draw four main conclusions.
First, one size does not fit all. In other words, different approaches to measuring impact are designed for different purposes. For example, logic models (those that follow the pathway of input → activities → output → impact) might fail to depict non-linear research impact.
Second, the most robust approaches are often labour-intensive and costly. Producing a detailed case study with an assessment of context and verification of all claims takes considerable skill, time, and resources. The authors explain that: ‘There is a trade-off between the quality, completeness, and timeliness of the data informing an impact assessment, on the one hand, and the cost and feasibility of generating such data, on the other.’ For example, the CAHS Framework offers a very comprehensive evaluation but is often expensive and labour-intensive. In contrast, the Research Impact Framework is straightforward and relatively easy to use, but isn’t the best option for a formal assessment.
Third, most metrics tend to capture direct and proximate impacts while overlooking the more indirect elements of research impact. Critics warn that this might discourage research that is more complex and/or politically sensitive, because its impacts are likely to be indirect and difficult to measure.
Finally, research impact methodologies are rapidly changing and many new frameworks are being developed, often with a focus on data and automation. One example of a new method would be Researchfish, a platform that uses technology and algorithms to collect outcomes and outputs of research.
In conclusion, this paper is an excellent resource for those who have not yet decided which method to use to measure their research impact. The authors analyse the most common methods, describe how they operate, and discuss their main strengths and weaknesses.
This article is part of our R2A Impact Practitioners initiative. To find out more, please click here.