The question is: how effective are “indicators” as a yardstick of impact? I have been troubled by this question for some time. It is entirely possible that the impact of a project or research programme will not be evident until many years after the initiative has been wrapped up. Indeed, many impacts may not be immediately obvious, or directly attributable to the project at all. Some impacts may remain invisible until large-scale qualitative research is commissioned, and the likelihood of that happening is very much hostage to budgetary constraints and to what is feasible across the overall project lifespan.
It is for this reason that there needs to be a nuanced debate on impact that releases us from the constraints of our logframes and opens the door to a more holistic, long-term perspective of impact that involves a deeper comprehension of the role of influence.
UK academics will now be all too familiar with the impending Research Excellence Framework (REF) exercise that adds an impact component to the grading of research excellence across all participating higher education institutions (HEIs) in the UK. The addition of the impact component to the REF represents a refinement of research assessment and a new focus on the public goods that emerge from research. The inclusion of a 20% weighting for impact in the REF has created a vigorous debate across the academic community. One of the problems that continues to crop up in these debates is that no one is completely sure what impact exactly is or, indeed, how one goes about measuring it.
We know that impact, as a concept, is clearly important, but it appears that everyone has their own conception of it. This is inevitably of great significance to how practitioners from all walks of life deal with the data, metrics and indicators used to monitor and evaluate projects. The issue is not what data exists, but how it is rationalised, given how widely opinion (and cognition) can differ on what impact looks like. The same problem presents itself when trying to measure “success” – everyone has a different view of what the indicators for that would look like.
But our differing perspectives could actually be key to solving the impact riddle. The more perspectives we have, the greater our understanding will be. However, we need to recognise the limitations of our datasets and indicators as they may narrow our focus and divert our attention from the bigger picture. In other words, whilst indicators and metrics are absolutely key as an evidence or performance base, they tend to reinforce some perspectives at the expense of others that are equally valid for our understanding of the interrelatedness of effects. This is because effects and outcomes will not always be measurable or even visible – a dynamic responsible for throwing more than one logical framework into disarray.
The central issue here is that we need to rediscover our relationship with impact and dream anew. We need to step back from the quantitative and consider, however briefly, the possibility that impact is a phenomenon with a far longer timeline (or lifespan) than can be encompassed by indicators and metrics alone. Should we even attempt to audit impact in any definitive sense without first subjecting our performance indicators to scrutiny?
Understanding effects, outcomes and their linkages requires open-minded, critical thinking. Influence as an effect can form part of a broader, long-term view of a situation, but it is difficult to measure no matter how good one’s data is. In terms of its significance to impact, influence is also perceived differently by different audiences – the word itself may not be understood by all. The debates on impact that arose in the run-up to REF 2014 show just how slippery the concept can be in this regard.
Boiled down to its lowest terms, impact is an outcome: a result of an activity rather than an activity in itself. This may sound obvious, but the subtleties of the impact–influence nexus are not always plainly visible or easily understood. Whilst we may feel it is adequate to frame impact in terms of direct or indirect effect, this often fails to account for how influence has been (or continues to be) achieved, or for how a project or programme is interpreted and understood by audiences, beneficiaries and practitioners through varying lenses of culture, politics, economics, and the traditions they contain.
Achieving influence is therefore a sophisticated craft, and one that has not been adequately framed by the ongoing debate surrounding impact. Without an understanding of its relationship to outcomes, no framework, indicator or metric can give us the full picture. Although tangible results are essential for evaluating success, we should avoid becoming preoccupied with them.