I was very lucky to be invited to participate in two meetings on theories of change in the Netherlands in the last few weeks, one at Hivos and one at the Centre for Development Innovation. At both meetings we discussed a lot, from really thinking about your purpose for working with theory of change through to how to visualise – and even dramatise! – your theory of change.
Exchanges were rich and productive. One of the most interesting conversations we had was about the assumptions part of theory of change. This is usually the aspect people find most challenging, as they flagged in the recent review I completed for DFID on the use of theory of change.
Something we wanted to do was to clarify for ourselves how we can work better with assumptions. We were led in one discussion by Irene Guijt, an experienced evaluator and facilitator. Irene proposed some useful definitions, one of which was:
‘An assertion about the world that underlies the plan.’ (See RAND for more.)
Some assumptions are made explicit – in fact, if you do a log-frame, you are expected to put something in the assumptions column! But it is more common for many of the critical ones to remain implicit, perhaps because there is no time for proper discussion, or it is too early in the programme to really have an insight, or perhaps because it is pretty hard in practice to access your own assumptions – that’s the point, right? You take them for granted.
But a key insight is that assumptions only hold true for a period of time. They are also pretty unstable because they are highly context dependent. If they change or become ‘wrong’, then the activities you have designed will also be wrong. So you need to take a good hard look at your assumptions, and do so pretty regularly, to update them and keep them current.
Layers of an onion?
One of the challenges people face with this aspect of theory of change thinking is that when they start thinking about assumptions, they generate a lot, and about many different things. This can quickly become overwhelming.
As part of our conversations, we realised that the process is a bit like peeling back the layers of an onion. In about 30 minutes’ brainstorm, you will probably get the most superficial, comfort-zone assumptions that are taken for granted.
It takes more questioning – even just simple questioning, for example ‘why is that?’ and ‘what might be behind that idea?’ – to start to get at the deeper, more complex and difficult ones. Someone said if you can ask ‘why?’ at least five times, you can start to uncover these.
You do need to accept and handle positively the discomfort and debate that often accompany sharing these deeper ideas – it is often a sign that critical thinking is happening. It can also suggest where power issues are blocking dissenting views from challenging the dominant assumptions in a programme or organisation. If all views can be explored and negotiated positively, then you can start to look afresh at your initiative.
Practical ways of dealing with assumptions: how evidence can help
When working with DFID on some guidance arising from the review, we realised that a practical way of managing assumptions was to categorise them and relate them to specific aspects of the theory of change. Developing this further at the Hivos meeting, we identified a set of potentially useful categories, including:
- Causal links between outcomes at different levels
- Operational assumptions about the context and strategic options
- Paradigm or ‘worldview’ assumptions about the drivers and pathways of change
- Assumptions about the belief systems at play within society, informing judgements about the appropriateness of different strategies for that context.
After categorising them, you will still have too many, so the next step is to try to identify the most important assumptions to really drill down into, for example:
- Those most critical to success
- Those with the highest risk – the most unstable, or with the greatest potential for critical impact
- Those about which you know least
- Those that have implications for long-term success.
It is at this point that the role of evidence and wider learning comes in. Checking those assumptions against other sources of knowledge, not just the knowledge that is in the heads in the room, can be a really useful way of thinking more critically about them and re-focusing them to help guide you for the next period, until they need to be checked again.
This is worth the effort, because shifting out of those comfort zones and ‘business as usual’ ideas is where theory of change thinking can really start to help: it can inspire new options, help you negotiate a shared understanding with colleagues and partners so that you can work productively together, and help generate more robust designs and strategies that take into account the context and the changes you want to support.
What are our assumptions about research communication and uptake?
As I said before, checking assumptions is an ongoing process. They hold true for a while, but we need to look at them again as we learn through implementing our initiative, and through the trial and error that is an inherent part of any change endeavour.
Reflecting on how this might apply to research communication and uptake, I am left with this thought: we make a strong case for the importance of looking at evidence when developing interventions, but how good are we at looking at evidence, or at least documented learning, about our own research-related interventions and processes? Generally, I don’t think we are any stricter with ourselves than any other discipline or community, even though researchers are known for their critical thinking.
Assumptions expressed in connection with research uptake often remain too vague to be useful and, without critical thinking, end up acting more as caveats or ‘shields for our backs’, as someone else put it. For example, the classic superficial one is: ‘decision-makers are willing to use scientific evidence in decision-making’. Fine, but what if they aren’t? You need to go deeper…
There may well be an implicit but crucial assumption in the next layer down, for example: ‘our strategies should support the motivations and capacities of decision-makers to consider scientific evidence positively’ (to take some new thinking put forward by K. Newman, C. Fisher and L. Shaxson in the most recent IDS Bulletin 2012).
I wonder if it would be a useful exercise to take a framework of different categories of assumptions and use it to help us identify common, generic assumptions about research uptake that we don’t usually delve into as critically as we should? This might include assumptions about capacities on the research-user side, or assumptions about language and accessibility of research findings, or about the change mechanism that a research influencing strategy is seeking to trigger.
Mapping the evidence and learning we already have about them could help us concentrate our critical thinking on uncovering the more specific assumptions that lie beneath, which would be a more useful guide for our initiatives.
Hi Isabel, really interested in your discussions on how to visualise theories of change – any resources you could share on this?
Hi Jessica,
There is an appendix of annotated examples that we gathered for the review; these include some good examples of visualisations: http://www.dfid.gov.uk/r4d/pdf/outputs/mis_spc/Appendix_3_ToC_Examples.pdf
Also, the Hivos Portal on Theory of Change has some examples and will shortly be adding resources on visualising theories of change: http://www.hivos.net/Hivos-Knowledge-Programme/Themes/Theory-of-Change/Resources/2.-I-am-looking-for-practical-examples.-Which-ones-are-there
Does anyone have any other thoughts on visualising theories of change?
Isabel
Isabel,
how nice to find you again. This post is so interesting, and not always easy to explain.
Silvia