This page is dedicated to the Policy Engagement and Communications (PEC) programme’s second peer-learning event, which took place on March 12, 2014. Here you can access three audio podcasts and two related PowerPoint presentations from PEC mentors and think tanks in Nigeria and Uganda on Measuring the impact of your communications tactics. These provided the focal point for the peer-learning event; a recording of the discussion that took place at the event can be found directly below.
Comments and questions are welcome from all.
Peer-learning discussion: Measuring the impact of your communications tactics (audio)
This discussion brought together representatives from numerous think tanks across Africa to discuss best practice around measuring the impact of communications tactics. The discussion was based on the podcasts and presentations, which are freely available below.
Facilitator: David Olson
Podcast 1: Monitoring the Effectiveness of Communications [Please post your comments and questions at the bottom of this page].
Presenter: Modupe Adefeso-Olateju
Mo is the PEC mentor for Nigeria and CEO of The Education Partnership Centre. In this presentation she provides an overview of the monitoring and evaluation of communication activities. Mo possesses cross-cutting skills and industry experience gained from a decade of leadership in the for-profit and non-profit sectors, and years of research on private sector participation in education. She holds a PhD in Education and International Development from the University of London.
Podcast 2: The Impact Log: A Communications Monitoring Tool [Please post your comments and questions at the bottom of this page].
Presenter: Drusilla David
Drusilla is the communications officer at the Centre for the Study of Economies in Africa (CSEA) in Abuja, Nigeria. She presents a case study on Impact Logs, a tool that CSEA uses to monitor its communications. Drusilla possesses communication and writing skills gained from her experience working as a correspondent covering various sectors of the economy for a Nigerian daily newspaper.
Podcast 3: Measuring Communications Impact at EPRC [Please post your comments and questions at the bottom of this page].
Presenter: Elizabeth Birabwa
Elizabeth Birabwa is the programme manager at the Economic Policy Research Centre (EPRC) in Kampala, Uganda. She presents EPRC’s tools for measuring communications. Elizabeth has over 14 years of experience in advocacy, communications, media relations and information management. She holds a bachelor’s degree in Mass Communication from Makerere University and a Master’s degree in Library and Inform.
This blog post has been produced as part of the Think Tank Initiative’s Policy Engagement and Communications (PEC) programme. However, these are the author’s personal opinions and do not necessarily reflect those of TTI. You can find all ongoing outputs related to this project via the PEC mini-site on Research to Action. To get updates from the PEC programme and be part of the discussion, sign up to our RSS or email updates. You can also follow our progress via Twitter using the hashtag #ttipec.
Image courtesy of Sivilt samfunn i Sudan
In her podcast above, Drusilla David says the Impact Log enables her think tank to “refine its evaluation methods and communications strategy.” Can she give me an example of how CSEA has changed its communications strategy as a result of information gathered on the Impact Log?
David, in the course of monitoring its activities via the impact log, the centre identified some areas that are very relevant to the centre’s mission and objectives, which necessitated a more defined communications strategy. Examples include improving stakeholders’ (such as CSOs’) access to information, communicating technical issues to a wide audience in an easily comprehensible manner, and developing an interactive mode of communication (social media) in addition to the traditional mass media outlets.
Can Drusilla David say more about how the Impact Log has helped CSEA measure whether or not its communications has contributed to policymaking?
David, contributing to policymaking is a continuous process which cannot be measured instantaneously. However, the Centre measures its progress using the Impact Log by keeping records of activities such as mentions or citations by policymakers, stakeholders, civil society organisations or the media. These help to demonstrate that the Centre’s research output is visible or being recognized in the policymaking sphere.
Questions/comments for Elizabeth:
Measuring communications impact is important to assess what kind of demand there is for your information BUT also what is NOT demanded or needed. How easy is it to really know what specific information is not being demanded or used? While # of downloads and views/shares is one overall indicator, what about indicators for information presented at events, in briefs, etc…?
Measuring communications impact is great for assessing whether your impact is also extending beyond your home country. If you are a think tank catering only to the issues of your home country but then realize that many individuals from other regions are also demanding/using your work, what should you do, and could this eventually affect your long-term mission? It’s important to ensure you have a manageable list of target audiences, so could broadening this list be counter-productive?
Questions/comments for Drusilla:
Is the impact log updated in real time, and if not, how often? Further, because you have created the impact log in an Excel spreadsheet, can you run into quality control/knowledge management issues?
What were some quick wins that came from using an impact log?
Henna, the same tools used to track the demand for your products are inversely used to determine, from the counts, which ones are not being used or have limited demand. Once you realize that a specific product or piece of information has not been demanded for a period of time, you then have to conduct a focus group discussion or survey with specific groups to establish why this information is not being used. The feedback you receive will help you to identify whether the problem is with the content (too technical), the format of presentation or the timing. Based on your findings, you will be in a position to either repackage or do away with a particular format, or change the timing and accessibility of a given piece of information.
Henna, the Impact Log is updated quite frequently, provided there is new research or a communication activity in progress. Furthermore, with the proposed sections the centre hopes to include in the log, the frequency of updating is bound to increase.
There hasn’t been any challenge in managing the impact log using the Excel spreadsheet format, but in the event that we run into such difficulty, we may consider using tables in Microsoft Word; if anyone has a suggestion on what would aid this, we will be happy to learn.
With the log we have been able to monitor the frequency of communication and interactions with policymakers and also analyze the impact of these meetings, thereby helping us strategize for future ones. Currently, there is no specific list or rule of thumb; recommendations from the log mostly evolve when it is being reviewed. However, this review is not limited to the quarterly review by the management: the log is quite convenient for an everyday review, by which I mean that as you update data and information, you can review previous updates simultaneously.
I agree with Modupe Adefeso that the measurement of communications is heavily skewed towards outputs, and that we need to strengthen our ability to measure our outcomes. The big question is, how do we do that?
David, this is a good question. Measuring begins with a clear sense of what it is one desires to measure (goals and objectives) and reference points (Indicators) to help ascertain if/when such goals and objectives have been met. I often find when developing monitoring and evaluation frameworks, that one of the most tedious activities is agreeing effective indicators for the outputs and outcomes associated with each objective. It is however an extremely useful and important activity.
These were three great presentations – many thanks again for sharing these thoughts and reflections! I have a few thoughts / comments:
1. Elizabeth, I really liked the categorisation of the three themes you measure: Reach, Usefulness and Use. Drusilla, it seems to me that the impact log you use primarily captures the ‘reach’ bucket. Do you think the log can also better capture usefulness and use? Elizabeth also touched on the difficulty of measuring these two components – perhaps another thought is to have yearly focus groups, where you interview key stakeholders to probe your influence and usage of the research?
2. Mo touched on the fact that measuring, monitoring, and evaluation are all crucial pieces. How can we fit Learning into the framework (or is it implicit?)? For instance, how have you been able to course correct or make adjustments in PEC strategy and learn from the measurement / monitoring? This point also touches on David’s comment below.
Looking forward to hearing other thoughts on this!
Hi Shubha, thanks for the question. The monitoring and evaluation cycle should always include a review/reflection period which provides the opportunity to take a critical look at the effectiveness of communications activities and take learnings and lessons to plan for the next cycle. The duration of this cycle is up to the organisation, but the period for review and planning is a crucial part of it. Besides this set review period however, it is important that lessons are taken regularly from monitoring efforts. Weekly, fortnightly or monthly meetings can be scheduled to review the effectiveness of communications activities.
Mo, you make an extremely important point that monitoring and evaluation of communication efforts should be a more complex and sensitive process that goes beyond listing outputs. What does this mean in practice for think tanks given the magnitude of challenges that they operate against?
Hi Farai, your point is important and this is why monitoring should ideally form part of the daily life of a communications team. Monitoring should not be viewed as a separate additional task, but as an integral part of the communications function. This way, identifying and learning from outcomes (intended or otherwise) will not be viewed as cumbersome.
As I mentioned in my reply to David’s question, simply agreeing the indicators that represent our desired outcomes and scheduling regular review meetings (weekly, fortnightly or monthly) offer the opportunity to identify whether or not these desired outcomes are being met.
For monitoring and evaluation to be useful to the communications function, there must be deliberate effort to draw learnings and adapt the communications strategy or tactic(s) as needed.
Drusilla, very interesting idea of an Impact Log to capture data on CSEA’s different communication efforts. Who is involved in the management of the Impact Log: who captures the data, analyses it and feeds it into the organisation’s decision-making processes? What time and effort is involved in doing this? And could you provide practical examples of how the Impact Log has transformed the way the organisation approaches and implements its communication work?
Mo’s illustrative presentation is quite lively. But how can we juxtapose IMPACT and TIMING in evaluation? The question is asked because an activity may be seen not to have achieved its intended objective if it takes too long, especially in developing countries with unstable political environments that are prone to changes in policymakers.
Drusilla’s presentation is quite interesting and resourceful. I will get in touch with her personally so she can shed more light on the use of Excel sheets for this and, probably, other tools.
It seems from Drusilla’s presentation and podcast that the Impact Log measures OUTPUT rather than IMPACT. Can the Impact Log be used to measure the IMPACT of policy engagement/dialogue, bearing in mind that this may take a long time? Can Drusilla demonstrate instances of this in their work at CSEA?
Good job, Elizabeth. Have you tried to overcome the challenges you mentioned towards the end of your presentation? I would like to know, using two or three of the challenges as examples.
Sola, we have made efforts to address some of the challenges, namely the harmonization of indicators that measure communication outputs and outcomes for the various projects. In the process of drafting our new PEC strategy we are incorporating an M&E framework with a set of indicators and tools that will be used to track them. We have not yet tested this practically but hope it will help harmonize the different components of our communication interventions.
Regarding encouraging everyone within the unit and the organization to participate in monitoring and evaluation, we have developed a joint monitoring tool and assigned staff different roles and responsibilities in tracking the necessary data. The tool is online, accessible via Google Drive, and any staff member on the Centre’s mailing list can view it, with restricted rights to update information as per the assigned tasks.
Sola, yes, measuring the impact of policy engagement and dialogue may take a while, which is why there must be consistency in monitoring and updating the log with details of every activity, which can be anything from a brief phone call to a meeting with a stakeholder; the progress of each activity is subsequently entered into the log for effective monitoring.
For instance, with the log the centre has been able to identify and strategize for subsequent meetings, which has also increased the visibility of the centre and improved its networking with stakeholders, civil society organisations and the media. This, however, remains an ongoing process, as does contributing to policymaking.
Farai, the impact log is primarily managed by the communications officer, who monitors all communication activities of the centre, collates data and information from researchers on their various research activities and analyses the progress of these outputs with support from senior researchers. The impact log is reviewed by the management quarterly and can be shared with other staff for further assessment or contributions. The log is updated recurrently and reviewed from time to time.
A practical example of how the log has helped transform the centre’s approach to communication is by improving its communications strategy plan to cover a broader aspect of communication: utilizing various communications tools, targeting key audiences for the dissemination of its research output and ensuring that these activities are effectively monitored. Similarly, the impact log has helped the centre in planning for events (seminars, policy dialogues). While these details have not yet been entered in the current log, the centre keeps a database of feedback and information on events that highlights the areas that need to be improved, thereby strengthening the centre’s ability to plan better for future events.
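As a rough illustration of the kind of tabular record described above, the sketch below keeps an impact log as a simple CSV file using Python’s standard library. The column names and sample entries are assumptions inferred from the discussion, not CSEA’s actual Excel layout.

```python
import csv
from datetime import date

# Hypothetical impact-log columns, inferred from the discussion
# (CSEA's actual spreadsheet may be structured differently).
FIELDS = ["date", "activity", "channel", "audience", "outcome", "follow_up"]

entries = [
    {"date": date(2014, 3, 12).isoformat(),
     "activity": "Policy brief shared with ministry contact",
     "channel": "email",
     "audience": "policymaker",
     "outcome": "Brief cited in committee discussion",
     "follow_up": "Schedule meeting with committee staff"},
    {"date": date(2014, 3, 20).isoformat(),
     "activity": "Research findings covered by daily newspaper",
     "channel": "print media",
     "audience": "general public",
     "outcome": "Two interview requests received",
     "follow_up": "Log interviews once completed"},
]

# Write the log to disk; each row is one communication activity.
with open("impact_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(entries)

# A periodic review can then filter the log, e.g. counting media mentions.
with open("impact_log.csv") as f:
    mentions = [row for row in csv.DictReader(f) if "media" in row["channel"]]
print(len(mentions))  # prints 1 (only the print-media entry matches)
```

Keeping the log in a plain tabular format like this makes the quarterly management review straightforward: the same file can be filtered by channel, audience or outcome without any special tooling.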
Without general and continuous capacity building for Research, Library, IT and Communications staff together at the same time, much may not be achieved in M&E, because the three presentations showed that M&E is not just the work of communications specialists.
Shubha, you’re quite right about conducting annual or regular surveys to establish how your target audiences perceive your information products and services. Tools like SurveyMonkey can be used to send out short questionnaires to individuals on the mailing list to gauge their level of satisfaction and the relevance, quality and timeliness of your information products and services.
Elizabeth gave a comprehensive presentation on her think tank’s impressive system for monitoring the impact of its communications. I wonder if this system could also work in a smaller think tank that doesn’t have a knowledge management person or even a full-time communications person?
Elizabeth, a great and insightful presentation. I liked the fact that your presentation pays a great deal of attention to audiences. Reaching the right audiences, with the right information at the right time is crucial if communication efforts are to have impact. I have two questions for you:
– You mention that you use circulation, listenership and viewership figures to estimate the reach of your media-related activities. Have you gone further to determine whether this is the case in reality? Do you segment this further to determine whether you are reaching your intended audiences?
– You also raise an important issue around attribution. This is something that many organizations continue to grapple with. In instances where you are unsure if you can attribute policy changes to the work of your organization, do you go further to engage policy makers for example to determine the extent of usefulness of information produced by your organization?
Elizabeth explained in detail how her think tank collects information for measuring the impact of its communication but I would like her to share more information about what her think tank does with this information once it is collected.
I am going through the posted materials and I find them relevant for our work here at STIPRO.
Comment by Bitrinia Diyamett from STIPRO (Tanzania):
1) The log approach to monitoring the impact of communication, presented by Drusilla from CSEA, is very interesting; I think it is the most systematic tool for tracking the impact of communication activities. I do, however, think the log can be improved in many ways. Among others, it should have columns for the different communication tools, the different spheres of influence, and outcomes and impacts with their monitoring indicators. As it is, it is not clear what the log on slide 9 indicates. There is also no clear relationship between slide 8, on what I believe to be communication tools/channels, and slide 9, on the log. In this regard I subscribe to what Elizabeth of EPRC says on slide 5, bullet 3: “Ensure we are reaching the intended audience (sphere of influence), in the right way (communication tool) and right time”, meaning different spheres of influence are reached through different tools.
2) It is interesting to also note, from EPRC, how to measure demand for a think tank’s products. I suggest they add demand for embodied knowledge, such as invitations to give talks at meetings, seminars and workshops, invitations to task forces, etc. How many people from your organization are being invited to these events/activities? How frequently?
3) The major target for a think tank is to inform and influence policy, but there are unintended positive by-products. For instance, think tanks also build the capacity of different stakeholders in the process of research and policy influence. Don’t we need to capture and measure this? How do we do it?
4) One of the indicators of research quality, as defined by EPRC, is the extent of its use; which is fine, but we should not forget that research is not always politically neutral, and therefore good and credible research output can meet a major stumbling block in the process of being put into use, while some of the sloppy ones get quickly into the system. What issues should be considered here? What are the most effective channels of influence?
Bitrinia, I’m sorry there wasn’t an indication on slide 9. It is actually a screenshot of the impact log that we use, and that is the section which captures data or information from the media. If you were able to listen to the podcast, further information was given there on slides 8 and 9. Also, in an effort to sustain the current impact log, other sections/columns have been proposed by the centre to further improve the log, as indicated in slide 12.
I have gone through all the presentations and find them superb; they will really contribute to the Communication Unit at ESRF.
Mo, based on the definitions of Activity, Input, Output, Outcome and Impact and your explanation, how can think tanks handle the attribution of success stories in cases where the “Impact” is not known until much later (even when the issue has been laid to rest before the impact is showcased), or when several think tanks/NGOs work on the same issue?