Measuring Communication Impact
Healthlink Worldwide (formerly Exchange)
This 9-slide presentation was offered by Andrew Chetley of Healthlink Worldwide, a United Kingdom (UK)-based health and development agency, at a December 2005 meeting of The Communication Initiative (CI)'s Partners, who gather annually to guide the strategic direction of the organisation. The second day of the 2005 meeting featured a number of presentations from CI Partners on the theme of "measuring communication impact"; additional background, along with the other presentations from that meeting, is available from The Communication Initiative.
This particular presentation begins by acknowledging that examples of the impact of communication initiatives are difficult to pin down, both because the development context is "complex, volatile and dynamic" and because development agencies tend to crave signs of change before communication interventions have had a chance to make a sustainable impact on poverty or other development challenges.
In this context, Chetley argues, it is key to keep strategic goals related to both learning and accountability in mind. The former involves incorporating learning for the future when thinking about evaluation, as well as "finding rigorous and appropriate ways to reflect critically on ongoing work". Chetley explains that this type of learning - ideally, a process of action learning over time - is more likely to be effective when the process is initiated, designed, and owned by the people directly involved in project work and those the work is supposed to help, and when the process of dialogue and reflection is kept open. Communication initiatives, such as one he cites here that was carried out in collaboration with the Bernard van Leer Foundation on early childhood development, found success by drawing on the diversity of local circumstances while also linking to broader, common themes. Initiatives like these can also enhance effectiveness by systematising the learning and evaluation methods used (by reducing dependency on qualitative data, for instance).
With regard to accountability, Chetley advises communicators to stay attuned to the following key lessons:
- Evaluation lessons that are too specific or too general are likely to be dismissed (relevance matters).
- Mid-term and thematic evaluations may be of more use than evaluations that deliver too much information (a "large document, crammed with information") long after the project has ended (timeliness matters).
- "Supply driven evaluations are all too common."
- Face-to-face communication is "the most effective in generating lessons and learning, but how can that be scaled up?"
- "How lessons are drawn out and phrased is crucial to the usefulness of the evaluation. The dry and technical language used may also a key constraint."
- "Attention to communication is vital, yet very few evaluation units have communication specialists in their teams."
Reflecting on the challenge of evaluating communication programmes, Chetley draws on the metaphor of a river to suggest that, even when they start small, "programmes have progressed, have been influenced and have had influence in their own distinctive ways. As we trace their courses, we can begin to map the contours of the territory that each programme has covered and we can see their influence." He provides several links throughout the presentation to guide communicators interested in learning more about monitoring and evaluation (M&E) strategies and examples.