Development action with informed and engaged societies
After nearly 28 years, The Communication Initiative (The CI) Global is entering a new chapter. Following a period of transition, the global website has been transferred to the University of the Witwatersrand (Wits) in South Africa, where it will be administered by the Social and Behaviour Change Communication Division. Wits' commitment to social change and justice makes it a trusted steward for The CI's legacy and future.
 
Co-founder Victoria Martin is pleased to see this work continue under Wits' leadership. Victoria knows that co-founder Warren Feek (1953–2024) would have felt deep pride in The CI Global's Africa-led direction.
 
We honour the team and partners who sustained The CI for decades. Meanwhile, La Iniciativa de Comunicación (CILA) continues independently at cila.comminitcila.com and is linked with The CI Global site.

Measuring the Impact of Communication in Public Health Programs

Affiliation

Bloomberg School of Public Health, Johns Hopkins University

Summary

This 30-slide presentation was offered by the United States-based Johns Hopkins Bloomberg School of Public Health Center for Communication Programs at a December 2005 meeting of The Communication Initiative (CI)'s Partners, who gather annually to guide the strategic direction of the organisation. The second day of the 2005 meeting featured a number of presentations from CI Partners on the theme of "measuring communication impact" (click here for additional background, and to access all the presentations from that meeting).

This particular presentation begins by outlining, in graphic form, a "basic model for communication impact", through which an action-intervention (a communication programme) leads to observed outcomes (as measured by various indicators). Presenter D. Lawrence Kincaid explains that the size of the impact is a function of several factors, including:

  • the level before the intervention
  • the frequency, channel, quality and strategic design of the communication
  • the nature of the problem and the expected outcomes
  • contextual, environmental, and cultural constraints

The key is attributing impact to the programme, which enables a valid estimate of cost-effectiveness (as in the case of a third programme Kincaid discusses, carried out to spur contraceptive use in the Philippines). Multivariate Causal Attribution Analysis (MCA) with a statistically balanced, matched control group is one methodology for demonstrating that observed differences are actually "caused" by a communication initiative rather than by other unknown or unobserved variables or confounding factors. Requirements for MCA include:

  1. An Intervention: Action that is expected to cause a specific outcome
  2. A theoretical model that specifies the causal pathways leading from exposure
    to the expected outcome
  3. A counterfactual condition with which to estimate the net effect of the intervention, which involves structural equation modelling (SEM), treatment effect analysis, and propensity score matching

It is the final step in the list above that poses a special challenge for those seeking to understand how a communication programme changes behaviour, Kincaid argues. The "Counter-Factual Dilemma" may be described as follows: Since each audience member can be in only one of two possible states, exposed or unexposed, it is impossible to compare the outcome of those exposed with the outcome of the same people had they not been exposed. Kincaid suggests that the Randomized Controlled Trial (RCT) design, which assigns individuals randomly to treatment and control groups, shifts the focus from change in individuals to changes in the mean outcomes of statistically comparable groups of individuals. However, this approach relies on the notion that the control group is, by chance, on average statistically equivalent to the treatment group.
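The logic of comparing group means rather than individuals can be sketched in a few lines of code. The adoption rates and sample sizes below are illustrative assumptions, not figures from the presentation:

```python
import random

random.seed(1)

# Hypothetical RCT: individuals are randomly assigned to treatment or
# control, and the outcome is a binary behaviour (1 = adopted). The effect
# is estimated as the difference between group means, sidestepping the
# impossibility of observing the same person both exposed and unexposed.
def simulate(adoption_rate, n):
    return [1 if random.random() < adoption_rate else 0 for _ in range(n)]

treatment = simulate(0.30, 500)   # assumed adoption rate with exposure
control   = simulate(0.18, 500)   # assumed rate without exposure

effect = sum(treatment) / len(treatment) - sum(control) / len(control)
print(f"estimated average treatment effect: {effect:.3f}")
```

Randomisation is doing the real work here: because assignment is independent of every individual characteristic, the two groups are, on average, equivalent, and the mean difference can be read as the programme's effect.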

Another option for establishing a counterfactual condition is Propensity Score Matching (PSM), which approximates the conditions of the RCT design by using multiple regression to create a single variable to match survey respondents who were exposed and not exposed to a communication programme. Kincaid lays out this methodology in some detail, concluding that PSM can lead to a valid (unbiased) conclusion about the observed outcome.
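A minimal sketch of the PSM idea on synthetic survey data follows. Everything here, the covariate, the coefficients, and the sample size, is an illustrative assumption; real applications use many covariates and matching diagnostics:

```python
import math
import random

random.seed(0)

# Synthetic survey: a single covariate x (say, education) influences both
# exposure to the campaign and the outcome, confounding a naive comparison.
n = 400
data = []
for _ in range(n):
    x = random.gauss(0, 1)
    p_exposed = 1 / (1 + math.exp(-0.8 * x))          # exposure depends on x
    t = 1 if random.random() < p_exposed else 0
    # Outcome: confounder effect (0.4) plus an assumed treatment effect (0.5).
    p_outcome = 1 / (1 + math.exp(-(0.4 * x + 0.5 * t - 0.5)))
    y = 1 if random.random() < p_outcome else 0
    data.append((x, t, y))

# Step 1: estimate propensity scores P(exposed | x) by logistic regression
# (plain gradient ascent on the log-likelihood; a sketch, not production code).
b0, b1 = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    g0 = g1 = 0.0
    for x, t, _ in data:
        p = 1 / (1 + math.exp(-(b0 + b1 * x)))
        g0 += t - p
        g1 += (t - p) * x
    b0 += lr * g0 / n
    b1 += lr * g1 / n

def score(x):
    return 1 / (1 + math.exp(-(b0 + b1 * x)))

treated  = [(score(x), y) for x, t, y in data if t == 1]
controls = [(score(x), y) for x, t, y in data if t == 0]

# Step 2: match each exposed respondent to the unexposed respondent with
# the nearest propensity score, then compare mean outcomes.
matched = [min(controls, key=lambda c: abs(c[0] - s))[1] for s, _ in treated]
att = (sum(y for _, y in treated) / len(treated)
       - sum(matched) / len(matched))
print(f"estimated effect after matching: {att:.3f}")
```

The single propensity score stands in for all the covariates that went into the regression, which is what lets matched pairs approximate the "statistically equivalent groups" an RCT would produce by randomisation.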

Kincaid cites evaluation findings from several different types of health communication programmes to illustrate the above-described process(es), including:

  • A TV campaign focusing on home water treatment promotion in Sindh, Pakistan (2005), which involved community dialogue, focus-group discussions (FGDs), and in-depth interviews in 3 villages to help determine the factors that facilitate and inhibit household water purification in the province. Next, communication theories of behavioural and social change were applied in order to generate a model for strategic design and impact evaluation using survey research. The programme was found to lead 30% of the "treatment group" to carry out home water treatment as compared to 18% of a matched control group. (A conceptual framework for sustained water treatment in Pakistan is offered on page 15 of this presentation; a chart showing observed causal pathways between campaign exposure, psychosocial intervening factors, and water treatment appears on page 16).
  • The 2004 "Partner Faithfulness to Prevent AIDS" South Africa intervention. Regional surveys found (p<0.01) that the percentage of those aged 16-26 years who decided to remain faithful to their partner was 44% among the "treatment group" exposed to the television drama Tsha Tsha, compared to 27% among the matched control group. Condom use at last sex among those exposed to the campaign was found (p<0.05) to be 60%, compared to 51% among the matched control group. Use of voluntary counselling and testing (VCT) was found (p<0.05) to be 17% among those exposed, compared to 11% of the matched controls.
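Significance levels like those quoted above can be checked with a standard two-proportion z-test once group sizes are known. The sample sizes below are assumptions for illustration, since the presentation does not report them:

```python
import math

# Two-proportion z-test: is the gap between two observed proportions larger
# than chance would explain? Sample sizes (n1, n2) are hypothetical here.
def two_prop_z(p1, n1, p2, n2):
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_prop_z(0.44, 300, 0.27, 300)   # faithfulness: 44% vs 27% (assumed n=300 each)
print(f"z = {z:.2f}")                  # |z| > 2.58 corresponds roughly to p < 0.01
```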

The presentation ends with several strategic suggestions for evaluators. First, the author stresses that evaluation and programme design should be based on a thorough understanding of the problem in the social, cultural, and physical context in which it occurs. This knowledge needs to be organised into a coherent "model" to help design the programme and the measures used to evaluate its impact. Evaluators need to measure as many "relevant" variables as possible (examples from the communication programmes highlighted throughout the presentation are shared as illustrations) to construct a statistically balanced, matched control group. If successful, then a valid cost-effectiveness analysis can be conducted. Where possible, all programmes related to the outcome should be measured, capturing both their cumulative impact and each programme's individual impact.
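Once an attributable net effect is in hand, the cost-effectiveness calculation itself is simple arithmetic. A sketch with entirely assumed figures (cost, reach, and adoption rates are hypothetical, not from the presentation):

```python
# Hypothetical cost-effectiveness calculation: with a net effect attributed
# to the programme (treatment rate minus matched-control rate), the cost
# per additional person adopting the behaviour follows directly.
programme_cost = 250_000.0      # assumed total programme cost (USD)
reach = 100_000                 # assumed number of people reached
net_effect = 0.30 - 0.18        # assumed treatment vs control adoption rates

additional_adopters = reach * net_effect
cost_per_adopter = programme_cost / additional_adopters
print(f"cost per additional adopter: ${cost_per_adopter:.2f}")
```

The attribution step matters because using the raw treatment-group rate (0.30) instead of the net effect (0.12) would overstate the programme's cost-effectiveness by crediting it with behaviour that would have occurred anyway.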