Leveraging Volunteer Fact Checking to Identify Misinformation about COVID-19 in Social Media

Boston University
"How can we amplify the efforts of volunteer fact checkers to identify emerging health misinformation on social networks?"
Apart from its positive effects on health literacy, social media can also be a venue for health misinformation. This effect has been exposed vividly during the COVID-19 pandemic, a situation the World Health Organization (WHO) has characterised as an "infodemic". Existing methods for detecting health misinformation online often rely on intensive manual labeling or on known misinformation sources (such as domains, URLs, or accounts). However, misinformation can be buried amongst a large volume of accurate information, and the manner and type of emerging health misinformation are often unknown. This study proposes an approach that leverages the nature of social media, network structure, and the efforts of volunteer fact checkers who correct COVID-19 misinformation when they encounter it.
The strategy starts by identifying replies to Twitter posts whose content resembles official advice from health authorities, using a natural language model to calculate the similarity between each reply and that advice. These replies act as seeds, indicating areas of a social network where misinformation is likely harboured. Specifically, the researchers collected public reply posts containing context-specific keywords (e.g., COVID-19) and calculated their semantic textual similarity with official advice provided by the WHO. They then collected the parent posts of high-similarity replies. Whereas many existing approaches rely on linguistic features of misinformation or on misinformation-sharing behaviours, this strategy exploits the structure of the local networks surrounding fact-checked parent posts.
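The seed-selection step can be sketched in a few lines. The paper uses a natural language model for semantic textual similarity; as a minimal stand-in, the sketch below scores replies against official advice with bag-of-words cosine similarity (the function names and the threshold value are illustrative, not from the paper):

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words representation of a post."""
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def find_seed_replies(replies, official_advice, threshold=0.5):
    """Return (reply, score) pairs whose similarity to any piece of
    official advice exceeds the threshold; these seeds point to the
    parent posts likely to contain misinformation."""
    advice_vecs = [tokenize(a) for a in official_advice]
    seeds = []
    for reply in replies:
        vec = tokenize(reply)
        score = max(cosine_similarity(vec, av) for av in advice_vecs)
        if score >= threshold:
            seeds.append((reply, score))
    return seeds
```

A stronger semantic model (e.g., sentence embeddings) would replace the bag-of-words step; the surrounding pipeline stays the same.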
In the schematic below, starting from the volunteer fact checker (blue node) who provides accurate information, one first identifies misinformation in the parent post (red node) and then further detects misinformation among the upstream (friends: green nodes) and downstream (followers: orange nodes) peers of the parent.
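The network-expansion step in the schematic can be sketched as follows. The real study would query the Twitter API for follow relationships; here a small in-memory graph stands in for it (the `Network` class and function names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Network:
    """Hypothetical snapshot of a follower graph:
    user -> set of users that user follows."""
    follows: dict = field(default_factory=dict)

    def friends(self, user):
        """Upstream peers: accounts the user follows."""
        return self.follows.get(user, set())

    def followers(self, user):
        """Downstream peers: accounts that follow the user."""
        return {u for u, f in self.follows.items() if user in f}

def candidate_accounts(network, fact_checked_author):
    """Given the author of a fact-checked parent post (the red node),
    return the upstream (friends) and downstream (followers) peers
    to screen for further misinformation."""
    return (network.friends(fact_checked_author)
            | network.followers(fact_checked_author))
```

This one-hop expansion is what lets the strategy surface misinformation without advance knowledge of its wording or source URLs: the seed reply localises the search, and the graph supplies the candidates.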

To test the strategy, using the search period January 1 to March 31, 2020, the researchers collected 16,383 public tweet replies (in English) related to COVID-19 and two topics: (i) antibiotics: although the claim "antibiotics are effective in preventing and treating the new coronavirus" is incorrect, and despite public health education efforts, this misinformation continues to spread on Twitter; and (ii) a cure: the WHO has said, "While some western, traditional, or home remedies may provide comfort and alleviate symptoms of COVID-19, there is no evidence that current medicine can prevent or cure the disease."
Using the strategy depicted in the above schematic, the researchers identified COVID-19 misinformation about antibiotics and a cure on Twitter. They observed that misinformation is present among the upstream (friends) and downstream (followers) peers of the accounts whose posts were fact-checked by others, suggesting that network-oriented strategies can uncover emerging misinformation. They found that this strategy is more efficient than keyword-based searches in identifying tweets containing misinformation, and that it requires neither advance knowledge of the type or manner of misinformation nor a set of URLs or domains previously associated with misinformation.
Based on this exploration, the researchers suggest a collaborative system that amplifies, and is aided by, the efforts of volunteer fact checkers in identifying and correcting emerging misinformation on social networks. They conclude that this strategy and these findings have implications for:
- Organisations engaged in fact checking, who could "harness the wisdom of the crowds to enhance discovery of misinformation spreading on social media and lower search costs...Notably, volunteer fact checkers have been found to be as effective as platform governed efforts in correcting health misinformation (Bode & Vraga, 2018)."
- Social media platform providers, who could complement their Application Programming Interfaces (APIs) with content seeds that permit searching content within local network neighbourhoods, governing access to such tools to prevent abuse.
- Researchers, who could leverage the strategy to locate and better understand subpopulations where misinformation emerges. In communicating the findings of their investigations with policymakers, they could help foster the design of targeted policies that reduce adverse effects of misinformation on society.
The Harvard Kennedy School Misinformation Review, May 2020, Volume 1. DOI: https://doi.org/10.37016/mr-2020-021. Image credit (top): Brian McGowan/Unsplash











































