Social Media 4 Peace: Local Lessons for Global Practices

"Bosnia and Herzegovina, Indonesia, and Kenya provide evidence of online hate speech and disinformation affecting human rights, democracy, and peace and stability offline."
This report, published by the United Nations Educational, Scientific and Cultural Organization (UNESCO), shares research findings from the Social Media 4 Peace project, which works to strengthen the resilience of societies against potentially harmful content spread online, particularly hate speech that incites violence. It also seeks to enhance the promotion of peace through digital technologies, notably social media, in conflict-prone environments. The findings outlined in the report focus on three pilot countries - Bosnia and Herzegovina, Kenya, and Indonesia - and investigate the root causes, scale, and impact of potentially harmful content, as well as the effectiveness of the tools used to address it. The discussion includes an analysis of the regulatory frameworks governing harmful content online in these countries, assessments of platforms' self-regulatory tools and content moderation policies, and a mapping of local efforts by civil society. The publication aims to inform global discussions on countering harmful content, especially in conflict-prone environments, by delving into the complexities of these countries' political, cultural, linguistic, and societal contexts. Its insights are intended to serve as guideposts for stakeholders working to promote freedom of expression and a safer online environment.
As explained in the report, "Harmful content, particularly hate speech and disinformation, has become pervasive in the digital realm, profoundly impacting people's lives beyond virtual interactions. It seeps into the real world, affecting human rights, social cohesion, democracies, and peace. This has corroded public discourse and fragmented societies, with marginalized communities often bearing the consequences. Addressing these challenges requires understanding the root causes and impact of harmful content. Governments, social media companies, civil society organizations, and international bodies must collaborate to develop strategies that protect fundamental rights online while safeguarding users."
The report is based on two sets of research undertaken in each country during the first year of project implementation. The first set examined the legal frameworks, the forms of regulation adopted for harmful content, and trends and concerns regarding the implementation of these laws, including their effectiveness in protecting the targets of harmful content, their loopholes, and their impact on freedom of expression in each country. The second set looked at the current status of content moderation in each country, particularly the self-regulatory frameworks and tools put in place by social media companies to curb harmful content, and their effectiveness. Throughout the research, the experts engaged with around 150 local stakeholders from the three countries through interviews, surveys, and consultations, including civil society organisations (CSOs), media outlets, national regulators, and representatives of groups in situations of marginalisation or vulnerability.
The following are some of the main findings:
- Online harmful content - in particular, hate speech, disinformation, and gender-based violence - affects the offline world and has a negative impact on peacebuilding in the three target countries. However, the lack of transparency from social media companies about how such content is moderated leaves assessments dependent on anecdotal evidence.
- In the three countries, national legislation to address harmful content shows some degree of inconsistency in comparison to international standards, notably in relation to the protection of freedom of expression. The reasons for such inconsistency vary among countries.
- The effective enforcement of legal frameworks is uneven in all three countries. Social and cultural inequalities are often reproduced in government or judicial decisions, and vagueness in legislation opens space for discretionary decisions.
- In the three countries, there is a lack of transparency in how companies allocate moderation tasks, including the number of moderators working in each language and the identity of their trusted partners and sources. Companies do not moderate content in some of the main local languages, and community standards are not fully or promptly available in local languages.
- CSOs are active in all three countries in monitoring, curbing, and responding to online harmful content, but they currently lack strong coalitions through which to cooperate on these activities. Kenya and Indonesia in particular have vibrant organisations and a seemingly fruitful collaborative environment. However, relations between CSOs and social media companies need to be consolidated.
- The preconditions to ensure that social media companies undertake content moderation that considers local contexts are not yet in place. For example, platform companies have offices in Indonesia and Kenya, but not in Bosnia and Herzegovina.
- Existing legislation is often being used to restrict legitimate rights, notably freedom of expression, while at the same time not sufficiently protecting vulnerable groups.
- Tensions arising from countries' historical and political contexts are often reinforced by social media dynamics.
- Adherence to international standards to curb online harmful content on social media while protecting freedom of expression should be strengthened. At the same time, discussions are needed on the interpretation of these standards as they apply to the information ecosystem of social media, characterised by speed and volume of circulation of potentially harmful content.
Based on the findings, the report offers 34 recommendations for international organisations, states, social media companies, civil society, donors, and multi-stakeholder actions. A few examples are mentioned below:
To International Organisations:
- Through UNESCO, use the Social Media 4 Peace project to convene a multi-stakeholder dialogue on the governance of harmful content, aimed at promoting a common understanding of hate speech and disinformation trends and occurrences and how to counter them. Ensure that lessons of the project are shared at the national level, feeding into discussions at the global level.
- Develop media and information literacy programmes aimed at providing online users with the skills to critically examine online content and identify disturbing, hateful content, and misinformation. Prioritise preventive educational approaches that alert to the harmful effects of online hate speech, and foster media and information literacy alongside mitigation and counter efforts.
To States:
- Reform legislation so that it is adapted to international standards, especially those laid out in the International Covenant on Civil and Political Rights (ICCPR), the Rabat Plan of Action, the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD), and the Convention on the Protection and Promotion of the Diversity of Cultural Expressions, as well as the official interpretation documents produced by the implementing bodies of these instruments. Ensure that legislation to curb online harmful content specifically protects the most vulnerable groups while safeguarding freedom of expression.
- Use legislation to promote transparency, due process, and appeal and redress rights for users in the content moderation process.
- Provide judicial assistance and restorative mechanisms for minorities and other vulnerable groups, who constitute the majority of victims of incitement to hatred.
To Social Media Companies:
- Ensure transparency at the national level, offering granular data on a range of issues including, for example, the number of users active in the country or accessing content from that country, and the number of actions applied to accounts and content related to hate speech, disinformation, terrorism, violence, harassment, and the types of moderating classifiers that affect peacemaking.
- Ensure that reports, notices, and appeals processes are available in the language in which the user interacts with the service, and that users are not disadvantaged during content moderation processes on the basis of language, country, or region.
- Establish local focal points to be contacted by vulnerable groups or individuals when affected by infringing content.
To Civil Society:
- Facilitate the creation of civil society coalitions on freedom of expression and content moderation, gathering groups with different expertise in relation to various types of harmful content and approaches. Coalitions can play an effective role in bridging the gap between local CSOs and companies that operate on a global scale.
- Gather qualitative data on individuals targeted by hate speech to better understand the scope and nature of harms, while respecting personal data protection, so as to foster evidence-based policies.
To Donors:
- Provide adequate resources to specialised organisations dedicated to monitoring and countering hate speech, disinformation, and gender-based violence, particularly those best equipped to take local contexts into account.
- Support the development of educational programmes that foster resilience to hate speech, informed by current hate speech trends and responding to related challenges. In doing so, closely collaborate with social media companies, research institutes, and education stakeholders.
Published on the UNESCO website on October 19, 2023. Image credit: © UNESCO/Monika Martinovic