
Coca-Cola Curses: Updating Dangerous Speech for Postcolonial Contexts

Posted on Saturday, August 6, 2022 in Blog Posts.

By Brittan Heller[1]

The Atlantic Council

June 2022


Postcolonial environments pose unique moderation challenges for social media, because traditional understandings of hate speech do not provide the resources to recognize how such speech operates there. I saw this firsthand during my fieldwork in Kasese, Uganda, where I worked to create a lexicon of localized hate speech.[2] After three days of training local religious, tribal, and civic leaders on what hate speech means and how it functions, a hesitant participant offered a local example: calling a Kasese politician a “Coca-Cola bottle.” Suddenly the room burst into chaos, with outraged shouts and nervous laughter.


At first, it was unclear how calling someone a Coca-Cola bottle targeted them for the immutable characteristics that hate speech typically focuses on, such as race, gender, or ethnicity. The example was not a clean fit with prevailing scholarship on hate speech.


However, further explanation from local partners revealed multiple separate insults embedded in the epithet. First, the insult called out that the candidate belonged to a “pygmy” tribe, whose members are shorter than those of his opponent’s tribe, just as a bottle is smaller than a man. Second, a full Coca-Cola bottle is deep brown, emphasizing the darker skin color of the target.


Third, the epithet calls the target ignorant by alluding to the Coca-Cola bottle that was central to the film The Gods Must Be Crazy.[3] In the film, an African tribesman finds a discarded Coca-Cola bottle in the desert and is convinced it is a gift from the gods. As the bottle creates jealousy and unrest in his village, the man decides it must be destroyed. Critics highlight the film’s portrayal of the “Bushmen” as inferior and ignorant, and this insult intentionally conveys the same message to its target. Fourth, in calling the candidate a Coca-Cola bottle, the accuser identified the target with the harmful characteristics of African stereotypes, a source of frustration and offense for Ugandan people, while simultaneously implying the candidate was a colonial tool.


Finally, calling someone a Coca-Cola bottle aligns with the theory of dangerous speech by comparing the target to literal trash. Dangerous speech dehumanizes its targets, and hateful speech often normalizes violence against a targeted group through tried-and-true techniques. This rhetoric follows hallmarks, consistent patterns that recur across cultures and time, such as comparing a particular ethnic group to vermin, waste, or garbage. A cast-off Coca-Cola bottle falls into the same category.


Examples like this initially confused my team, as on their face they did not fit with the scholarship on hate speech, or even with the community-based definition of hate speech that our Kasese participants developed.[4] From studying these patterns, it became clear that the problem was not a cross-cultural misunderstanding. Rather, our own conceptual limitations left us unable, at first, to appreciate how hate speech might function in a postcolonial environment. What is hateful may manifest differently there and may be more difficult for outsiders to discern.


Why focus on hate speech in postcolonial contexts? Beyond the cost in human suffering from hate speech and derogation, many of the most terrifying ethnic conflicts in recent memory arose in postcolonial environments and were built on the social legacies of colonial rule. From the radio in Rwanda and Sudan to social media in Myanmar, Sri Lanka, and India, those who fueled these conflicts used modern communications platforms to propagate their hateful narratives.[5]


Postcolonial contexts engender nuanced understandings of hate speech. The context and culture imposed by colonizers may exist in fundamental tension with, and perhaps even in opposition to, the context provided by the history, tradition, and culture of the colonized. In this way, a person from Kasese may see a Coca-Cola bottle as both hateful and not hateful. This duality obscures postcolonial hate speech in subtle but volatile ways; in other words, observers from outside the colonized culture may not see the full impact of hate speech, especially when it can be masked by the more innocuous understanding of the term in the colonizers’ context. But the full meaning becomes explosively clear when epithets fuel ethnic, religious, or political violence.[6]


In a forthcoming article in the Michigan Technology Law Review, I apply these findings to expand existing scholarship by proposing a new “hallmark of dangerous speech.”[7] The proposed qualifier, “calls for geographic exclusion,” stems from the particular characteristics of postcolonial hate speech. Examples from the Kasese study illustrate how this phenomenon can and does upend platforms’ expectations of what hate speech looks like, as in the use of a Coca-Cola bottle as an epithet. Applying this new hallmark creates a more inclusive understanding of localized hate speech. I hope the paper challenges platforms to address online hate speech and content moderation at a global scope and scale.


Determining whether a statement is hate speech is incredibly difficult from content alone, especially in postcolonial environments with many layers of conflicting context. The mismatch between local outrage at the Kasese example and how similar speech is treated by global social media companies is striking, but not surprising. It is unlikely that a US-based technology company, for example, would even recognize the importance of context in a reference to a Coca-Cola bottle: context that may decide whether such a reference is innocuous or impermissibly hateful.


If platforms used automation to enforce against the use of Coca-Cola bottles as a hateful symbol, they would almost certainly over-enforce against innocuous content.[8] Artificial intelligence (AI) catches most hate speech on Facebook before it hits users’ feeds,[9] but the context behind this reference is so nuanced and specific that AI is unlikely to understand when a user intends to refer to the epithet instead of the drink. Even if platforms get the balance right, the Coca-Cola company may push back because of the potential negative impact on its brand.


Similarly, it is unlikely that content moderators would flag such a post as hate speech, especially if they were unfamiliar with local mores. In a postcolonial society like Uganda, where social meanings exist in layers, the full implications of a statement are veiled by this coexistence of meanings.


We cannot undo the colonial history that underlies many modern-day conflicts, but we can understand what it means, how it manifests, and how it helps fuel hateful narratives and ethnic tensions. Studying how online hate speech functions in environments like Uganda may help us build better tools to combat it, and to understand the consequences of colonialism decades after the colonizers have left.

[1] Brittan Heller is a technology and human rights lawyer and an expert on hate speech and international law. She is a fellow at the Atlantic Council. She previously prosecuted genocide and war crimes at the International Criminal Court and the U.S. Department of Justice. Heller was a 2020 AI and Human Rights fellow at Harvard Kennedy School, focusing on content moderation systems, and the founding director for the Center on Technology and Society. She is a graduate of Yale Law School and Stanford University.


[2] Thank you to the Berkeley Human Rights Center, the U.S. State Department, the Dangerous Speech Project, and Peace Tech Labs for providing research assistance and support to make this project possible. Additionally, thank you to the Privacy Law Scholars Conference for selecting this full draft for review in 2021. Most of all, thank you to our local partners in Kasese, Uganda, without whom this work would have been impossible.

[3] Wellson Chin, Billy Chan & Jamie Uys, The Gods Must Be Crazy (1980).

[4] In another example, Ugandan civic leaders consistently flagged “white beauty standards” as hate speech in our focus group discussions.

[5] Examples include Rwanda, Cambodia, Sudan, Myanmar, and present-day violence occurring in India.

[6] Susan Benesch & Jonathan Leader Maynard, Dangerous Speech and Dangerous Ideology: An Integrated Model for Monitoring and Prevention, 9 Genocide Studies and Prevention (2016), https://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=1317&context=gsp. Examples of this include use of “cockroach” during the Rwandan genocide and “love jihad” currently used in India.

[7] Susan Benesch defines dangerous speech as “[a]ny form of expression (e.g. speech, text, or images) that can increase the risk that its audience will condone or commit violence against members of another group.” Her five hallmarks of dangerous speech include: dehumanization, “accusation in a mirror,” threats to group integrity or purity, assertions of attacks against women and girls, and questioning in-group loyalty. See Susan Benesch, Dangerous Speech: A Proposal to Prevent Group Violence, Dangerous Speech Project, https://dangerousspeech.org/wp-content/uploads/2018/01/Dangerous-Speech-Guidelines-2013.pdf.

[8] Emma Llanso, Human Rights NGOs in Coalition Letter to GIFCT, Free Expression (2020), https://cdt.org/insights/human-rights-ngos-in-coalition-letter-to-gifct/.

[9] Arcadiy Kantor, Measuring Our Progress Combating Hate Speech, About Facebook (2020), https://about.fb.com/news/2020/11/measuring-progress-combating-hate-speech/.