
Algospeak: Jailbreaking the Marketplace of Ideas

Posted on Friday, March 8, 2024 in Blog Posts.

By Amaris Aloise

Social media algorithms have become the captors of the marketplace of ideas, but they have also captured its importance to US culture. With the exponential rise in algorithmic content moderation, the marketplace of ideas has become dependent on internet culture and slang, resulting in a “chronically online”[1] censorship of expression known as “algospeak.” A “sub language” created by social media users, algospeak refers to the replacement of words or phrases disfavored or banned by a platform’s algorithm and community guidelines with ones that evade detection by algorithmic content moderators.[2] While social media platforms characterize their content moderation as a defense against hate speech and misinformation, their algorithms render the discussion of sensitive (yet important) topics nearly impossible, or accessible only to a narrow few.[3]
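The dynamic described above can be illustrated with a toy sketch. This is a hypothetical, simplified model, not any platform’s actual moderation system: a naive blocklist filter flags posts containing banned terms, and an algospeak substitution (here, “unalived” for “dead,” a swap widely reported in press coverage) slips past it unchanged. The blocklist and helper names are invented for illustration.

```python
# Hypothetical illustration only: a naive keyword filter of the kind
# algospeak is designed to evade. Real moderation systems are far more
# sophisticated, but the cat-and-mouse dynamic is the same.

BANNED_TERMS = {"dead", "kill"}  # toy blocklist, invented for this example

def is_flagged(post: str) -> bool:
    """Flag a post if any banned term appears as a standalone word."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BANNED_TERMS)

original = "My grandmother is dead."
# An algospeak substitution reported in press coverage: "unalived" for "dead".
coded = original.replace("dead", "unalived")

print(is_flagged(original))  # the literal term is caught
print(is_flagged(coded))     # the coded term evades the filter
```

Because the filter matches only the literal terms on its list, each new substitution forces the list to grow, and each update to the list prompts users to coin a new substitution, which is the adaptation cycle the post describes.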

Clearly, content creators and consumers alike are not blind to the censorship of their ideas—algospeak is a consumer-created form of expression.[4] It is equally clear, however, that consumers want to engage in the marketplace of ideas, even if those ideas contain sensitive or widely contested topics that might be filtered by algorithmic content watchdogs.[5] Although the fruitfulness of the marketplace of ideas has been hotly debated in recent years, the advent of algospeak proves that counterspeech[6] is not only something that users seek but also something that users are willing to create entire “sub languages” to express.[7] Indeed, even after algorithms adapt to algospeak, users create new phrases to escape detection, thus highlighting users’ deep desire to facilitate and engage in the dissemination of ideas.[8]

Although algospeak facilitates expression, it comes with drawbacks. It can cause users to misinterpret ideas, place undue emphasis on ideas, and even impede the discussion of ideas deemed acceptable by platforms. The very use of algospeak can lead to misinterpretations—it implies that, because a platform deemed a word or phrase objectionable, the idea communicated by encoding that word or phrase must be de facto objectionable. Additionally, the attention-grabbing form of expressing the idea could place more emphasis on the idea than it otherwise would have received in an uncensored marketplace. Although this could lead to greater discussion, it could also lead to emphasizing a topic that a platform has deemed impermissible. Indeed, the use of an asterisk or codeword invites a sense of curiosity and bestows the writer with an aura of rebellion that goads users to take a closer look. Thus, because of the use of algospeak, a platform’s content moderation may draw more attention to an objectionable idea, which upends its goal of eliminating that speech. Conversely, algospeak can impede the discussion of ideas a platform recognizes as permissible by requiring additional effort and self-censorship that may dissuade a creator from publishing content that they otherwise would have published. Moreover, because algospeak requires learning a “sub language” to understand and engage in discussion, it can create echo chambers by making the discussion of ideas accessible to only those who can navigate the censored terrain of the algorithm.[9]

Such an enduring resistance to content moderation should not be ignored in the debate of First Amendment principles and reinforces the importance of counterspeech. Today, many opponents to the counterspeech doctrine claim that counterspeech cannot stand as an equal remedy to hate speech because it echoes only privileged voices.[10] However, algospeak proves that counterspeech continues to thrive despite strict regulations against hate speech and evinces the serious harm that follows when permissible ideas are swallowed up by the broad categorizations of content moderation regimes. Algospeak serves as a case study of the importance of counterspeech, the enduring need for the marketplace of ideas in US culture, and the harms that persist despite content-based regulations on speech.

Prior to law school, Amaris studied at California State University, Fullerton and majored in Anthropology and Public Administration. Amaris plans to pursue a career in litigation after law school.

[1] Abrar Al-Heeti, ‘Chronically online’: What the Phrase Means, and Some Examples, CNET (Sept. 9, 2021, 10:26 AM) (“Chronically online describes those who spend so much time online it skews their sense of reality and hinders their ability to effectively communicate about topics like politics or social justice because they lack real-world experience.”).

[2] Taylor Lorenz, Internet ‘Algospeak’ is Changing Our Language in Real Time, from ‘Nip Nops’ to ‘Le Dollar Bean’, The Washington Post (Apr. 8, 2022, 7:00 AM); Roger J. Kreuz, What is ‘Algospeak’? Inside the Newest Version of Linguistic Subterfuge, The Conversation (Apr. 13, 2023, 8:38 AM).

[3] Kreuz, supra note 2 (explaining that there are many reasons why people may want to engage in the discussion of sensitive topics, including finding community and opportunity for therapeutic discussion).

[4] See id.

[5] See id.

[6] Counterspeech posits that when negative speech enters the marketplace, the best solution is to counter it with positive speech. Whitney v. California, 274 U.S. 357, 377 (1927) (Brandeis, J., concurring) (“If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”).

[7] See Lorenz, supra note 2; Anthony Tellez, ‘Mascara,’ ‘Unalive,’ ‘Corn’: What Common Social Media Algospeak Words Actually Mean, Forbes (Jan. 31, 2023, 3:36 PM) (“almost one-third of Americans who use social media have said they use emojis and altered phrases to communicate banned terms”).

[8] Id.

[9] For example, Julia Fox issued an apology after misinterpreting a TikTok that described a sexual assault by replacing the banned term with “mascara.” Fox, thinking that the post was really about makeup mascara, made an insensitive comment and faced significant backlash from algospeak-fluent users who accused her of “diminishing sexual assault.” Dana Di Placido, Julia Fox Apologizes After Misinterpreting TikTok’s Secret Meaning Of ‘Mascara’, Forbes (Jan. 27, 2023, 6:51 PM).

[10] Ruth Coustick-Deal, What’s Wrong with Counter Speech, Medium (Feb. 6, 2017).
