
The De-Platforming Debate: Balancing Concerns Over Online Extremism with Free Speech

Posted on Sunday, January 31, 2021 in Blog Posts.

By Lucas Osborne

After Trump supporters stormed the U.S. Capitol on January 6, 2021, both Facebook and Twitter indefinitely suspended former President Trump from their platforms. Other platforms made similar decisions to restrict or ban Trump content. These decisions sparked both intense celebration and condemnation concerning the power of big tech executives and the proper way to combat the growth of the online extremism and misinformation that led to the riot at the Capitol.

For years, Trump used Twitter to undermine American elections, promote racist birther conspiracy theories, and even threaten nuclear war. He has been one of the biggest contributors to online misinformation, leading many to call for his removal from the platform ever since he claimed Barack Obama was not a United States citizen. However, does de-platforming really work?

The evidence shows it does. Trump is not the first contentious figure to be kicked off social media. In 2015, Reddit eliminated several of its most pernicious subreddits, and researchers studied the conduct of those subreddits’ followers after the ban. The study showed that most of those users stopped using Reddit entirely, and those who stayed posted 80% less extreme or hateful rhetoric on the site. Other scholars who study de-platforming concur that it reduces toxic speech on social media sites.

While extremists have responded to de-platforming by moving to smaller social media sites, their followings tend to diminish greatly. They have fewer subscribers, and fewer people interact with or like their posts. This indicates that de-platforming can help stifle the spread of conspiracy theories and extreme views by preventing the future recruitment of susceptible individuals. However, while de-platforming might stop the spread of misinformation, it can also lead to more isolated echo chambers that further radicalize a smaller group of “true believers.” Although de-platformed extremists have fewer followers, the followers they do have are considerably more engaged and active. Furthermore, it is harder to track the movements of extremist groups if they use more obscure platforms, or multiple platforms, to communicate.

An additional problem is that de-platforming in practice has been reactive, not preventative. Millions of Americans have already bought into Trump’s extreme rhetoric and lies. Polls show that most Republicans believe the election was stolen, although there is no evidence supporting that claim. The damage might already be done.

While those fearful of rising online extremism laud big tech’s actions, free speech proponents have condemned them. It is incredibly alarming that big tech executives can unilaterally decide to ban the President of the United States and severely compromise his ability to communicate with the American people. Trump had about 88 million followers on Twitter and 35 million on Facebook, and he could communicate his policy and cultural views directly to them at the push of a button. The system is broken if big tech’s bigwigs have the power to ban any political movement they don’t like.

While these social media platforms are private companies, in practice they operate as public squares for political and cultural discourse and face no meaningful competition. Parler, Gab, and other social media platforms that court extremism to grow their user bases are not a realistic threat to Twitter. Not only are the major platforms the only information source for many Americans, but they have also destroyed traditional news alternatives, like local newspapers, by draining their advertising revenue.

Besides banning users, the big platforms have adopted other measures that effectively de-platform users, like changing their algorithms to prevent political groups from being recommended to their users. While this action may discourage divisive and extreme conversations, it could also reduce political engagement and stifle civic participation. Some of these tradeoffs are inevitable, but big tech executives should not be the ones making them.

Facebook’s Oversight Board (FOB) is a step in the right direction, but it is not enough. The FOB is an independent body made up of 20 experts from around the world with the power to review Facebook’s content moderation decisions. Like a government administrative agency, it accepts public comments on company decisions and, like a court, issues rulings that sustain or overturn de-platforming decisions. However, Facebook pre-selects its adjudicators, and the company’s executives could always ignore the FOB’s decisions if they hurt the platform’s bottom line. Ultimately, big tech’s creation of faux “independent” bodies does not lead to proper checks and balances.

Balancing the legitimate concerns of those fearful of online extremism and those frightened of big tech’s immense power to manipulate public discourse is hard but possible. Big tech should be accountable to the American people, which necessitates the creation of a regulatory framework akin to those put in place to regulate telephone companies and broadcast television. This new framework should force social media platforms to formulate uniform content moderation policies designed to combat misinformation and violence, rather than serve the political biases and business interests of big tech executives.

Lucas Osborne is a 2L from Nashville, Tennessee. He hopes to work as a public defender or in some social justice role once he graduates.
