Kicking people off social media isn’t about free speech

The debate over deplatforming Trump has overshadowed how effective social media bans are at fighting extremism.

Twitter suspended the @realdonaldtrump account after the Capitol riot on January 6.
Florian Gaertner/Photothek via Getty Images

Within days of the January 6 Capitol insurrection, outgoing President Donald Trump’s internet presence was in upheaval. Trump’s social media accounts were suspended across Facebook, Twitter, YouTube, Instagram, Snapchat, Twitch, and TikTok.

The same was true for many of Trump’s more extremist followers. Twitter suspended more than 70,000 accounts primarily dedicated to spreading the false right-wing conspiracy theory QAnon. Apple, Google, and Amazon Web Services banned the right-wing Twitter alternative Parler, effectively shutting down the site indefinitely (though it’s attempting to return) and relegating many right-wingers to the hinterlands of the internet.

Permanently revoking users’ access to social media platforms and other websites — a practice known as deplatforming — isn’t a new concept; conservatives have been railing against it and other forms of social media censure for years. But Trump’s high-profile deplatforming has spawned new confusion, controversy, and debate.

Many conservatives have cried “censorship,” believing they’ve been targeted by a coordinated agreement among leaders in the tech industry in defiance of their free speech rights. On January 13, in a long thread about the site’s decision to ban Trump, Twitter CEO Jack Dorsey rejected that idea. “I do not believe this [collective deplatforming] was coordinated,” he said. “More likely: companies came to their own conclusions or were emboldened by the actions of others.”

Still, the implications for free speech have worried conservatives and liberals alike. Many have expressed wariness about the power social media companies have to simply oust whoever they deem dangerous, while critics have pointed out the hypocrisy of social media platforms spending years bending over backward to justify not banning Trump despite his posts violating their content guidelines, only to make an about-face during his final weeks in office. Some critics, including Trump himself, have even floated the misleading idea that social media companies might be brought to heel if lawmakers were to alter a fundamental internet law called Section 230 — a move that would instead curtail everyone’s internet free speech.

All of these complicated, chaotic arguments have clouded a relatively simple fact: Deplatforming is effective at rousting extremists from mainstream internet spaces. It’s not a violation of the First Amendment. But thanks to Trump and many of his supporters, it has inevitably become a permanent part of the discourse involving free speech and social media moderation, and the responsibilities that platforms can and should have to control what people do on their sites.

Studies show that deplatforming works

We know deplatforming works to combat online extremism because researchers have studied what happens when extremist communities get routed from their “homes” on the internet.

Radical extremists across the political spectrum use social media to spread their messaging, so deplatforming those extremists makes it harder for them to recruit. Deplatforming also decreases their influence; a 2016 study of ISIS deplatforming found, for example, that ISIS “influencers” lost followers and clout as they were forced to bounce around from platform to platform. And when was the last time you heard the name Milo Yiannopoulos? After the infamous right-wing instigator was banned from Twitter in 2016 and subsequently lost his other social media homes, his influence and notoriety plummeted. Right-wing conspiracy theorist Alex Jones met a similar fate when he and his media network Infowars were deplatformed across social media in 2018.

The more obscure and hard to access an extremist’s social media hub is, the less likely mainstream internet users are to stumble across the group and be drawn into its rhetoric. That’s because major platforms like Facebook and Twitter generally act as gateways for casual users; from there, they move into the smaller, more niche platforms where extremists might congregate. If extremists are banned from those major platforms, the vast majority of would-be recruits won’t find their way to those smaller niche platforms.

Those extra hurdles — added obscurity and difficulty of access — also apply to the in-group itself. Deplatforming disrupts extremists’ ability to communicate with one another, and in some cases creates a barrier to continued participation in the group. A 2018 study tracking a deplatformed British extremist group found that not only did the group’s engagement decrease after it was deplatformed, but so did the amount of content it published online.

“Social media companies should continue to censor and remove hateful content,” the study’s authors concluded. “Removal is clearly effective, even if it is not risk-free.”

Deplatforming can change cultural mores

Deplatforming impacts the culture of both the platform that’s doing the ousting and the group that gets ousted. When internet communities send a message of zero tolerance toward white supremacists and other extremists, other users also grow less tolerant and less likely to indulge extremist behavior and messaging. For example, after Reddit banned several notorious subreddits in 2015, leaving many toxic users no place to gather, a 2017 study of the remaining communities on the site found that hate speech decreased across Reddit.

That may seem like an obvious takeaway, but it perhaps needs to be repeated: The element of public shaming involved in kicking people off a platform reminds everyone to behave better. In the eyes of many, the message of zero tolerance that tech companies sent by deplatforming Trump is long overdue; millions of Twitter users spent years pressuring the company to “ban the Nazis” and other white supremacists whose rhetoric Trump frequently echoed on his own account. But it is a welcome message nonetheless.

As for the extremists, the opposite effect often takes place. Extremist groups have typically had to sand off their more extreme edges to be welcomed on mainstream platforms. So when that still isn’t enough and they get booted off a platform like Twitter or Facebook, wherever they go next tends to be a laxer, less restrictive, and, well, more extreme corner of the internet. That shift often changes the nature of the group, making its rhetoric even more extreme.

Think about alt-right users getting booted off 4chan and flocking to even more niche and less moderated internet forums like 8chan, where they became even more extreme; a similar trajectory happened with right-wing users fleeing Twitter for explicitly right-wing-friendly spaces like Gab and Parler. The private chat platform Telegram, which rarely steps in to take action against the many extremist and radical channels it hosts, has become popular among terrorists as an alternative to more mainstream spaces. Currently, Telegram and the encrypted messaging app Signal are gaining waves of new users as a result of recent purges at mainstream sites like Twitter.

The more niche and less moderated an internet platform is, the easier it is for extremism to thrive there, away from public scrutiny. Because fewer people are likely to frequent such platforms, they can feel more insular and foster ideological echo chambers more readily. And because people tend to find their way to these platforms through word of mouth, they’re often primed to receive the ideological messages that users on the platforms might be peddling.

But even as extreme spaces get more extreme and agitated, there’s evidence to suggest that depriving extremist groups of a stable and consistent place to gather can make the groups less organized and more unwieldy. As a 2017 study of ISIS Twitter accounts put it, “The rope connecting ISIS’s base of sympathizers to the organization’s top-down, central infrastructure is beginning to fray as followers stray from the agenda set for them by strategic communicators.”

Scattering extremists to the far corners of the internet essentially forces them to play online games of telephone about their messaging, goals, and plans of action. That makes the group harder to control, more likely to be diverted from its stated cause, and less likely to be corralled into action.

So far, all of this probably seems like a pretty good thing for the affected platforms and their user bases. But many people feel wary of the power dynamics in play, and question whether a loss of free speech is at stake.

Deplatforming isn’t a violation of free speech — even if it feels like it

One of the most frequent arguments against deplatforming is that it’s a violation of free speech. This outcry is common whenever large communities are targeted based on the content of their tweets, as when Twitter finally did start banning Nazis by the thousands. The bottom line is that social media purges are not subject to the First Amendment, which protects Americans’ right to free speech from government interference, not from the decisions of private companies. But many people think social media purges are akin to censorship, and it’s a complicated subject.

Andrew Geronimo is the director of the First Amendment Clinic at Case Western Reserve University’s law school. He explained to Vox that the reason there’s so much debate about whether social media purges qualify as censorship comes down to the nature of social media itself. In essence, he told me, websites like Facebook and Twitter have replaced more traditional public forums.

“Some argue that certain websites have gotten so large that they’ve become the de facto ‘public square,’” he said, “and thus should be held to the First Amendment’s speech-protective standards.”

In an actual public square, First Amendment rights would probably apply. But no matter how much social media may resemble that kind of real space, the platforms and the corporations that own them are — at least for now — considered private businesses rather than public spaces. And as Geronimo pointed out, “A private property owner isn’t required to host any particular speech, whether that’s in my living room, at a private business, or on a private website.”

“The First Amendment constrains government power, so when private, non-governmental actors take steps to censor speech, those actions are not subject to constitutional constraints,” he said.

This distinction is confusing even to the courts. In 2017, while ruling on a related issue, Supreme Court Justice Anthony Kennedy called social media “the modern public square,” noting, “a fundamental principle of the First Amendment is that all persons have access to places where they can speak and listen, and then, after reflection, speak and listen once more.” And while social media can seem like a place where few people have ever listened or reflected, it’s easy to see why the comparison is apt.

Still, the courts have consistently rejected free speech arguments in favor of protecting the rights of social media companies to police their sites the way they want to. In one 2019 decision, the Ninth Circuit Court of Appeals cited the Supreme Court’s assertion that “merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.” The courts generally reinforce the rights of website owners to run their websites however they please, which includes writing their own rules and booting anyone who misbehaves or violates those rules.

Geronimo pointed out that many of the biggest social media companies have already been enacting restrictions on speech for years. “These websites already ban a lot of constitutionally protected speech — pornography, ‘hate speech,’ racist slurs, and the like,” he noted. “Websites typically have terms of service that contain restrictions on the types of speech, even constitutionally protected speech, that users can post.”

But that hasn’t stopped critics from raising concerns about the way tech companies removed Trump and many of his supporters from their platforms in the wake of the January 6 riot at the Capitol. In particular, Trump himself claimed a need for Section 230 reform — that is, reform of the pivotal clause of the Communications Decency Act that basically allows the internet as we know it to exist.

Section 230 shouldn’t be a part of the conversation around deplatforming — but Republicans want it to be

Known as the “safe harbor” rule of the internet, Section 230 of the 1996 Communications Decency Act is a pivotal legal clause and one of the most important pieces of internet legislation ever created. It holds that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Simply put, Section 230 protects websites from being held legally responsible for what their users say and do while using said websites. It’s a tiny phrase but a monumental concept. As Geronimo observed, Section 230 “allows websites to remove user content without facing liability for censoring constitutionally protected speech.”

But Section 230 has increasingly come under fire from Republican lawmakers seeking to more strictly regulate everything from sex websites to social media sites where conservatives allege they are being unfairly targeted when their opinions or activities get them suspended, banned, or censured. These lawmakers, in an effort to force websites like Twitter to allow all speech, want to make websites responsible for what their users post. They seem to believe that altering Section 230 would mean websites could face penalties for censoring conservative speech, even when that speech violates a website’s rules (and despite several inherent contradictions in that position). But as Recode’s Sara Morrison summed up, messing with Section 230 creates a huge set of problems:

This law has allowed websites and services that rely on user-generated content to exist and grow. If these sites could be held responsible for the actions of their users, they would either have to strictly moderate everything those users produce — which is impossible at scale — or not host any third-party content at all. Either way, the demise of Section 230 could be the end of sites like Facebook, Twitter, Reddit, YouTube, Yelp, forums, message boards, and basically any platform that’s based on user-generated content.

So, rather than guaranteeing free speech, restricting the power of Section 230 would effectively kill free speech on the internet as we know it. As Geronimo told me, “any government regulation that would force [web companies] to carry certain speech would come with significant First Amendment problems.”

However, Geronimo also allows that just because deplatforming may not be a First Amendment issue doesn’t mean that it’s not a free speech issue. “People who care about free expression should be concerned about the power that the largest internet companies have over the content of online speech,” he said. “Free expression is best served if there are a multitude of outlets for online speech, and we should resist the centralization of the power to censor.”

And indeed, many people have expressed concerns about deplatforming as an example of tech company overreach — including the tech companies themselves.

Deplatforming is a tricky free speech issue, but when it comes to online extremism, there may be other issues to prioritize

In the wake of the attack on the Capitol, a public debate arose about whether tech and social media companies were going too far in purging extremists from their user bases and shutting down specific right-wing platforms. Many observers have worried that the moves demonstrate too much power on the part of companies to decide what kinds of opinions are sanctioned on their platforms and what aren’t.

“A company making a business decision to moderate itself is different from a government removing access, yet can feel much the same,” Twitter’s Jack Dorsey stated in his self-reflective thread on banning Trump. He went on to express hope that a balance between over-moderation and deplatforming extremists can be achieved.

This is by no means a new conversation. In 2017, when the internet infrastructure company Cloudflare banned a notorious far-right neo-Nazi site, Cloudflare CEO Matthew Prince opined on his own power. “I woke up this morning in a bad mood and decided to kick them off the Internet,” he wrote in a subsequent memo to his employees. “Having made that decision we now need to talk about why it is so dangerous. [...] Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.”

But while Prince was hand-wringing, others were celebrating what the ban meant for violent hate groups and extremists. And that is really the core issue for many, many members of the public: When extremists are deplatformed online, it becomes harder for them to commit real-world violence.

“Deplatforming Nazis is step one in beating far right terror,” antifa activist and writer Gwen Snyder tweeted, in a thread urging tech companies to do more to stop racists from organizing on Telegram. “No, private companies should not have this kind of power over our means of communication. That doesn’t change the fact that they do, or the fact that they already deploy it.”

Snyder argued that conservatives’ fear of being penalized for the violence and hate speech they may spread online ignores that penalties for that offense have existed for years. What’s new is that now the consequences are being felt offline and at scale, as a direct result of the real-world violence that is often explicitly linked to the online actions and speech of extremists. The free speech debate obscures that reality, but it’s one that social media users who are most vulnerable to extremist violence — people of color, women, and other marginalized communities — rarely lose sight of. After all, while people who’ve been kicked off Twitter for posting violent threats or hate speech may feel like they’re the real victims here, there’s someone on the receiving end of that anger and hate, sometimes even in the form of real-world violence.

The deplatforming of Trump already appears to be working to curb the spread of election misinformation that prompted the storming of the Capitol. And while the debate about the practice will likely continue, it seems clear that the expulsion of extremist rhetoric from mainstream social media is a net gain.

Deplatforming won’t single-handedly put a stop to the spread of extremism across the internet; the internet is a big place. But the high-profile banning of Trump and the large-scale purges of many of his extremist supporters seem to have brought about at least some recognition that deplatforming is not only effective, but sometimes necessary. And seeing tech companies attempt to prioritize the public good over extremists’ demand for a megaphone is an important step forward.
