Those worried about the alt-right argue the Internet has fundamentally changed how free speech works. The old mechanisms that allowed fringe ideas to be voiced without being given a platform no longer exist. Instead of quietly fading away, ideas like white supremacy can find their own corner of the Internet where they flourish.
On the Internet, ugly ideas aren’t discarded, they’re supercharged; one recent report found a 600 percent increase in Twitter followers of major white nationalist movements since 2012. J.M. Berger, a fellow with the International Centre for Counter-Terrorism in The Hague who wrote the report, says social media give extremists some powerful tools for growth: anonymity and an easy way to seek out people with similar interests. “That’s the real danger,” he said in a recent interview. “If you were a radical Druid in 1950 living in Peoria, Illinois, you could go your whole life without ever meeting anybody who shared your views. Now if you’re that same person, you get online, and within an hour you can be following a hundred other radical Druids. And in two or three weeks, you can all be getting together to plant trees.”
Banning white supremacists from Twitter seems like a no-brainer, but it could lead to unintended consequences. The ban itself could mobilize hate groups and lend them legitimacy. Further complicating things, deciding what qualifies as hate speech is a harder task than it seems.
Perhaps outright racist remarks like those made toward Leslie Jones seem like obvious candidates for removal. And maybe people can agree that accounts belonging to white-power groups shouldn't be allowed to remain. But where is the line between a political group that advocates outright racism and the kind of remarks Donald Trump has made about Mexicans and Muslims? Or Breitbart News?

Critics have argued that Twitter only tolerates such behavior because it is desperate for engagement and user growth, an argument similar to the one made about why Facebook doesn't care about fake news. There is probably some truth to that, given Twitter's financial situation. But if the company starts removing accounts belonging to anyone who says anything remotely offensive, it will spend all of its time doing so, and it will probably alienate as many users as it attracts. Do we really want Twitter to be the one that decides what constitutes appropriate speech, and who is allowed to exercise it?