
What should Twitter censor?

Recent headlines accusing Twitter of political censorship have abounded. First, the sitting president of the United States accused Twitter of silencing Republicans through a method commonly referred to as ‘shadow banning’. His inflammatory tweet was followed by accusations in the media – from both the usual suspects and those that ought to know better – prompting Twitter to ‘[set] the record straight’ about their practices in a blog post.

Accusations of ‘shadow banning’ have existed for a long time on fora like Reddit and 4Chan, but only made it to the mainstream after now-disgraced alt-right Breitbart provocateur Milo Yiannopoulos claimed in an ‘exclusive’ that a Twitter insider had admitted that the company had been engaging in the practice of shadow banning by surreptitiously hiding the tweets of right-wing users from public view.

‘Shadow banning’ may not be a thing, but that doesn’t mean these companies don’t engage in what amounts to political censorship. Yes, censorship – we think of it as something that only governments have the power to engage in, but in fact, the term applies whenever authority is wielded to restrict speech. Furthermore, ‘censorship’ as a term is value-neutral – just because we might agree with a certain set of restrictions, they are no less censorship.

American social media companies are operating well within the law when they restrict content, be it nude art, jokes about a Pepsico snack containing plastic, or a democratically-elected political party. If they wanted to take a stand against cat videos (heaven forbid!), they would be within their legal rights.

So when a company restricts dangerous conspiracy theories, or terrorist recruitment videos, we very well might cheer or sigh in relief – while those behind the content are undoubtedly angered. At the same time, when groups working for justice – such as Black Lives Matter – are silenced by a company, we see the injustice.

Of course, there’s no requirement for these companies to be politically neutral. And so it’s important that we recognize that they aren’t: Every decision that they make about their policies, and the content they remove or keep up, is indeed a political decision. We will each undoubtedly disagree with some of these policies (I for one am particularly peeved by Facebook’s policies on women’s bodies), and support others. And the folks across the political aisle will undoubtedly have an entirely different take on the matter.

This is one major reason why, historically, so many have fought for freedom of expression: The idea that a given authority could ever be neutral or fair in creating or applying rules about speech is one that gives many pause. In Europe’s democracies, we nevertheless accept that there will be some restrictions – acceptable within the framework of the Universal Declaration of Human Rights and intended to prevent real harm. And, most importantly, decided upon by democratically-elected representatives.

When it comes to private censorship, of course, that isn’t the case. Policies are created by executives, sometimes with additional consultations with external experts, but are nonetheless top-down and authoritarian in nature. And so, when Twitter makes a decision about what constitutes ‘healthy public conversation’ or a ‘bad-faith actor,’ we should question those definitions and how those decisions are made, even when we agree with them.

We should push them to be transparent about how their policies are created, how they moderate content using machines or human labor, and we should ensure that users have a path for recourse when decisions are made that contradict a given set of rules (a problem which happens all too often).

That’s exactly what some civil society groups have been trying to do (disclosure: I work for one such group, the Electronic Frontier Foundation). Earlier this year, a group of non-profit organizations and academics came together to create the Santa Clara Principles on Transparency and Accountability in Content Moderation, which set forth initial steps companies should take to provide meaningful due process to their users.

This work builds on the work of groups like Ranking Digital Rights, whose Corporate Accountability Index evaluates some of the world’s most powerful companies on their commitments to freedom of expression (as well as privacy) and urges companies to improve on their practices.

But it’s important that the public weigh in as well – as I’ve written before, companies are torn in various directions, urged by governments, special interest groups, religious bodies, and even the sitting US president to censor – or restore – certain content. These companies are never going to get every decision right, but if we insist that they create more transparent guidelines, publish more information about their decision-making processes, and ensure every user has the right to remedy, we just might get a better deal.
