Does social media bias in content moderation exist?

Allegations that Facebook and other platforms are biased in their content moderation decisions may not be so far-fetched, Jillian York writes

A 3D plastic representation of the Facebook logo is seen in front of displayed logos of social networks in this illustration in Zenica. REUTERS/Dado Ruvic

Amidst a surge of articles about social media companies over the past few months, one small piece of news may have gotten lost: A manual used by Facebook to train its thousands of content moderators contained a photograph of red-robed monks standing amongst heaps of dead bodies, above a caption reading ‘The Body of Muslims slaughted [sic] by Buddhists (Barma) [sic].’ According to Motherboard, the adjacent slide from Facebook reads: ‘This is a newsworthy exception for the victims of violence in Burma [Myanmar].’ (Disclosure: I’m quoted in that article.)

What made this image troublesome wasn’t the dead bodies (though indeed the photo represented a tragedy) or the glaring typos in the caption, but the fact that the photograph wasn’t from Myanmar at all: it was taken in Jiegu, China, in the aftermath of a 2010 earthquake, and the monks were Tibetan, not Burmese.

The image had circulated online for some time as ‘proof’ of Muslims being slaughtered in Myanmar, long before details of the actual persecution of the Rohingya in the country became widespread. Its inclusion in Facebook’s training manual may therefore have been an innocent error on the part of the guide’s author, but for a company with such vast resources and a professed commitment to combating fake news, it is a rather egregious one.

It is also, potentially, evidence of bias within some part of the company, or within one of its contractors. Social media companies are notoriously secretive about their content moderation practices – including the creation of manuals and the training of workers – so it’s difficult to know much about who makes content decisions. But for years, marginalized groups – such as Morocco’s secularist community – have accused Facebook and other platforms of bias in content moderation decisions, alleging that the moderators themselves likely come from one particular group or another.

In some parts of the world, it isn’t such a stretch to believe that such a bias exists. 7amleh, a Palestinian digital rights organization based in Haifa, has alleged that Facebook is lax in its moderation of Israeli hate speech while regularly silencing Palestinian speech. And in May of this year, the Lebanese organization SMEX launched a petition along with local band Al-Rahel Al-Kabir (‘The Great Departed’) after iTunes’ Middle East branch excluded five of the band’s songs – which mock religious fundamentalism, among other things – from its platform. Apple quickly responded, blaming UAE-based content aggregator Qanawat for the decision and pledging to work with another company and to include the songs in the future.

Similarly, for many years, Microsoft restricted certain search terms from Bing’s Middle East editions, including key terms related to sexual health. A few years ago, I spoke with a Microsoft staffer about the decision, explaining that only a handful of countries in the region actually censor such content online, and was told that it was a ‘market-based decision’ – in other words, Microsoft’s marketing team viewed the entire Arab Middle East as a monolith and chose to cater to the lowest common denominator. Eventually, the company caved and now restricts such content only in its Saudi Arabian edition.

Often it is free expression that is at stake, but the safety of certain communities can be as well. Although the image in Facebook’s training manuals was mislabeled, the persecution of Myanmar’s Rohingya community is very real, and some activists there have alleged that the company isn’t doing enough to stop violence from spreading. In response, the company removed several top military officials from the platform, a move that raises complicated questions about how companies should deal with troublesome state actors.

It’s important to note here what should be obvious: The decisions that companies make about which actors can stay and which are beyond the pale – as raised ad nauseam in last month’s debate about conspiracy theorist Alex Jones – are political. Companies, and the people who work at them, aren’t neutral; they bring with them experiences that inform corporate policies and practices.

Bias, whether in policy design, content moderation, or aggregation practices, can have real-world impact, further marginalizing communities that already experience prejudice or persecution offline. Such groups have often taken to the Internet to find community and to seek out a platform not otherwise readily available to them. But when that platform is run by corporations with minimal interest in, or knowledge of, the realities on the ground, it’s all too easy for biases to take hold.