Concern about the influence of social media, particularly Facebook, on societies and politics around the world is growing in today’s digital culture. From political campaigns to teen bullying, social platforms like Facebook provide a space for any social movement, regardless of its intent.
Facebook, in its pursuit of connecting the next billion users to the internet, has found itself under fire in the past few months over its role in human rights violations in Myanmar. The Rohingya are a Muslim minority group who have co-existed with the country’s primarily Buddhist population for centuries. Like many minority groups, however, the Rohingya have historically endured discrimination. For the Rohingya this escalated dramatically after the 1962 coup that brought in a military junta, which stripped them of citizenship; during the decades that followed, their political leaders were tortured and jailed. Inevitably, many Rohingya fled to neighbouring Bangladesh, and since the most recent 2016-17 crisis more than 625,000 Rohingya (from a population of just over a million) have sought refuge there.
Facebook has a huge presence in Myanmar: 90 percent of the country’s population have mobile phones, and 60 percent of phone owners use Facebook or other social media sites to get news. A reach this deep brings a significant potential for misuse, a lesson that is being learned the hard way all over the world.
Anti-Muslim propaganda posted on Facebook by Myanmar’s citizens has perpetuated violence and hatred toward the Rohingya. An official UN report on the matter called Facebook a ‘useful instrument for those seeking to spread hate’. Four months after CEO Mark Zuckerberg’s pledge to act on the content, over 1,000 hateful posts, comments, images, and videos attacking the Rohingya in Myanmar’s main local language, Burmese, remained accessible on the platform. Facebook’s pervasive influence on real-world discrimination and hate is a force that often goes unrecognized, especially when posts are created in languages other than English. According to an investigative report by Reuters on Facebook’s Myanmar operation, in early 2015 only two of Facebook’s content reviewers could speak Burmese, so most Burmese-language content was reviewed by people who spoke only English. With so few native speakers among Facebook’s reviewers, hateful or discriminatory content in other languages can easily slip through the cracks.
Facebook has since admitted that it was ‘too slow to prevent misinformation and hate’ in Myanmar and has promised to be more proactive about policing content ‘where false news has had life or death consequences’. To this day, however, the company still relies heavily on users to report violent or hateful content because its systems struggle to interpret Burmese text. Even now, Facebook doesn’t have a single employee in the country of some 50 million people. Until Facebook establishes a better method for monitoring hate speech in other languages, NGOs and social activists are stepping in to combat these acts of social discrimination.
Facebook’s guidelines for reporting content
Facebook’s mission is ‘to give people the power to build community and bring the world closer together’, but that doesn’t come without a few guidelines. Facebook’s Community Standards, which any user can view, encourage expression in a safe environment, rooted in three main pillars: Safety, Voice, and Equity. The standards specifically govern hate speech in section three, ‘Objectionable Content’. According to Facebook, hate speech is defined as:
‘A direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We also provide some protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.’
Facebook separates attacks into three tiers, which you can read more about here. Richard Allan, VP EMEA Public Policy at Facebook, addressed concerns about hate speech in a post on the company’s digital newsroom last year. Allan points to the company’s commitment to remove content containing hate speech whenever it is brought to its attention, but admits the company ‘is not perfect when it comes to enforcing our policy’ and oftentimes ‘get[s] it wrong’. When enforcing its hate speech policy, Facebook assesses both context and intent, which are the determining factors in what Allan describes as ‘ambiguous language’.
So what can activists do to help?
Best practices for reporting content
For those looking to do their part in reporting hate speech on Facebook, here are the proper steps to take:
- On the offending post, look for the three dots (the ellipsis menu) in the upper right-hand corner, and click them.
- A list of options will appear. Choose ‘Give feedback on this post’.
- The feedback options include hate speech specifically, but review the other options too and select the most accurate response so the report is handled effectively.