Facebook takes hate speech seriously… as long as it’s in English


If, like many Australian Muslims, you have reported hate speech to Facebook and received an automated response saying it doesn’t breach the platform’s Community Standards, you are not alone.

We and our team are the first Australian social scientists to receive funding through Facebook’s content policy research awards, which we used to investigate hate speech on LGBTQI+ community pages in five Asia Pacific countries: India, Myanmar, Indonesia, the Philippines, and Australia.

We looked at three aspects of hate speech regulation in the Asia Pacific region over 18 months. First, we mapped hate speech law in our case study countries, to understand how this problem might be legally countered. We also looked at whether Facebook’s definition of “hate speech” included all recognized forms and contexts for this troubling behavior.

In addition, we mapped Facebook’s content regulation teams, speaking to staff about how the company’s policies and procedures worked to identify emerging forms of hate.

Even though Facebook funded our study, it said that, for privacy reasons, it could not give us access to a dataset of the hate speech it removes. We were therefore unable to test how effectively its in-house moderators classify hate.

Instead, we captured posts and comments from the top three LGBTQI+ public Facebook pages in each country, looking for hate speech that had been missed by either the platform’s machine intelligence filters or its human moderators.

Admins feel let down

We interviewed the administrators of these pages about their experience of moderating hate, and what they thought Facebook could do to help them reduce abuse.

They told us Facebook would often reject their reports of hate speech, even when a post clearly breached its Community Standards. In some cases, posts that had originally been removed were reinstated on appeal.