Report: Hate speech on Facebook a major issue
The first-ever regional report into hate speech regulation on Facebook has revealed that hate speech remains a major problem on the platform, with inadequate mechanisms to address it at the platform level.
The report, Regulating Hate Speech in the Asia Pacific, was authored by Aim Sinpeng (University of Sydney), Fiona Martin (University of Sydney), Katharine Gelber (University of Queensland), and Kirril Shields (University of Queensland).
The researchers analysed a cross-section of public pages administered by LGBTQI+ groups in Australia, the Philippines, India, Indonesia and Myanmar, asking three central questions:
1) What constitutes hate speech in different Asia-Pacific jurisdictions?
2) How well are Facebook’s policies and procedures positioned to identify and regulate this type of content?
3) How can we understand the spread of hate speech in this region with a view to formulating better policies to address it?
Among the report's key findings:
Hate speech is widespread on Facebook, especially where it targets marginalised groups or communities;
The language- and context-dependent nature of hate speech is not yet effectively captured by Facebook’s classifiers or its global Community Standards and editorial policy;
Facebook’s definition of hate speech is more comprehensive than most legislation in the Asia Pacific, where there is a regulatory deficit;
Facebook offers little or no support if pages and their participants are targeted by hate speech;
In India, Indonesia and the Philippines, LGBTQI+ groups are exposed to an unacceptable volume of discriminatory, hateful and threatening posts;
Page owners are disinclined to report and flag hate speech because they perceive that doing so has little impact on Facebook’s moderation practices;
Training and support on these issues are required for anyone with custody of a digital social space, in particular non-professional moderators and administrators.
Download the full report
Australian Community Managers see these realities play out. We work with a wide range of organisations maintaining communities or audiences within social media platforms and we're responsible for the governance of those spaces.
ACM was very happy to support this report and contribute to its formation, in line with our national mission to help build healthy, thriving communities and to scale the benefits of online community management practice.
New hate speech training
ACM is developing on-demand hate speech training for online community managers (professional and volunteer), to be rolled out later in 2021 with the input and support of the report's authors.
This national-first training will give online community managers, social media managers, moderators and anyone overseeing interactions in a digital social setting the tools and the confidence to mitigate and manage hate speech should it occur.
In the meantime, we offer live training options in moderation and governance, including how to deal with hate speech. If you're interested in learning more, contact us.