Australian Community Managers

Here's how Facebook can improve moderation

Though not a community platform by design, Facebook remains a popular site of both community building and social marketing in Australia due to its ubiquity.

Despite its status as the de facto platform for many, professional community or social media practitioners widely agree it has critical deficits in moderation functionality compared to standalone community platforms. Some practitioners report they cannot confidently advocate adoption of Facebook as a community platform until these issues are addressed.

We polled our membership of online community professionals, social media managers and moderators about the top issues they encounter moderating or managing interactions on Facebook.

The following is a collective list of our top recommendations for improved moderation functionality on Facebook’s platform. ACM has provided it to Facebook directly, with an invitation to participate in collaboration and co-design processes. There are many other features that would assist; however, we have focused on those that in our view are the most urgent and offer the greatest immediate effect.

We know moderation is indispensable work in creating safe and constructive digital cultures. Facebook is an advertising business optimised for maximum engagement, and some of these recommendations would run counter to that priority. However, we note too that Facebook is struggling to manage moderation load across its platform. Equipping community management or social media professionals with the toolsets to share in moderation load can deliver better outcomes for all - safer spaces for participants and increased usage for Facebook.

Ability to switch off comments on a post

The single most important moderation feature sought by community or social media managers on Facebook is the ability to turn off commenting on specific posts within a Page.

Organisations often know ahead of time if a particular piece of content is likely to attract high-risk, inappropriate or harmful comments from users. In these instances, where they are still obliged to share that content (as a news organisation might be) or still wish to inform people, switching off comments would let them publish while reducing the risk and likelihood of harm by limiting engagement. The same is true for advertising content, which from an audience management or community management perspective is subject to the same governance obligations and best practices as non-advertising content.

Currently, many organisations using Facebook post less frequently, delete live posts after a limited time, or choose not to post at all as a workaround for the lack of comment configurability and risk management support.

This feature exists in Groups and should be available across all contexts to minimise harms.

Comment pausing

Both Facebook Page and Group administrators would similarly like the ability to temporarily pause or ‘snooze’ comments on all or selected posts until they are unpaused or ‘woken up’. This affords frontline digital practitioners critical time amidst a crisis, and lets them temporarily limit the spread of high-risk or harmful content.

It is also a high-value tool for smaller organisations or those who cannot afford full-time moderation. A small business could pause comments when no one is available to monitor interactions, allowing it to engage audiences or community members responsibly and legally.

Ability to switch off sharing on select posts

As with the ability to comment, there are many occasions where Page or Group administrators would like to limit the distribution of content to minimise the risk of harm or incident.

Ability to edit user post content

Best practices around transparency and authenticity of engagement mean that community or social media managers would rarely edit user comments; however, there are legitimate use cases where this is an important override function.

They include but are not limited to:

  • Removing personal or identifying information from an otherwise innocuous post to protect the privacy of individuals and remain legislatively compliant

  • Removing mentions of specific products and services on a Page or Group in a regulated industry, such as medicine and healthcare

  • Removing aspects of a post that may contain high-risk disinformation or other prohibited content (where the rest of the content carries no issues).

Currently Page or Group administrators have to hide or outright delete content such as the above, often removing replies or valuable interactions in the process. Any such edits should be visibly noted to support transparency and minimise inappropriate use.

Reporting content to administrators

We understand the importance of allowing users to report content directly to Facebook. However, where a trained community manager or trained social media manager is present as administrator of a Facebook Page, we believe there should also be the ability to report content directly to the administrator - as per Groups functionality.

Community management includes incentivising self-governance amongst users and participants, encouraging the reporting of content that doesn’t align with social norms and shared values. Currently these reports are sent to Facebook, where they are managed using centralised and decontextualised systems.

If this functionality were added, community or social media professionals could efficiently share moderation load, ensure moderation actions are contextually appropriate, and work confidently with participants to build healthy and safe spaces.

Moderation listing and user access implications

Moderation isn’t binary, and most professional community or social media managers ban users only as a last resort. Professional moderation consequently often involves the creation of ‘watch lists’ - records noting inappropriate behaviour or content from an individual, warning that user, and continuing to monitor in case the behaviour persists. Watch list functionality supported by administrator-designed, automated workflows would significantly improve the effectiveness of this widespread approach.
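For illustration, the sketch below reflects the kind of watch-list record many teams keep today in spreadsheets or internal tools outside Facebook itself. The field names and escalation thresholds are illustrative assumptions only, not an existing Facebook feature or API.

```python
# Hypothetical watch-list record kept outside Facebook (e.g. in a team's own
# tooling). Field names and thresholds are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class WatchListEntry:
    user_name: str                                  # display name as it appears on the Page or Group
    incidents: list = field(default_factory=list)   # (timestamp, note, link to content)
    warnings_issued: int = 0

    def log_incident(self, note: str, link: str) -> None:
        self.incidents.append((datetime.now(timezone.utc), note, link))

    def suggested_action(self) -> str:
        # Example escalation ladder only: warn first, ban as a last resort.
        if self.warnings_issued == 0:
            return "issue warning"
        if len(self.incidents) >= 3:
            return "escalate / consider ban"
        return "continue monitoring"
```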

Likewise, the ability to proactively whitelist or greylist users for pre-moderation or limited participation would assist in minimising risk, especially for sensitive topics or vulnerable communities (a ‘near enough’ equivalent exists in Groups, but not yet in Pages).

Support with record keeping

Community managers (and social media managers who are performing moderation or community management functions) keep records around moderation as a best practice. This is important for risk management, regulatory compliance and cooperative working across teams or organisations.

The following functionality would reduce the time and labour involved in record keeping around key moderation actions, and help create a broader culture of safety and transparency:

  • Ability to download comment threads for record keeping (an interim workaround is sketched after this list)

  • Ability to automatically screenshot or otherwise save posts within a Page or Group against the relevant user’s profile before deletion, for record keeping (i.e. just as Facebook currently shows whether someone has previously been muted, a note that a post was removed should accompany this)

  • Moderation dashboards that offer a holistic view of moderation actions and user interaction history
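On the first item, some teams already approximate a comment-thread export with the Graph API’s comments edge while no native option exists. The sketch below outlines that workaround under stated assumptions: the API version, post ID and access token are placeholders, and the fields returned depend on your permissions and API version.

```python
# Interim workaround sketch: export a post's comment thread to CSV for record
# keeping via the Graph API comments edge. POST_ID and ACCESS_TOKEN are
# placeholders; available fields depend on permissions and API version.
import csv
import requests

GRAPH = "https://graph.facebook.com/v19.0"   # substitute the API version you target
POST_ID = "PAGE_ID_POST_ID"                  # placeholder
ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"      # placeholder

def export_comments(post_id: str, out_path: str) -> None:
    url = f"{GRAPH}/{post_id}/comments"
    params = {"access_token": ACCESS_TOKEN,
              "fields": "id,created_time,message",
              "limit": 100}
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["comment_id", "created_time", "message"])
        while url:
            data = requests.get(url, params=params).json()
            for c in data.get("data", []):
                writer.writerow([c["id"], c["created_time"], c.get("message", "")])
            # Follow Graph API cursor pagination until exhausted.
            url = data.get("paging", {}).get("next")
            params = {}  # the 'next' URL already carries the query string

export_comments(POST_ID, "comment_thread_record.csv")
```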

Search enhancements for older content

Moderation often requires checking the posting and activity history of users. This is used to verify or reject claims, establish patterns of behaviour (e.g. harassment), and to holistically examine a user’s participation in a Page or Group.

Current search functionality is not always reliable and can be overly time-consuming in moderation scenarios. We would like to see enhancements that allow more pinpointed searching. One example might be a calendar interface that takes an administrator directly to content posted on a given day or within a given time frame.
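As an interim measure, some administrators script date-scoped reviews against the Graph API’s feed edge, which accepts since/until timestamps. The sketch below assumes placeholder credentials and an arbitrary API version, and it covers only a Page’s own published posts rather than the broader user-activity search described above.

```python
# Workaround sketch for date-scoped review of a Page's own posts via the Graph
# API feed edge with 'since'/'until' timestamps. PAGE_ID and the token are
# placeholders; this does not replace a proper in-platform search.
import requests
from datetime import datetime, timezone

GRAPH = "https://graph.facebook.com/v19.0"   # substitute the API version you target
PAGE_ID = "YOUR_PAGE_ID"                     # placeholder
ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"      # placeholder

def posts_between(start: datetime, end: datetime):
    """Yield the Page's posts created within [start, end)."""
    url = f"{GRAPH}/{PAGE_ID}/feed"
    params = {"access_token": ACCESS_TOKEN,
              "fields": "id,created_time,message",
              "since": int(start.timestamp()),
              "until": int(end.timestamp()),
              "limit": 100}
    while url:
        data = requests.get(url, params=params).json()
        yield from data.get("data", [])
        url = data.get("paging", {}).get("next")
        params = {}  # the 'next' URL already carries the query string

for post in posts_between(datetime(2024, 1, 1, tzinfo=timezone.utc),
                          datetime(2024, 2, 1, tzinfo=timezone.utc)):
    print(post["created_time"], post["id"])
```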

Page or Group batching for moderation actions

Unfortunately it is not uncommon for Page and Group administrators to find themselves the target of coordinated attacks and abuse. This occurs across individuals, amateur organisations and businesses or brands.

Community or social media managers would like the ability to ban user accounts across a collection of Pages or Groups at once - effectively moderating the batch as a whole. Presently this must be done Page by Page or Group by Group and can be labour-intensive, particularly if those Pages or Groups are enduring a live assault by bad actors.
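Scripting today’s per-Page process looks roughly like the sketch below: the same ban repeated against each Page’s blocked edge. This is a hedged sketch only - the ‘user’ parameter and endpoint behaviour should be verified against the current Graph API reference, and the Page IDs and tokens are placeholders.

```python
# Illustration of today's manual approach, scripted: ban the same account from
# each administered Page one at a time via the Pages API 'blocked' edge.
# Assumption: the edge accepts a 'user' parameter - verify against the current
# Graph API reference. Page IDs and tokens below are placeholders.
import requests

GRAPH = "https://graph.facebook.com/v19.0"   # substitute the API version you target

PAGES = {
    "111111111111111": "PAGE_ONE_ACCESS_TOKEN",   # placeholder Page ID: token pairs
    "222222222222222": "PAGE_TWO_ACCESS_TOKEN",
}

def ban_everywhere(user_id: str) -> None:
    """Repeat the same ban across every administered Page - the per-Page loop
    this recommendation would make unnecessary."""
    for page_id, token in PAGES.items():
        resp = requests.post(
            f"{GRAPH}/{page_id}/blocked",
            data={"user": user_id, "access_token": token},
        )
        print(page_id, resp.status_code, resp.json())

ban_everywhere("USER_ID_TO_BAN")
```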

Ideally this grouping or batching of moderation actions could be extended to the administrator’s personal account, as attacks often escalate into this arena, especially if the Page or Group owner is an identifiable, public entity.

Allow admins to view user history and authorise or deny Top Fan status

Recognition and reward are key components of online moderation and strategic engagement. It is important that community or audience owners can configure the context around this incentivisation.

Current Top Fan functionality rewards users for volume and frequency over factors such as quality, relevance and value generation. Professional administrators should be able to configure these settings to suit the purpose and culture of their communications or community, assign Top Fan status themselves, and award custom badges or titles.

In lieu of this, Page owners would benefit greatly from being able to authorise or deny Top Fan status after viewing a history of that user's posts and comments. This would reduce instances of trolls and persistent bad actors being assigned Top Fan status, which in turn creates risk and cultural issues for social media or community managers (modelling and rewarding inappropriate behaviour, and conflict and discussion surrounding the status and the lack of context or control).

Ability to turn off the suggestion to create a Watch Party on video content

Facebook’s Watch Party functionality offers social benefits; however, it is also an invitation to additional risk. For example, during the 2019 Christchurch incident and its aftermath, Page administrators were seeing news bulletins with ‘Create Watch Party’ beneath them, contributing to the accelerated proliferation of that content.

We would like this auto-suggestion to be optional and/or overridable by Page and Group administrators, so they can adequately prepare for and oversee the associated risks if they choose to enable it for their participants.

If you need support assessing platform suitability, risk or other online community considerations, Australian Community Managers can help. If you’re building an online community tool, we can provide critical insights to help you create a safe and compelling product that community managers everywhere will love using.