How to manage misinformation in online communities
Misinformation is hard to escape online. The World Health Organization has dubbed the spread of misinformation about COVID-19 an ‘infodemic’, and even that diagnosis barely scratches the surface of the conspiracy theories and organised disinformation campaigns in circulation.
The 2020 Digital News Report revealed that 64 per cent of Australians are worried about knowing what’s real or fake online. The Australian Communications and Media Authority (ACMA) released a Misinformation position paper to guide tech platforms in establishing a code of conduct that helps Australians judge the accuracy of information on those platforms and offers a complaints process. ACMA has stated that an Australian code should cover misinformation across all types of news and information (including advertising and sponsored content) that:
is of a public or semi-public nature
is shared or distributed via a digital platform
has the potential to cause harm to an individual, social group or the broader community.
Within this environment, Australian online community professionals are reporting a striking increase in the misinformation and disinformation seeping into their digital spaces.
Both can lead to conflict and negative experiences, which in turn can have reputational or legal consequences for individuals and organisations.
There’s also a moral and social cost. When we invite people to gather online, we have a duty of care over that experience. Information integrity is covered in the ACM Code of Ethics, and research shows that if we don’t step in, the problem only gets worse.
Organisations that host online communities can follow these steps to help meet the challenge of misinformation and disinformation.
Create strategic barriers
If you’re trying to build a community amidst noisy social media platforms, it’s hard to create a safe space. Public pages and groups attract drive-by interactions from users who aren’t members of the community or who may be out to spread harm. Most platform algorithms amplify sensational content, which frequently includes posts featuring misinformation or disinformation.
Hosting a private community with strategic barriers to entry (such as an application process) gives you more control over governance and culture. Communities on owned platforms are typically in a stronger position to defend against misinformation and other social harms. If you are based on public social media platforms, explore options for purposeful, qualifying barriers to entry (which also improve community effectiveness overall).
Include in governance
All online communities need clear guidance on which behaviours are encouraged and which are unacceptable. In addition to managing legal risk, these guidelines help shape the culture of the group and establish positive social norms. Warnings against bullying, harassment or hate speech are common, but most online communities don’t proactively caution against spreading misinformation. Make it clear that misinformation and disinformation aren’t permitted, and state what action will be taken.
Don’t forget to factor in the specific regulatory requirements of your industry: in health or medicine, for example, there are often tougher consequences for allowing misleading or false information to remain online. Have a position, a policy and a plan that everyone looking after the space is clear on.
Moderate consistently
Guidelines are one thing; enforcing them consistently is another. Your community should be moderated at a consistent cadence to show your guidelines aren’t mere window dressing.
You can use a combination of automated tools and human oversight to detect and remove content that misinforms, disinforms or may result in harm. For example, you might prevent links from being shared in the community, or require that they’re approved first. Keep any filters updated with content flags from known misinformation and disinformation campaigns. A minimal sketch of this kind of filter follows.
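As an illustration, here’s a minimal Python sketch of the link-holding and phrase-flagging approach described above. The phrase list and the review_post function are hypothetical placeholders, not any platform’s real API; a working deployment would draw its blocklist from your moderation team and route every automated decision to a human moderator.

    import re

    # Hypothetical examples only: a real blocklist should come from your
    # moderation team and be updated as campaigns evolve.
    FLAGGED_PHRASES = ["miracle cure", "doctors don't want you to know"]
    LINK_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

    def review_post(text: str) -> str:
        """Return 'approve', 'hold' or 'remove' for a new post."""
        lowered = text.lower()
        # Posts matching known misinformation phrases are queued for a
        # human moderator to confirm removal.
        if any(phrase in lowered for phrase in FLAGGED_PHRASES):
            return "remove"
        # Links are held for approval rather than blocked outright.
        if LINK_PATTERN.search(text):
            return "hold"
        return "approve"

    print(review_post("This miracle cure works!"))           # remove
    print(review_post("Useful guide: https://example.com"))  # hold
    print(review_post("Welcome to the community!"))          # approve

Holding links for approval, rather than banning them outright, keeps genuinely useful resources flowing while still giving moderators a chance to intercept campaign content.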
Discuss transparently
Many people share misinformation accidentally; they may themselves be victims of a coordinated disinformation campaign. Punitive measures like removal or banning aren’t always the best course of action, and nurturing change takes time and conversation.
If you spot misinformation being shared, consider facilitating discussion about it (this isn’t appropriate for every community). Host a conversation about why it poses a risk to everyone and why it’s being removed. You might share links to official resources to help people build their awareness, or host an online event with a relevant expert on how to spot misinformation. Create content and conversation that help all participants understand what misinformation is, why it matters and the role we all play in combating it.
Assess the source
If you can, determine whether you’re dealing with ordinary users spreading misinformation, or a coordinated disinformation campaign that may involve bots or impersonated accounts.
These questions will help:
Are the people posting new to your community?
Do other members know them?
Does their account appear fake?
Are they posting similar things elsewhere?
The account may have been created specifically to target your users or organisation with disinformation. If so, consider recording the activity (some notes and screenshots) in case authorities want to follow up, then remove the account permanently. There is not yet an official body in Australia to report misinformation or disinformation to.
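To make those four questions concrete, here’s a rough Python sketch that turns them into a simple triage score. The field names, seven-day threshold and equal weights are illustrative assumptions, not validated detection rules; treat the score as a prompt for human review, never as grounds for automatic removal.

    from dataclasses import dataclass

    @dataclass
    class AccountSignals:
        """Signals a moderator might gather about a suspicious account.

        These fields mirror the questions above; the threshold and
        weights below are assumptions for illustration only.
        """
        days_since_joined: int
        known_by_members: bool
        profile_looks_fake: bool        # e.g. stock photo, no history
        similar_posts_elsewhere: bool   # same content on other platforms

    def disinformation_risk(signals: AccountSignals) -> int:
        """Return a rough 0-4 score; higher means investigate first."""
        score = 0
        if signals.days_since_joined < 7:
            score += 1
        if not signals.known_by_members:
            score += 1
        if signals.profile_looks_fake:
            score += 1
        if signals.similar_posts_elsewhere:
            score += 1
        return score

    # A days-old account nobody knows, posting identical content elsewhere
    suspect = AccountSignals(3, False, True, True)
    print(disinformation_risk(suspect))  # 4: record evidence, then act

A score like this only helps order a review queue; the judgement call about recording evidence and removing the account stays with a person.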
Invest in community management
Professional online community management is a fast-growing global occupation. These digital practitioners can help you strike the balance between focused engagement and a risky free-for-all. They can’t solve the misinformation crisis alone, but they’re trained to help defuse the problem and get ahead of it by taking the steps outlined here, among others.
ACM collaborates with researchers across online community, social media and media studies, including those investigating misinformation. We welcome a conversation about your situation and how we may be able to assist.