Implementing effective moderation for self-harm and suicide content
This information page provides guidance for sites and platforms hosting user-generated content on implementing effective moderation for self-harm and suicide content. All sites and platforms must moderate content to ensure that policies are upheld and users are protected from potentially harmful material.
Content moderation can be achieved through human moderation and artificial intelligence (AI) approaches. AI approaches should be adopted where they are cost-effective and offer a proportionate way to increase the speed and efficiency of moderation. However, companies should never rely solely on AI approaches; instead, they should be used to prioritise content for human moderation.
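As a minimal sketch of this "AI prioritises, humans decide" pattern, the Python example below routes every piece of content into a review queue ordered by a model's risk score, so nothing is removed by the model alone. The scorer, class names and thresholds here are illustrative assumptions, not part of this guidance; a real system would use a trained classifier or a vendor moderation service.

```python
from __future__ import annotations

import heapq
from dataclasses import dataclass, field


def score_self_harm_risk(text: str) -> float:
    """Hypothetical risk scorer returning 0.0-1.0.

    The keyword count below is illustrative only; a production system
    would use a trained classifier or moderation API.
    """
    risky_terms = ("suicide", "self-harm", "kill myself")
    hits = sum(term in text.lower() for term in risky_terms)
    return min(1.0, hits / len(risky_terms))


@dataclass(order=True)
class ReviewItem:
    priority: float                       # negated score, so highest risk pops first
    content_id: str = field(compare=False)
    text: str = field(compare=False)


class HumanReviewQueue:
    """AI prioritises content; a human moderator makes every final decision."""

    def __init__(self) -> None:
        self._heap: list[ReviewItem] = []

    def submit(self, content_id: str, text: str) -> None:
        score = score_self_harm_risk(text)
        # Every item is queued regardless of score: the model only affects
        # review order, so no content is removed by AI alone.
        heapq.heappush(self._heap, ReviewItem(-score, content_id, text))

    def next_for_review(self) -> ReviewItem | None:
        return heapq.heappop(self._heap) if self._heap else None
```

A moderator-facing tool would then call `next_for_review()` repeatedly, so the highest-risk items are seen first while lower-risk items still reach a human reviewer.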
Human moderation can be an effective way of detecting and responding to self-harm and suicide content online. All sites using human moderators must ensure that moderators receive high quality training and support. See Guidance for supporting the wellbeing of moderators for more information.
Artificial intelligence can be an effective mechanism for detecting self-harm and suicide content early and at scale, preventing wider distribution and enabling prompt intervention. AI approaches are particularly important for sites that have large quantities of content uploaded and shared regularly.
AI approaches can be used to:
Detect and remove content that contains particular words or themes, such as methods of suicide
Detect inappropriate usernames when users first register
Prioritise and flag reports for human moderation and close false reports
Detect and prevent the upload of images that are known to be harmful (see the sketch after this list)
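As a minimal sketch of the last item, assuming a platform keeps a curated list of hashes of images already confirmed as harmful, uploads can be checked against that list before publication. This example uses exact SHA-256 matching from Python's standard library for clarity; production systems typically rely on perceptual hashing so that resized or re-encoded copies of the same image still match. `KNOWN_HARMFUL_HASHES` and the function names are illustrative assumptions.

```python
import hashlib
from pathlib import Path

# Placeholder: in practice this would be populated from a curated,
# securely stored list of hashes of images already confirmed as harmful.
KNOWN_HARMFUL_HASHES: set = set()


def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def block_if_known_harmful(upload_path: Path) -> bool:
    """Return True if the upload should be blocked before publication.

    Exact-hash matching only catches byte-identical copies; perceptual
    hashing is needed to catch edited or re-compressed versions.
    """
    return sha256_of_file(upload_path) in KNOWN_HARMFUL_HASHES
```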
For guidance on using AI to keep online communities safe, see:
Download our information sheet for advice on how to effectively moderate self-harm and suicide content online: