Developing and implementing self-harm and suicide content policies
This page provides guidance for sites and platforms hosting user-generated content on developing policies around self-harm and suicide content. Companies must have clear and comprehensive policies for managing self-harm and suicide content on their platforms. Clear policies reduce the likelihood of harmful self-harm and suicide content being shared, and ensure robust mechanisms are in place to deal with such content when it does appear.
A self-harm and suicide content policy sets out the expectations for users of what is and isn’t acceptable on the platform, and explains what will happen if these expectations are not met.
Having a robust policy in place can improve the safety of users on the platform and minimise potential harm. From a commercial perspective, these policies also mitigate the reputational consequences that are likely to arise from hosting harmful content.
Policies should reflect the latest evidence around self-harm and suicide content and should be developed in consultation with subject matter experts (see our page: Promoting online excellence in suicide prevention through collaboration and transparency for further guidance).
Self-harm and suicide content will manifest in different ways depending on a platform's functionality. See our page: Understanding self-harm and suicide content for risks associated with specific functionality.
Sites aimed at vulnerable users, such as children and young people, will require stricter content policies.
Responses to self-harm and suicide content will be influenced by principles of safeguarding, data protection and the removal of illegal content.
Companies should ensure they have the necessary resources to implement their policy effectively.
Policies should include clear definitions of self-harm and suicide in order to establish what content is in scope. For definitions of self-harm and suicide, see our information page: Understanding self-harm and suicide content.
It is important that policies are explicit about what content is and isn’t covered. While sharing experiences of self-harm and suicidal thinking and behaviour can be helpful for many people, in order to protect users, all sites and platforms should remove and limit access to self-harm and suicide content considered to be harmful.
Content considered harmful
Formats
Content policies should also cover how the platform will respond to self-harm and suicide content where less is known about its impact on users, such as:
For more information about the risks associated with different types of content, see our information page: Understanding self-harm and suicide content online.
Deciding whether self-harm or suicide content could be harmful for users can be complex. Whilst some types of content may be obviously harmful, other types may require more nuanced thinking and a judgement on what is appropriate for the platform. The following questions may help to decide which content should be prioritised for removal or review based on the impact it may have on users (a simple scoring sketch follows the questions below):
Does it show a risk of imminent threat to life?
Content of this kind should be urgently addressed. Consider whether immediate removal of the content could prevent the user from receiving urgent help from others.
Who is viewing the content?
The more vulnerable the audience, the stricter the content policy should be.
How graphic is it?
Content that contains graphic descriptions or depictions of self-harm or suicide should be prioritised for review as it can be distressing and triggering for other users.
Does it encourage, promote or glamorise self-harm or suicide?
Content that promotes self-harm or suicide, or portrays them as effective ways of ending distress, could encourage other users to engage in the same behaviour.
Is there an evidence base that shows this type of content is harmful?
Is there research that indicates that the content may encourage people to harm themselves, or cause a contagion effect?
Does it stigmatise self-harm or suicide?
Content that shows prejudice against people experiencing self-harm or suicidal feelings could be triggering and hurtful, preventing users from reaching out and sharing their experiences.
How common is the content on the platform?
Some types of content, like a self-harm quote, may have minimal impact as an isolated post, but viewing large volumes of this content may have a much larger impact on users.
How are users reacting to it?
The way users respond to content should be considered – is it being reported? Is it being shared more widely?
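As a rough illustration only, the sketch below shows how a platform might encode the questions above as signals feeding a simple triage score. Every field name, weight and threshold here is a hypothetical placeholder rather than a recommendation; a real policy would set these with subject matter experts and keep trained humans in the loop.

```python
from dataclasses import dataclass


@dataclass
class ContentSignals:
    """Hypothetical signals a platform might already hold about a piece of content."""
    imminent_risk_to_life: bool     # expressions of immediate intent or a live attempt
    audience_is_vulnerable: bool    # e.g. the service is aimed at children and young people
    is_graphic: bool                # graphic description or depiction of self-harm or suicide
    encourages_or_glamorises: bool  # promotes the behaviour or presents it as ending distress
    evidence_of_harm: bool          # published evidence links this content type to harm or contagion
    stigmatises: bool               # prejudice against people who self-harm or feel suicidal
    high_volume_of_similar: bool    # this content type is common on the platform
    report_count: int               # how many users have reported it


def triage_priority(signals: ContentSignals) -> str:
    """Map the policy questions above onto a coarse review priority (illustrative weights only)."""
    if signals.imminent_risk_to_life:
        # Address urgently; consider whether removal could cut the user off from urgent help
        return "urgent"
    score = sum([
        2 * signals.encourages_or_glamorises,
        2 * signals.is_graphic,
        1 * signals.audience_is_vulnerable,
        1 * signals.evidence_of_harm,
        1 * signals.stigmatises,
        1 * signals.high_volume_of_similar,
        1 * (signals.report_count >= 3),  # placeholder reporting threshold
    ])
    return "high" if score >= 4 else "standard"
```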
There are multiple ways in which sites and platforms can respond to self-harm and suicide content, from monitoring the content and reducing access to it, through to removing it from the platform. Our information page: Reducing access to harmful self-harm and suicide content online provides further guidance.
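Continuing the illustrative sketch above (again with placeholder names and mappings, not a recommended configuration), these response options might be represented as a default action for a human moderator to confirm or override:

```python
from enum import Enum


class ResponseAction(Enum):
    """Illustrative response options; the right mix depends on the platform and its users."""
    ESCALATE = "escalate"            # imminent risk: follow safeguarding and emergency processes
    REMOVE = "remove"                # take the content down
    REDUCE_ACCESS = "reduce_access"  # e.g. warning screens, age gates, exclusion from recommendations
    MONITOR = "monitor"              # leave visible but watch how users react and report


def default_response(priority: str) -> ResponseAction:
    """Rough default mapping from triage priority to a response, for moderators to confirm."""
    if priority == "urgent":
        return ResponseAction.ESCALATE
    if priority == "high":
        return ResponseAction.REDUCE_ACCESS  # pending human review, which may decide on removal
    return ResponseAction.MONITOR
```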
For example templates of self-harm and suicide content policies, please contact Samaritans’ Online Harms Advisory Service.
Policies should include the date of the most recent review and the name of the team or person who reviewed the document.
All sites and platforms must translate their self-harm and suicide content policy into accessible community guidelines for users, explaining what content is and isn’t allowed on the site and the reasons for this.
Sites must also implement their policy effectively through content moderation, ensuring that content breaking community guidelines is detected and responded to safely. This can be achieved using a combination of human moderation and artificial intelligence (AI) approaches. Moderators should receive high-quality training and have clear, up-to-date guidelines to ensure their decisions are in line with policy. More information about moderation can be found on our page: Thoughtful approaches to content moderation.
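A minimal sketch of such a hybrid flow, under stated assumptions, is below. The `classify` parameter stands in for whatever model or heuristic a platform uses to estimate whether a post breaches its policy; the function name, thresholds and queue labels are all assumptions for illustration. The key design choice is that automated detection only triages, while trained human moderators make the final decision.

```python
from typing import Callable


def route_for_moderation(
    text: str,
    classify: Callable[[str], float],
    priority_threshold: float = 0.95,
    review_threshold: float = 0.60,
) -> str:
    """Route a post to a moderation queue using an AI score plus human review.

    Thresholds are illustrative placeholders; human moderators make the final
    call so that outcomes stay in line with the written policy.
    """
    score = classify(text)  # estimated probability that the post breaches the policy
    if score >= priority_threshold:
        return "priority_human_review"   # likely breach: review urgently and signpost support
    if score >= review_threshold:
        return "standard_human_review"   # uncertain: a moderator applies the community guidelines
    return "no_action"                   # below threshold: leave up; users can still report it


# Example usage with a trivial keyword heuristic standing in for a real classifier
if __name__ == "__main__":
    demo_classifier = lambda post: 0.9 if "hypothetical trigger phrase" in post.lower() else 0.1
    print(route_for_moderation("an example post", demo_classifier))
```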
Self-harm and suicide content policies should be regularly reviewed to reflect emerging online harm issues and changes in platform functionality.
As best practice, policies should be reviewed at least annually, with updates made as needed in response to emerging evidence, online trends and changes in regulation. Companies should also regularly review repeatedly flagged content or issues and amend their policy if needed. Any critical issues or gaps that emerge should be addressed immediately.
Questions to consider when reviewing policy:
When updating the policy, users should be made aware of any significant changes, with an explanation of why the policy has changed and how it affects the way they post or search for self-harm and suicide content.
Download our information sheet on developing and implementing policies around self-harm and suicide content (PDF, 150 KB).