The technology is already in use on Facebook and Instagram outside Europe.
Posts identified as harmful by the algorithm can be referred to human moderators, who choose whether to take further action – including directing the user to help organisations and informing emergency services.
But Instagram told the UK’s Press Association news agency that human referral was not currently part of the new tools in the UK and Europe because of data privacy considerations linked to the General Data Protection Regulation (GDPR).
The social media firm said implementing a referral process would be its next step.
“In the EU at the moment, we can only use that mix of sophisticated technology and human review element if a post is reported to us directly by a member of the community,” Instagram’s public policy director in Europe, Tara Hopkins, said.
She added that because a human reviewer would, in a small number of cases, judge whether to send additional resources to a user, regulators could consider this a “mental health assessment” and therefore special category data, which receives greater protection under GDPR.
Facebook and Instagram have come under fire in recent years for a lack of regulation over suicide and self-harm material.
Molly’s father, Ian, has previously said the “pushy algorithms” of social media “helped kill my daughter”.
In September, social media companies including Facebook, Instagram, Google, YouTube, Twitter and Pinterest agreed to guidelines published by mental health charity Samaritans, in an effort to set industry standards on the issue.
“While we have seen a number of positive steps in the right direction in recent months, we know that there is still more work that needs to be done in order to tackle harmful online content,” said Lydia Grace, Samaritans programme manager for online harm.
“We need regulation to ensure technology platforms take swift action to remove harmful content and that they can use the tools at their disposal to do this, while ensuring vulnerable users can access supportive content when they need it.
“Our Online Excellence Programme aims to develop a hub of excellence in suicide prevention and the online environment. As part of this, we recently launched our new guidelines for the technology industry to help sites and platforms to create safer online spaces by minimising access to potentially harmful content relating to self-harm and suicide, and maximising opportunities for support.”
But Instagram said it also wanted to be a place where users could admit they had considered self-harm or suicide.
“It’s okay to admit that and we want there to be a space on Instagram and Facebook for that admission,” Ms Hopkins added.
“We’re told by experts that can help to destigmatise issues around suicide. It’s a balancing act and we’re trying to get to the right spot where we’re able to provide that kind of platform in that space, while also keeping people safe from seeing this kind of content if they’re vulnerable.”