70% increase in online grooming attributed to Facebook and Instagram
A 70% increase in online grooming has been attributed to Facebook and Instagram by the NSPCC. The UK children’s charity revealed the shocking figures in a recent report, compiled from the results of a Freedom of Information request to 42 police forces across the country.
The report found that 5,441 incidents of sexual communication with a child were recorded between April 2020 and March 2021, a record high, up 69% on the 2017/18 figure. In the last year alone, online grooming offences rose 9% compared with 2020.
Social media platforms need to be more proactive
Of serious concern are the everyday platforms being used to target children: almost half of the cases involved Facebook and WhatsApp, a third were found to be on Instagram, and a quarter on Snapchat. These incredibly popular platforms are widely used by children all over the world.
The NSPCC is calling on the UK Government to take a more proactive role in combating the rise of online grooming by holding platforms more accountable. Given that so many social media platforms have their European bases here in Ireland, the Irish Government could expedite these protections by holding the platforms accountable through legislation here also.
Huge investment in technology is still required to enable platforms to successfully identify and disrupt illegal online activity. Unfortunately, current Artificial Intelligence alone cannot capture all the content being uploaded, so human moderators are still required. However, moderation can have serious mental health consequences for those exposed to damaging content over an extended period.
DNA Fingerprinting is not 100% successful
There is also the obstacle of detecting known content that has already been identified but is still being circulated. While DNA fingerprinting of an image will identify it and assist in its removal, an altered copy (skewed, flipped, cropped, rotated, or with filters and colour adjustments applied) may evade detection using currently available technology.
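The weakness described above can be sketched with a toy perceptual hash. The `average_hash` function below and the 8x8 gradient "image" are illustrative assumptions only, a minimal stand-in for how fingerprinting schemes reduce an image to a compact signature; real systems use far more sophisticated transforms. The sketch shows why some edits (a uniform brightness change) leave the fingerprint intact, while others (a horizontal flip) change it completely and defeat detection.

```python
def average_hash(pixels):
    """Return a 64-bit fingerprint: one bit per pixel, set if that pixel
    is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Count differing bits between two fingerprints (0 = identical)."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical 8x8 greyscale image: a simple left-to-right gradient.
image = [[col * 32 for col in range(8)] for _ in range(8)]

# A uniformly brightened copy raises every pixel AND the mean equally,
# so every bit is unchanged and the fingerprint still matches ...
brightened = [[p + 10 for p in row] for row in image]

# ... but a horizontal flip rearranges which pixels exceed the mean,
# producing a completely different fingerprint.
flipped = [list(reversed(row)) for row in image]

print(hamming(average_hash(image), average_hash(brightened)))  # 0
print(hamming(average_hash(image), average_hash(flipped)))     # 64
```

Robust fingerprinting schemes try to survive exactly these kinds of transformations, which is why next-generation image-matching tools are being positioned as a complement to plain hashing.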
Thankfully, US company DéjàVuAI has recently launched its new AI-based image search tool, which the company states can immediately address current shortcomings in image identification. The technology has successfully demonstrated a superior ability to identify people, objects, places and even buildings from a minimal amount of information. It is hoped the product could be integrated into existing platform structures to further strengthen the detection of child sexual abuse and exploitation content online.
Tip of the iceberg
The figures released by the NSPCC, while shocking, are only the tip of the iceberg. It is generally accepted by all interested parties that we have yet to see the full extent of the harm caused to children online during the Covid-19 pandemic. Meanwhile, social media platforms are releasing staggering figures in relation to the content they have identified and removed. In February 2021, Facebook reported it had removed 20 million child sexual abuse images.
However, Facebook recently admitted it had missed child abuse content in the second half of 2020 due to technical issues. The company stated it is now working to remove any content previously missed.
“Year after year tech firms’ failings result in more children being groomed and record levels of sexual abuse,” said Andy Burrows, head of child safety online policy at the NSPCC.
“To respond to the size and complexity of the threat, the Government must put child protection front and centre of legislation and ensure the Online Safety Bill does everything necessary to prevent online abuse.”
“Safety must be the yardstick against which the legislation is judged, and ministers’ welcome ambition will only be realised if it achieves robust measures to keep children truly safe now and in the future.”
New Safety Features Welcomed
The recent initiatives by TikTok and Instagram to introduce new safety features to protect underage users are welcome. But with some poorly designed legacy systems still in place, platforms are reportedly still attempting to catch up. With delay comes further abuse and a continuous increase in the number of victims, as vulnerabilities continue to be exploited by sexual predators.
A spokesperson for Facebook stated, “This is abhorrent behaviour, and we work quickly to find it, remove it and report it to the relevant authorities.”
“We also block adults from messaging under 18s they’re not connected with and have introduced technology that makes it harder for potentially suspicious accounts to find young people.”
“With tens of millions of people in the UK using our apps every day, we are determined to continue developing new ways to prevent, detect and respond to abuse.”
DéjàVuAI a possible solution
The adoption of the DéjàVuAI AI-based image search tool by social media platforms, child advocacy stakeholders and law enforcement might assist in this battle. But ultimately, the focus for now needs to remain on helping parents and children better protect themselves against this increasingly dangerous threat.
Advice for parents
Strong parental control software, along with constant conversation between parent and child, is essential. Children should not have access to social media platforms if they are underage, and they need to be reminded about the dangers of talking to strangers online. We cannot stress enough how important these simple-sounding measures are.
Parents should remember that the vast majority of images of sexually abused or exploited children are self-generated by the children themselves. Often children are directed over live-streaming apps like WhatsApp to do what the sexual predator wants. This frequently takes place in the child’s own bedroom or bathroom, all while their parents are only a few feet away. Hence the importance of always restricting and monitoring a child’s online activity using reliable parental control software.