Two new tools will warn users about the risks of searching for and sharing content that exploits children, including the potential legal consequences of doing so
Facebook has announced a pair of new tools to help combat child abuse and exploitation content on its platform and apps. While one tool aims to curb the potentially malicious sharing of exploitative content, the other deals with its non-malicious sharing.
The former is a pop-up that will be shown to users who search on Facebook’s apps for terms typically associated with child exploitation. “The pop-up offers ways to get help from offender diversion organizations and shares information about the consequences of viewing illegal content,” wrote Antigone Davis, the Global Head of Safety at Facebook, in a blog post announcing the new tools.
Meanwhile, the second tool is a safety alert that will appear to people who have shared viral meme content that exploits children. The alert will inform users of the harm that sharing such content can cause and will also warn them about the legal ramifications of sharing such material.
“We share this safety alert in addition to removing the content, banking it, and reporting it to NCMEC. Accounts that promote this content will be removed. We are using insights from this safety alert to help us identify behavioral signals of those who might be at risk of sharing this material, so we can also educate them on why it is harmful and encourage them not to share it on any surface — public or private,” Davis added.
Facebook worked with experts on child exploitation, including the United States’ National Center for Missing and Exploited Children (NCMEC), to develop a research-backed taxonomy for classifying a person’s intent behind sharing such content. After evaluating 150 accounts reported to the NCMEC for posting child exploitative content, Facebook found that more than 75% of people did not do so with malicious intent; rather, they attempted to cause outrage or shared the content in poor humor. However, the company added that the findings shouldn’t be taken as a “precise measure” of the child safety ecosystem.
The social network also amended its child safety policies to clarify that it will delete Facebook profiles, Pages, groups, and Instagram accounts that share otherwise innocent images of children accompanied by comments, hashtags, or captions containing inappropriate signs of affection or observations about the children pictured.
“We’ve always removed content that explicitly sexualizes children, but content that isn’t explicit and doesn’t depict child nudity is harder to define. Under this new policy, while the images alone may not break our rules, the accompanying text can help us better determine whether the content is sexualizing children and if the associated profile, Page, group, or account should be removed,” the company explained.
Additionally, to simplify the reporting of content that violates Facebook’s child exploitation policies, the social media giant added an “involves a child” option under the “Nudity and Sexual Activity” reporting category.
Social media platforms have been ramping up their efforts to curb child abuse content and have deployed various tools and measures to help achieve that goal. For example, last year Facebook expanded parental controls for Messenger Kids, while TikTok introduced its Family Pairing feature.
To learn more about the dangers children face online, as well as how technology (among other things) can help, head over to Safer Kids Online.