Analyzing the intention behind harm in order to prevent it

Understanding how and why people share child exploitative content is critical for combating it.

To best inform our response, we conducted an in-depth analysis of the illegal content we reported. What we learn from this research guides the tools we deploy and the new programs we launch to reduce the sharing of such abhorrent material.

We analyzed two representative months of reports from Facebook and Instagram to the National Center for Missing and Exploited Children (NCMEC). Here's what we found:

  • More than 90% of this content was the same as, or visually similar to, previously reported content (a matching approach is sketched after this list).
  • Copies of just six videos were responsible for more than half of the child-exploitative content we reported in that time period.
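
The scale of duplication points to matching new uploads against previously reported material. As an illustration only, here is a minimal sketch of how such matching might work, assuming a 64-bit perceptual hash and a Hamming-distance threshold; the hash function, threshold, and data structures are hypothetical and are not Meta's actual detection systems.

```python
# Illustrative sketch only: matching an upload against previously reported
# content using a perceptual hash. The threshold, hash size, and storage
# are assumptions for this example, not Meta's actual systems.
from dataclasses import dataclass

HAMMING_THRESHOLD = 8  # assumed tolerance for "visually similar"


@dataclass
class ReportedItem:
    content_id: str
    phash: int  # 64-bit perceptual hash of previously reported media


def hamming_distance(a: int, b: int) -> int:
    """Count the differing bits between two hashes."""
    return bin(a ^ b).count("1")


def find_known_match(upload_phash: int, reported: list[ReportedItem]) -> ReportedItem | None:
    """Return a previously reported item that is identical or visually
    similar to the upload, or None if there is no match."""
    for item in reported:
        if hamming_distance(upload_phash, item.phash) <= HAMMING_THRESHOLD:
            return item
    return None
```

An exact duplicate would have distance zero; near-duplicates such as re-encodes or crops land within the threshold, which is how copies of the same few videos can account for so many reports.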

In other words, only a few pieces of content are responsible for many reports. This helped us realize that a greater understanding of intent could help prevent revictimization. With that in mind, we worked with leading experts on child exploitation, including NCMEC, to develop a research-backed taxonomy. This taxonomy helps us categorize a person's apparent intent in sharing such content.
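
The taxonomy itself is not published here; conceptually, it assigns each reviewed account an apparent-intent category. The sketch below uses hypothetical category names drawn only from the examples in this post (malicious intent, outrage, poor humor) and is not the actual research-backed taxonomy.

```python
# Hypothetical sketch of an apparent-intent taxonomy; the real taxonomy
# developed with NCMEC and other experts is more nuanced and not public.
from enum import Enum


class ApparentIntent(Enum):
    MALICIOUS = "intended to harm a child"
    OUTRAGE = "shared to express outrage"
    POOR_HUMOR = "shared as a joke or meme"
    UNDETERMINED = "intent could not be determined"


def intent_breakdown(labels: list[ApparentIntent]) -> dict[str, float]:
    """Share of reviewed accounts that fall into each category."""
    if not labels:
        return {}
    return {c.name: sum(label is c for label in labels) / len(labels) for c in ApparentIntent}
```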

We evaluated 150 accounts that we reported to NCMEC for uploading CSAM (child sexual abuse material) in July and August of 2020 and January 2021. We estimate that more than 75% of these did not exhibit malicious intent (i.e., they did not intend to harm a child). Instead, these accounts appeared to share the content for other reasons, such as outrage or poor humor. While this study represents our best understanding, these findings should not be considered a precise measure of the child-safety ecosystem. Our work to understand intent is ongoing.

Our targeted solutions based on these findings include a pop-up on Facebook aimed at deterring searches for this content. The warning appears when people initiate searches using terms associated with child exploitation. It offers offender diversion resources from child protection organizations and explains the consequences of viewing illegal content.
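
A rough sketch of how such an interstitial might be triggered, assuming a curated list of associated search terms; the term list, message, and function names are placeholders, not the actual Facebook implementation.

```python
# Illustrative only: intercept searches that use terms associated with
# child exploitation and show a warning with offender diversion resources.
ASSOCIATED_TERMS = {"placeholder-term-a", "placeholder-term-b"}  # hypothetical list

WARNING_MESSAGE = (
    "Child sexual abuse material is illegal, and viewing it can lead to "
    "criminal consequences. Confidential help is available from child "
    "protection organizations."
)


def intercept_search(query: str) -> str | None:
    """Return a warning pop-up to display if the query matches an
    associated term; otherwise return None and let the search proceed."""
    tokens = set(query.lower().split())
    if tokens & ASSOCIATED_TERMS:
        return WARNING_MESSAGE
    return None
```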

We've also deployed a safety alert to inform people who are sharing this content for reasons other than to harm a child. The feature is designed to alert users who have shared viral memes of child exploitative content. It warns that such sharing harms the victim, is against our policies, and carries legal consequences.
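
Conceptually, the alert works as a warn-before-share check against known viral, violating content. A minimal hypothetical sketch, with the hash bank and wording as placeholders:

```python
# Illustrative only: show a safety alert before a re-share of media that
# matches known viral, violating content. Hash bank and text are placeholders.
KNOWN_VIRAL_VIOLATING_HASHES = {0x1A2B3C4D5E6F7788}  # hypothetical hash bank


def safety_alert_for_share(media_hash: int) -> str | None:
    """Return the alert text to show before the share completes, or None
    if the media does not match known violating content."""
    if media_hash in KNOWN_VIRAL_VIOLATING_HASHES:
        return (
            "Sharing this content harms the child in it, is against our "
            "policies, and carries legal consequences. Report it instead."
        )
    return None
```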

Additionally, we're working with public awareness experts to promote our "Help Protect Children" campaign. No matter the reason, resharing this content is illegal and revictimizes the child. By reporting the content instead, users can help prevent further harm.

Because even one victim of these horrible crimes is too many, our work to understand intent is ongoing. We continue to roll out more targeted solutions for both our public platforms and private messaging.