Safety policies

We’re committed to making Meta technologies safe. We remove content that could contribute to a risk of harm to the physical security of persons. Content that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on our technologies.

Community Standards

We encourage an environment of empathy and respect while giving voice to groups often marginalized or excluded from conversations. We believe in allowing discussion to occur freely while creating a safe environment for everyone who expresses themselves. To uphold this, we developed policies to promote a safe, welcoming environment for Meta users across the globe.

Facebook Community Standards

Facebook Community Standards provide a foundation for the voices using this platform each day. They define what is and isn’t okay to share. They are not meant to set a tone of restriction and silence, but rather to create a safe environment for expression.

Instagram Community Guidelines

Instagram Community Guidelines help create an authentic and safe place for inspiration and expression. Help us continue to foster this community by following our guidelines.

Creating transparency and accountability.

To keep pace with changes happening online and off, we work with outside experts to gather feedback and refine our policies, tools and resources. We consult with experts from around the world who specialize in online safety and help inform our technologies, policies and resources. These include our Safety Advisors, Global Women’s Safety Expert Advisors, Youth Advisors and suicide prevention experts.

To support this transparency and accountability, Meta also publishes a quarterly Community Standards Enforcement Report with the latest information on actions taken against content that violates our policies.

Policies that form the foundation of our commitment to safety.

The following policies are part of our Community Standards.

Child Sexual Exploitation, Abuse and Nudity Policy

Facebook does not allow any content or activity that exploits or endangers children. The Child Sexual Exploitation, Abuse and Nudity Policy clearly outlines content and images considered a violation of this policy and our Community Standards, as well as the actions we will take if someone is found to be in violation of these policies.

Adult Sexual Exploitation Policy

We recognize the importance of Facebook as a place to discuss and draw attention to sexual violence and exploitation. In an effort to create space for this conversation and promote a safe environment, we allow victims to share their experiences, but remove content that depicts, threatens or promotes sexual violence, sexual assault or sexual exploitation. We also remove content that displays, advocates for or coordinates sexual acts with non-consenting parties to avoid facilitating non-consensual sexual acts.

Suicide and Self-Injury Policy

We consult with experts in suicide and self-injury to help inform our policies and enforcement, and work with organizations around the world to provide assistance to people in distress. While we do not allow people to intentionally or unintentionally celebrate or promote suicide or self-injury, we do allow people to discuss these topics because we want our platforms to be a space where people can share their experiences, raise awareness about these issues and seek support from one another. We remove any content that encourages suicide or self-injury, including fictional content such as memes or illustrations, and any self-injury content that is graphic, regardless of context.

Human Exploitation Policy

We remove content and disrupt activity that facilitates or coordinates the exploitation of humans, including human trafficking and smuggling. We define human trafficking as the business of depriving someone of liberty for profit. It is the exploitation of humans in order to force them to engage in commercial sex, labor or other activities against their will. It relies on deception, force and coercion, and degrades humans by depriving them of their freedom while economically or materially benefiting others.

The United Nations defines human smuggling as the procurement or facilitation of illegal entry into a state across international borders. While smuggling does not require coercion or force, it may still result in the exploitation of vulnerable individuals who are trying to leave their country of origin, often in pursuit of a better life.

While we must be careful not to conflate human trafficking and smuggling, the two can be related and often overlap.

Bullying and Harassment Policy

We consider bullying to be any activity in which someone makes threats or engages in malicious behavior against another person online. This policy includes protections against mass harassment targeting individuals. Bullying and harassment happen in many places and come in many different forms. Context is important in understanding whether this behavior makes someone feel unsafe. We outline how we distinguish these behaviors from others to create greater safety on Facebook.

In addition to our policy guidelines, our bullying prevention section includes tools and resources for teens, parents and educators.

Hate Speech Policy

We define hate speech as a direct attack against people, rather than concepts or institutions, on the basis of what we call protected characteristics, which include race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease. The Hate Speech Policy outlines what content we treat as hate speech and remove from our platforms.

Hate speech against the LGBTQ+ community is prohibited. To uphold our safety standards for the LGBTQ+ community, we have developed policies that include, at a minimum, removing content meant to degrade or shame someone for their sexual orientation or gender identity.

Privacy Violations and Image Privacy Policy

These policies outline how we safeguard personal information related to your identity and activity. The policy upholds our commitment to prevent personal information about you from being shared without consent and to prevent physical or financial harm resulting from personal information shared across our technologies.

Violent and Graphic Content Policy

We remove content that is particularly violent or graphic, such as videos depicting dismemberment, visible innards or charred bodies. We also remove content that contains sadistic remarks towards imagery depicting the suffering of humans and animals.

We know that people have different sensitivities with regard to graphic and violent imagery. For that reason, we add a warning label to some graphic or violent imagery so that people are aware it may be sensitive before they click through. We also restrict users under 18 from viewing such content.

How to report a violation of Meta safety policies.

Our reporting tools make it as easy as possible to alert us of misuse. Find answers to questions about reporting violations for each of these policies and other special circumstances.

Safety on WhatsApp

WhatsApp takes the safety and security of our users seriously, which is why we protect all of our users’ calls and messages with end-to-end encryption. We’ve built WhatsApp with strong safety, privacy and security principles in mind. WhatsApp operates differently than public social media, so we tackle abuse within the context of providing a private messaging service. WhatsApp was not designed as a platform to grow an audience, does not use algorithms to prioritize the order of messages people receive, and offers no in-app search or discoverability for unconnected people or groups. When we are made aware of abuse on our platform, we take action, including banning the accounts involved or contacting law enforcement.