Social media has enabled more voices to be heard, but some people use it to do harm.
That's why we have Community Standards that specify what's allowed on our apps, and we remove anything that breaks these rules. For content that doesn't violate our rules but has been rated false by independent fact-checkers, we reduce its distribution to help prevent it from going viral. We also provide context about what you see so you can make your own decisions on what to read, trust and share.
We consult experts around the world to review and regularly update our standards on what is and is not allowed on our apps.
We've more than quadrupled our safety and security teams since 2016, to over 40,000 people.
We detect the majority of the content that we remove before anyone reports it to us.
We block millions of fake accounts from being created every day.
We give you control over your experience by allowing you to block, unfollow or hide people and posts.
You can moderate comments on your posts, and on Instagram, we warn you if you're about to post something that might be offensive so you can reconsider.
Learn how we update our Community Standards, measure results, work with others and prioritise content for review.
Today, we work with more than 90 partners covering over 60 languages around the world to review potentially false content.
We include warnings on posts that are rated false so that you can decide what to read or share.
When third-party fact-checkers label content false, we significantly reduce its distribution so fewer people see it.
Since the World Health Organization declared COVID-19 a global public health emergency, we've been working to connect people to accurate information from health authorities and taking aggressive steps to stop misinformation and harmful content from spreading.
People can report content to us or appeal certain decisions if they think we made a mistake in taking something down.
The Oversight Board, a global body of independent experts, reviews Meta's most difficult content decisions, and its decisions are binding.
Every quarter, we release a report providing metrics on how we're doing enforcing our content policies.