Keeping your account secure is our priority, and we are constantly improving our technologies to protect it.
On Facebook and Instagram, we use a variety of methods to identify content and accounts that violate our policies, and we work with governments, non-governmental organizations, and law enforcement agencies to understand new techniques that scammers deploy to circumvent our systems.
Our ad review system relies primarily on automated tools to check ads and business assets against our policies. If we detect a violation of our policies against scams, we reject the ad before it is published. Beyond reviewing individual ads, we may also review and investigate advertiser behavior, such as the number of previous ad rejections and the severity of past violations, including attempts to get around our ad review process.
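To make the idea concrete, here is a purely illustrative sketch of how an automated gate might combine a per-ad policy score with an advertiser's enforcement history. This is not Meta's actual system; the names, scores, and thresholds are all invented for the example.

```python
# Hypothetical ad-review gate: reject clear violations outright, and let an
# advertiser's history of rejections and severe violations lower the
# tolerance for borderline ads. All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Advertiser:
    prior_rejections: int
    severe_violations: int

@dataclass
class Ad:
    violation_score: float  # 0.0 = clearly compliant, 1.0 = clear violation

def review(ad: Ad, advertiser: Advertiser) -> str:
    if ad.violation_score >= 0.9:
        return "reject"      # clear violation: never published
    # Repeat offenders face a stricter bar for borderline content.
    threshold = 0.7 - 0.05 * advertiser.prior_rejections \
                    - 0.15 * advertiser.severe_violations
    if ad.violation_score >= max(threshold, 0.3):
        return "escalate"    # held for deeper, possibly manual, review
    return "approve"
```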
On Facebook and Instagram, we block the use of specific search terms related to scams, fake reviews, and known bait words. We also have measures in place to make groups and Pages on Facebook that previously violated our policies less prominent in Feed and in recommendations. This includes content that contains clickbait links, engagement bait, and links to websites that request unnecessary user data. We also exclude this type of content from being recommended across a range of surfaces.
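As a rough illustration of how blocked search terms can work in general (the terms and normalization rules below are invented for the example, not a real blocklist), matching is typically done on a normalized form of the query so that trivial obfuscation does not slip through:

```python
# Illustrative only: a normalized blocklist lookup of the kind commonly
# used to suppress scam-related search terms. Terms and logic are invented.
import re
import unicodedata

BLOCKED_TERMS = {"buy fake reviews", "free giveaway winner"}

def normalize(query: str) -> str:
    # Fold accents, lowercase, and strip separators so trivial
    # obfuscations ("f.a.k.e reviews") still match the blocklist.
    query = unicodedata.normalize("NFKD", query).encode("ascii", "ignore").decode()
    query = re.sub(r"[^a-z0-9 ]", "", query.lower())
    return re.sub(r"\s+", " ", query).strip()

def is_blocked(query: str) -> bool:
    q = normalize(query)
    return any(term in q for term in BLOCKED_TERMS)

assert is_blocked("Buy  FAKE reviews here!")
```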
If we determine that an account is likely associated with scam behavior, the account owner must complete a few actions to demonstrate that they are not operating a fake account or misrepresenting themselves. Until they do this, the account cannot be used to reach others. If the owner fails these checks, or if our reviewers determine that there is a violation of our policies, the account will be removed.
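A minimal sketch of such a checkpoint flow, with invented names and placeholder challenges standing in for whatever verification steps are actually used:

```python
# Hypothetical "checkpoint" flow: a flagged account stays restricted until
# its owner passes every challenge; a single failure removes the account.
from dataclasses import dataclass

@dataclass
class Account:
    owner: str
    restricted: bool = False

def checkpoint(account: Account, challenges) -> str:
    account.restricted = True    # cannot reach others while flagged
    for passes in challenges:
        if not passes(account):
            return "removed"     # failed a check: account removed
    account.restricted = False   # all checks passed: access restored
    return "restored"

# Example: a placeholder challenge that always passes.
print(checkpoint(Account("alice"), [lambda a: True]))  # -> "restored"
```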
Our abuse-fighting team builds and constantly updates a combination of automated and manual systems that help us catch suspicious or inauthentic activity at various points of interaction on the site, including registration, friending and following, liking, and messaging. We also require many businesses to undergo verification to confirm the identities of the business and its representatives before they can use certain tools or features.
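One generic building block for catching suspicious activity at these interaction points is a sliding-window rate check. The sketch below is illustrative, not a description of our actual detection systems, and the limits are invented:

```python
# Illustrative anomaly check: flag accounts whose rate of friending,
# liking, or messaging in a sliding window far exceeds what ordinary
# use produces. The window and limits below are invented.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600          # one-hour sliding window
LIMITS = {"friend_request": 50, "follow": 100, "like": 300, "message": 150}

_events = defaultdict(deque)   # (account_id, action) -> event timestamps

def record_and_check(account_id, action, now=None):
    """Record one event; return True if this account now looks suspicious."""
    now = time.time() if now is None else now
    q = _events[(account_id, action)]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()            # drop events outside the window
    return len(q) > LIMITS.get(action, 100)
```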
Scams are often run by people who manually operate fake accounts. That's why our efforts to detect and stop fake accounts are so crucial. To combat fake accounts, we deploy technology to prevent them from being created and to detect and remove them from our technologies. Our detection technology helps us block millions of attempts to create fake accounts every day and detect millions more, often within minutes of creation.
To maintain a safe environment for people, we remove accounts or entities that are harmful to the community. We apply penalties that are designed to be proportionate to the severity of the violation and the risk of harm posed to the community. Continued violations, despite repeated warnings and restrictions, can lead to an account being disabled. We have built a combination of automated and manual systems to block and remove accounts that persistently or egregiously violate our Community Standards. We also disable accounts or entities that have been created or repurposed to evade our enforcement.
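One common way to express proportionate penalties in code is a strike ladder. The tiers below are hypothetical and exist purely to illustrate the escalation described above:

```python
# Toy escalation ladder: each repeat violation moves one step up, while an
# egregious violation jumps straight to the top. Tier names are invented.
LADDER = ["warning", "feature_restriction", "temporary_suspension", "disabled"]

def next_penalty(prior_strikes: int, egregious: bool) -> str:
    if egregious:
        return LADDER[-1]      # severe harm: disable immediately
    return LADDER[min(prior_strikes, len(LADDER) - 1)]

assert next_penalty(0, False) == "warning"
assert next_penalty(3, False) == "disabled"
assert next_penalty(0, True) == "disabled"
```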
Off-platform, we work with our legal teams, local authorities and civil society partners to consider and take appropriate action against bad actors.
Meta has brought legal action against individuals responsible for using our technologies to scam people.