Our work on privacy is underpinned by our internal governance structures that embed privacy and data-use standards across the company’s operations. Externally, independent governance bodies provide oversight of our privacy program and practices.
As we continue to integrate privacy across the company, we're embedding privacy teams within product groups to deepen the understanding of privacy considerations by providing expertise within each product and business group. These teams enable front-line ownership of privacy responsibilities across our products.
Led by Michel Protti, Chief Privacy Officer for Product, the Meta Privacy and Data Practices team is made up of dozens of teams, both technical and non-technical, focused on privacy and responsible data practices.
The Meta Privacy and Data Practices team is at the center of our company’s efforts to maintain a comprehensive privacy program. Its mission – to instill responsible data practices across Meta – guides this work: ensuring people understand how Meta uses their data and can trust that our products use their data responsibly.
The Meta Privacy and Data Practices team is just one organization among many across the company that is responsible for privacy. There are thousands of people in different organizations and roles across Meta, including public policy and legal, who are working to embed privacy into all facets of our company operations. Getting privacy right is a deeply cross-functional effort, and we believe everyone at Meta is responsible for that effort.
Led by Erin Egan, Vice President & Chief Privacy Officer, Policy, the Privacy & Data Policy team leads our engagement in the global public discussion around privacy, including new regulatory frameworks, and ensures that feedback from governments and experts around the world is considered in our product design and data use practices, including during the course of our privacy review process.
As we work to improve our products and innovate on privacy, we are holding ourselves to the highest standards and working with policymakers and data protection experts to ensure we are meeting their expectations. To do so, the Privacy & Data Policy team consults with these groups through a variety of consultation mechanisms.
We also host a regular conversation series with leading privacy experts from around the world to discuss a range of pressing privacy policy topics.
The Privacy Legal team is embedded in the design and ongoing execution of our program and counsels on legal requirements during the course of our privacy review process.
The Privacy Committee is an independent committee of Meta’s Board of Directors that meets quarterly to ensure we live up to our privacy commitments. The Committee is made up of independent directors with a wealth of experience serving in similar oversight roles.
The Committee receives regular briefings on the state of our privacy program and our compliance with our FTC Order from our independent privacy assessor, whose job is to review and report on our privacy program on an ongoing basis.
Internal Audit provides independent assurance of the overall health of our privacy program and the supporting control framework.
Part of ensuring that privacy is everyone’s responsibility at Meta is driving continuous privacy learning and education that spans training and internal privacy awareness campaigns.
A core component of our privacy education approach is our privacy training, which covers the foundational elements of privacy and is designed to help everyone at Meta recognize and consider privacy risks. Delivered in an eLearning format, both our annual privacy training and our training courses for new hires and new contingent workers provide scenario-based examples of privacy considerations aligned with Meta’s business operations and include an assessment to test understanding of the relevant privacy concepts. These trainings are updated and deployed annually so that they cover relevant new information in addition to core concepts.
In addition to our foundational required privacy training, we also maintain a catalog of all known privacy training deployed across Meta that spans topics relevant to people in specific roles. We will continue to invest in the Privacy Training program, as continuous learning opportunities across privacy and data topics are a critical component to instilling responsible data practices at Meta.
Another way we drive privacy education is through regular communication to employees. In addition to our privacy training courses, we deliver ongoing privacy content through internal communication channels, updates from privacy leadership, internal Q&A sessions, and a dedicated Privacy Week.
During our dedicated Privacy Week, we drive cross-company focus on privacy, feature internal and external speakers, and highlight key privacy concepts and priorities through engaging content and events throughout the week.
Many of the privacy questions we confront don’t have easy or well-defined answers, and the best way to begin tackling those hard problems is by hearing from experts outside the company. We host outside experts to speak about their work and their perspectives on privacy at Meta. It’s an opportunity for our entire privacy team to hear from a variety of privacy experts on important and complex topics on a regular basis.
When we participate in external privacy events like Data Privacy Day, we drive internal awareness and engagement through internal channels to ensure everyone has an opportunity to participate and learn about privacy.
We created our Privacy Risk Management program to assess privacy risks in how we collect, use, share, and store user data. We leverage it to identify risk themes, enhance our Privacy Program, and prepare for future compliance initiatives.
We have designed safeguards, including processes and technical controls, to address privacy risk, meet privacy expectations, and satisfy regulatory obligations.
We’ve established a Privacy Red Team whose role is to proactively test our processes and technology to identify potential privacy risks. To provide additional confidence in Meta’s approach to privacy, the Privacy Red Team assumes the role of an external party attempting to circumvent our privacy controls and safeguards and steal confidential data.
No matter how robust our mitigations and safeguards, we also need a process to (1) identify when an event potentially undermines the confidentiality, integrity, or availability of data for which Meta is responsible, (2) investigate those situations, and (3) take any needed steps to address gaps we identify.
Our Incident Management program operates globally to oversee the processes by which we identify, assess, mitigate, and remediate privacy incidents. Although the Privacy and Data Practices team leads the incident management process, privacy incidents are everyone’s responsibility at Meta. Teams from across the company, including legal and product teams, play vital roles. We continue to invest time, resources, and energy in building a multi-layered program that is constantly evolving and improving, and we highlight three components of our approach below.
We take a layered approach to protecting people and their information, implementing many safeguards to catch bugs. Given the scale at which Meta operates, we have invested heavily in building and deploying a wide range of automated tools that are intended to help us identify and remediate potential privacy incidents as early and as quickly as possible. Incidents detected through these automated systems are flagged in real time to facilitate rapid response and, in some cases, can be self-remediated.
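To make the idea concrete, here is a deliberately simplified sketch of what one class of automated detection could look like: a rule that flags data reads whose volume exceeds an expected baseline. The log fields, baselines, and severity labels are hypothetical illustrations, not Meta's actual tooling.

```python
# Hypothetical sketch: flag data reads that exceed a dataset's expected
# per-read volume. All names and thresholds are invented for illustration.

ACCESS_LOG = [
    {"actor": "service-a", "dataset": "user_profiles", "rows": 120},
    {"actor": "service-a", "dataset": "user_profiles", "rows": 9_000_000},
]

# Expected upper bound on rows returned by a single legitimate read.
BASELINE_ROWS = {"user_profiles": 100_000}

def detect_anomalies(log):
    """Yield log entries whose read volume exceeds the dataset baseline."""
    for entry in log:
        if entry["rows"] > BASELINE_ROWS.get(entry["dataset"], float("inf")):
            yield {"severity": "high", **entry}

for incident in detect_anomalies(ACCESS_LOG):
    print("flagged for incident response:", incident)
```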
Of course, no matter how capable our automated systems become, the oversight and diligence of our employees always plays a critical role in helping to proactively identify and remediate incidents. Our engineering teams are constantly reviewing our systems to identify and fix incidents before they can impact people.
Since 2011, we have operated a bug bounty program in which external researchers help improve the security and privacy of our products and systems by reporting potential security vulnerabilities to us. The program helps us scale detection efforts and fix issues faster to better protect our community, and the bounties we pay to qualifying participants encourage more high-quality security research.
Over the past 10 years, more than 50,000 researchers have joined this program, and around 1,500 researchers from 107 countries have been awarded bounties. A number of them have since joined Meta’s security and engineering teams and continue this work protecting the Meta community.
While we’ve adopted a number of protections to guard against privacy incidents like unauthorized access to data, if an incident does occur, we believe that transparency is an important way to rebuild trust in our products, services, and processes. Accordingly, beyond fixing and learning from our mistakes, our Incident Management program includes steps to notify people where appropriate, such as posting in our Newsroom or Privacy Matters blog about issues impacting our community, and working with law enforcement or other officials to address incidents we find.
Third parties are external partners who do business with Meta but aren’t owned or operated by Meta. These third parties typically fall into two major categories: those who provide a service for Meta (“third party service providers,” like vendors who provide website design support) and those who build their businesses around our platform (like app or API developers). To mitigate privacy risks posed by third parties that receive access to personal information, we developed a dedicated third party oversight and management program, which is responsible for overseeing third party risks and implementing appropriate privacy safeguards.
We have developed a third party privacy assessment process for service providers to assess and mitigate privacy risk at Meta. Our process requires that these service providers are also bound by contracts containing privacy protections. Their risk profile determines how they are monitored and reassessed and, where appropriate, which enforcement actions we take in response to violations, up to and including termination of the engagement.
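As an illustration of risk-based oversight, the sketch below maps third parties to tiers that determine how often they are reassessed and how closely they are monitored. The tiers, classification criteria, and oversight parameters are invented for this example and do not reflect Meta's actual program.

```python
# Hypothetical risk-tiering sketch: a tier determines oversight cadence.
RISK_TIERS = {
    "high":   {"reassess_every_months": 6,  "monitoring": "continuous"},
    "medium": {"reassess_every_months": 12, "monitoring": "periodic"},
    "low":    {"reassess_every_months": 24, "monitoring": "spot-check"},
}

def classify(vendor: dict) -> str:
    """Assign a risk tier from simple (invented) data-access attributes."""
    if vendor["handles_personal_data"] and vendor["data_volume"] == "large":
        return "high"
    return "medium" if vendor["handles_personal_data"] else "low"

vendor = {"name": "ExampleCo", "handles_personal_data": True, "data_volume": "large"}
tier = classify(vendor)
print(vendor["name"], "->", tier, RISK_TIERS[tier])
```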
We have designed a formal process for enforcing against and offboarding third parties who violate their privacy or security obligations, including standards and technical mechanisms that support better developer practices across our platform.
Our External Data Misuse team consists of more than 100 people dedicated to detecting, investigating and blocking patterns of behavior associated with scraping. Scraping is the automated collection of data from a website or app and can be either authorized or unauthorized. Using automation to access or collect data from Meta’s platforms without our permission is a violation of our terms of service.
To help people understand how we work to guard against unauthorized scraping, we share ongoing updates around actions we’ve taken to protect against data misuse across our platforms and share ways people can best protect their data.
Going forward, we plan to continue publishing more about our approach to scraping, as well as providing ongoing updates on the actions we’re taking to address unauthorized scraping.
We have invested in infrastructure and tools to make it harder for scrapers to collect data from our services and more difficult to capitalize off of it if they do. Examples of these investments include rate limits and data limits. Rate limits cap the number of times anyone can interact with our products in a given amount of time, while data limits keep people from getting more data than they should need to use our products normally.
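For readers unfamiliar with rate limiting, the following is a minimal token-bucket sketch of the general technique; the parameters and interface are illustrative assumptions, not anything Meta has published about its production systems.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch)."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller has exceeded the allowed rate

# e.g. allow a sustained 5 requests/second with bursts of up to 10
limiter = TokenBucket(rate=5, capacity=10)
if not limiter.allow():
    print("rate limit exceeded")
```

A token bucket permits short bursts up to the bucket's capacity while enforcing a sustained average rate, which is why it is a common design for this kind of control.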
We changed how we use internally generated user and content identifiers after we observed that unauthorized scraping often involves guessing or purchasing such identifiers. We also created new, pseudonymized identifiers that help deter unauthorized data scraping by making it harder for scrapers to guess, connect, and repeatedly access data.
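One common way to build pseudonymized, per-context identifiers is to derive them with a keyed hash, so the same internal ID surfaces as different, unlinkable values in different contexts. The sketch below shows that general technique; the key handling and names are assumptions, not a description of Meta's identifier design.

```python
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"  # held only by the service, never exposed

def pseudonymize(internal_id: str, context: str) -> str:
    """Derive a stable, context-specific pseudonym for an internal id."""
    mac = hmac.new(SECRET_KEY, f"{context}:{internal_id}".encode(),
                   hashlib.sha256)
    return mac.hexdigest()[:16]

# The same user gets different, unlinkable surface ids per context, so a
# scraper cannot guess sequential ids or correlate them across surfaces.
print(pseudonymize("user:12345", "feed"))
print(pseudonymize("user:12345", "comments"))
```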
We have blocked billions of suspected unauthorized scraping actions per day across Facebook and Instagram. We have taken a variety of actions against unauthorized scrapers, including disabling accounts and requesting that companies hosting scraped data delete it. The team completed over 1,000 investigations and pursued over 1,200 enforcement actions from May 2020 through November 2022.
The Privacy Review process is a central part of developing new and updated products, services, and practices at Meta. Through this process, we assess how data will be used and protected as part of new or updated products, services, and practices. We work to identify potential privacy risks that involve the collection, use, or sharing of personal information, and we develop mitigations for those risks. The goal of this process is to maximize the benefits of our products and services for our community while also working upfront to identify and reduce any potential risks. It is a collaborative, cross-functional process led by our Privacy Review team, a dedicated group of internal privacy experts, together with colleagues from legal, policy, and other cross-functional teams with backgrounds in product, engineering, regulation, security, and policy. This group is responsible for making Privacy Review decisions and recommendations.
As part of the process, the cross-functional team evaluates the potential privacy risks associated with a project and determines whether any changes need to happen before the project launches to control for those risks. If there’s no agreement among the members of the cross-functional team on what needs to happen, the team escalates to a central leadership review and, if needed for resolution, to the CEO.
The development of our new or modified products, services, or practices through the Privacy Review process is guided by our internal privacy expectations.
We have also invested in technical reviews and tooling to support operating the Privacy Review process at scale.
We put protecting users’ privacy at the heart of how we build and continuously update our products. We do that by building default settings and controls to make it easy for users to set the level of privacy they are most comfortable with. We also do it by putting privacy at the center of how we develop new products.
As part of Meta’s vision for a privacy-focused platform, we believe people’s personal, private communications with other people should be secure. We care deeply about giving people the ability to communicate privately with their friends and loved ones, with confidence that no one else can see into their personal conversations.
We currently provide private communication through WhatsApp, Messenger and Instagram Direct Messages (“DMs”). In WhatsApp, end-to-end encryption ensures only you and the person you’re communicating with can read or listen to what is sent, and nobody in between. And in Messenger and Instagram DMs, you have the option to protect your messages by end-to-end encryption just for you and the people you’re talking to.
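To illustrate the end-to-end property itself, here is a toy exchange in which only the two endpoints can derive the message key, so a server relaying the ciphertext cannot read it. This is a single X25519 key agreement with an AEAD, shown purely for illustration; it is not the Signal-derived protocol Meta's messaging apps actually use.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each device generates its own key pair; private keys never leave the device.
alice_priv, bob_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()

def session_key(my_priv, their_pub) -> bytes:
    """Derive a shared symmetric key from a Diffie-Hellman exchange."""
    shared = my_priv.exchange(their_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"e2ee-demo").derive(shared)

# Alice encrypts; the relay server only ever sees (nonce, ciphertext).
nonce = os.urandom(12)
key = session_key(alice_priv, bob_priv.public_key())
ciphertext = AESGCM(key).encrypt(nonce, b"hi Bob", None)

# Bob derives the same key from his private key and Alice's public key.
plaintext = AESGCM(session_key(bob_priv, alice_priv.public_key())).decrypt(
    nonce, ciphertext, None)
assert plaintext == b"hi Bob"
```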
In the past year, we announced additional features for our optional end-to-end encrypted chats in Messenger, and we began testing secure message storage and default end-to-end encryption there. We also expanded optional end-to-end encryption in Instagram DMs, prioritizing Ukraine at the start of 2022.
We expect future versions of Messenger, Instagram DMs, and WhatsApp to become the main ways people communicate on the Meta network. We’re focused on making all of these apps faster, simpler, and more secure for private conversations, including with end-to-end encryption. We plan to add more ways for people to interact privately with friends and groups. We also offer a number of communication platforms across Meta that bring people together more socially, in a less personal space, such as Facebook and the metaverse via Quest devices or Horizon Worlds, conversations among Facebook Group members through Community Chats, or watching videos together on a livestream. Some of these platforms may allow users to customize individual preferences, or may require additional integrity measures to ensure compliance with our Community Standards.
With more than 2 billion users, we are excited to give people more choices to protect their privacy. People should have the right to choose to personalize their experience, and we have a responsibility to our users to set a clear, thorough approach to privacy by providing the safest private experiences for messaging with friends.
We introduced the option to use disappearing messages on WhatsApp and Messenger. In a one-to-one chat, either person can turn disappearing messages on or off. In groups, admins have this control.
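Mechanically, disappearing messages amount to attaching a time-to-live to each message and sweeping expired ones away. The toy sketch below illustrates that general idea; the field names and cleanup approach are assumptions, not WhatsApp's or Messenger's implementation.

```python
import time

# Either chat participant can set the TTL (field names are hypothetical).
chat = {"ttl_seconds": 60, "messages": []}

def send(chat, text):
    """Store a message stamped with its expiry time."""
    chat["messages"].append(
        {"text": text, "expires_at": time.time() + chat["ttl_seconds"]})

def sweep(chat):
    """Drop messages whose TTL has elapsed (run periodically on-device)."""
    now = time.time()
    chat["messages"] = [m for m in chat["messages"] if m["expires_at"] > now]

send(chat, "this will disappear")
sweep(chat)
```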
Recognizing that young people have unique privacy needs, our product teams pay particular attention to youth privacy. Our goal is to provide services that respect the best interests of the young people who use them, in coordination and consultation with parents, regulators, policymakers, and civil society experts.
We employ a number of methods in our efforts to strike the right balance between giving young people the benefits of Meta products and keeping them safe, ensuring our products have age-appropriate measures in place.
Some of our most recent efforts in youth privacy include considering different defaults by age on Facebook and Instagram, making it harder for potentially suspicious accounts to find young people, and limiting the options advertisers have to target ads to young people. We’re starting to test new privacy-preserving ways to verify the age of our users, partnering with Yoti to allow users in certain regions and use cases to verify their age using facial age estimation technology. We also launched Family Center on Instagram and Quest, which includes our first-ever supervision experiences to help parents and guardians become more involved in their teens’ online experiences through parental supervision tools and expert-backed educational resources. We directly engaged young people, parents, guardians, and experts to collaborate with us in the product development process through a TTC Labs global co-design program, and we published an industry report that presents the key findings from the global initiative. For our youngest users, Messenger Kids offers an age-appropriate messaging and communications experience that gives parents controls to monitor and review aspects of their kids’ activity.
We want young people to enjoy Meta products while making sure we never compromise their privacy and safety. We’ll continue listening to young people, parents, lawmakers and other experts to build products that work for young people and are trusted by parents.
Meta’s new virtual reality headset, Meta Quest Pro, introduces inward-facing sensors to facilitate better communication and comfort, enhanced expression, and deeper immersion. One important feature these sensors enable is eye tracking. As an integral part of our design process for the consent flow for eye tracking in Meta Quest Pro, we consulted privacy advocates to seek feedback on the design and language of the opt-in screen, as well as our plans to provide opt-in privacy controls for these features at both the system and app level. We discuss more about our approach to designing eye tracking responsibly and our work with the Global Research and Policy Community in a white paper we released at the launch of Meta Quest Pro.
In 2022, we published a white paper that discussed the privacy principles we must consider as we work to improve the safety, security, and integrity of our products. To illustrate how we think through these privacy principles in practice, we shared five case studies that detail some of the biggest safety and security challenges we face across our family of apps. For example, the case study on how we reduce hate speech on Facebook and Instagram explains that data minimization is fundamental to how we built automated systems to detect this Community Standards violation: we believe we can accomplish our goal of removing hate speech primarily by using content data rather than personal account data in our enforcement.
Our work to communicate transparently includes providing external education to improve people’s understanding and awareness of our practices and ensuring information is accessible and easy to find.
To provide greater transparency and control, we’ve developed a number of privacy tools that help people understand what they share and how their information is used.
The Meta Privacy & Data Practices and Infrastructure teams are closely partnering to build privacy-aware infrastructure – scalable and innovative infrastructure solutions that will enable engineers to more easily address privacy requirements as they build products. This investment will also allow us to increasingly use automation, rather than relying primarily on people and manual processes, to verify that we are meeting our privacy responsibilities.
We’re proactively reducing the amount of user data that we collect and use by deploying innovative tools and technology across Meta.
We’re committed to instilling good product and engineering practices across Meta to ensure we’re properly removing underutilized features without impacting the user experience. For example, we’ve launched tools that make it safer for engineers to delete products and code without accidentally creating bugs. These tools use automation and smart decision logic to identify what needs to be deleted manually by the engineers. In addition, we’re aligning our data collection and retention with product utility for users. We removed a handful of location-centric features on Facebook due to low usage: Sensitive Profile Fields, Location History, Background Location, Weather Alerts, and Nearby Friends. These steps enable Meta to delete data once it is no longer needed, and to avoid collecting unneeded data.
In addition to removing products and enhancing deletion practices, we continue to invest in privacy-enhancing technologies (PETs): technologies based on advanced cryptographic and statistical techniques that help to minimize the data we collect, process and use. Our Applied Privacy Technology team is dedicated to building PETs for use by teams across Meta, focused on key areas like de-identifying data at collection and enabling teams to implement end-to-end encryption in their products and services.
For example, our Anonymous Credential Service (ACS), a PET that we recently open-sourced, enables a service to authenticate users without learning their identity. We have leveraged anonymous credential technology to create a repository that makes this feature easy for engineers to deploy, helping to reduce fraud while still protecting privacy and minimizing the amount of data collected.
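To give a flavor of how anonymous credentials can work, the sketch below uses textbook RSA blind signatures: the issuer signs a blinded token during an authenticated session, and the client later redeems the unblinded token without the issuer being able to link the two events. This is a simplified illustration of the general idea, not ACS's actual protocol, and textbook RSA like this must never be used in production.

```python
import hashlib
import secrets
from cryptography.hazmat.primitives.asymmetric import rsa

# Issuer's RSA key (real systems use vetted anonymous-credential protocols).
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
nums = key.private_numbers()
N, E, D = nums.public_numbers.n, nums.public_numbers.e, nums.d

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

# 1. Client creates a random token and blinds it; the issuer never sees it.
token = secrets.token_bytes(16)
r = secrets.randbelow(N - 2) + 2          # blinding factor
blinded = (h(token) * pow(r, E, N)) % N

# 2. Issuer signs the blinded value during an authenticated session.
blind_sig = pow(blinded, D, N)

# 3. Client unblinds; (token, sig) cannot be linked back to issuance.
sig = (blind_sig * pow(r, -1, N)) % N

# 4. Later the client redeems the credential anonymously; anyone holding
#    the public key can verify it without learning who the user is.
assert pow(sig, E, N) == h(token)
print("credential verified without linking to the authenticated session")
```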
Building technical solutions that can adapt to evolving privacy expectations first requires we complete significant underlying technical work, including improving how we manage data across its lifecycle. One example of this is the extensive work we have done to expedite and strengthen our user-requested data deletion practices.
Users expect that their data will be properly deleted upon their request. However, the current approach to data deletion across the industry is an onerous one, in which developers must manually write repetitive code that accounts for each update or change to a product and ensures that all the deletion logic still holds up. The complex architecture of modern distributed data storage also leaves room for potential error.
We have built a framework and infrastructure that is designed to alleviate the risk of potential error caused by developers working through an overly manual process. Engineers annotate the data being collected with the intended deletion behavior (e.g., "when a user deletes a post, also delete all the comments") and our framework and infrastructure handle the deletions that must run across multiple data stores with high reliability standards. Giving engineers the ability to use this technical infrastructure to make deletion implementation easier also helps us ensure that we address deletion early on in the product development process.
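The sketch below illustrates the annotation-driven idea described above, using the post-and-comments example: each object type declares edges to the data that must be deleted with it, and a generic routine walks those edges. The store layout, edge schema, and traversal are illustrative assumptions, not Meta's framework.

```python
# In-memory stand-ins for separate data stores, keyed by object id.
POSTS = {"post1": {"author": "alice"}}
COMMENTS = {"c1": {"post": "post1"}, "c2": {"post": "post1"}}
STORES = {"post": POSTS, "comment": COMMENTS}

# Deletion annotation: deleting an object of a given type also deletes the
# children found by each (child_type, lookup) edge.
DELETION_EDGES = {
    "post": [("comment",
              lambda pid: [cid for cid, c in COMMENTS.items()
                           if c["post"] == pid])],
}

def delete(obj_type: str, obj_id: str) -> None:
    """Delete an object, then recursively delete everything its
    annotations say must go with it."""
    for child_type, lookup in DELETION_EDGES.get(obj_type, []):
        for child_id in list(lookup(obj_id)):
            delete(child_type, child_id)
    STORES[obj_type].pop(obj_id, None)

delete("post", "post1")
assert not POSTS and not COMMENTS  # the post and its comments are gone
```

Centralizing the traversal this way keeps deletion logic in one audited place rather than rewritten by every product team, which is the error-reduction property described above.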
We are already processing billions of deletions every day using this infrastructure and are investing in comprehensive coverage across all of Meta’s systems.
AI powers back-end services like personalization, recommendation, and ranking that help enable a seamless, customizable experience for people who use our products and services. At Meta, we believe it’s important to empower people with tools and resources that help them to understand how AI operates and shapes their product experiences. We highlight a few examples below.
AI System Cards
We understand that in order to be transparent about our artificial intelligence systems, we have to be cognizant of each audience type. Users, ML developers, and AI policymakers have different needs related to AI transparency and explainability. One of the ways we are exploring increased explainability is through AI system-level transparency. We took a step in this journey by publishing a prototype AI System Card tool that is designed to provide insight into an AI system’s underlying architecture and to help better explain how the AI operates based on an individual’s history, preferences, settings, and more. The pilot System Card we developed, and continue to test, is for Instagram feed ranking. It breaks down the process of taking as-yet-unseen posts from accounts that a person follows and ranking them based on how likely that person is to be interested in them. We also released an AI System Card about a different kind of system, one that identifies commerce trends via Meta’s large network of creators: Social Commerce Graph.
Now we are researching and testing our ability to create AI System Cards at scale, building a repeatable process for explaining complex AI systems and increasing user understanding of how the technologies they use every day work.
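As a rough illustration of what a System Card might capture, here is a hypothetical, minimal schema with the kinds of fields such a card could surface. The structure and field names are invented for illustration; Meta's actual System Cards are richer and not defined by this shape.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AISystemCard:
    """Hypothetical minimal schema for an AI System Card."""
    system_name: str               # e.g. "Instagram feed ranking"
    purpose: str                   # what the system does for the user
    inputs: List[str]              # signals the system consumes
    how_it_works: str              # plain-language explanation
    user_controls: List[str] = field(default_factory=list)

card = AISystemCard(
    system_name="Instagram feed ranking",
    purpose="Order unseen posts from followed accounts by predicted interest",
    inputs=["follow graph", "past engagement", "post recency"],
    how_it_works="Candidate posts are scored by ML models and ranked.",
    user_controls=["chronological 'Following' feed", "snooze / mute"],
)
print(card.system_name, "-", card.purpose)
```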
Developing New Tools to Make AI More Explainable
Making AI more explainable is a cross-industry, cross-disciplinary dialogue. Companies, regulators, and academics are all testing ways of better communicating how AI works through various forms of guidance and frameworks that can empower everyday people with more AI knowledge. Because AI systems are complex, it is both important and challenging to develop documentation that consistently addresses people’s need for transparency and their desire for simplicity in such explanations. Accordingly, data sheets, model cards, System Cards, and fact sheets have different audiences in mind. Through our Open Loop Program, we have run collaborative projects with developers and regulators to co-develop and test a policy prototype on AI transparency and explainability, focusing on the needs of different audiences and the levels of transparency they may desire throughout their product experiences. We have also worked with TTC Labs to publish a report, “People-centric approaches to algorithmic explainability.”
Through this journey, we continue to focus on meaningful AI transparency initiatives that are useful to multiple audiences. We hope that AI System Cards can be understood by experts and non-experts alike, and can provide a unique, in-depth view into the very complex world of AI systems in a repeatable and scalable way for Meta. Providing a framework that is technically accurate, captures the nuance of how AI systems operate at Meta’s scale, and remains easily digestible for everyday people using our technologies is a delicate balance, especially as we continue to push the state of the art in the field.
“Getting privacy right is a continual, collective investment across our company, and is the responsibility of everyone at Meta to advance our mission.” - Michel Protti, Chief Privacy Officer for Product
Protecting users’ data and privacy is essential to our business and our vision for the future. To do so, we’re continually refining and improving our privacy program and our products as we respond to evolving expectations and technological developments – working with policymakers and data protection experts to find solutions to unprecedented challenges – and sharing our progress as we do.