Our work on privacy is underpinned by our internal governance structures that embed privacy and data-use standards across the company’s operations.
As we continue to integrate privacy across the company, we've embedded privacy teams within product groups. These teams provide privacy expertise directly within each product group and enable front-line ownership of privacy responsibilities across our products.
Led by Michel Protti, Chief Privacy Officer for Product, the Privacy and Data Practices team is made up of dozens of teams, both technical and non-technical, focused on setting and maintaining privacy strategies and enabling the rest of the company to adhere to them.
The Privacy and Data Practices team is at the center of our company's efforts to maintain a comprehensive privacy program. Its mission is to instill responsible data practices across Meta by ensuring people understand how Meta uses their data and can trust that our products use it responsibly.
The Privacy and Data Practices Team is just one organization among many across the company that is responsible for privacy. There are thousands of people in different organizations and roles across Meta, including public policy and legal, who are working to embed privacy into all facets of our company operations. Getting privacy right is a deeply cross-functional effort, and we believe everyone at Meta is responsible for that effort.
Led by Erin Egan, Vice President and Chief Privacy Officer, Policy, the Privacy and Data Policy team leads our engagement in the global public discussion around privacy, including new regulatory frameworks, and ensures that feedback from governments and experts around the world is considered in our product design and data use practices.
To do so, the Privacy and Data Policy team engages these groups through a variety of consultation mechanisms.
We also host a regular conversation series and an annual privacy expert flyout with leading privacy experts from around the world to discuss a range of pressing privacy policy topics.
The Privacy Legal team is embedded in the design and ongoing execution of our program and advises on legal requirements during the course of our privacy review process.
The Privacy Committee is an independent committee of our Board of Directors that meets at least quarterly to ensure we're meeting our privacy commitments. The Committee is composed of independent directors with a wealth of experience in similar oversight roles. Each quarter, they receive briefings on, among other things, the global policy landscape, the state of our privacy program, and the status of the independent third-party assessment of our privacy program.
Internal Audit brings independent assurance on the overall health of our privacy program and the supporting control framework.
Part of ensuring that everyone understands their role in protecting privacy at Meta is driving continuous privacy learning and education that spans training and internal privacy awareness campaigns.
A core component of our privacy education approach is our privacy training, which covers the foundational elements of privacy and is designed to help everyone at Meta recognize and consider privacy risks. Delivered as eLearning, both our annual privacy training and our courses for new hires and new contingent workers provide scenario-based examples of privacy considerations aligned with our business operations, and each includes an assessment to test understanding of the relevant privacy concepts. These trainings are updated and redeployed annually so that they cover current information alongside core concepts.
Alongside our foundational required privacy training, we also maintain a catalog of all known privacy training deployed across Meta that spans topics relevant to people in specific roles.
Another way we drive privacy education is through regular communication to employees. In addition to our privacy training courses, we deliver ongoing privacy content through internal communication channels, updates from privacy leadership, internal Q&A sessions, and a dedicated Privacy Week.
During our dedicated Privacy Week, we drive cross-company focus on privacy, feature internal and external speakers, and highlight key privacy concepts and priorities through engaging content and events.
When we participate in external privacy events like Data Privacy Day or our annual privacy expert flyout, we drive internal awareness and engagement through internal channels to ensure everyone has an opportunity to participate and learn about privacy.
We have a dedicated team whose job is to help ensure we comply with global privacy and data regulations. Our regulatory readiness process is structured according to key privacy topics or “privacy areas” (e.g., youth, sensitive data, consent, etc.) to help ensure that we’re addressing privacy requirements holistically.
We’ve created our Privacy Risk Management program to identify and assess privacy risks related to how we collect, use, share, and store user data. We leverage this process to identify risk themes, enhance our privacy program, and prepare for future compliance initiatives.
We’ve designed safeguards, including processes and technical controls, to address privacy risks. As a part of this effort, we conduct internal evaluations on both the design and effectiveness of the safeguards for mitigating privacy risk.
We’ve established a centralized Issue Management function to facilitate self-identification and remediation of privacy issues. This process spans the privacy issue management lifecycle, from intake and triage through remediation planning to closure with evidence.
We’ve established a privacy red team whose role is to proactively test our processes and technology to identify potential privacy risks. The Privacy Red Team assumes the role of external or internal parties attempting to circumvent our privacy controls and safeguards, which helps us proactively identify areas where we can improve our control environment.
No matter how robust our mitigations and safeguards, we also need a process to (1) identify when an event potentially undermines the confidentiality, integrity, or availability of data for which Meta is responsible, (2) investigate those situations, and (3) take any needed steps to address gaps we identify.
Our Incident Management program operates globally to oversee the processes by which we identify, assess, mitigate, and remediate privacy incidents. Although the Privacy and Data Practices team leads the incident management process, privacy incidents are everyone’s responsibility at Meta. Teams from across the company, including legal and product teams, play vital roles. We continue to invest time, resources, and energy in building a multi-layered program that is constantly evolving and improving and we highlight three components of our approach below.
We take a layered approach to protecting people and their information—which includes implementing safeguards designed to catch bugs proactively, before they can become a problem. Given the scale at which we operate, we have invested heavily in building and deploying a wide range of automated tools that are intended to help us identify and remediate potential privacy incidents as early and quickly as possible. These automated systems are designed to detect incidents in real time to facilitate rapid response.
Of course, no matter how capable our automated systems become, the oversight and diligence of our employees always plays a critical role in helping to proactively identify and remediate incidents. Our engineering teams regularly review our systems to identify and fix incidents before they can impact people.
Since 2011, we have operated a bug bounty program in which external researchers help improve the security and privacy of our products and systems by reporting potential security vulnerabilities to us. The program helps us scale detection efforts and fix issues faster to better protect our community, and the bounties we pay to qualifying participants encourage more high-quality security research.
Over the past 10 years, more than 50,000 researchers have joined this program, and around 1,500 researchers from 107 countries have been awarded bounties.
While we’ve adopted a number of protections to guard against privacy incidents like unauthorized access to data, if an incident does occur, we believe that transparency is an important way to rebuild trust in our products, services, and processes. Accordingly, beyond fixing and learning from our mistakes, our Incident Management program includes steps to notify people where appropriate, such as a post in our Newsroom or our Privacy Matters blog about issues impacting our community, or working with law enforcement or other officials to address incidents we find.
Third parties are external partners who do business with Meta but aren’t owned or operated by Meta. These third parties typically fall into two major categories: those who provide a service for Meta (like vendors who provide website design support) and those who build their businesses around our platform (like app or API developers). To mitigate privacy risks posed by third parties that receive access to personal information, we developed a dedicated third party oversight and management program, which is responsible for overseeing third party risks and implementing appropriate privacy safeguards.
We’ve also created a third party privacy assessment process for service providers to assess and mitigate privacy risk. Our process requires that these service providers are also bound by contracts containing privacy protections. Their risk profile determines how they are monitored, reassessed, and, where appropriate, which enforcement actions to take as a result of violations, including termination of the engagement.
We’ve designed a formal process for taking enforcement action against, and offboarding, third parties who violate their privacy or security obligations. This process is supported by standards and technical mechanisms that promote better developer practices across our platform.
Our External Data Misuse team is dedicated to detecting, investigating and blocking patterns of behavior associated with scraping. Scraping is the automated collection of data from a website or app and can be either authorized or unauthorized. Using automation to access or collect data from Meta’s platforms without our permission is a violation of our terms of service.
We continue to invest in infrastructure and tools that make it harder for scrapers to collect data from our services and harder to profit from any data they do collect. Examples of these investments include rate limits and data limits. Rate limits cap the number of times anyone can interact with our products in a given amount of time, while data limits keep people from getting more data than they need to use our products normally.
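A rate limit of the kind described can be sketched as a sliding-window counter. This is a generic, hypothetical illustration (the class name and parameters are ours, not Meta's production systems): each caller may perform at most `max_requests` actions within any rolling window.

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Caps how many actions a caller may take within a rolling time window."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.events: dict[str, deque] = defaultdict(deque)

    def allow(self, caller_id: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.events[caller_id]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # rate limit exceeded; deny the request
        q.append(now)
        return True
```

A data limit would sit alongside this, counting records returned rather than requests made, so that even well-paced callers cannot accumulate more data than normal product use requires.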
Because unauthorized scraping often involves guessing or purchasing user and content identifiers, we rely on internally generated identifiers. We also use new pseudonymized identifiers that help deter unauthorized data scraping by making it harder for scrapers to guess, connect, and repeatedly access data.
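One standard way to build identifiers with those properties is a keyed hash. The sketch below is a hypothetical illustration of the general technique, not Meta's actual scheme; the secret key, surface names, and helper function are all invented for the example. The derived identifier is stable for legitimate use, cannot be enumerated by counting upward, and differs across surfaces so scraped datasets cannot be joined.

```python
import hashlib
import hmac

# Hypothetical server-side secret; in practice this would be managed
# and rotated by a key-management service.
SECRET_KEY = b"example-secret-rotate-regularly"

def pseudonymize(user_id: int, surface: str) -> str:
    """Derive a stable, non-guessable identifier for one product surface.

    Unlike a raw sequential ID, the output cannot be enumerated by
    incrementing, and the same user maps to different identifiers on
    different surfaces, so scraped datasets cannot be cross-linked.
    """
    message = f"{surface}:{user_id}".encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()[:16]
```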
We’ve blocked billions of suspected unauthorized scraping actions per day across Facebook and Instagram, and we’ve taken a variety of actions against unauthorized scrapers including disabling accounts and requesting that companies hosting scraped data delete it.
The Privacy Review process is a central part of developing new and updated products, services, and practices at Meta. Through this process, we assess how data will be used and protected as a part of new or updated products, services and practices. We review an average of 1,200 products, features and data practices per month across the company before they ship to assess and mitigate privacy risks.
As a part of the process, a cross-functional team of privacy experts evaluates potential privacy risks associated with a project and determines if there are any changes that need to happen before project launch to mitigate those risks. If there is a disagreement on the assessment of applicable risks or the proposed product mitigations, the process requires teams to escalate to product and policy leadership and ultimately the CEO for further evaluation and decision.
The development of our new or modified products, services, or practices through the Privacy Review process is guided by our internal privacy expectations.
We’ve also invested in verification reviews and a centralized platform to support operating the Privacy Review process at scale.
—Komal Lahiri, VP Privacy Review
We put protecting users’ privacy at the heart of how we build and continuously update our products. We do that by building default settings and controls to make it easy for users to set the level of privacy they are most comfortable with. We also do it by putting privacy at the center of how we develop new products.
Since 2016, Messenger has had the option for people to turn on end-to-end encryption. In 2023, we began to roll out default end-to-end encryption for all personal chats and calls on Messenger and Facebook.
This has taken years to deliver because we’ve taken the time to get it right. Our engineers, cryptographers, designers, policy experts, and product managers have worked tirelessly to rebuild Messenger features from the ground up. Enabling end-to-end encryption on Messenger meant fundamentally rebuilding many aspects of the application protocols to improve privacy, security, and safety while simultaneously maintaining the features that have made Messenger so popular. Our approach was to leverage prior learnings from both WhatsApp and Messenger’s Secret Conversations, and then iterate on our most challenging problems like multi-device capability, feature support, message history, and web support. Along the way, we introduced new privacy, safety, and control features like app lock and delivery controls that let people choose who can message them, and improved existing safety features like report, block, and message requests.
At its core, end-to-end encryption is about protecting people’s communications, so they can feel safe expressing themselves with their friends and loved ones. We worked closely with outside experts, academics, advocates and governments to identify risks and build mitigations to ensure that privacy and safety go hand-in-hand. We commissioned an independent human rights impact assessment and published an extensive white paper on Meta’s approach to safer private messaging on Messenger and Instagram DMs.
Ray-Ban Meta smart glasses let you snap a photo or video clip from your unique point of view, listen to music or take a call, and use smart features as they roll out—all without having to pull out your phone. Ray-Ban Meta smart glasses have been redesigned with a higher quality camera, improved audio and microphone systems, and new features, such as livestreaming and built-in Meta AI, so you don’t have to choose between capturing the moment and experiencing it.
Ray-Ban Meta smart glasses were built with privacy at their core and serve as a clear proof point of our commitment to responsible innovation and privacy by design. We’ve incorporated stakeholder feedback, gathered from the moment we launched Ray-Ban Stories, in meaningful and tangible ways.
We launched new generative AI experiences, including AI stickers, image editing with AI, our AI assistant known as Meta AI, and AI Studio, where people can create their own custom AI. We’ve included important privacy measures as we built these promising new generative AI features and have developed a Generative AI Privacy Guide and other transparency resources for people to understand how we built our AI models, how our AI features work, and what choices and data privacy rights they have.
Last year, we updated our “Why am I seeing this?” tool, which aims to help people understand why they’re seeing the ads they do on Facebook and Instagram feeds. One key update summarized information into topics about how activity both on and off our technologies (such as liking a post on a friend’s Facebook page or visiting a sports website) may inform the machine learning models we use to shape and deliver ads. We also introduced new examples and illustrations explaining how our machine learning models connect various topics to show relevant ads. Additionally, we introduced more ways for users to find our ads controls, making Ads Preferences accessible from additional pages in the “Why am I seeing this ad?” tool.
We developed the Meta Content Library (MCL) and API, new research tools that provide qualified researchers access to additional publicly-available content across Facebook and Instagram, in privacy-protective ways.
These tools provide researchers access to near real-time public data, including content from Pages, Groups, and Events on Facebook, as well as from creator and business accounts on Instagram. Details about the content, such as the number of reactions, shares, comments, and, for the first time, post view counts, are also available. Researchers can search, explore, and filter that content through either a graphical user interface (UI) or a programmatic API.
We partnered with the Inter-University Consortium for Political and Social Research (ICPSR) at the University of Michigan on its Social Media Archive (SOMAR) initiative to share public data from our platforms. Access to restricted data is tightly controlled, and authorized users must agree to strict data use and confidentiality terms to gain access.
Our work to communicate transparently includes providing external education to improve people’s understanding and awareness of our practices and ensuring information is accessible and easy to find.
To provide greater transparency and control, we’ve developed a number of privacy tools that help people understand what they share and how their information is used.
We continue to build privacy-aware infrastructure—scalable and innovative infrastructure solutions that enable engineers to more easily address privacy requirements as they build products. Privacy-aware infrastructure allows us to increasingly use automation, rather than rely primarily on people and manual processes, to verify that we are meeting our privacy responsibilities.
We’re proactively reducing the amount of user data that we collect and use by deploying innovative tools and technology across Meta. We continue to invest in privacy-enhancing technologies (PETs), which are based on advanced cryptographic and statistical techniques that help minimize the data we collect, process, and use, and we have been working to open-source this work where it is useful for the broader ecosystem, including PETs for AI through PyTorch. Additionally, our investments in PETs helped enable a new cryptographic security feature on WhatsApp that uses key transparency to help verify that your connection and conversation are secure. This feature reduces the possibility of a third party impersonating the person or business a user wants to connect and share encrypted messages with. It checks the validity of public keys in the conversation against a server-side directory that stores public keys with user information, and it provides a publicly available, privacy-preserving audit record that anyone can use to verify that data in the directory has not been deleted or modified.
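The core client-side check in a key-transparency scheme can be sketched very simply. The code below is a deliberately simplified, hypothetical illustration (the directory, key bytes, and function names are invented; it omits the append-only audit log that real key transparency adds so the server itself cannot equivocate): the client compares the fingerprint of the key used in the conversation against the directory's entry for that contact.

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    """A short, comparable digest of a public key."""
    return hashlib.sha256(public_key).hexdigest()

# Hypothetical server-side directory mapping users to their public keys.
DIRECTORY = {"alice": b"alice-public-key-bytes"}

def verify_contact_key(user: str, key_seen_in_chat: bytes) -> bool:
    """Check that the key used in the conversation matches the directory.

    A real key-transparency deployment additionally proves that the
    directory entry is included in a publicly auditable, append-only
    log, so the server cannot show different keys to different clients.
    """
    expected = DIRECTORY.get(user)
    if expected is None:
        return False  # unknown contact: cannot verify, treat as unsafe
    return fingerprint(expected) == fingerprint(key_seen_in_chat)
```

If a third party substituted its own key to intercept messages, the fingerprint comparison would fail and the client could warn the user before sending anything.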
Similarly, we developed a framework for code and asset removal which guides engineers through deprecating a product safely and efficiently. Deprecating products is a complex feat involving internal and external dependencies, including dependencies on other Meta products that may not themselves be in scope for removal. To address this, our Systematic Code and Asset Removal Framework (SCARF) includes a workflow management tool that saves engineers time by identifying dependencies as well as the correct order of tasks for cleaning up a product. In addition, SCARF includes subsystems for safely removing dead code as well as unused data types.
SCARF powers thousands of human-led deprecation projects alongside the millions of code and data assets it has cleaned automatically. It is additionally useful for our privacy teams, who use the tool to monitor progress of ongoing product deprecations and ensure that they are completed in a timely manner.
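The dependency-ordering problem SCARF solves for engineers is, at its heart, a topological sort: assets that consume a piece of data must be removed before the data itself. The sketch below is a hypothetical illustration using an invented three-asset graph, not SCARF's actual implementation; it uses Python's standard-library `graphlib` to produce a safe removal order.

```python
from graphlib import TopologicalSorter

# Hypothetical asset graph for a product being deprecated: each asset
# maps to the assets that must be removed before it can be removed
# (i.e., the things that still depend on it). Assets shared with
# products outside the deprecation scope would be left out entirely.
must_remove_first = {
    "user_table": {"ranking_job", "api_endpoint"},   # data removed last
    "ranking_job": {"api_endpoint"},                 # job feeds the endpoint
    "api_endpoint": set(),                           # leaf consumer, removed first
}

# graphlib treats each mapped set as predecessors, so static_order()
# yields leaf consumers first and the underlying data last.
removal_order = list(TopologicalSorter(must_remove_first).static_order())
```

Running this yields the endpoint first, then the job, then the table, which is exactly the "correct order of tasks" a workflow tool can hand to engineers one step at a time.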
“Getting privacy right is a continual, collective investment across our company, and is the responsibility of everyone at Meta to advance our mission.” —Michel Protti, Chief Privacy Officer for Product
Protecting users’ data and privacy is essential to our business and our vision for the future. To do so, we’re continually refining and improving our privacy program and our products, as we respond to evolving expectations and technological developments—working with policy makers and data protection experts to find solutions to unprecedented challenges—and sharing our progress as we do.