Ava-256 dataset
The Ava-256 dataset and its associated development toolkit enable, for the first time, research into end-to-end photorealistic telepresence at scale.
256
dome sessions
256
head-mounted display sessions
>200
million images
Abstract
Meta Reality Labs Research introduces Ava-256, a dataset of 256 subjects captured both in a high-resolution dome and with a commercially available head-mounted display (HMD). In addition to images, we provide keypoints, segmentations, tracked meshes, and other assets necessary for creating and driving high-quality photorealistic avatars.
We supply code and pre-trained models for avatar creation, along with out-of-the-box avatar driving from an HMD. We also offer a pipeline, assets, and benchmarks to evaluate both tasks. Ava-256 provides a toolkit to bootstrap research into end-to-end photorealistic telepresence at scale.
Models and code included in release
Universal decoder
We provide a multi-identity model of personalized relightable head avatars, based on a mixture of volumetric primitives with an expression latent space that is consistent across identities.
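To make the decoder's interface concrete, here is a minimal sketch of what a mixture-of-volumetric-primitives decoder consumes and produces. All names, dimensions, and the primitive count are illustrative assumptions, not the actual Ava-256 API: the real decoder is a deep network, whereas this toy uses a fixed random projection purely to show the shapes involved (an identity code plus a shared expression code in, per-primitive pose parameters and RGBA voxel payloads out).

```python
import numpy as np

# Assumed, illustrative dimensions (not from the Ava-256 release).
N_PRIMS = 128        # number of volumetric primitives
VOXEL_RES = 8        # per-primitive voxel grid resolution
ID_DIM, EXPR_DIM = 64, 16

rng = np.random.default_rng(0)

def decode(identity_code, expression_code):
    """Toy stand-in for a universal decoder: identity + expression codes
    in, per-primitive geometry and volumetric payload out."""
    assert identity_code.shape == (ID_DIM,)
    assert expression_code.shape == (EXPR_DIM,)
    z = np.concatenate([identity_code, expression_code])
    # A real decoder is a learned network; a fixed random projection
    # here only illustrates the output structure.
    W = rng.standard_normal((N_PRIMS * 9, z.size)) * 0.01
    geo = (W @ z).reshape(N_PRIMS, 9)  # 3 position + 3 rotation + 3 scale
    # RGBA voxel payload for each primitive (zeros in this sketch).
    payload = np.zeros((N_PRIMS, 4, VOXEL_RES, VOXEL_RES, VOXEL_RES))
    return geo, payload

geo, payload = decode(rng.standard_normal(ID_DIM), rng.standard_normal(EXPR_DIM))
print(geo.shape, payload.shape)  # (128, 9) (128, 4, 8, 8, 8)
```

Because the expression code lives in a latent space shared across identities, swapping only the identity code while holding the expression code fixed would retarget the same expression to a different avatar.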
Universal encoder
We provide a multi-identity model that drives the universal decoder from head-mounted camera (HMC) images.
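The encoder's role can be sketched in a few lines. Again, the names, camera count, and image resolution below are illustrative assumptions rather than the released API: a universal encoder maps a set of HMC views to a code in the decoder's shared expression space, so the resulting code can drive any identity's avatar.

```python
import numpy as np

# Assumed, illustrative dimensions (not from the Ava-256 release).
EXPR_DIM = 16
N_CAMS, H, W = 4, 192, 192   # e.g. four monochrome HMC views

def encode(hmc_images):
    """Toy stand-in for a universal encoder: HMC views in,
    expression code in the decoder's shared latent space out."""
    assert hmc_images.shape == (N_CAMS, H, W)
    # A real encoder is a learned network; mean-pooling each view and
    # applying a fixed projection only illustrates the data flow.
    feats = hmc_images.reshape(N_CAMS, -1).mean(axis=1)  # one scalar per view
    proj = np.ones((EXPR_DIM, N_CAMS)) / N_CAMS          # fixed projection
    return proj @ feats

code = encode(np.zeros((N_CAMS, H, W)))
print(code.shape)  # (16,)
```

In the full pipeline, this code would be passed to the universal decoder together with the target subject's identity embedding to render the driven avatar.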