VoluMe

Authentic 3D Video Calls from Live Gaussian Splat Prediction

International Conference on Computer Vision 2025

Martin de La Gorce, Charlie Hewitt, Tibor Takács, Robert Gerdisch, Zafiirah Hosenie, Givi Meishvili, Marek Kowalski, Thomas J. Cashman, Antonio Criminisi


Abstract

Virtual 3D meetings offer the potential to enhance copresence, increase engagement, and thus improve the effectiveness of remote meetings compared to standard 2D video calls. However, representing people in 3D meetings remains a challenge; existing solutions achieve high quality by using complex hardware, by fixing appearance via enrolment, or by inverting a pre-trained generative model. These approaches lead to constraints that are unwelcome and ill-fitting for videoconferencing applications.

We present the first method to predict 3D Gaussian reconstructions in real time from a single 2D webcam feed, where the 3D representation is not only live and realistic, but also authentic to the input video. By conditioning the 3D representation on each video frame independently, our reconstruction faithfully recreates the input video from the captured viewpoint (a property we call authenticity), while generalizing realistically to novel viewpoints. Additionally, we introduce a stability loss to obtain reconstructions that are temporally stable on video sequences.

We show that our method achieves state-of-the-art visual quality and stability compared to existing methods, and demonstrate our approach in live one-to-one 3D meetings using only a standard 2D camera and display. This shows that our approach can allow anyone to communicate volumetrically, via a method for 3D videoconferencing that is not only highly accessible, but also realistic and authentic.

3D Video Calls

To enable videoconferencing where users have the rich, authentic and unconstrained representation that is familiar from 2D video, without the need for specialized hardware, we need a 3D representation that is:

  • Authentic: generates images that match the input video foreground when rendered from the original camera's viewpoint, capturing every detail of the present moment (e.g. clothes, hair and glasses).
  • Realistic: generates plausible and realistic reconstructions when rendered in new views, while supporting the full diversity of human communication (e.g. including diverse hair, headwear and accessories).
  • Live: runs in real time on consumer devices.
  • Stable: generates predictions that are stable with respect to time and as the viewpoint changes, to avoid flickering.

We develop a Gaussian Splatting-based method that achieves these goals, and demonstrate it in a live 3D video call application. On the user's device we perform face detection and run a lightweight UNet to predict Gaussian splats for each pixel in the head region. The predicted splats are then rendered in real time to the user's display, and can be viewed from any angle. The application runs at 28 FPS on a standard laptop with an NVIDIA 4090 Mobile GPU.
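To make the per-pixel prediction concrete, here is a minimal PyTorch sketch of the idea, not the released VoluMe implementation: a toy convolutional network stands in for the lightweight UNet, and the number of Gaussians per pixel and the parameter layout are illustrative assumptions.

    # Hypothetical sketch of per-pixel Gaussian prediction (PyTorch);
    # a tiny conv net stands in for the lightweight UNet, and the
    # parameter layout below is an illustrative assumption.
    import torch
    import torch.nn as nn

    K = 2                       # Gaussians predicted per pixel (assumed)
    PARAMS = 3 + 3 + 4 + 1 + 3  # mean, log-scale, quaternion, opacity, RGB

    class TinySplatNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, K * PARAMS, 1),
            )

        def forward(self, crop):
            b, _, h, w = crop.shape
            out = self.net(crop).view(b, K, PARAMS, h, w)
            return {
                "means":      out[:, :, 0:3],    # 3D position per pixel
                "log_scales": out[:, :, 3:6],    # anisotropic scale
                "quats":      out[:, :, 6:10],   # rotation (normalized later)
                "opacity":    torch.sigmoid(out[:, :, 10:11]),
                "colour":     torch.sigmoid(out[:, :, 11:14]),
            }

    model = TinySplatNet().eval()
    crop = torch.rand(1, 3, 256, 256)   # normalized head ROI from the webcam
    with torch.no_grad():
        splats = model(crop)            # feed to any 3DGS rasterizer

Rendering the predicted splats can then be handled by any off-the-shelf differentiable Gaussian rasterizer.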

Inference Pipeline

Results

Training Methodology

Our UNet architecture builds on Splatter Image, with a focus on improving quality and reducing execution time to enable real-time 3D video calls. We achieve this by:

  • Predicting multiple Gaussians per pixel to enhance the fidelity of the 3D representation
  • Using homography-based ROI extraction to normalize inputs, which lets us minimize network size (see the sketch after this list)
  • Significantly reducing the size of the network, and incorporating optimizable layers and direct color sampling to improve quality
  • Using scale correction and a stability loss during training to reduce jitter and account for depth ambiguity in monocular inputs
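As a rough illustration of the homography-based ROI extraction above, the sketch below warps four detected face landmarks to fixed canonical positions with OpenCV; the landmark choice and canonical layout are assumptions for illustration only, not the released code.

    # Hypothetical illustration of homography-based ROI extraction:
    # face landmarks are mapped to fixed canonical positions so the
    # network always sees the head at a consistent scale and orientation.
    import cv2
    import numpy as np

    ROI_SIZE = 256

    def extract_roi(frame: np.ndarray, landmarks: np.ndarray) -> np.ndarray:
        """frame: HxWx3 image; landmarks: 4x2 pixel coordinates of,
        e.g., the eye and mouth corners from any face detector."""
        canonical = np.float32([
            [0.30, 0.40], [0.70, 0.40],   # left / right eye (assumed layout)
            [0.35, 0.75], [0.65, 0.75],   # mouth corners (assumed layout)
        ]) * ROI_SIZE
        H, _ = cv2.findHomography(landmarks.astype(np.float32), canonical)
        return cv2.warpPerspective(frame, H, (ROI_SIZE, ROI_SIZE))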

Training Pipeline

The VoluMe model is trained exclusively on synthetic data generated using our synthetic human data generation pipeline. This lets us use diverse, multi-view data with perfect camera parameters that emulates a huge variety of in-the-wild scenarios. Such data is extremely difficult to obtain in the real world, and it enables our approach to achieve high quality and stability while generalizing well to a variety of people and environments.
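For intuition, here is a minimal sketch of how multi-view synthetic data could supervise such a model. The `render` callable stands in for any differentiable Gaussian rasterizer, the model is assumed to return a dict of parameter tensors as in the earlier sketch, and the stability term, which penalizes jitter between predictions on consecutive frames, is one plausible reading of the loss rather than the paper's exact formulation.

    # Illustrative training step, not the paper's exact losses.
    import torch
    import torch.nn.functional as F

    def training_step(model, frame_t, frame_t1, gt_view, camera, render,
                      w_stab=0.1):
        splats_t  = model(frame_t)    # Gaussians predicted from frame t
        splats_t1 = model(frame_t1)   # ... and from the next frame

        # Photometric loss against a held-out synthetic view whose
        # camera parameters are known exactly.
        photo = F.l1_loss(render(splats_t, camera), gt_view)

        # Stability: consecutive predictions should agree (assumed form).
        stab = sum(F.mse_loss(splats_t[k], splats_t1[k]) for k in splats_t)

        return photo + w_stab * stab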

BibTeX

@misc{delagorce2025volume,
    title={{VoluMe} -- Authentic 3D Video Calls from Live Gaussian Splat Prediction},
    author={Martin de La Gorce and Charlie Hewitt and Tibor Takacs and Robert Gerdisch and Zafiirah Hosenie and Givi Meishvili and Marek Kowalski and Thomas J. Cashman and Antonio Criminisi},
    year={2025},
    eprint={2507.21311},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2507.21311},
}