Look Ma, no markers: Holistic performance capture without the hassle
ACM Transactions on Graphics
SIGGRAPH Asia 2024
Our approach combines machine-learning models for dense-landmark and parameter prediction with model fitting to provide a robust, accurate and adaptable system. Our method supports registration of the face, body and hands, either in isolation or together in a single take.
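To make the combination concrete, here is a minimal, hypothetical sketch of the model-fitting stage: dense 2D landmarks predicted per view act as confidence-weighted constraints when optimising the parametric model. The model interface, camera representation and prior weight are assumptions for illustration, not the paper's actual implementation.

# Minimal sketch of landmark-driven model fitting, assuming a generic
# parametric model `model(params) -> (N, 3) landmark positions` and
# per-view dense 2D landmark predictions with confidences. All names
# here are illustrative, not the paper's actual API.
import numpy as np
from scipy.optimize import least_squares

def project_points(points_3d, camera):
    # Simple pinhole projection; camera = (R, t, f) with R (3x3), t (3,), focal length f.
    R, t, f = camera
    cam_pts = points_3d @ R.T + t                 # world -> camera space
    return f * cam_pts[:, :2] / cam_pts[:, 2:3]   # perspective divide

def fit_parameters(model, cameras, landmarks_2d, confidences, init_params):
    # Optimise model parameters so projected 3D landmarks match the
    # predicted 2D landmarks in every view, plus a weak prior.
    def residuals(params):
        pts_3d = model(params)
        res = []
        for cam, lmk, conf in zip(cameras, landmarks_2d, confidences):
            err = project_points(pts_3d, cam) - lmk      # (N, 2) reprojection error
            res.append((conf[:, None] * err).ravel())    # confidence-weighted
        res.append(0.1 * params)                         # simple regulariser on the parameters
        return np.concatenate(res)
    return least_squares(residuals, init_params).x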
Our parametric model captures body and hand pose, body and face shape, and facial expression.
We can also track tongue articulation and eye gaze.
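As a rough illustration of what a single captured frame holds, the quantities listed above can be grouped into one parameter set; the field names, and the choice to store each group as a plain array, are placeholders rather than the paper's actual parameterisation.

# Illustrative container for the per-frame capture state described above;
# field names and dimensionalities are placeholders, not the paper's
# actual parameterisation.
from dataclasses import dataclass
import numpy as np

@dataclass
class CaptureFrame:
    body_pose: np.ndarray    # per-joint body rotations
    hand_pose: np.ndarray    # per-joint rotations for both hands
    shape: np.ndarray        # identity coefficients for body and face shape
    expression: np.ndarray   # facial-expression coefficients
    gaze: np.ndarray         # eye-gaze direction (e.g. yaw/pitch per eye)
    tongue: np.ndarray       # tongue-articulation parameters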
Our method achieves state-of-the-art results on a number of 3D reconstruction benchmarks.
Motion-capture shoots typically require specialist hardware, skilled experts and a lot of time to get right. This can make them expensive and difficult to fit into a tight production schedule. Our method removes this hassle by providing a marker-less, calibration-free solution that works with off-the-shelf hardware, allowing quick and easy capture of high-quality motion data in a variety of environments.
Using just two uncalibrated mobile-phone cameras, we achieve high-quality results in world space.
Our method even works with a single moving camera in an unconstrained environment with arbitrary clothing.
Our method is trained exclusively on synthetic data, generated using a conventional computer graphics pipeline. The three datasets used in the paper are available to download here.
SynthBody can be used for tasks such as skeletal tracking and body pose prediction.
SynthFace can be used for tasks such as facial landmark and head pose prediction or face parsing.
SynthHand can be used for tasks such as hand pose prediction or landmark regression.
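As a loose sketch of how such synthetic data might be consumed, the snippet below trains a simple landmark regressor; the dataset interface, annotation format and landmark count are assumptions, since the released datasets' exact layout is not described on this page.

# Hypothetical training loop for landmark regression on synthetic images;
# the dataset (yielding image tensors and (N, 2) landmark targets) is an
# assumption, not the released datasets' actual layout.
import torch
import torch.nn as nn
import torchvision
from torch.utils.data import DataLoader

def train_landmark_regressor(dataset, num_landmarks=68, epochs=10):
    # ResNet backbone regressing (x, y) coordinates for every landmark.
    model = torchvision.models.resnet18(num_classes=2 * num_landmarks)
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    for _ in range(epochs):
        for images, landmarks in loader:          # images (B,3,H,W), landmarks (B,N,2)
            pred = model(images).view(-1, num_landmarks, 2)
            loss = nn.functional.l1_loss(pred, landmarks)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
    return model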
@article{hewitt2024look,
  title     = {Look Ma, no markers: holistic performance capture without the hassle},
  author    = {Hewitt, Charlie and Saleh, Fatemeh and Aliakbarian, Sadegh and Petikam, Lohit and Rezaeifar, Shideh and Florentin, Louis and Hosenie, Zafiirah and Cashman, Thomas J and Valentin, Julien and Cosker, Darren and Baltru\v{s}aitis, Tadas},
  journal   = {ACM Transactions on Graphics (TOG)},
  volume    = {43},
  number    = {6},
  articleno = {235},
  numpages  = {12},
  year      = {2024},
  publisher = {ACM New York, NY, USA}
}