3D Facial Performance Capture From A Single RGBD Camera
Realistic facial animation remains one of the most challenging problems in computer graphics, and facial performance capture of real people has been a key component of it. Current state-of-the-art technologies for capturing facial performances are far too expensive and cumbersome for general users, which limits the potential applications of performance capture. The primary contribution of this dissertation is two systems that allow ordinary users to capture facial performance with a single low-cost device. Our first system focuses on large-scale facial performance reconstruction from a single RGBD image. Our goal is to accurately reconstruct the global transformation, as well as large-scale deformations, from images provided by a single shot of a Microsoft Kinect camera. By combining a robust facial feature detector with an image-based registration method, our system reconstructs facial movements automatically, robustly, and accurately. The resulting face meshes are topologically consistent and carry dense correspondences. Because people are natural experts at perceiving human expressions and can distinguish subtle differences, e.g., dynamic facial wrinkles, we propose a second system that combines our performance capture with a 3D scanning system to add person-specific high-resolution details efficiently and effectively. We demonstrate the power of the proposed systems by testing on both real and synthetic data, as well as against a commercially available motion capture system. Results show that the proposed systems generate believable and comparable results. We believe the proposed systems are useful and applicable for general as well as professional users.
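The global-transformation step described above can be illustrated with a standard rigid-registration sketch. This is not the dissertation's actual implementation; it is a minimal example, assuming we already have 3D positions of detected facial features on the template mesh (`src`) and their corresponding points observed by the depth camera (`dst`), and it recovers the least-squares rigid transform with the well-known Kabsch method.

```python
import numpy as np

def rigid_align(src, dst):
    """Recover the least-squares rigid transform (R, t) with dst ~ R @ src + t,
    using the Kabsch method. src and dst are (N, 3) arrays of corresponding
    3D points (e.g., detected facial features on a template and in a depth frame).
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the optimal orthogonal matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Synthetic usage: transform points by a known pose, then recover it.
rng = np.random.default_rng(0)
src = rng.standard_normal((10, 3))
theta = np.pi / 6  # 30-degree rotation about the z-axis
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.5])
dst = src @ R_true.T + t_true
R_est, t_est = rigid_align(src, dst)
```

In a full system along the lines described in the abstract, a closed-form rigid fit like this would only provide the head pose; the large-scale expression deformations would then be solved by the non-rigid, image-based registration that the detector-plus-registration combination refers to.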
facial data analysis
nonrigid surface registration
Chen, Yen-Lin (2013). 3D Facial Performance Capture From A Single RGBD Camera. Doctoral dissertation, Texas A&M University. Available electronically from