Show simple item record

dc.contributor.advisor: Chai, Jinxiang
dc.creator: Guo, Peihong
dc.date.accessioned: 2020-09-04T18:56:52Z
dc.date.available: 2020-09-04T18:56:52Z
dc.date.created: 2018-05
dc.date.issued: 2018-04-26
dc.date.submitted: May 2018
dc.identifier.uri: https://hdl.handle.net/1969.1/188895
dc.description.abstract: Facial rigs are essential to facial animation in movies, games, and virtual reality applications. Among various facial models, blendshape models are widely adopted in many applications because artists can easily create complicated facial expressions through linear interpolation. However, creating high-quality blendshapes for a specific subject is still a challenging task. It typically requires hours of manual work by a well-trained artist to create hundreds of blendshapes in order to achieve good visual quality. Several semi-automatic and automatic systems have previously been proposed to generate personalized facial rigs; however, such systems usually require specialized hardware (e.g., a laser scanner) or specific input (e.g., well-lit high-resolution video). This limits the application of such systems to a wider audience. We present an automatic facial rigging system for generating person-specific 3D facial blendshapes from images in the wild (e.g., Internet images of Hillary Clinton), where the face shape, pose, expression, and illumination are all unknown. Our system initializes the 3D blendshapes with sparse facial features detected from the input images using a multi-linear model and then refines the blendshapes via per-pixel shading cues with a new blendshape retargeting algorithm. Finally, we introduce a new algorithm for recovering detailed facial features from the input images. To handle large variations of face poses and illuminations in the input images, we also develop a set of failure detection schemes that can robustly filter out inaccurate results in each step. Our method greatly simplifies the 3D facial rigging process and generates a more faithful face shape and expression for the subject than multi-linear model fitting. We validate the robustness and accuracy of our system using images of a dozen subjects that exhibit significant variations of face shapes, poses, expressions, and illuminations.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: Blendshapes
dc.subject: Facial animation
dc.subject: Face modeling
dc.title: Automatic Reconstruction of High-Fidelity Blendshape Models from Images in the Wild
dc.type: Thesis
thesis.degree.department: Computer Science and Engineering
thesis.degree.discipline: Computer Science
thesis.degree.grantor: Texas A&M University
thesis.degree.name: Doctor of Philosophy
thesis.degree.level: Doctoral
dc.contributor.committeeMember: Keyser, John
dc.contributor.committeeMember: Klappenecker, Andreas
dc.contributor.committeeMember: Gildin, Eduardo
dc.type.material: text
dc.date.updated: 2020-09-04T18:56:53Z
local.etdauthor.orcid: 0000-0002-7235-8816

