Authors: Barry-John Theobald, Iain A. Matthews, Jeffrey F. Cohn, Steven M. Boker
Keywords: Avatar, Computer facial animation, Face (geometry), Parametric model, Computer science, Gesture, Facial expression, Active appearance model, Computer vision, Expression (mathematics), Artificial intelligence
Abstract: Active Appearance Models (AAMs) are generative parametric models commonly used to track, recognise and synthesise faces in images and video sequences. In this paper we describe a method for transferring dynamic facial gestures between subjects in real-time. The main advantages of our approach are that: 1) the mapping is computed automatically and does not require high-level semantic information describing facial expressions or visual speech gestures; 2) the mapping is simple and intuitive, allowing expressions to be transferred and rendered in real-time; 3) the mapped expression can be constrained to have the appearance of the target producing the expression, rather than the source expression imposed onto the target face; 4) near-videorealistic talking faces for new subjects can be created without the cost of recording and processing a complete training corpus for each. Our system enables face-to-face interaction with an avatar driven by an AAM of an actual person in real-time, and we show examples of arbitrary expressive frames cloned across different subjects.
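The core idea of parameter-space expression transfer can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes the simplest possible mapping, in which the offset of a source frame's AAM parameters from the source subject's neutral pose is applied to the target subject's neutral pose. The function name and the toy three-parameter vectors are illustrative only.

```python
def clone_expression(src_frame, src_neutral, tgt_neutral):
    """Map one frame of AAM parameters from a source to a target subject.

    Hypothetical sketch: the expression is represented as the source
    frame's offset from the source's neutral parameters, and that offset
    is added to the target's neutral parameters. A real system would
    typically learn a mapping between the two parameter spaces.
    """
    offset = [s - n for s, n in zip(src_frame, src_neutral)]
    return [t + o for t, o in zip(tgt_neutral, offset)]

# Toy example with 3-dimensional parameter vectors (values chosen to be
# exactly representable in binary floating point):
src_neutral = [0.0, 0.25, 0.0]
src_frame   = [0.5, -0.25, 0.125]   # source subject making an expression
tgt_neutral = [0.25, 0.5, 0.0]
print(clone_expression(src_frame, src_neutral, tgt_neutral))
# → [0.75, 0.0, 0.125]
```

Because the transferred quantity is a parameter offset rather than raw pixel data, the rendered result keeps the target's appearance, which corresponds to advantage 3) in the abstract.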