Authors: Bailin Deng, Juyong Zhang, Yudong Guo, Hongrui Cai, Yihua Chen
DOI: 10.1007/s41095-021-0215-y
Keywords:
Abstract: Face views are particularly important in person-to-person communication. Differences between the camera location and the face orientation can result in undesirable facial appearances of the participants during video conferencing. This phenomenon is especially noticeable when using devices whose front-facing camera is placed in an unconventional location, such as below the display or within the keyboard. In this paper, we take a video stream from a single RGB camera as input and generate a video stream that emulates the view from a virtual camera at a designated location. The most challenging issue is that the corrected view often requires out-of-plane head rotations. To address this challenge, we reconstruct the 3D face shape and re-render it into synthesized frames according to the virtual camera location. To output video frames with natural appearance in real time, we propose several novel techniques, including accurate eyebrow reconstruction, high-quality blending of the rendered face image with the background, and template-based 3D reconstruction of glasses. Our system works well for different lighting conditions and skin tones, and can handle users wearing glasses. Extensive experiments and user studies demonstrate that our method provides good results.
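To illustrate the geometric idea behind the described view correction (re-projecting the reconstructed 3D face into a virtual camera at a designated location), here is a minimal Python sketch. It is not the authors' implementation: the intrinsics, the toy face geometry, and the virtual camera pose are all assumed values chosen only for demonstration, and the full method additionally involves appearance re-rendering, eyebrow reconstruction, background blending, and glasses handling that are not shown here.

```python
import numpy as np

# Assumed pinhole intrinsics shared by the physical and virtual cameras.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_cam, K):
    """Project 3D points (N, 3) in camera coordinates to 2D pixel positions (N, 2)."""
    p = points_cam @ K.T
    return p[:, :2] / p[:, 2:3]

def reproject_to_virtual_camera(points_world, R_virtual, t_virtual):
    """Re-project reconstructed face vertices into a virtual camera whose pose
    relative to the physical camera is given by rotation R_virtual and translation t_virtual."""
    points_cam = points_world @ R_virtual.T + t_virtual
    return project(points_cam, K)

# Toy "reconstructed face": a small planar grid of 3D points one metre in front of the camera.
xs, ys = np.meshgrid(np.linspace(-0.1, 0.1, 5), np.linspace(-0.15, 0.15, 7))
face_vertices = np.stack([xs.ravel(), ys.ravel(), np.full(xs.size, 1.0)], axis=1)

# Hypothetical virtual camera: raised 15 cm above the physical camera and pitched
# down slightly, emulating a camera placed at display height that views the face head-on.
pitch = np.deg2rad(-10.0)
R_virtual = np.array([[1.0,           0.0,            0.0],
                      [0.0, np.cos(pitch), -np.sin(pitch)],
                      [0.0, np.sin(pitch),  np.cos(pitch)]])
t_virtual = np.array([0.0, -0.15, 0.0])

pixels = reproject_to_virtual_camera(face_vertices, R_virtual, t_virtual)
print(pixels[:3])  # pixel positions of the first few face vertices in the virtual view
```

In the actual pipeline the re-projected geometry would then be textured and blended with the original background, rather than printed as bare pixel coordinates.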