Authors: Caio José Dos Santos Brito, Kenny Mitchell
Keywords:
Abstract: Preparing datasets for use in the training of real-time face tracking algorithms for HMDs is costly. Manually annotated facial landmarks are accessible for regular photography datasets, but introspectively mounted cameras for VR have requirements incompatible with these existing datasets. These include operating ergonomically at close range with wide-angle lenses, low-latency short exposures, and near-infrared sensors. In order to train a suitable solver without the cost of producing new data, we automatically repurpose an existing landmark dataset to the specialist HMD camera intrinsics with a radial warp reprojection. Our method separates the source photos into local regions, i.e., mouth and eyes, for more accurate correspondence with the camera locations underneath and inside a fully functioning HMD. We combine the per-camera solved results to yield a live animated avatar driven from the user's expressions. Critical robustness is achieved with measures for region segmentation, blink detection and pupil tracking. We quantify results against the unprocessed dataset and provide empirical comparisons with commercial trackers.
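To make the radial warp reprojection idea concrete, the sketch below warps annotated 2D landmarks from a regular photograph into the pixel frame of a hypothetical wide-angle HMD eye camera using a simple polynomial radial distortion model. This is a minimal illustration only: the intrinsics `K_src`, `K_hmd` and the distortion coefficients `k1`, `k2` are placeholder assumptions, not the paper's calibrated values or its exact warp formulation.

```python
# Illustrative sketch: reproject source-photo landmarks into a synthetic
# "HMD camera" view with a radial (fisheye-like) warp. All intrinsics and
# distortion coefficients here are assumed placeholders.
import numpy as np

def radial_warp_landmarks(landmarks_px, K_src, K_hmd, k1=-0.25, k2=0.05):
    """Map (N, 2) landmark pixels from a source photo into HMD camera pixels."""
    # Back-project source pixels to normalised image coordinates.
    uv1 = np.c_[landmarks_px, np.ones(len(landmarks_px))]
    xy = (np.linalg.inv(K_src) @ uv1.T).T[:, :2]

    # Apply a polynomial radial distortion to mimic the wide-angle lens.
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)
    xy_dist = xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

    # Re-project into the HMD camera's pixel grid.
    uvd = (K_hmd @ np.c_[xy_dist, np.ones(len(xy_dist))].T).T
    return uvd[:, :2]

# Placeholder intrinsics: a 640x480 source photo and a 400x400 eye camera.
K_src = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
K_hmd = np.array([[230.0, 0.0, 200.0], [0.0, 230.0, 200.0], [0.0, 0.0, 1.0]])
pts = np.array([[300.0, 260.0], [340.0, 258.0]])  # e.g. two eye-corner landmarks
print(radial_warp_landmarks(pts, K_src, K_hmd))
```

In practice such a warp would be applied per region (mouth, eyes) with the intrinsics of each introspective camera, so the repurposed annotations match the close-range, distorted views the solver sees at runtime.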