Authors: Benjamin Nuernberger, Kuo-Chin Lien, Tobias Hollerer, Matthew Turk
DOI: 10.1109/3DUI.2016.7460046
Keywords: Vocabulary, Computer science, Viewpoints, Gesture, Rendering (computer graphics), Human–computer interaction, Augmented reality, User interface, Gesture recognition, Annotation
Abstract: A 2D gesture annotation provides a simple way to annotate the physical world in augmented reality for a range of applications such as remote collaboration. When rendered from novel viewpoints, these annotations have previously only worked with statically positioned cameras or planar scenes. However, if the camera moves and is observing an arbitrary environment, 2D gesture annotations can easily lose their meaning when shown from novel viewpoints due to perspective effects. In this paper, we present a new approach towards solving this problem by using an enhanced gesture interpretation. By first classifying which type of gesture the user drew, we show that it is possible to render the annotation in 3D in a way that conforms more to the user's original intention than traditional methods do. We first determined a generic vocabulary of important gestures for a remote collaboration scenario by running an Amazon Mechanical Turk study with 88 participants. Next, we designed a real-time method to automatically handle the two most common gestures (arrows and circles) and give a detailed analysis of the ambiguities that must be handled in each case. Arrow gestures are interpreted by identifying their anchor points and using scene surface normals for better rendering. For circle gestures, an energy function helps infer the object of interest from both 2D image cues and 3D geometric cues. Results indicate that our approach outperforms previous approaches in conveying the meaning of the original drawing from different viewpoints.
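The abstract describes the circle-gesture step only at a high level: an energy function that combines 2D image cues and 3D geometric cues to pick the object of interest inside the circled region. The Python sketch below is a minimal illustration of that general idea, not the paper's actual formulation; the specific cost terms, the weight `lambda_geo`, and the helper names (`image_cue_cost`, `geometric_cue_cost`, `infer_object`) are all assumptions made here for clarity.

```python
import numpy as np

def image_cue_cost(mask: np.ndarray, edge_map: np.ndarray) -> float:
    """2D image cue (assumed form): prefer candidate regions whose
    boundary aligns with strong image edges (low cost on strong edges)."""
    # Boundary pixels: mask pixels with at least one background neighbor.
    padded = np.pad(mask, 1, mode="edge")
    boundary = mask & ~(
        padded[:-2, 1:-1] & padded[2:, 1:-1] &
        padded[1:-1, :-2] & padded[1:-1, 2:]
    )
    if boundary.sum() == 0:
        return np.inf
    # High edge response along the boundary -> low cost.
    return float(np.mean(1.0 - edge_map[boundary]))

def geometric_cue_cost(mask: np.ndarray, depth: np.ndarray) -> float:
    """3D geometric cue (assumed form): prefer depth-coherent regions,
    i.e. pixels likely belonging to a single object rather than a mix
    of foreground and background."""
    region = depth[mask]
    if region.size == 0:
        return np.inf
    return float(np.var(region))

def circle_energy(mask, edge_map, depth, lambda_geo=0.5):
    """Total energy of one candidate region inside a circle gesture.
    Lower is better; lambda_geo (assumed) trades off the two cues."""
    return image_cue_cost(mask, edge_map) + lambda_geo * geometric_cue_cost(mask, depth)

def infer_object(candidates, edge_map, depth):
    """Pick the candidate region with minimal energy. Candidates could,
    for example, come from an over-segmentation restricted to the
    circled image area."""
    return min(candidates, key=lambda m: circle_energy(m, edge_map, depth))
```

Under these assumptions, each candidate is a boolean pixel mask; the image term rewards boundaries that land on edges, while the geometric term penalizes regions mixing disparate depths, so the minimizer tends toward a single circled object even when the 2D circle also encloses background.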