Interpreting 2D gesture annotations in 3D augmented reality

Authors: Benjamin Nuernberger, Kuo-Chin Lien, Tobias Höllerer, Matthew Turk

DOI: 10.1109/3DUI.2016.7460046

Keywords: Vocabulary, Computer science, Viewpoints, Gesture, Rendering (computer graphics), Human–computer interaction, Augmented reality, User interface, Gesture recognition, Annotation

Abstract: A 2D gesture annotation provides a simple and effective way to annotate the physical world in augmented reality for a range of applications such as remote collaboration. When rendered from novel viewpoints, these annotations have previously only worked with statically positioned cameras or planar scenes. However, if the camera moves and is observing an arbitrary environment, 2D gesture annotations can easily lose their meaning when shown from novel viewpoints due to perspective effects. In this paper, we present a new approach to this problem based on enhanced gesture interpretation. By first classifying which type of gesture the user drew, we show that it is possible to render the annotation in 3D in a way that conforms more to the user's original intention than traditional methods do. We determined a generic vocabulary of gestures important for a collaboration scenario by running an Amazon Mechanical Turk study with 88 participants. Next, we designed a real-time method to automatically handle the two most common gestures, arrows and circles, and we give a detailed analysis of the ambiguities that must be handled in each case. Arrow gestures are interpreted by identifying anchor points and using scene surface normals for better rendering. For circle gestures, an energy function helps infer the object of interest from both 2D image cues and 3D geometric cues. Results indicate that our approach outperforms previous methods in conveying the meaning of the original drawing from different viewpoints.
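To make the arrow interpretation concrete, the following is a minimal sketch (not the paper's implementation) of how a drawn 2D arrow might be lifted into 3D: the head is anchored at the scene point it was drawn over, and the shaft is laid into the tangent plane defined by that surface's normal. The unproject helper, parameter names, and the tangent-plane fallback are all illustrative assumptions.

import numpy as np

def render_arrow_3d(anchor_point, surface_normal, tail_point_2d,
                    unproject, length=0.1):
    # anchor_point   : 3D scene point under the drawn arrow head
    # surface_normal : unit normal of the surface at that point
    # tail_point_2d  : 2D image coordinates of the drawn arrow tail
    # unproject      : hypothetical helper that ray-casts a 2D image
    #                  point onto the scene geometry, returning 3D
    # length         : desired 3D length of the rendered shaft
    tail_3d = unproject(tail_point_2d)
    shaft = anchor_point - tail_3d
    # Project the shaft into the surface's tangent plane so the arrow
    # conforms to the geometry instead of floating in screen space.
    shaft -= np.dot(shaft, surface_normal) * surface_normal
    norm = np.linalg.norm(shaft)
    if norm < 1e-9:
        # Degenerate case (arrow drawn head-on): fall back to the normal.
        shaft, norm = surface_normal, 1.0
    shaft = shaft / norm * length
    return anchor_point - shaft, anchor_point  # (tail, head) in 3D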
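The circle interpretation can likewise be pictured as minimizing an energy over candidate object segments, mixing a 2D overlap cue with a 3D depth-coherence cue. This sketch is an assumption-laden illustration only; the field names, weights, and specific cue terms are placeholders, not the paper's energy function.

import numpy as np

def circle_energy(candidate, circle_mask, w_image=0.5, w_geom=0.5):
    # candidate   : dict with hypothetical fields
    #   'mask'      - 2D binary mask of the segment in the image
    #   'depth_var' - variance of the segment's depths (3D cue)
    # circle_mask : 2D binary mask of the region the gesture encloses
    # Lower energy means a better match.
    seg = candidate['mask'].astype(bool)
    circ = circle_mask.astype(bool)
    # 2D image cue: 1 - IoU, so a segment that exactly fills the
    # circled region costs 0.
    inter = np.logical_and(seg, circ).sum()
    union = max(np.logical_or(seg, circ).sum(), 1)
    image_term = 1.0 - inter / union
    # 3D geometric cue: prefer depth-coherent segments, i.e. likely
    # a single physical object rather than scattered background.
    geom_term = candidate['depth_var']
    return w_image * image_term + w_geom * geom_term

def infer_object_of_interest(candidates, circle_mask):
    # Return the candidate segment with the lowest energy.
    return min(candidates, key=lambda c: circle_energy(c, circle_mask))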

References (30)
Luke Olsen, Faramarz F. Samavati, Mario Costa Sousa, Joaquim A. Jorge, Sketch-based modeling: A survey, Computers & Graphics, vol. 33, pp. 85-103 (2009), 10.1016/J.CAG.2008.09.013
Matthew Tait, Mark Billinghurst, The Effect of View Independence in a Collaborative AR System, Computer Supported Cooperative Work (CSCW), vol. 24, pp. 563-589 (2015), 10.1007/S10606-015-9231-8
D. D. Hoffman, W. A. Richards, Parts of recognition, Cognition, vol. 18, pp. 227-242 (1987), 10.1016/0010-0277(84)90022-2
Patrick Paczkowski, Julie Dorsey, Holly Rushmeier, Min H. Kim, Paper3D: bringing casual 3D modeling to a multi-touch interface, User Interface Software and Technology (UIST), pp. 23-32 (2014), 10.1145/2642918.2647416
Simon Christoph Stein, Florentin Wörgötter, Markus Schoeler, Jeremie Papon, Tomas Kulvicius, Convexity based object partitioning for robot applications, International Conference on Robotics and Automation (ICRA), pp. 3213-3220 (2014), 10.1109/ICRA.2014.6907321
Steffen Gauglitz, Benjamin Nuernberger, Matthew Turk, Tobias Höllerer, In touch with the remote world: remote collaboration with augmented reality drawings and virtual navigation, Virtual Reality Software and Technology (VRST), pp. 197-205 (2014), 10.1145/2671015.2671016
David Kirk, Danae Stanton Fraser, Comparing remote gesture technologies for supporting collaborative physical tasks, Human Factors in Computing Systems (CHI), pp. 1191-1200 (2006), 10.1145/1124772.1124951
Michael Tsang, George W. Fitzmaurice, Gordon Kurtenbach, Azam Khan, Bill Buxton, Boom chameleon: simultaneous capture of 3D viewpoint, voice and gesture annotations on a spatially-aware display, User Interface Software and Technology (UIST), pp. 111-120 (2002), 10.1145/571985.572001
Shunichi Kasahara, Valentin Heun, Austin S. Lee, Hiroshi Ishii, Second surface: multi-user spatial collaboration system based on augmented reality, International Conference on Computer Graphics and Interactive Techniques, p. 20 (2012), 10.1145/2407707.2407727
Seungwon Kim, Gun A. Lee, Nobuchika Sakata, Comparing pointing and drawing for remote collaboration, International Symposium on Mixed and Augmented Reality (ISMAR), pp. 1-6 (2013), 10.1109/ISMAR.2013.6671833