Smart-Pockets

Author: Radu-Daniel Vatavu

DOI: 10.1016/J.IJHCS.2017.01.005

Keywords: Visualization, Gesture recognition, Human–computer interaction, Gesture, Fashion design, Mobile device, Computer science, Set (psychology), Digital content, Multimedia

Abstract: This work introduces Smart-Pockets, a new set of whole-body gesture recognition techniques that enables users to access their personal digital content efficiently for visualization on ambient displays. Smart-Pockets works by recognizing users' body-deictic gestures entailing pockets, for which associations between specific pockets and the digital content anchored to them have been managed a priori. The pocket metaphor that we explore in this work links digital content with physical containers (i.e., pockets) placed at convenient locations on the user's body, containers that have been specifically devised over decades of fashion design to store and carry people's belongings comfortably and conveniently. Consequently, Smart-Pockets gestures are fast, require absolutely no precision to perform effectively, and are robustly recognized in user-independent scenarios with no training required from the user or the display. Also, the technique is flexible and easily extensible to other containers, such as bags and hand-held objects, which we demonstrate in the form of Smart-Containers. We evaluate the recognition accuracy of several gestures and report +99% classification accuracy with explicit segmentation. We also discuss the kinematic performance of Smart-Pockets and Smart-Containers and show an average production time of 2.2 s, comparable to the production of touch gestures on smart mobile devices and much smaller than the time needed to produce other gesture types. Beyond the practical implications of advancing knowledge on gesture-based interface interactions, we believe that our contributions and the newly introduced concepts will also foster new developments by pointing the community's attention toward (i) a more thorough examination of the potential of this class of gestures, i.e., body-deictics, (ii) how users address public displays, an important preliminary step before actual interaction, and (iii) inspiring the community to examine creative ways in which objects can be visualized on displays.

Highlights:
- Smart-Pockets enables fast access to personal digital content on ambient displays.
- The pocket metaphor creates a link between physical pockets and digital content.
- Smart-Pockets employs a type of hybrid deictic gestures.
- Smart-Pockets actions are fast (2.2 s) and accurately recognized (+99%).
- The technique is extendable to other containers, demonstrated as Smart-Containers.
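To make the pocket metaphor concrete, the sketch below illustrates one plausible way such a pocket-to-content mapping could work: the tracked hand position from a skeleton sensor is compared against pocket anchor points defined relative to body joints, and the closest pocket (within a distance threshold) is looked up in an a-priori content association. This is a minimal illustrative sketch, not the paper's actual recognizer; the joint names, anchor offsets, threshold, and content associations are assumptions made for demonstration only.

```python
import math

# Hypothetical pocket anchors, expressed as offsets (in metres) from skeleton
# joints in a Kinect-style coordinate frame. Purely illustrative values.
POCKET_ANCHORS = {
    "left-trouser-pocket":  ("hip_left",      (-0.05, -0.10, 0.05)),
    "right-trouser-pocket": ("hip_right",     ( 0.05, -0.10, 0.05)),
    "chest-pocket":         ("shoulder_left", ( 0.05, -0.15, 0.10)),
}

# A-priori association between pockets and the user's digital content.
POCKET_CONTENT = {
    "left-trouser-pocket":  "photo gallery",
    "right-trouser-pocket": "calendar",
    "chest-pocket":         "contact card",
}

def distance(a, b):
    # Euclidean distance between two 3-D points.
    return math.sqrt(sum((ax - bx) ** 2 for ax, bx in zip(a, b)))

def recognize_pocket(hand_pos, joints, threshold=0.12):
    """Return the pocket whose anchor the hand is closest to (within the
    threshold), or None if the hand is not near any pocket. `joints` maps
    joint names to (x, y, z) positions in metres."""
    best_pocket, best_dist = None, threshold
    for pocket, (joint, offset) in POCKET_ANCHORS.items():
        jx, jy, jz = joints[joint]
        anchor = (jx + offset[0], jy + offset[1], jz + offset[2])
        d = distance(hand_pos, anchor)
        if d < best_dist:
            best_pocket, best_dist = pocket, d
    return best_pocket

def content_for_gesture(hand_pos, joints):
    """Look up the content anchored to the pocket the user is touching."""
    pocket = recognize_pocket(hand_pos, joints)
    return POCKET_CONTENT.get(pocket) if pocket else None

# Example frame: the hand rests on the right trouser pocket.
joints = {
    "hip_left":      (-0.12, 0.95, 2.00),
    "hip_right":     ( 0.12, 0.95, 2.00),
    "shoulder_left": (-0.18, 1.45, 2.00),
}
print(content_for_gesture((0.16, 0.86, 2.04), joints))  # -> "calendar"
```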
