Using the transferable belief model for multimodal input fusion in companion systems

Authors: Felix Schüssel, Frank Honold, Michael Weber

DOI: 10.1007/978-3-642-37081-6_12

Keywords: User interface; Multimodal interaction; Multimodal fusion; Artificial intelligence; Machine learning; Graphical user interface; Gesture; Transferable belief model; Robustness (computer science); Evidential reasoning approach; Computer science

Abstract: Systems with multimodal interaction capabilities have gained a lot of attention in recent years. Especially so-called companion systems that offer an adaptive user interface show great promise for natural human-computer interaction. While more and more sophisticated sensors become available, current systems capable of accepting multimodal inputs (e.g. speech and gesture) still lack the robustness of input interpretation needed for such systems. We demonstrate how evidential reasoning can be applied to the domain of graphical user interfaces in order to provide the reliability expected by users. For this purpose, an existing approach using the Transferable Belief Model from robotics is adapted and extended.
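To illustrate the kind of evidential reasoning the paper builds on: in the Transferable Belief Model, each input modality assigns belief masses to subsets of possible interpretations, and two modalities are fused with the unnormalized conjunctive rule, which leaves any conflict as mass on the empty set rather than renormalizing it away. The following is a minimal sketch; the modality names, button identifiers, and mass values are illustrative assumptions, not taken from the paper.

```python
from itertools import product

def tbm_combine(m1, m2):
    """Unnormalized conjunctive combination of two mass functions,
    as used in the TBM: conflicting mass accumulates on the empty set."""
    out = {}
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B  # intersection of the two focal sets
        out[C] = out.get(C, 0.0) + a * b
    return out

# Hypothetical example: speech and gesture each rate two GUI targets.
speech = {
    frozenset({"btn_ok"}): 0.6,
    frozenset({"btn_ok", "btn_cancel"}): 0.4,  # remaining mass on ignorance
}
gesture = {
    frozenset({"btn_cancel"}): 0.5,
    frozenset({"btn_ok", "btn_cancel"}): 0.5,
}

fused = tbm_combine(speech, gesture)
# fused[frozenset()] is the conflict mass between the two modalities;
# a high value can signal an unreliable or contradictory input combination.
```

In this sketch the fused masses are 0.3 on the empty set (conflict), 0.3 on "btn_ok", 0.2 on "btn_cancel", and 0.2 on the full frame; keeping the conflict explicit instead of normalizing is what makes the model attractive for detecting unreliable multimodal input.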

References (16)
P. Smets, Data fusion in the transferable belief model. International Conference on Information Fusion, vol. 1 (2000). doi:10.1109/IFIC.2000.862713
Bruno Dumas, Denis Lalanne, Sharon Oviatt, Multimodal Interfaces: A Survey of Principles, Models and Frameworks. Human Machine Interaction, pp. 3-26 (2009). doi:10.1007/978-3-642-00437-7_1
Anna Esposito, Rüdiger Hoffmann, Vincent C. Müller, Alessandro Vinciarelli, Cognitive Behavioural Systems. Springer Berlin Heidelberg (2012). doi:10.1007/978-3-642-34584-5
P. Smets, The combination of evidence in the transferable belief model. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, pp. 447-458 (1990). doi:10.1109/34.55104
Philip R. Cohen, Michael Johnston, David McGee, Sharon Oviatt, Jay Pittman, Ira Smith, Liang Chen, Josh Clow, QuickSet: Multimodal Interaction for Simulation Set-up and Control. Conference on Applied Natural Language Processing, pp. 20-24 (1997). doi:10.3115/974557.974562
Norbert Pfleger, Context based multimodal fusion. International Conference on Multimodal Interfaces, pp. 265-272 (2004). doi:10.1145/1027933.1027977
Pradeep K. Atrey, M. Anwar Hossain, Abdulmotaleb El Saddik, Mohan S. Kankanhalli, Multimodal fusion for multimedia analysis: a survey. Multimedia Systems, vol. 16, pp. 345-379 (2010). doi:10.1007/S00530-010-0182-0
Philip R. Cohen, Michael Johnston, David McGee, Sharon Oviatt, Jay Pittman, Ira Smith, Liang Chen, Josh Clow, QuickSet: multimodal interaction for distributed applications. ACM Multimedia, pp. 31-40 (1997). doi:10.1145/266180.266328
Bruno Dumas, Denis Lalanne, Rolf Ingold, Description languages for multimodal interaction: a set of guidelines and its illustration with SMUIML. Journal on Multimodal User Interfaces, vol. 3, pp. 237-247 (2010). doi:10.1007/S12193-010-0043-3
Bruno Dumas, Rolf Ingold, Denis Lalanne, Benchmarking fusion engines of multimodal interactive systems. Proceedings of the 2009 International Conference on Multimodal Interfaces (ICMI-MLMI '09), pp. 169-176 (2009). doi:10.1145/1647314.1647345