Authors: Felix Schüssel, Frank Honold, Michael Weber
DOI: 10.1007/978-3-642-37081-6_12
Keywords: User interface, Multimodal interaction, Multimodal fusion, Artificial intelligence, Machine learning, Graphical user interface, Gesture, Transferable belief model, Robustness (computer science), Evidential reasoning approach, Computer science
Abstract: Systems with multimodal interaction capabilities have gained a lot of attention in recent years. Especially so-called companion systems that offer an adaptive, multimodal user interface show great promise for natural human-computer interaction. While more and more sophisticated sensors become available, current systems capable of accepting multimodal inputs (e.g. speech and gesture) still lack the robustness of input interpretation needed for such systems. We demonstrate how evidential reasoning can be applied in the domain of graphical user interfaces in order to provide the reliability expected by users. For this purpose, an existing approach using the Transferable Belief Model from the robotics domain is adapted and extended.
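The abstract names the Transferable Belief Model (TBM) as the fusion mechanism. The core operation of the TBM is the unnormalized conjunctive combination of mass functions, where conflicting evidence accumulates on the empty set rather than being renormalized away (the open-world assumption). The sketch below is a minimal illustration of that rule only; the hypothesis names and sensor masses are invented for the example and are not taken from the paper.

```python
from itertools import product

def tbm_conjunctive(m1, m2):
    """Unnormalized (TBM) conjunctive combination of two mass functions.

    m1, m2: dicts mapping frozenset hypotheses to mass (summing to 1).
    Unlike Dempster's rule, conflict is kept on frozenset() instead of
    being renormalized, following the TBM's open-world assumption.
    """
    combined = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b  # frozenset() here means conflicting evidence
        combined[inter] = combined.get(inter, 0.0) + wa * wb
    return combined

# Illustrative only: two input channels over hypotheses {speech, gesture}
S, G = frozenset({"speech"}), frozenset({"gesture"})
SG = S | G  # ignorance: mass assigned to the whole frame
m_speech = {S: 0.6, SG: 0.4}   # speech recognizer leans toward "speech"
m_gesture = {G: 0.5, SG: 0.5}  # gesture tracker leans toward "gesture"

m = tbm_conjunctive(m_speech, m_gesture)
# m[frozenset()] is the conflict mass between the two channels
```

The mass left on the empty set is what makes the approach attractive for robust input interpretation: a high conflict value can signal the system that its sensor readings disagree, instead of silently forcing a decision.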