Authors: Efthimios Alepis, Maria Virvou
DOI: 10.1007/978-3-642-53851-3_8
Keywords:
Abstract: The purpose of this chapter is to investigate how an object-oriented (OO) architecture can be adapted to cope with multimodal emotion recognition in applications with mobile interfaces. A large obstacle in this direction is the fact that mobile phones differ from desktop computers, since they are not capable of performing the demanding processing required for emotion recognition. To overcome this, in our approach the mobile device transmits all collected data to a server, which is responsible for performing, among other things, the emotion recognition. The system we have created combines evidence from multiple modalities of interaction, namely the device's keyboard and microphone, as well as user stereotypes. All this information is classified into well-structured objects with their own properties and methods. The resulting emotion detection platform receives and re-transmits information from the different sources during human–computer interaction. The interface has been used as a test bed for affective interaction in an educational m-learning application.
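The abstract's client–server, object-oriented combination of modality evidence can be illustrated with a minimal sketch. The class names, the weighted-sum fusion rule, and the example likelihood values below are illustrative assumptions, not the chapter's actual implementation: mobile clients wrap each modality's observations in an evidence object and transmit it to a server-side combiner, which performs the demanding classification step the phone cannot.

```python
from dataclasses import dataclass

@dataclass
class ModalityEvidence:
    """One modality's evidence: a mapping of emotion -> likelihood.

    Hypothetical structure: each source (keyboard, microphone, or a
    user stereotype) reports per-emotion likelihoods plus a weight.
    """
    source: str
    likelihoods: dict
    weight: float = 1.0

class EmotionRecognitionServer:
    """Server-side combiner: clients send evidence objects here because
    the mobile device cannot run the recognition itself (assumed design)."""

    def __init__(self):
        self.evidence = []

    def receive(self, ev: ModalityEvidence) -> None:
        """Accept one transmitted evidence object from a mobile client."""
        self.evidence.append(ev)

    def classify(self) -> str:
        """Fuse all evidence with a weighted sum and return the top emotion."""
        scores = {}
        for ev in self.evidence:
            for emotion, p in ev.likelihoods.items():
                scores[emotion] = scores.get(emotion, 0.0) + ev.weight * p
        return max(scores, key=scores.get)

# Example session: keyboard and microphone evidence, plus a weaker
# stereotype-based prior (all numbers invented for illustration).
server = EmotionRecognitionServer()
server.receive(ModalityEvidence("keyboard", {"stressed": 0.7, "neutral": 0.3}))
server.receive(ModalityEvidence("microphone", {"stressed": 0.4, "neutral": 0.6}))
server.receive(ModalityEvidence("stereotype", {"stressed": 0.2, "neutral": 0.8}, weight=0.5))
print(server.classify())  # → neutral
```

Keeping each modality's data in its own object with properties and methods, as the abstract describes, lets new sources (e.g. a camera) be added without changing the server's fusion logic.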