Authors: Antonio Origlia, Francesco Cutugno, Roberto Rinaldi
DOI:
Keywords:
Abstract: In this paper we extend a multimodal framework based on speech and gestures to include emotional information by means of anger detection. In recent years, multimodal interaction has become of great interest thanks to the increasing availability of mobile devices allowing a number of different input modalities. Taking intelligent decisions is a complex task for automated systems, as multimodality requires procedures that integrate events so that they can be interpreted as a single intention of the user; the system must also take into account that different kinds of information can come from a single channel, as in the case of speech, which conveys the user's intentions using both syntax and prosody.