Authors: Haythem Balti, Adel S. Elmaghraby
DOI: 10.1109/ISSPIT.2013.6781926
Abstract: We propose a framework for speech emotion detection that maps acoustic features into high-level descriptors and integrates time context. Our framework uses three different algorithms to integrate the temporal context. The first method is based on averaging of the original features. The second algorithm derives descriptors by clustering the data using self-organizing maps (SOMs) and computing the average activity distribution over the map. The third uses multi-resolution window analysis with SOMs to compute a 2-D map of emotion trajectories representing temporal behavior. Using a standard emotional database and a K-nearest neighbors classifier, we show that the proposed framework is efficient for the analysis, visualization, and classification of emotions.
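The second algorithm in the abstract, deriving an utterance-level descriptor from a SOM's average activity distribution, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the SOM weights and the acoustic frames are random stand-ins, and the "activity" is simplified to a best-matching-unit histogram averaged over time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained SOM: a 4x4 grid of units, each holding a
# weight vector in a 13-dim acoustic feature space (e.g. MFCCs).
grid_h, grid_w, dim = 4, 4, 13
som_weights = rng.normal(size=(grid_h * grid_w, dim))

# Stand-in utterance: 200 frames of 13-dim acoustic features.
frames = rng.normal(size=(200, dim))

def activity_descriptor(frames, weights):
    """Map each frame to its best-matching SOM unit, then average
    the unit-activation indicators over time, yielding one
    fixed-length activity distribution for the whole utterance."""
    # Squared Euclidean distance from every frame to every unit.
    d2 = ((frames[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2)
    bmu = d2.argmin(axis=1)                   # best-matching unit per frame
    counts = np.bincount(bmu, minlength=len(weights))
    return counts / len(frames)               # time-averaged activation

desc = activity_descriptor(frames, som_weights)
print(desc.shape)                  # (16,) — one value per SOM unit
print(np.isclose(desc.sum(), 1.0)) # True — a distribution over units
```

The resulting fixed-length vector could then be fed to a K-nearest neighbors classifier, as the abstract describes; the temporal averaging is what collapses a variable-length frame sequence into a fixed-size input.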