Authors: Ao Zhang, Shaojuan Wu, Xiaowang Zhang, Shizhan Chen, Yuchun Shu
DOI: 10.1109/ICTAI50040.2020.00083
Keywords:
Abstract: Emotional intelligence is a crucial part of human-machine dialogue systems. However, existing research mainly faces three problems: (1) it focuses on the content level of each response while ignoring the impact of emotional factors in multi-turn dialogue; (2) it lacks scalability and adaptability, since only emotion categories explicitly specified by users are generated, and only in single-turn dialogue; (3) it is difficult to capture and perceive the fine-grained emotions of the speaker's state according to the context. To address these problems, we propose an emotional expression model (EmoEM), which combines an emotion-semantic graph with a multi-task learning mechanism and applies a generator based on a seq2seq network and a graph convolutional network (GCN) to generate more natural and personalized responses in a structured manner. Generally, EmoEM constructs an emotion-semantic graph to describe explicit and implicit emotional information dynamically. Then, the graph convolutional network is applied to improve the semantic consistency and text quality of the dialogue. Moreover, the multi-task learning mechanism is introduced to enhance the model's ability to obtain the expected responses. The experimental results show that EmoEM outperforms several baselines in BLEU, diversity, and emotional expression.
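The abstract describes a generator that fuses a GCN-encoded emotion-semantic graph with a seq2seq dialogue model and trains it with an auxiliary emotion-prediction head. The sketch below is a minimal, hypothetical illustration of that overall wiring; the module names, layer sizes, pooling choice, and the two heads are assumptions for clarity, not the authors' implementation.

```python
# Illustrative sketch only: GCN over an emotion-semantic graph + seq2seq generator
# + auxiliary emotion head for multi-task learning (all design details assumed).
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, node_feats):
        # adj: (N, N) normalized adjacency of the emotion-semantic graph
        # node_feats: (N, in_dim) node embeddings
        return torch.relu(self.linear(adj @ node_feats))


class EmoEMSketch(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, n_emotions=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gcn = GCNLayer(emb_dim, hid_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.gen_head = nn.Linear(hid_dim, vocab_size)       # response generation
        self.emo_head = nn.Linear(hid_dim, n_emotions)       # auxiliary emotion task

    def forward(self, src, tgt, adj, graph_nodes):
        # Encode the multi-turn dialogue context with the seq2seq encoder.
        _, ctx = self.encoder(self.embed(src))                # (1, B, hid_dim)
        # Encode the emotion-semantic graph with the GCN and mean-pool its nodes.
        graph_repr = self.gcn(adj, self.embed(graph_nodes)).mean(dim=0)
        # Fuse context and graph state to initialize the decoder.
        init = ctx + graph_repr.view(1, 1, -1)
        dec_out, _ = self.decoder(self.embed(tgt), init)
        token_logits = self.gen_head(dec_out)                 # per-step vocabulary logits
        emotion_logits = self.emo_head(init.squeeze(0))       # multi-task emotion prediction
        return token_logits, emotion_logits
```

In this reading, the generation loss and the emotion-classification loss would be summed during training, which is one common way to realize the multi-task mechanism the abstract refers to.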