Authors: Christophe Cerisara, Ladislav Lenc, Pavel Král, Jiří Martínek
DOI:
Keywords:
Abstract: In this paper we exploit cross-lingual models to enable dialogue act recognition for specific tasks with a small number of annotations. We design a transfer learning approach and validate it on two different target languages and domains. We compute turn embeddings with both a CNN and a multi-head self-attention model and show that the best results are obtained by combining all sources of transferred information. We further demonstrate that the proposed methods significantly outperform related DA recognition approaches.
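The abstract mentions computing turn embeddings with both a CNN and a multi-head self-attention model and combining the resulting information. Below is a minimal sketch of that general idea, not the authors' implementation: the `TurnEmbedder` class, all layer sizes, and the concatenation-based combination are illustrative assumptions.

```python
# Hypothetical sketch: embed a dialogue turn with a CNN view and a
# multi-head self-attention view, then concatenate the two vectors.
import torch
import torch.nn as nn

class TurnEmbedder(nn.Module):
    def __init__(self, emb_dim=300, conv_channels=100, kernel_size=3, num_heads=4):
        super().__init__()
        # CNN view: 1-D convolution over the token sequence, max-pooled.
        self.conv = nn.Conv1d(emb_dim, conv_channels, kernel_size, padding=1)
        # Attention view: one multi-head self-attention layer, mean-pooled.
        self.attn = nn.MultiheadAttention(emb_dim, num_heads, batch_first=True)

    def forward(self, tokens):  # tokens: (batch, seq_len, emb_dim)
        # CNN branch expects (batch, channels, seq_len).
        conv_out = torch.relu(self.conv(tokens.transpose(1, 2)))
        cnn_vec = conv_out.max(dim=2).values              # (batch, conv_channels)
        # Self-attention branch: queries = keys = values = token embeddings.
        attn_out, _ = self.attn(tokens, tokens, tokens)
        attn_vec = attn_out.mean(dim=1)                   # (batch, emb_dim)
        # Combine both views of the turn by concatenation.
        return torch.cat([cnn_vec, attn_vec], dim=-1)     # (batch, conv_channels + emb_dim)

# Usage: a batch of 2 turns, 10 tokens each, with 300-dim word vectors
# (e.g., pretrained cross-lingual embeddings, as the abstract implies).
turns = torch.randn(2, 10, 300)
print(TurnEmbedder()(turns).shape)  # torch.Size([2, 400])
```

The combined turn vector would then feed a dialogue act classifier; how the paper actually fuses the transferred sources is specified in the full text, not in this sketch.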