Authors: Lei Sha, Sujian Li, Baobao Chang, Zhifang Sui
DOI: 10.1007/978-3-319-47674-2_24
Keywords:
Abstract: Recognizing Textual Entailment (RTE) plays an important role in NLP applications like question answering, information retrieval, etc. Most previous works either use classifiers with elaborately designed features and lexical similarity measures, or bring distant supervision reasoning techniques into the RTE task. However, these approaches are hard to generalize due to the complexity of feature engineering, and are prone to cascading errors and data sparsity problems. To alleviate these problems, some work uses an LSTM-based recurrent neural network with word-by-word attention to recognize textual entailment. Nevertheless, it does not make full use of a knowledge base (KB) to help reasoning. In this paper, we propose a deep architecture called Multi-task Knowledge Assisted LSTM (MKAL), which aims to conduct implicit inference with the assistance of a KB and uses predicate-to-predicate attention to detect entailment between predicates. In addition, our model applies multi-task learning to further improve the performance. The experimental results show that our proposed method achieves a competitive result compared to previous work.
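To make the attention mechanism mentioned in the abstract concrete, the sketch below implements word-by-word attention over LSTM states, in the style of the prior attention-based RTE work the abstract refers to. This is a minimal illustrative sketch in PyTorch, not the authors' MKAL implementation; all names (e.g. WordByWordAttentionRTE, hidden_dim) are hypothetical.

```python
import torch
import torch.nn as nn


class WordByWordAttentionRTE(nn.Module):
    """Illustrative word-by-word attention over LSTM states for RTE.

    A minimal sketch of the attention mechanism the abstract refers to;
    it is not the MKAL model itself.
    """

    def __init__(self, vocab_size, embed_dim=100, hidden_dim=100, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.premise_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.hypothesis_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.attn_premise = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.attn_hypothesis = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.attn_score = nn.Linear(hidden_dim, 1, bias=False)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, premise_ids, hypothesis_ids):
        # Encode both sentences with separate LSTMs.
        p_states, _ = self.premise_lstm(self.embed(premise_ids))        # (B, Lp, H)
        h_states, _ = self.hypothesis_lstm(self.embed(hypothesis_ids))  # (B, Lh, H)

        # Word-by-word attention: each hypothesis state attends over
        # all premise states via an additive scoring function.
        p_proj = self.attn_premise(p_states).unsqueeze(1)      # (B, 1, Lp, H)
        h_proj = self.attn_hypothesis(h_states).unsqueeze(2)   # (B, Lh, 1, H)
        scores = self.attn_score(torch.tanh(p_proj + h_proj)).squeeze(-1)  # (B, Lh, Lp)
        weights = torch.softmax(scores, dim=-1)

        # Context vectors: the premise summarized from each hypothesis word's view.
        context = torch.bmm(weights, p_states)                 # (B, Lh, H)

        # Combine the final hypothesis state with its attended premise context
        # and classify into entailment / contradiction / neutral.
        final = torch.cat([context[:, -1], h_states[:, -1]], dim=-1)
        return self.classifier(final)
```

Per the abstract, MKAL applies the analogous attention at the predicate level (predicate-to-predicate) with KB assistance and trains in a multi-task setting; the sketch above only shows the word-level attention variant that the paper builds on.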