Authors: Fabio Petroni, Vassilis Plachouras, Timothy Nugent, Jochen L. Leidner
DOI: 10.18653/V1/N18-1042
Keywords:
Abstract: The widespread use of word embeddings is associated with the recent successes of many natural language processing (NLP) systems. The key approach of popular models such as word2vec and GloVe is to learn dense vector representations from the context of words. More recently, other approaches have been proposed that incorporate different types of contextual information, including topics, dependency relations, n-grams, and sentiment. However, these models typically integrate only limited additional contextual information, often in ad hoc ways. In this work, we introduce attr2vec, a novel framework for jointly learning embeddings for words and contextual attributes based on factorization machines. We perform experiments with different types of contextual information. Our experimental results on a text classification task demonstrate that using attr2vec to jointly learn embeddings for words and Part-of-Speech (POS) tags improves results compared to learning them independently. Moreover, we use attr2vec to train dependency-based embeddings and we show that they exhibit higher similarity between functionally related words than traditional approaches.
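The core idea behind a factorization-machine formulation is that every feature (a word, a context word, or an attribute such as a POS tag) gets its own latent vector, and a co-occurrence is scored by summing the pairwise dot products of the latent vectors of all active features. The following is a minimal sketch of that second-order scoring, not the authors' attr2vec implementation; the toy vocabulary, dimensionality, and initialization are hypothetical:

```python
import random

random.seed(0)

# Toy vocabulary mixing words and POS-tag attributes (hypothetical example).
features = ["dog", "barks", "NOUN", "VERB"]
dim = 8  # embedding dimensionality

# One latent vector per feature, as in a factorization machine (FM).
V = {f: [random.gauss(0.0, 0.1) for _ in range(dim)] for f in features}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def fm_score(active):
    """Second-order FM score: sum of pairwise dot products between
    the latent vectors of all active features. Words and attributes
    interact through shared latent space, so their embeddings are
    learned jointly when this score is fit to co-occurrence data."""
    vecs = [V[f] for f in active]
    return sum(dot(vecs[i], vecs[j])
               for i in range(len(vecs))
               for j in range(i + 1, len(vecs)))

# Score a (word, context word, attribute) co-occurrence jointly.
print(fm_score(["dog", "barks", "NOUN"]))
```

In a real training loop these scores would be fit (e.g. by gradient descent) to observed co-occurrence counts, which is what ties the word and attribute vectors together.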