Authors: Bernd Bohnet, Emily Pitler, Goncalo Simoes, Daniel Andor, Joshua Maynez
DOI:
Keywords:
Abstract: The rise of neural networks, and particularly recurrent neural networks, has produced significant advances in part-of-speech tagging accuracy. One characteristic common among these models is the presence of rich initial word encodings. These encodings are typically composed of a recurrent character-based representation combined with learned and pre-trained word embeddings. However, these encodings do not consider a context wider than a single word, and it is only through subsequent layers that word or sub-word information interacts. In this paper, we investigate models that use recurrent neural networks with sentence-level context for initial character- and word-based representations. In particular, we show that optimal results are obtained by integrating these context-sensitive representations through synchronized training with a meta-model that learns to combine their states. We present results on part-of-speech and morphological tagging with state-of-the-art performance on a number of languages.
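To make the combination step described in the abstract concrete, below is a minimal, hedged sketch (not the authors' implementation): per-token hidden states from a character-level and a word-level sentence encoder are concatenated, and a meta-model layer projects the joint representation to tag scores. All dimensions, the random stand-in states, and the simple linear projection are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: the encoder outputs here are random stand-ins
# for per-token states of two sentence-level encoders (e.g. BiLSTMs over
# characters and over words); dimensions are arbitrary assumptions.
rng = np.random.default_rng(0)

seq_len, char_dim, word_dim, n_tags = 5, 8, 8, 4

char_states = rng.standard_normal((seq_len, char_dim))  # character-based encoder
word_states = rng.standard_normal((seq_len, word_dim))  # word-based encoder

# Meta-model combination: concatenate the two context-sensitive
# representations, then apply a learned linear projection to tag scores.
W = rng.standard_normal((char_dim + word_dim, n_tags))
b = np.zeros(n_tags)

combined = np.concatenate([char_states, word_states], axis=-1)  # (5, 16)
logits = combined @ W + b                                        # (5, 4)

print(logits.shape)
```

In the paper's full model the combination is itself a learned network trained jointly ("synchronized training") with the two encoders; the linear layer above only illustrates where their states meet.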