Authors: Prathyusha Jwalapuram, Shafiq Joty, Youlin Shen
DOI: 10.18653/v1/2020.emnlp-main.177
Keywords: Machine translation, BLEU, Artificial intelligence, Benchmark (computing), Fine-tuning, Natural language processing, Class (biology), Pronoun, Computer science
Abstract: Popular Neural Machine Translation model training uses strategies like backtranslation to improve BLEU scores, requiring large amounts of additional data and training. We introduce a class of conditional generative-discriminative hybrid losses that we use to fine-tune a trained machine translation model. Through a combination of targeted fine-tuning objectives and intuitive re-use of the training data the model has failed to adequately learn from, we improve the performance of both a sentence-level and a contextual model without using any additional data. We target the improvement of pronoun translations through our fine-tuning and evaluate our models on a pronoun benchmark testset. Our sentence-level model shows a 0.5 BLEU improvement on both the WMT14 and the IWSLT13 De-En testsets, while our contextual model achieves the best results, improving from 31.81 to 32.82 BLEU on the WMT14 De-En testset, and from 32.10 to 33.13 on the IWSLT13 De-En testset, with corresponding improvements in pronoun translation. We further show the generalizability of our method by reproducing the improvements on two additional language pairs, Fr-En and Cs-En.
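The abstract does not spell out the form of the generative-discriminative hybrid loss. As an illustrative sketch only, a fine-tuning loss of this general family might interpolate a token-level generative negative log-likelihood with a sentence-level discriminative cross-entropy term; the function name, the classifier head, and the interpolation weight `alpha` below are all assumptions for illustration, not the paper's actual formulation:

```python
import numpy as np

def hybrid_loss(token_log_probs, disc_score, target_label, alpha=0.5):
    """Illustrative generative-discriminative hybrid loss (sketch only).

    token_log_probs: log-probabilities of the reference tokens under the
        translation model (generative part).
    disc_score: assumed classifier-head probability that the translation
        is correct (discriminative part).
    target_label: 1.0 for a correct translation, 0.0 for an incorrect one.
    alpha: assumed interpolation weight between the two terms.
    """
    # Generative term: mean negative log-likelihood of reference tokens.
    nll = -np.mean(token_log_probs)
    # Discriminative term: binary cross-entropy on the classifier output.
    eps = 1e-12
    bce = -(target_label * np.log(disc_score + eps)
            + (1.0 - target_label) * np.log(1.0 - disc_score + eps))
    return alpha * nll + (1.0 - alpha) * bce

# Example: three reference-token log-probs and a confident classifier.
loss = hybrid_loss(np.log([0.5, 0.25, 0.8]), disc_score=0.9, target_label=1.0)
```

During fine-tuning, such a combined objective would let poorly translated examples contribute through both the generative and the discriminative term, consistent with the abstract's idea of re-using training data the model has failed to learn from.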