Abstract
The Transformer-based sequence-to-sequence model is among the best-performing machine translation models. Because it generates the target translation token by token from left to right, the prediction at each position cannot draw on the not-yet-generated tokens that follow it; the translation is therefore decoded with incomplete context, which degrades its quality. To alleviate this problem, we propose a neural machine translation model based on re-decoding. The model treats the generated machine translation as an approximate target-language context and re-decodes each of its tokens in turn; during re-decoding, the masked multi-head attention in the Transformer decoder masks only the current-position token of the generated translation, so every regenerated token can make full use of the target-language context on both sides. Experimental results on the test sets of several WMT machine translation tasks show that re-decoding significantly improves the quality of the machine translation output.
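To make the masking scheme concrete, the sketch below (our own illustration under assumed conventions, not the authors' released code; all function names are ours) contrasts the standard causal mask of a Transformer decoder with the re-decoding mask described in the abstract: the former blocks all future positions, while the latter blocks only the token currently being regenerated, leaving the rest of the draft translation visible as bidirectional context.

```python
# A minimal sketch of the two decoder attention masks (our illustration,
# not the paper's code). Convention: True marks positions that attention
# is NOT allowed to look at.
import torch

def causal_mask(seq_len: int) -> torch.Tensor:
    # Standard left-to-right decoding: position i may attend only to j <= i,
    # so every strictly upper-triangular entry is blocked.
    return torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

def redecoding_mask(seq_len: int) -> torch.Tensor:
    # Re-decoding: when regenerating position i, block only position i itself;
    # the rest of the draft translation (left AND right context) stays visible.
    return torch.eye(seq_len, dtype=torch.bool)

def attention_weights(q: torch.Tensor, k: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # Scaled dot-product attention with a boolean block mask applied to the scores.
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    return scores.masked_fill(mask, float("-inf")).softmax(dim=-1)
```

For a four-token draft, `causal_mask(4)` blocks the six entries above the diagonal, whereas `redecoding_mask(4)` blocks only the four diagonal entries, which is why each regenerated token can attend to the full draft except itself.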
Key words
neural machine translation /
encoder-decoder model /
re-decoding /
masked multi-head attention /
Transformer
Funding
National Natural Science Foundation of China (61662031, 61462044)