Dependency Relationship Enhanced Neural Machine Translation Quality Estimation

YE Na, LI Tianyu, CAI Dongfeng, XU Jia

Journal of Chinese Information Processing, 2021, Vol. 35, Issue 9: 46-57.
Section: Machine Translation

Abstract

Translation quality estimation (QE) refers to evaluating machine translation output without reference translations. In recent years, deep learning has achieved major breakthroughs, and neural QE methods incorporating deep learning have gradually replaced traditional QE methods as the mainstream. Neural QE models have some ability to implicitly learn the syntactic structure of the source language, but they cannot effectively capture the syntactic relationships within sentences from a linguistic perspective. This paper proposes a method that explicitly integrates the syntactic relationship information of the source sentence into neural QE, establishing a connection between the dependency relationships of the source language and the translation quality. Experimental results show that the proposed syntactic features improve the accuracy of the QE model. We also extract linguistic features at multiple levels, fuse them in different network models, and analyze the effects of the different features from multiple angles. Finally, an ensemble learning algorithm is used to combine several effective models, achieving the best performance.
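One common way to make dependency information explicit, which the fusion idea in the abstract suggests, is to map each source token's dependency-relation label to an embedding and concatenate it with the token embedding before encoding. The sketch below illustrates this with randomly initialised lookup tables; the label set, dimensions, and `fuse` function are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabularies and sizes (illustrative only; the paper's
# actual dependency label set and embedding dimensions are not given here).
DEP_LABELS = ["nsubj", "dobj", "amod", "root"]
EMB_DIM, DEP_DIM = 8, 4

# Randomly initialised lookup tables standing in for learned embeddings.
word_emb = {w: rng.normal(size=EMB_DIM) for w in ["the", "cat", "sat"]}
dep_emb = {d: rng.normal(size=DEP_DIM) for d in DEP_LABELS}

def fuse(tokens, dep_labels):
    """Concatenate each token embedding with the embedding of its
    dependency-relation label, yielding syntax-aware input vectors
    that a QE encoder could consume."""
    return np.stack([
        np.concatenate([word_emb[t], dep_emb[d]])
        for t, d in zip(tokens, dep_labels)
    ])

feats = fuse(["the", "cat", "sat"], ["amod", "nsubj", "root"])
print(feats.shape)  # (3, 12): sentence length x (EMB_DIM + DEP_DIM)
```

In practice the dependency labels would come from a parser run over the source sentence, and both lookup tables would be trained jointly with the QE model.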

Key words

translation quality estimation / dependency relationship / feature fusion / ensemble learning
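The ensemble step mentioned in the abstract, combining several effective models for the best performance, can be sketched in its simplest form: average the sentence-level quality predictions of the individual models and compare Pearson correlation with the gold scores (the standard sentence-level QE metric). The data below is synthetic and the averaging scheme is an assumption; the paper's actual ensemble algorithm may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic gold quality scores for 100 sentences, and predictions of
# three hypothetical QE models (gold signal plus increasing noise).
gold = rng.uniform(0, 1, size=100)
preds = [np.clip(gold + rng.normal(scale=s, size=100), 0, 1)
         for s in (0.15, 0.20, 0.25)]

def pearson(a, b):
    """Pearson correlation coefficient between two score vectors."""
    return np.corrcoef(a, b)[0, 1]

# A minimal ensemble: average the individual models' predictions.
ensemble = np.mean(preds, axis=0)

for i, p in enumerate(preds, 1):
    print(f"model {i}: r = {pearson(gold, p):.3f}")
print(f"ensemble: r = {pearson(gold, ensemble):.3f}")
```

Because the models' errors are independent here, averaging cancels part of the noise, so the ensemble's correlation with the gold scores tends to exceed that of each single model.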

Cite this article

YE Na, LI Tianyu, CAI Dongfeng, XU Jia. Dependency Relationship Enhanced Neural Machine Translation Quality Estimation. Journal of Chinese Information Processing. 2021, 35(9): 46-57


Funding

Humanities and Social Sciences Youth Fund of the Ministry of Education (19YJC740107); National Natural Science Foundation of China (U1908216); Key Research and Development Program of Liaoning Province (2019JH2/10100020)