Abstract: Translation quality estimation (QE) evaluates machine translation output without reference translations. Current neural QE models can implicitly learn the syntactic structure of the source language, but they cannot effectively capture, from a linguistic perspective, the syntactic relationships within a sentence. This paper proposes a method that integrates syntactic relationship information from the source sentence into neural translation quality estimation, jointly modeling the internal dependency relations of the source language and the translation quality. Experimental results show that the syntactic features improve model performance. Finally, we use an ensemble learning algorithm to integrate multiple additional linguistic features and obtain the best performance.
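To make the core idea concrete, the following is a minimal, hypothetical sketch of one common way to inject source-side dependency information into a neural QE model: each token's dependency relation label (obtained from an external parser, which is assumed here and not shown) is encoded as a one-hot feature vector and concatenated with the token's word embedding before it enters the model. The label set, dimensions, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Toy dependency-relation label set; a real system would use the full
# inventory of the parser's treebank (e.g., Universal Dependencies).
DEPREL_VOCAB = ["det", "nsubj", "root", "obj", "obl"]

def deprel_one_hot(deprels):
    """Map each token's dependency label to a one-hot feature vector."""
    feats = np.zeros((len(deprels), len(DEPREL_VOCAB)))
    for i, rel in enumerate(deprels):
        feats[i, DEPREL_VOCAB.index(rel)] = 1.0
    return feats

def augment_embeddings(token_embs, deprels):
    """Concatenate word embeddings with syntactic features, so the QE
    encoder sees lexical and dependency information jointly."""
    return np.concatenate([token_embs, deprel_one_hot(deprels)], axis=1)

# Example: 3 source tokens with 4-dim embeddings and a pre-computed parse.
embs = np.random.rand(3, 4)
parse = ["det", "nsubj", "root"]
augmented = augment_embeddings(embs, parse)
print(augmented.shape)  # (3, 9): 4 embedding dims + 5 syntactic dims
```

The same pattern extends to richer syntactic features (e.g., head position or parse-tree depth) by appending further columns to the feature matrix.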