A Joint Model of POS Tagging and Dependency Parsing with Multi-Granularity Features for Low-Resource Languages

LU Shan1,2, MAO Cunli1,2, YU Zhengtao1,2, GAO Shengxiang1,2, HUANG Yuxin1,2, WANG Zhenhan1,2

Journal of Chinese Information Processing ›› 2023, Vol. 37 ›› Issue (7): 13-22.
Language Analysis and Computation

Abstract

Part-of-speech (POS) tagging and dependency parsing for low-resource languages play an important role in advancing low-resource natural language processing. Existing word embedding representations for low-resource languages do not fully exploit character- and subword-level information, so models cannot draw on features at different granularities. To address this, this paper proposes a word embedding representation that fuses multi-granularity features: separate language models produce semantic representations at the character, subword, and word levels, and the three embeddings are concatenated to enrich the semantic information, alleviating the poor dependency parsing performance caused by scarce annotated data. The POS tagging and dependency parsing models are further trained jointly, so that the two models share knowledge with each other and the propagation of POS tagging errors into the dependency parsing task is reduced. Taking Thai and Vietnamese as the studied languages, experiments on the Penn Treebank datasets show that the proposed method clearly improves UAS, LAS, and POS accuracy over the baseline models.
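The two ideas in the abstract — concatenating embeddings of three granularities, then scoring both tasks from a shared representation — can be sketched as follows. This is a minimal illustration, not the paper's code: random vectors stand in for the outputs of the character-, subword-, and word-level language models, a single linear layer stands in for the shared encoder, all dimensions are illustrative assumptions, and the biaffine arc scorer follows the general Dozat-Manning formulation cited by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, seq = 2, 5
char_dim, subword_dim, word_dim = 64, 128, 300  # illustrative sizes

# Stand-ins for the outputs of three language models at different granularities.
char_vec = rng.standard_normal((batch, seq, char_dim))
subword_vec = rng.standard_normal((batch, seq, subword_dim))
word_vec = rng.standard_normal((batch, seq, word_dim))

# Multi-granularity fusion: concatenate along the feature axis.
fused = np.concatenate([char_vec, subword_vec, word_vec], axis=-1)

hidden, n_tags = 256, 17
W_enc = rng.standard_normal((fused.shape[-1], hidden)) * 0.01  # shared encoder (one layer here)
W_tag = rng.standard_normal((hidden, n_tags)) * 0.01           # POS tagging head
U_arc = rng.standard_normal((hidden, hidden)) * 0.01           # biaffine arc scorer

h = np.tanh(fused @ W_enc)   # shared representation used by both tasks
tag_logits = h @ W_tag       # [batch, seq, n_tags]
# arc_scores[b, i, j]: score that token j is the syntactic head of token i
arc_scores = np.einsum("bih,hk,bjk->bij", h, U_arc, h)

print(fused.shape)       # (2, 5, 492)
print(tag_logits.shape)  # (2, 5, 17)
print(arc_scores.shape)  # (2, 5, 5)
```

In joint training, the POS cross-entropy loss and the arc cross-entropy loss would be summed into one objective, so gradients from both tasks update the shared encoder — this is the knowledge-sharing mechanism the abstract credits with reducing error propagation from tagging into parsing.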

Key words

low-resource language / part-of-speech tagging / dependency parsing / multi-granularity features / joint model

Cite this article

LU Shan, MAO Cunli, YU Zhengtao, GAO Shengxiang, HUANG Yuxin, WANG Zhenhan. A Joint Model of POS Tagging and Dependency Parsing with Multi-Granularity Features for Low-Resource Languages. Journal of Chinese Information Processing, 2023, 37(7): 13-22.


Funding

National Natural Science Foundation of China (62166023, U21B2027, 61866019); Major Science and Technology Special Program of Yunnan Province (202103AA080015, 202302AD080003, 202002AD080001)