Incorporating Fine-grained Context for Interactive Argument Pair Identification

HE Yuhang, BAO Jianzhu, XU Ruifeng, SUN Yang, ZHAO Qin

Journal of Chinese Information Processing, 2023, Vol. 37, Issue 10: 130-138.
Special Section on Computational Argumentation


Abstract

As an important problem in argument mining, interactive argument pair identification aims to understand the views of both parties in a dialogue and to identify the argument pairs that interact with each other. Existing deep learning-based methods achieve good results by incorporating contextual information, but most of them consider only the context as a whole, ignoring possible noisy text within it and lacking the ability to model the interactions between arguments and their context at a fine-grained level. To address these problems, this paper first filters the context based on semantic similarity, and then constructs a context-based dual interaction graph, so as to mine the latent argument-context and argument-argument interaction patterns in a fine-grained manner and improve identification performance. Experimental results on the public CMV dataset show that the proposed method outperforms existing models and offers a certain degree of interpretability.
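The first stage described above, filtering noisy context by semantic similarity, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each sentence has already been mapped to a dense embedding vector (e.g. by a sentence encoder), and the function names, the cosine measure, and the top-k selection strategy are all illustrative assumptions.

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two dense embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def filter_context(arg_vec, ctx_vecs, top_k=2):
    # Score every context sentence against the argument and keep
    # the top_k most similar ones, preserving their original order.
    ranked = sorted(range(len(ctx_vecs)),
                    key=lambda i: cosine(arg_vec, ctx_vecs[i]),
                    reverse=True)
    return sorted(ranked[:top_k])
```

The retained sentences would then serve as nodes, alongside the candidate arguments, in the dual interaction graph described above; how edges and attention weights are defined there is specific to the paper's model and is not reproduced here.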

Key words

interactive argument pair identification / fine-grained modeling / context information

Cite this article

HE Yuhang, BAO Jianzhu, XU Ruifeng, SUN Yang, ZHAO Qin. Incorporating Fine-grained Context for Interactive Argument Pair Identification. Journal of Chinese Information Processing. 2023, 37(10): 130-138


Funding

National Natural Science Foundation of China (62176076, 62006062); Shenzhen Key Technology Research Project (JSGG20210802154400001); Shenzhen Key Fundamental Research Project (JCYJ20220818102415032)