Cross-Evidence Entity Relation Reasoning Model for Fact Checking

HE Yancheng, XU Bing, ZHU Conghui

Journal of Chinese Information Processing ›› 2024, Vol. 38 ›› Issue (3): 93-101, 112
Information Extraction and Text Mining


Abstract

Fact checking is the task of detecting false information on the basis of evidence texts. Existing approaches mainly concatenate the claim with the evidence texts and feed them into a pre-trained model for classification, or reason over a fully connected graph of single-sentence nodes. These methods overlook the long-range semantic associations among evidence texts and the noise those texts contain. To address these problems, this paper proposes a graph convolutional network model built on cross-evidence entity relations, the Cross-Evidence Entity Relation Reasoning Model (CERM). Starting from the entity co-occurrence relations among multiple evidence texts, CERM links mentions of the same entity across different evidence texts and aggregates the semantic and structural information of each entity while suppressing noisy information, which improves the model's ability to identify false information. Experiments on a public fact verification benchmark show that the proposed model outperforms existing baselines on the standard evaluation metrics, verifying the effectiveness of CERM for fact checking.
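To make the graph construction concrete, the following is a minimal, hypothetical Python sketch (not the authors' code): it builds a toy cross-evidence entity co-occurrence graph with one node per entity mention, edges between mentions inside the same evidence sentence, edges linking mentions of the same entity across different evidence texts, and then applies a single symmetrically normalized GCN propagation step. The example sentences, entity lists, random features, and one-layer update are placeholders and do not reproduce CERM's exact architecture or training.

# Hypothetical sketch of a cross-evidence entity co-occurrence graph plus one GCN step.
# Entities, features, and weights are toy placeholders, not the paper's actual setup.
import numpy as np

# Toy evidence sentences with (already extracted) entity mentions.
evidence = [
    {"id": 0, "entities": ["Barack Obama", "Hawaii"]},
    {"id": 1, "entities": ["Barack Obama", "Harvard Law School"]},
    {"id": 2, "entities": ["Hawaii", "United States"]},
]

# One graph node per (evidence id, entity mention) pair.
nodes = [(e["id"], ent) for e in evidence for ent in e["entities"]]
n = len(nodes)

A = np.eye(n)  # adjacency matrix with self-loops
for i, (ev_i, ent_i) in enumerate(nodes):
    for j, (ev_j, ent_j) in enumerate(nodes):
        if i == j:
            continue
        if ev_i == ev_j:                     # mentions from the same evidence sentence
            A[i, j] = 1.0
        if ent_i == ent_j and ev_i != ev_j:  # same entity across different evidence texts
            A[i, j] = 1.0

# Symmetric normalization: D^{-1/2} A D^{-1/2}
deg = A.sum(axis=1)
d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
A_hat = d_inv_sqrt @ A @ d_inv_sqrt

# Random vectors stand in for encoder outputs (e.g. pre-trained sentence/entity embeddings).
rng = np.random.default_rng(0)
H = rng.normal(size=(n, 16))   # initial node representations
W = rng.normal(size=(16, 16))  # GCN layer weight

H_next = np.maximum(A_hat @ H @ W, 0.0)  # one GCN layer: ReLU(A_hat H W)
print(H_next.shape)  # (6, 16): updated mention representations

In the paper's setting, node features would come from a pre-trained encoder and the aggregated representations would feed a classifier that judges the claim's veracity; the sketch only illustrates how same-entity links let information flow between otherwise distant evidence texts.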

Keywords

fact checking / graph convolutional neural network / entity relation

Cite This Article

HE Yancheng, XU Bing, ZHU Conghui. Cross-Evidence Entity Relation Reasoning Model for Fact Checking. Journal of Chinese Information Processing, 2024, 38(3): 93-101, 112.


Funding

National Key Research and Development Program of China (2020YFB1406902)