YANG Liang, ZHOU Fengqing, ZHANG Li, MAO Guoqing, YI Bin, LIN Hongfei. Argument Recognition Based on Generative Adversarial Networks[J]. Journal of Chinese Information Processing, 2020, 34(9): 89-96.
Argument Recognition Based on Generative Adversarial Networks
YANG Liang1, ZHOU Fengqing1, ZHANG Li2, MAO Guoqing3, YI Bin1, LIN Hongfei1
1. College of Computer Science and Technology, Dalian University of Technology, Dalian, Liaoning 116024, China; 2. Beijing Institute of Computer Technology and Application, Beijing 100854, China; 3. Beijing GridSum Technology Co., Ltd., Beijing 100083, China
Abstract: In judicial trials, the prosecution and the defense often hold differing views on the focus of dispute (the argument) of a case, which is also a key factor in the final judgment. To identify the arguments in cases, this paper introduces a text summarization model, since forming an argument depends largely on analyzing and summarizing the case text. We construct an argument generation model based on a generative adversarial network and use it to obtain the argument of a case. Experiments on real judicial data collected from the China Judgements Online website show that the proposed model improves accuracy on the argument recognition task. In practice, the method can serve as an auxiliary tool for procuratorial personnel in pre-trial preparation and during the trial of a case.
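The abstract describes training a text generator adversarially, where a discriminator's judgment of whether a generated argument resembles a real one supplies the training signal. Because tokens are discrete, such models are typically trained with policy gradients (REINFORCE). The sketch below is a hedged, self-contained toy illustration of that idea, not the paper's actual architecture: the per-position logit generator, the fixed scoring function standing in for a learned discriminator, the tiny vocabulary, and all hyperparameters are invented for the example.

```python
import math
import random

random.seed(0)

VOCAB = ["the", "defendant", "committed", "theft", "intentionally"]
TARGET = [1, 2, 3, 0]          # toy stand-in for a "real" argument sequence
SEQ_LEN = len(TARGET)
V = len(VOCAB)

# Generator: independent per-position logits over the vocabulary
# (a drastic simplification of the recurrent generators used in
# SeqGAN-style text GANs).
logits = [[0.0] * V for _ in range(SEQ_LEN)]

def softmax(row):
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def sample_sequence():
    """Sample one token sequence from the generator's distribution."""
    seq = []
    for t in range(SEQ_LEN):
        p = softmax(logits[t])
        r, acc = random.random(), 0.0
        for k, pk in enumerate(p):
            acc += pk
            if r <= acc:
                seq.append(k)
                break
        else:
            seq.append(V - 1)
    return seq

def discriminator_reward(seq):
    """Fixed scorer standing in for a trained discriminator:
    fraction of tokens matching the 'real' sequence."""
    return sum(a == b for a, b in zip(seq, TARGET)) / SEQ_LEN

def reinforce_step(lr=0.5, batch=16):
    """One REINFORCE update: raise the probability of sampled tokens
    in proportion to their reward advantage over the batch baseline."""
    samples = [sample_sequence() for _ in range(batch)]
    rewards = [discriminator_reward(s) for s in samples]
    baseline = sum(rewards) / batch
    for seq, r in zip(samples, rewards):
        adv = r - baseline
        for t, a in enumerate(seq):
            p = softmax(logits[t])
            for k in range(V):
                grad = (1.0 if k == a else 0.0) - p[k]
                logits[t][k] += lr * adv * grad
    return baseline

r0 = reinforce_step()              # average reward before training
for _ in range(200):
    r_final = reinforce_step()     # average reward after training
print("initial reward:", round(r0, 2), "final reward:", round(r_final, 2))
```

In a full SeqGAN-style setup the discriminator is itself a classifier retrained on real versus generated sequences, and rewards for partial sequences are estimated by Monte Carlo rollouts; this sketch fixes the scorer so the policy-gradient update is visible in isolation.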