SHI Hang, LIU Ruifang, LIU Xinyu, CHEN Hongyu. Question Generation Based on Paragraph and Close-Answer Context[J]. Journal of Chinese Information Processing, 2021, 35(8): 127-134.
Question Generation Based on Paragraph and Close-Answer Context
SHI Hang, LIU Ruifang, LIU Xinyu, CHEN Hongyu
School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876, China
Abstract: The automatic question generation task aims to generate natural language questions for a paragraph of text. Existing neural question generation models mainly take either a single sentence or the whole paragraph containing the target answer as input. To better exploit the context of the target answer, this paper proposes a multi-input hierarchical attention sequence-to-sequence network that captures the most valuable sentence-level information together with the richer semantics of the paragraph, so as to generate high-quality questions. Experiments on SQuAD show that the proposed method outperforms the state of the art in BLEU-4, and the answerability of its generated questions is clearly better than that of the baseline system.
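The paper itself is not reproduced here, but the general idea of hierarchical attention named in the abstract can be illustrated: attention is applied first over the words of each sentence to build sentence vectors, then over the sentence vectors to build a paragraph context, letting the model weight the sentence nearest the answer more heavily. The sketch below is a minimal, hypothetical NumPy illustration of that two-level attention pattern, not the authors' actual model; all names (`attend`, `answer_query`, the dot-product scoring) are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys):
    # Simple dot-product attention: score each key against the query,
    # normalize to weights, and return the weighted sum as the context.
    scores = keys @ query                 # (n,)
    weights = softmax(scores)             # (n,), sums to 1
    return weights @ keys, weights        # context vector of shape (d,)

rng = np.random.default_rng(0)
d = 8
# A toy "paragraph" of 3 sentences, each a matrix of word embeddings.
sentences = [rng.normal(size=(n, d)) for n in (5, 7, 4)]
answer_query = rng.normal(size=d)         # stand-in for an answer encoding

# Word-level attention: collapse each sentence to one vector.
sent_vecs = np.stack([attend(answer_query, s)[0] for s in sentences])

# Sentence-level attention: paragraph context, weighted toward the
# sentences most relevant to the answer (e.g. the close-answer sentence).
paragraph_ctx, sent_weights = attend(answer_query, sent_vecs)

print(paragraph_ctx.shape)               # (8,)
print(round(sent_weights.sum(), 6))      # 1.0
```

In a full sequence-to-sequence model these two attention levels would be recomputed at every decoder step with learned projections; the sketch only shows how sentence-level and paragraph-level information are combined into one context vector.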