面向学科题目的文本分析方法与应用研究综述

黄振亚,刘淇,陈恩红,林鑫,何理扬,刘嘉聿,王士进

中文信息学报 2022, Vol. 36, Issue 10: 1-16
综述


A Survey on Text Analysis Methods and Applications for Educational Questions

  • HUANG Zhenya1,2, LIU Qi1,2, CHEN Enhong1,2, LIN Xin1,2, HE Liyang1,2, LIU Jiayu1,2, WANG Shijin2,3

摘要

分析学科题目含义、模拟人类解决问题,是当前“人工智能+教育”融合研究的重要方向之一。近年来,智能教育系统的快速发展积累了大量学科题目资源,为相关研究提供了数据支撑。为此,利用大数据分析与自然语言处理相关的技术,研究者提出了大量面向学科题目的文本分析方法,开展了许多重要的智能应用任务,对探索人类知识学习等认知能力具有重要意义。该文围绕智能教育与自然语言处理交叉领域,介绍了若干代表性研究任务,包括题目质量分析、机器阅读理解、数学题问答、文章自主评分等,并对相应研究进展进行阐述和总结;此外,对相关数据集和开源工具包进行了总结和介绍;最后,展望了多个未来研究方向。

Abstract

One of the important research directions in the integration of artificial intelligence into education is analyzing the meaning of educational questions and simulating how humans solve problems. In recent years, the rapid development of intelligent education systems has accumulated a large number of educational question resources, which provide data support for related research. Leveraging big data analysis and natural language processing techniques, researchers have proposed many text analysis methods for educational questions and carried out a range of important intelligent application tasks, which are of great significance for exploring cognitive abilities such as how humans acquire knowledge. In this paper, we review several representative topics at the intersection of intelligent education and natural language processing, including question quality analysis, machine reading comprehension, math problem solving, and automated essay scoring. Moreover, we introduce the relevant public datasets and open-source toolkits. Finally, we conclude with several promising future directions.
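To make one of the surveyed tasks concrete: early math word problem solvers mapped surface text patterns to arithmetic operations before neural sequence-to-equation models took over. The following is an illustrative toy sketch only, assuming a hypothetical two-quantity problem and simple keyword cues; it is a simplification in the spirit of early rule-based methods, not any specific system covered by the survey.

```python
import re

# Toy rule-based arithmetic word problem solver (illustrative sketch).
# The keyword cues below are hypothetical simplifications.
def solve_simple_mwp(problem: str) -> float:
    """Extract two numbers and choose add/subtract from lexical cues."""
    nums = [float(n) for n in re.findall(r"\d+(?:\.\d+)?", problem)]
    if len(nums) != 2:
        raise ValueError("this toy solver handles exactly two quantities")
    text = problem.lower()
    # Cues suggesting an increase in the quantity
    if any(cue in text for cue in ("more", "total", "altogether", "gains")):
        return nums[0] + nums[1]
    # Cues suggesting a decrease in the quantity
    if any(cue in text for cue in ("left", "fewer", "gives away", "loses")):
        return nums[0] - nums[1]
    raise ValueError("no operator cue found")
```

For example, `solve_simple_mwp("John has 3 apples and buys 2 more.")` returns `5.0`. The brittleness of such keyword rules is precisely what motivated the statistical and neural solvers this survey reviews.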

关键词

学科题目 / 题目质量分析 / 机器阅读理解 / 数学题问答 / 文章自主评分

Key words

educational questions / question quality analysis / machine reading comprehension / math problem solving / automated essay scoring
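Another of the keywords above, automated essay scoring, began as regression over hand-crafted surface features. The sketch below is illustrative only: the three features and the linear scoring form are hypothetical simplifications in the spirit of early feature-based AES systems, not a method from the survey.

```python
# Toy feature-based automated essay scoring sketch (illustrative only).
def essay_features(essay: str) -> list[float]:
    words = essay.split()
    n = len(words)
    return [
        float(n),                                         # essay length
        sum(len(w) for w in words) / max(n, 1),           # mean word length
        len({w.lower() for w in words}) / max(n, 1),      # lexical diversity
    ]

def score(essay: str, weights: list[float], bias: float = 0.0) -> float:
    """Linear scoring model: bias plus a weighted sum of surface features."""
    return bias + sum(w * f for w, f in zip(weights, essay_features(essay)))
```

In practice the weights would be fit against human-rated essays; the neural approaches covered in the survey replace these surface features with learned representations of essay content and coherence.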

引用本文

黄振亚,刘淇,陈恩红,林鑫,何理扬,刘嘉聿,王士进. 面向学科题目的文本分析方法与应用研究综述. 中文信息学报. 2022, 36(10): 1-16
HUANG Zhenya, LIU Qi, CHEN Enhong, LIN Xin, HE Liyang, LIU Jiayu, WANG Shijin. A Survey on Text Analysis Methods and Applications for Educational Questions. Journal of Chinese Information Processing. 2022, 36(10): 1-16


基金

国家自然科学基金(62106244,U20A20229,61922073);中央高校基本科研业务费专项资金(WK2150110021)