A Long-term Memory Prediction Model for Language Learning via LSTM

YE Junyao, SU Jingyong, WANG Yaowei, XU Yong

Journal of Chinese Information Processing ›› 2022, Vol. 36 ›› Issue (12): 133-138, 148.
Information Extraction and Text Mining


Abstract

Spaced repetition is a common memorization method in language learning: review intervals are scheduled so that learners practice at the right times to achieve the desired retention. Setting appropriate intervals requires predicting the learners' long-term memory. This paper proposes a long-term memory prediction model for language learning based on a long short-term memory network (LSTM). We extract statistical features and sequence features from learners' memory behavior histories, use an LSTM to model the memory behavior sequences, and apply it within a half-life regression (HLR) model to predict the probability that foreign language learners recall words. Evaluated on 9 billion real memory behavior records, the experiments show that sequence features carry more useful information than statistical features, and that the proposed LSTM-HLR model reduces error by 50% compared with state-of-the-art models.
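For context, half-life regression predicts recall as an exponential function of the time elapsed since the last review, scaled by an estimated memory half-life: p = 2^(-Δ/h). A minimal sketch of this recall formula (the function name and example values are illustrative, not from the paper):

```python
def recall_probability(delta_days: float, half_life_days: float) -> float:
    """Half-life regression recall model: p = 2 ** (-delta / h).

    delta_days: time elapsed since the last review.
    half_life_days: estimated memory half-life for this learner/word.
    """
    return 2.0 ** (-delta_days / half_life_days)

# Immediately after review, recall probability is 1.0;
# after exactly one half-life, it has dropped to 0.5.
p_now = recall_probability(delta_days=0.0, half_life_days=7.0)
p_week = recall_probability(delta_days=7.0, half_life_days=7.0)
```

In HLR-style models the half-life h itself is predicted from features of the learner's review history; the LSTM-HLR model described above replaces hand-crafted statistical features with an LSTM encoding of the raw review sequence when estimating h.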

Key words

spaced repetition / language learning / long-term memory / LSTM

Cite this article

YE Junyao, SU Jingyong, WANG Yaowei, XU Yong. A Long-term Memory Prediction Model for Language Learning via LSTM. Journal of Chinese Information Processing. 2022, 36(12): 133-138,148


Funding

Natural Science Foundation of Guangdong Province (2022A1515010800)
