Neural Tensor Factorization Recommendation Model Based on Attention LSTM

LI Jingjing, XIA Hongbin, LIU Yuan

Journal of Chinese Information Processing ›› 2021, Vol. 35 ›› Issue (5): 91-100.
Information Extraction and Text Mining


Neural Tensor Factorization Recommendation Model Based on Attention LSTM

  • LI Jingjing1, XIA Hongbin1,2, LIU Yuan1,2


Abstract

Current collaborative filtering algorithms that incorporate deep learning models fail to consider that the multi-dimensional interactions in linked data change dynamically over time. This paper proposes a tensor factorization recommendation model that combines temporal interaction learning with attention-based long short-term memory networks (LA-NTF). First, an LSTM with an attention mechanism is applied to extract the item's latent vector from the item's text information. Second, the multi-dimensional temporal interactions in user-item relational data are characterized by attention-based LSTMs. Finally, the user-item-time 3D tensor is embedded into a multi-layer perceptron to learn the non-linear structural features among the different latent factors and thereby predict the user's rating of the item. Experiments on two real-world datasets show that LA-NTF significantly outperforms neural-network-based factorization models and other traditional methods on the RMSE and MAE metrics, indicating that our model markedly improves rating prediction on various kinds of dynamic relational data.
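The pipeline described in the abstract can be sketched in a few lines of NumPy: attention pooling over encoder states stands in for the attention-LSTM item encoder, and embeddings for the user, item, and time slice are concatenated and fed through an MLP to score a rating; RMSE and MAE are then computed as in the evaluation. Everything here — the dimensions, the random weights, and the stubbed LSTM token states — is an illustrative assumption, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: users, items, time slices, latent dimension.
n_users, n_items, n_times, d = 8, 10, 5, 16

# Embedding tables for the three tensor modes (user, item, time).
U = rng.normal(0, 0.1, (n_users, d))
V = rng.normal(0, 0.1, (n_items, d))
T = rng.normal(0, 0.1, (n_times, d))

def attention_pool(H, w):
    """Soft-attention pooling over encoder states H (seq_len x d):
    softmax(H @ w) gives per-token weights; return their weighted sum."""
    scores = H @ w
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ H

# Stub for the attention-LSTM text encoder: 6 random "token states"
# in place of real LSTM outputs over the item's text.
H_item = rng.normal(0, 0.1, (6, d))
w_att = rng.normal(0, 0.1, d)
item_text_vec = attention_pool(H_item, w_att)

# Two-layer MLP over the concatenated (user, item, time) factors.
W1 = rng.normal(0, 0.1, (3 * d, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 1));     b2 = np.zeros(1)

def predict_rating(u, i, t):
    # Fuse the item's ID embedding with its attention-pooled text vector.
    item_vec = V[i] + item_text_vec
    x = np.concatenate([U[u], item_vec, T[t]])
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer
    return (h @ W2 + b2)[0]            # scalar rating estimate

r_hat = predict_rating(2, 3, 1)

# The two reported metrics, on a toy batch of ratings.
y_true = np.array([4.0, 3.0, 5.0])
y_pred = np.array([3.5, 3.2, 4.6])
rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
mae = float(np.mean(np.abs(y_true - y_pred)))
```

In the real model the weights are trained end to end; this sketch only fixes the shapes and the data flow from text attention through the user-item-time tensor to a scalar rating.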


Key words

attention mechanism / long short-term memory network / temporal interaction learning / recommendation system / tensor factorization

Cite This Article

LI Jingjing, XIA Hongbin, LIU Yuan. Neural Tensor Factorization Recommendation Model Based on Attention LSTM. Journal of Chinese Information Processing. 2021, 35(5): 91-100


Funding

National Science and Technology Support Program of China (2015BAH54F01); National Natural Science Foundation of China (61672264)
