Abstract
Most existing sequential recommendation algorithms map each item to a vector representation; when the number of items is very large, the item embedding table suffers from poor memory efficiency. In addition, many sequential recommendation algorithms are combined with over-parameterized networks, which leads to parameter redundancy during training and harms the model's computational speed and performance. To address these problems, this paper designs a lightweight sequential recommendation algorithm that achieves higher memory efficiency than previous methods. First, a Dynamic Compositional Embedding method generates a set of smaller base embedding tables through complementary partitions, and the final item embeddings are dynamically produced using the quotient-remainder trick and weight allocation. Second, to avoid parameter redundancy, a Dynamic Convolution Network and twin-head self-attention are introduced to extract users' short-term and long-term preferences. Combining these two parts yields a lightweight sequential recommendation algorithm, DCE-DCN, whose effectiveness is verified by extensive experiments on three public datasets: Beauty, Yelp, and MovieLens-1M.
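The memory saving from the quotient-remainder trick can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's DCE-DCN implementation: plain NumPy stands in for a deep-learning framework, all names (`num_items`, `num_buckets`, `qr_embedding`) are hypothetical, and an element-wise product stands in for the paper's learned weight allocation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a full table would need one row per item, while the
# quotient-remainder trick uses two tables of about sqrt(num_items) rows each.
num_items = 10_000
dim = 32
num_buckets = int(np.ceil(np.sqrt(num_items)))  # 100

# Two small base embedding tables induced by complementary partitions:
# one partitions items by quotient, the other by remainder.
quotient_table = rng.normal(size=(num_buckets, dim))
remainder_table = rng.normal(size=(num_buckets, dim))

def qr_embedding(item_id: int) -> np.ndarray:
    """Compose an item embedding from the two base tables.

    Every item maps to a unique (quotient, remainder) pair, so distinct
    items receive distinct compositions even though each base table alone
    collides many items into the same row.
    """
    q = item_id // num_buckets
    r = item_id % num_buckets
    # Element-wise product is one common composition operation; the paper's
    # weight allocation step would instead combine the rows with learned weights.
    return quotient_table[q] * remainder_table[r]

# Parameter counts: the composed tables replace a 10,000 x 32 table
# (320,000 parameters) with two 100 x 32 tables (6,400 parameters).
full_params = num_items * dim
composed_params = 2 * num_buckets * dim
```

Because the two partitions are complementary, the mapping from item id to `(q, r)` is injective, which is what lets the small tables stand in for the full one without two items sharing the same composed embedding.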
Key words
sequential recommendation /
compositional embedding /
dynamic convolutional neural network /
lightweight
Funding
National Natural Science Foundation of China (61462049)