情感导向对话回复生成模型

马志强1,2, 王春喻1, 贾文超1, 杜宝祥1

中文信息学报 (Journal of Chinese Information Processing) ›› 2023, Vol. 37 ›› Issue (8): 104-114.
问答与对话 (Question Answering and Dialogue)

Emotion Guided Dialogue Response Generation Model

MA Zhiqiang1,2, WANG Chunyu1, JIA Wenchao1, DU Baoxiang1

摘要

情感对话生成模型在生成回复中并未考虑对话双方的情感变化,导致生成回复的情感存在不确定性,降低了用户的对话意愿。该文基于情感诱发相关研究提出了一种情感导向回复生成模型EGRGM,模型添加了情感导向门,控制机器情感状态合理生成,逐步引导用户情感向目标情感自然转化。实验以NLPCC 2018的开放领域多轮对话语料为基础,基于GPT-2构建情感导向对话回复生成实验系统,开展了情感引导、逻辑关联以及对话持续等实验。实验结果表明,模型在情感引导方面与最优模型相比提高了13%,能够根据对话历史和指定的引导目标情感极性,控制用户回复生成的情感,实现对用户情感的引导。

Abstract

Existing emotional dialogue generation models do not consider the emotional changes of both parties in the dialogue when generating responses, which leads to uncertainty in the emotion of the generated responses and reduces the user's willingness to continue the dialogue. Based on research on emotion elicitation, this paper proposes an emotion-guided response generation model, EGRGM. The model adds an emotion-guided gate that controls the reasonable generation of the machine's emotional state, gradually and naturally guiding the user's emotion toward the target emotion. The experimental dataset is constructed from the NLPCC 2018 open-domain multi-turn dialogue corpus, and an emotion-guided dialogue response generation system is built on GPT-2; experiments on emotion guidance, logical association, and dialogue persistence are conducted. The results show that the model improves the emotion guidance score by 13% over the best baseline model, and that it can control the emotion of the generated user reply according to the dialogue history and a specified target emotional polarity, thereby guiding the user's emotion.
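The abstract describes the emotion-guided gate only at a high level, and the paper's actual formulation is not given here. As a purely illustrative sketch (the function name, the interpolation form, and all parameters below are assumptions, not the EGRGM definition), such a gate can be pictured as a learned sigmoid gate that blends the machine's current emotion state with the target emotion, so the emotional state shifts gradually rather than abruptly:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def emotion_guided_gate(e_current, e_target, W, b):
    """Hypothetical sketch of an emotion-guided gate.

    A per-dimension gate g in (0, 1) is computed from the current and
    target emotion vectors; the output is a convex combination of the
    two, i.e. one small step from the current state toward the target.
    """
    h = np.concatenate([e_current, e_target])
    g = sigmoid(W @ h + b)                      # learned gate, (0, 1) per dimension
    return g * e_target + (1.0 - g) * e_current

# Toy usage with random parameters (illustrative only, not trained weights)
rng = np.random.default_rng(0)
d = 4
e_cur = rng.normal(size=d)          # current machine emotion state
e_tgt = rng.normal(size=d)          # target emotion polarity embedding
W = rng.normal(size=(d, 2 * d))
b = np.zeros(d)
e_guided = emotion_guided_gate(e_cur, e_tgt, W, b)
```

Because the gate output lies strictly between 0 and 1, each component of the result stays between the current and target values, which matches the "gradual guidance" idea in the abstract; how EGRGM actually conditions GPT-2 on this state is specified in the full paper.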

关键词

情感对话 / 回复生成 / 情感表达 / 情感导向

Key words

emotional dialogue / response generation / emotion expression / emotion guidance

引用本文 (Cite this article)

马志强,王春喻,贾文超,杜宝祥. 情感导向对话回复生成模型. 中文信息学报. 2023, 37(8): 104-114
MA Zhiqiang, WANG Chunyu, JIA Wenchao, DU Baoxiang. Emotion Guided Dialogue Response Generation Model. Journal of Chinese Information Processing. 2023, 37(8): 104-114


基金 (Funding)

National Natural Science Foundation of China (61762070, 62166029); Natural Science Foundation of Inner Mongolia (2019MS06004); Inner Mongolia Autonomous Region Special Fund for Science and Technology Achievement Transformation (2020CG0073); Inner Mongolia Autonomous Region Science and Technology Major Project (2019ZD015); Inner Mongolia Autonomous Region Key Technology Research Program (2019GG273); Inner Mongolia Autonomous Region Postgraduate Research Innovation Project (SZ2020073)