A Multi-Granularity Semantic Interaction Understanding Network for Humor Level Recognition
ZHANG Jinhui1, ZHANG Shaowu1, LIN Hongfei1, FAN Xiaochao1,2, YANG Liang1
1. School of Computer Science and Technology, Dalian University of Technology, Dalian, Liaoning 116024, China; 2. College of Computer Science and Technology, Xinjiang Normal University, Urumqi, Xinjiang 830054, China
|
|
Abstract Humor plays an important role in daily communication. Existing work on humor level recognition tends to treat humor text as a whole, ignoring its internal semantic relations. Treating humor level recognition as a natural language inference task, this paper divides humor text into two parts, the "setup" and the "punchline", and models the two parts together with their mutual relations. A multi-granularity semantic interaction understanding network is proposed to capture the semantic association and interaction in humor text at both word and clause granularity. Experiments on the public Reddit humor dataset show that the model improves accuracy by 1.3% over the previous best result.
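The abstract describes the architecture only at a high level. The following PyTorch sketch illustrates the general idea of word-granularity co-attention between a setup and a punchline, followed by a clause-granularity summary for humor level classification. It is a minimal illustration under our own assumptions (ESIM-style soft alignment, BiLSTM encoders, toy dimensions and class count); module names and pooling choices are illustrative, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): word-level co-attention
# between a "setup" and a "punchline", then a clause-level summary.
# All names, dimensions, and the pooling scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SetupPunchlineInteraction(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=300, hidden=128, num_levels=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Word-granularity encoder shared by the two clauses
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        # Clause-granularity fusion of the attended representations
        self.fuse = nn.LSTM(8 * hidden, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(4 * hidden, num_levels)

    def co_attend(self, a, b):
        # Word-level interaction: soft-align every setup word with punchline words
        scores = torch.bmm(a, b.transpose(1, 2))            # (B, La, Lb)
        a_hat = torch.bmm(F.softmax(scores, dim=2), b)       # punchline-aware setup
        b_hat = torch.bmm(F.softmax(scores, dim=1).transpose(1, 2), a)
        return a_hat, b_hat

    def forward(self, setup_ids, punch_ids):
        s, _ = self.encoder(self.embed(setup_ids))           # (B, Ls, 2H)
        p, _ = self.encoder(self.embed(punch_ids))           # (B, Lp, 2H)
        s_hat, p_hat = self.co_attend(s, p)
        # Enhance with element-wise difference and product (ESIM-style)
        s_m = torch.cat([s, s_hat, s - s_hat, s * s_hat], dim=-1)
        p_m = torch.cat([p, p_hat, p - p_hat, p * p_hat], dim=-1)
        s_v, _ = self.fuse(s_m)
        p_v, _ = self.fuse(p_m)
        # Clause-granularity summary: max-pool each clause, then concatenate
        clause = torch.cat([s_v.max(dim=1).values, p_v.max(dim=1).values], dim=-1)
        return self.classifier(clause)                        # humor level logits


if __name__ == "__main__":
    model = SetupPunchlineInteraction()
    setup = torch.randint(1, 30000, (2, 12))   # toy batch: 2 setups, 12 tokens each
    punch = torch.randint(1, 30000, (2, 8))    # 2 punchlines, 8 tokens each
    print(model(setup, punch).shape)           # torch.Size([2, 4])
```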
|
Received: 21 February 2021
|
|
|
|
|
|
|
|