Survey
CHEN Yulong, FU Qiankun, ZHANG Yue
2021, 35(3): 1-23.
In recent years, neural networks have gradually overtaken classical machine learning models to become the de facto paradigm for natural language processing tasks. Most typical neural networks, however, are designed to handle data in Euclidean space, whereas linguistic information such as discourse and syntactic structure is inherently graph-structured. Consequently, a growing body of research applies graph neural networks to model structures in natural language. This paper systematically surveys applications of graph neural networks in natural language processing. It first discusses the fundamental concepts and introduces the three main categories of graph neural networks, namely graph recurrent neural networks, graph convolutional networks, and graph attention networks. It then describes methods for constructing graph structures appropriate to different tasks and for applying graph neural networks to encode those structures. The paper argues that, compared with focusing on novel network structures, exploring how to exploit the key information of a specific task to construct corresponding graphs is more generally applicable and of greater academic value, making it a promising direction for future research.
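To make concrete what one of the three named categories computes, the following is a minimal sketch of a single graph convolutional network (GCN) layer, assuming the standard symmetric-normalization propagation rule of Kipf and Welling (2017). The toy dependency graph, the embedding dimensions, and the function name gcn_layer are illustrative assumptions, not details taken from the survey itself.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1)                   # node degrees of A + I
    D_inv_sqrt = np.diag(deg ** -0.5)         # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # normalized adjacency
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU activation

# Toy example (assumed): 4 tokens connected by undirected dependency arcs.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H = np.random.randn(4, 8)   # initial 8-dim token embeddings (random)
W = np.random.randn(8, 8)   # layer weight matrix (randomly initialized)

H_next = gcn_layer(A, H, W)
print(H_next.shape)          # (4, 8): updated node representations
```

Each layer lets a token's representation absorb information from its graph neighbors, which is why stacking a few such layers can encode syntactic or discourse graphs of the kind the survey discusses.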