Abstract: Knowledge representation learning has attracted considerable attention in natural language processing, with encouraging results on tasks such as entity linking, relation extraction, and question answering. However, most existing models use only the structural information of the knowledge graph and therefore handle new entities, or entities with few facts, poorly. This paper proposes a joint knowledge representation model that exploits both entity descriptions and structural information. First, we introduce a convolutional neural network to encode the entity descriptions. We then design an attention mechanism to select the informative parts of the text, and introduce position vectors as supplementary information. Finally, a gating mechanism integrates the structural and textual information into a joint representation. Experimental results show that our models outperform the baselines on link prediction and triplet classification tasks.
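The gating step described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes the common formulation in which a learnable, element-wise gate (squashed through a sigmoid) interpolates between the structure-based and description-based embeddings of an entity. All names (`joint_embedding`, `gate_params`) and the dimensionality are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def joint_embedding(e_struct, e_desc, gate_params):
    # Element-wise gate in (0, 1): each dimension decides how much to
    # draw from the structural vs. the textual (description) embedding.
    g = sigmoid(gate_params)
    return g * e_struct + (1.0 - g) * e_desc

rng = np.random.default_rng(0)
d = 8
e_s = rng.normal(size=d)   # structure-based entity embedding (e.g. from TransE)
e_d = rng.normal(size=d)   # description-based embedding (e.g. from a CNN encoder)
g_p = rng.normal(size=d)   # learnable gate parameters, one per dimension

e_joint = joint_embedding(e_s, e_d, g_p)
print(e_joint.shape)  # (8,)
```

In training, `gate_params` would be optimized jointly with the embeddings, so entities with rich descriptions can lean on the textual side while well-connected entities lean on the structural side.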