Knowledge Representation Learning for Joint Structural and Textual Embedding Via Attention-based CNN

PENG Min, YAO Yalan, XIE Qianqian, GAO Wang

Journal of Chinese Information Processing ›› 2019, Vol. 33 ›› Issue (2): 51-58.
Knowledge Representation and Knowledge Acquisition

Abstract

Knowledge representation learning has attracted much attention in natural language processing, with encouraging results on tasks such as entity linking, relation extraction, and question answering. However, most existing models use only the structural information of the knowledge graph and cannot handle new entities, or entities with few associated facts, very well. This paper proposes a joint knowledge representation model that utilizes both entity descriptions and structural information. First, we introduce a convolutional neural network to encode the entity description. Then, we design an attention mechanism to select the valid information in the text. Moreover, we introduce the position vector as supplementary information. Finally, a gating mechanism is applied to integrate the structural and textual information into the joint representation. Experimental results show that our model outperforms other baselines on link prediction and triplet classification tasks.
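The two fusion steps named in the abstract, attention-based selection over the description text and a gate blending the structural and textual vectors, can be sketched in miniature. The sketch below is an illustration only: it omits the convolutional encoder and position vectors, and all vectors and dimensions are random toy values rather than the paper's learned parameters.

```python
import math
import random

dim, seq_len = 4, 3          # toy embedding size and description length
random.seed(0)

# Hypothetical stand-ins for learned embeddings: one structural entity
# vector and one vector per word of the entity description.
e_struct = [random.gauss(0, 1) for _ in range(dim)]
desc_words = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(seq_len)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Attention: score each description word against the structural embedding,
# then form the textual vector as the attention-weighted sum of word vectors.
alpha = softmax([dot(w, e_struct) for w in desc_words])
e_desc = [sum(a * w[i] for a, w in zip(alpha, desc_words)) for i in range(dim)]

# Gate: per-dimension weights in (0, 1) blending structure and text into
# the joint representation  e_joint = g * e_struct + (1 - g) * e_desc.
gate = [1 / (1 + math.exp(-random.gauss(0, 1))) for _ in range(dim)]
e_joint = [g * s + (1 - g) * d for g, s, d in zip(gate, e_struct, e_desc)]
```

In the paper the gate is learned per entity, so entities with rich descriptions can lean on the textual side while well-connected entities lean on the structural side; here it is just a random sigmoid to show the blending form.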

Key words

knowledge representation learning / CNN / attention mechanism

Cite this article

PENG Min, YAO Yalan, XIE Qianqian, GAO Wang. Knowledge Representation Learning for Joint Structural and Textual Embedding Via Attention-based CNN. Journal of Chinese Information Processing, 2019, 33(2): 51-58.


Funding

National Natural Science Foundation of China (61772382); National Natural Science Foundation of China (61472291)