Graph Contrastive Learning with Global Adversarial Negative Examples

CEN Keting1,2, SHEN Huawei1,2, CAO Qi1, XU Bingbing1, CHENG Xueqi2,3

Journal of Chinese Information Processing ›› 2024, Vol. 38 ›› Issue (1): 65-73, 85.
Language Analysis and Computational Models


Abstract

Graph contrastive learning has achieved great success in unsupervised node representation. Such models learn a representation for each node by pulling together the representations of different augmented views of the same node (positive examples) and pushing apart the representations of different nodes (negative examples). The choice of negative examples is a key component of graph contrastive learning. Existing methods select negative examples for each node either by random sampling or by heuristic importance measures, but they cannot accurately identify the negative examples that are critical to the model. Moreover, selecting negative examples separately for every node incurs a high time cost. To address these issues, this paper proposes to learn, through adversarial learning, a single critical negative example that is globally shared by all nodes. Experimental results on multiple benchmark datasets demonstrate both the efficiency and effectiveness of the proposed method.
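The idea described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, the InfoNCE-style loss, and the finite-difference ascent step are assumptions made for readability. The key point is the min-max structure, with the encoder minimizing the contrastive loss while the single shared global negative is updated to maximize it, making it a hard negative for all nodes at once.

```python
import numpy as np

def info_nce_global_neg(z1, z2, g, tau=0.5):
    """InfoNCE-style objective with a single shared global negative.

    z1, z2: (N, d) arrays, embeddings of two augmented views of the same
            N nodes (row i of z1 and row i of z2 form a positive pair).
    g:      (d,) array, the one global negative shared by all nodes.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    g = g / np.linalg.norm(g)
    pos = np.exp(np.sum(z1 * z2, axis=1) / tau)  # positive-pair similarity
    neg = np.exp(z1 @ g / tau)                   # similarity to the global negative
    return float(np.mean(-np.log(pos / (pos + neg))))

def ascend_global_negative(z1, z2, g, lr=0.05, eps=1e-4):
    """Adversarial inner step: move g uphill on the contrastive loss
    (finite-difference gradient ascent here, for self-containedness),
    so g becomes a harder negative. The encoder would then take a
    descent step on the same loss, alternating min and max."""
    grad = np.zeros_like(g)
    for i in range(g.shape[0]):
        e = np.zeros_like(g)
        e[i] = eps
        grad[i] = (info_nce_global_neg(z1, z2, g + e)
                   - info_nce_global_neg(z1, z2, g - e)) / (2 * eps)
    return g + lr * grad  # ascent: g maximizes the loss the encoder minimizes
```

In a practical implementation the global negative would be a learnable parameter updated by automatic differentiation with the gradient sign flipped, which is also what makes the approach efficient: one shared negative is updated per step instead of sampling per-node negatives.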

Key words

graph representation learning / graph contrastive learning / adversarial negative examples / global negative examples

Cite this article

CEN Keting, SHEN Huawei, CAO Qi, XU Bingbing, CHENG Xueqi. Graph Contrastive Learning with Global Adversarial Negative Examples. Journal of Chinese Information Processing, 2024, 38(1): 65-73, 85.


Funding

National Key Research and Development Program of China (2018YFC0825204); National Natural Science Foundation of China (U21B2046, 62102402); Young Scientist Project of the Beijing Academy of Artificial Intelligence (BAAI2019QN0304)