Adversarial Caching Training: Unsupervised Inductive Network Representation Learning on Large-Scale Graphs
- Authors
- Chen, Junyang; Gong, Zhiguo; Wang, Wei; Wang, Cong; Xu, Zhenghua; Lv, Jianming; Li, Xueliang; Wu, Kaishun; Liu, Weiwen
- Subjects
- Random graphs; Data mining; Virtual networks; Data visualization; Machine learning
- Abstract
Network representation learning (NRL) has far-reaching effects on data mining research and is important in many real-world applications. NRL, also known as network embedding, aims to preserve graph structure in a low-dimensional space. The learned representations can be used for downstream machine learning tasks such as vertex classification, link prediction, and data visualization. Recently, graph convolutional network (GCN)-based models, e.g., GraphSAGE, have drawn considerable attention for their success in inductive NRL. When conducting unsupervised learning on large-scale graphs, some of these models employ negative sampling (NS) for optimization, which encourages a target vertex to be close to its neighbors while staying far from its negative samples. However, NS draws negative vertices either at random or according to vertex degrees, so the generated samples can be either highly relevant or completely unrelated to the target vertex. Moreover, as training proceeds, the gradient of the NS objective, computed from the inner product of an unrelated negative sample and the target vertex, may approach zero, which leads to learning inferior representations. To address these problems, we propose an adversarial training method tailored for unsupervised inductive NRL on large networks. To keep track of high-quality negative samples efficiently, we design a caching scheme with sampling and updating strategies that explores vertex proximity widely while keeping training costs in check. In addition, the proposed method can be adapted to various existing GCN-based models without significantly complicating their optimization. Extensive experiments show that the proposed method outperforms state-of-the-art models.
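The vanishing-gradient problem and the caching idea described in the abstract can be made concrete with a short sketch. The NumPy code below is illustrative only: it assumes a standard NS objective and a hypothetical `NegativeCache` class whose scoring and eviction rules are invented for exposition, not taken from the paper. It shows why the gradient contributed by an unrelated negative sample shrinks toward zero, and how a small cache of high-scoring ("hard") negatives could keep the training signal informative.

```python
# Minimal sketch of the NS vanishing-gradient issue and a toy negative cache.
# Not the authors' implementation; `NegativeCache` and its rules are assumed.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# NS objective for a (target u, positive v) pair with negatives {v_k}:
#   L = -log sigma(u.v) - sum_k log sigma(-u.v_k)
# The gradient contribution of negative v_k w.r.t. u is sigma(u.v_k) * v_k,
# which tends to zero once u.v_k is strongly negative (an unrelated sample).
def ns_negative_grad_norm(u, v_neg):
    return sigmoid(u @ v_neg) * np.linalg.norm(v_neg)

dim = 16
u = rng.normal(size=dim)
related = u + 0.1 * rng.normal(size=dim)           # relevant to the target
unrelated = -3.0 * u + 0.1 * rng.normal(size=dim)  # essentially uninformative
print("grad norm, relevant negative: ", ns_negative_grad_norm(u, related))
print("grad norm, unrelated negative:", ns_negative_grad_norm(u, unrelated))

class NegativeCache:
    """Toy per-target cache that tracks high-quality (hard) negatives.

    Sampling: score a random candidate pool by inner product with the
    target and keep the top scorers, so gradients stay informative.
    Updating: merge new candidates and evict the weakest entries, which
    bounds the cost of maintaining the cache.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = []  # list of (score, vertex_id)

    def update(self, target_emb, candidate_ids, embeddings):
        scored = [(float(embeddings[c] @ target_emb), c) for c in candidate_ids]
        merged = sorted(self.cache + scored, key=lambda t: t[0], reverse=True)
        self.cache = merged[: self.capacity]

    def sample(self, k):
        ids = [c for _, c in self.cache]
        return rng.choice(ids, size=min(k, len(ids)), replace=False)

# Usage: refresh the cache from a small random pool each step, then draw
# negatives from it instead of sampling uniformly over all vertices.
n_vertices = 1000
emb = rng.normal(size=(n_vertices, dim))
cache = NegativeCache(capacity=32)
cache.update(u, rng.integers(0, n_vertices, size=128), emb)
print("cached hard negatives:", cache.sample(5))
```

In this sketch the cache trades a small amount of per-step scoring work for negatives whose inner products with the target stay large, which is the general trade-off the abstract's "sampling and updating strategies" are balancing.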
- Published
- 2022