Micro-Supervised Disturbance Learning: A Perspective of Representation Probability Distribution
- Authors
Jielei Chu, Jing Liu, Hongjun Wang, Hua Meng, Zhiguo Gong, and Tianrui Li
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computational Theory and Mathematics, Statistics - Machine Learning, Artificial Intelligence, Applied Mathematics, Machine Learning (stat.ML), Computer Vision and Pattern Recognition, Software, Machine Learning (cs.LG)
- Abstract
Existing representation learning methods based on Euclidean distance exhibit instability under a broad set of conditions. Furthermore, the scarcity and high cost of labels prompt us to explore more expressive representation learning methods that depend on as few labels as possible. To address these issues, a small-perturbation ideology is first introduced into the representation learning model from the perspective of the representation probability distribution. Positive small-perturbation information (SPI), which depends on only two labels per cluster, is used to stimulate the representation probability distribution, and two variant models are proposed to fine-tune the expected representation distribution of the Restricted Boltzmann Machine (RBM): the Micro-supervised Disturbance GRBM (Micro-DGRBM) and the Micro-supervised Disturbance RBM (Micro-DRBM). In Contrastive Divergence (CD) learning, the Kullback-Leibler (KL) divergence of SPI is minimized within the same cluster to make the representation probability distributions more similar, and maximized across different clusters to make them more dissimilar. To explore the representation learning capability under continuous stimulation by SPI, we present a deep Micro-supervised Disturbance Learning (Micro-DL) framework based on the Micro-DGRBM and Micro-DRBM models and compare it with a similar deep structure that receives no external stimulation. Experimental results demonstrate that the proposed deep Micro-DL architecture outperforms the baseline method, the most closely related shallow models, and deep frameworks for clustering.
- Comment
14 pages
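The core idea in the abstract — minimizing the KL divergence between same-cluster representation distributions while maximizing it between different-cluster distributions — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the simple subtractive combination of the two KL terms are assumptions for exposition.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """KL divergence D(p || q) between two discrete probability
    distributions, with a small epsilon for numerical stability."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def disturbance_term(p_same_a, p_same_b, p_diff_a, p_diff_b):
    """Illustrative micro-supervised disturbance objective:
    pull same-cluster representation distributions together
    (minimize their KL divergence) and push different-cluster
    distributions apart (maximize their KL divergence).
    Minimizing this term achieves both effects at once."""
    return kl_divergence(p_same_a, p_same_b) - kl_divergence(p_diff_a, p_diff_b)
```

In the paper this stimulation acts on the hidden-unit representation distributions of the (G)RBM during CD learning; here the distributions are just plain vectors to keep the sketch self-contained.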
- Published
- 2023