
Feature Equilibrium: An Adversarial Training Method to Improve Representation Learning

Authors :
Minghui Liu
Meiyi Yang
Jiali Deng
Xuan Cheng
Tianshu Xie
Pan Deng
Haigang Gong
Ming Liu
Xiaomin Wang
Source :
International Journal of Computational Intelligence Systems, Vol 16, Iss 1, Pp 1-12 (2023)
Publication Year :
2023
Publisher :
Springer, 2023.

Abstract

Over-fitting is a significant threat to the integrity and reliability of deep neural networks with large numbers of parameters. One cause is that such models learn more class-specific features than general features during training. To address this, we propose an adversarial training method that helps the model strengthen general representation learning. In this method, we treat a classification model as a generator G and introduce an unsupervised discriminator D that distinguishes the hidden features of the classification model from real images, limiting their spatial distance. Notably, after overtraining, D falls into the trap of becoming a perfect discriminator, and the gradient of the adversarial loss becomes 0. To avoid this situation, we train D only with a probability $P_c$. Our proposed method is easy to incorporate into existing frameworks and has been evaluated under various network architectures on datasets from different fields. Experiments show that, at low computational cost, this method outperforms the benchmark by 1.5–2 points on different datasets. For semantic segmentation on VOC, our proposed method achieves 2.2 points higher mAP.
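The key training trick described in the abstract, gating the discriminator update by a probability $P_c$ so D never becomes a perfect discriminator, can be sketched in a few lines. The sketch below is an assumption-laden illustration, not the authors' implementation: `update_g` and `update_d` are hypothetical stand-ins for the real generator and discriminator optimizer steps, and only the stochastic gating logic reflects the abstract.

```python
import random

def adversarial_step(p_c, rng, update_g=lambda: None, update_d=lambda: None):
    """One training step: G (the classifier) always updates; D updates only
    with probability p_c, so it cannot overtrain into a perfect discriminator.
    update_g / update_d are hypothetical placeholders for real optimizer steps."""
    update_g()
    d_updated = rng.random() < p_c  # gate the discriminator update
    if d_updated:
        update_d()
    return d_updated

def fraction_of_d_updates(steps, p_c, seed=0):
    """Run `steps` adversarial steps and report how often D was updated;
    the fraction should concentrate around p_c."""
    rng = random.Random(seed)
    return sum(adversarial_step(p_c, rng) for _ in range(steps)) / steps
```

For example, with `p_c = 0.3` over many steps, roughly 30% of iterations update D, while G trains on every iteration.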

Details

Language :
English
ISSN :
18756883
Volume :
16
Issue :
1
Database :
Directory of Open Access Journals
Journal :
International Journal of Computational Intelligence Systems
Publication Type :
Academic Journal
Accession number :
edsdoj.3831f29335f41fdb62ecb765e3bfcc0
Document Type :
article
Full Text :
https://doi.org/10.1007/s44196-023-00229-2