Unsupervised feature learning-based encoder and adversarial networks
- Author
- Endang Suryawati, Hilman F. Pardede, Vicky Zilvan, Ade Ramdan, Dikdik Krisnandi, Ana Heryana, R. Sandra Yuwana, R. Budiarianto Suryo Kusumo, Andria Arisal, and Ahmad Afif Supianto
- Subjects
- Generative adversarial network, Unsupervised feature learning, Autoencoder, Convolutional neural networks, Computer engineering, Information technology, Computer science
- Abstract
In this paper, we propose a novel deep learning-based feature learning architecture for object classification. Conventionally, deep learning methods are trained with supervised learning for object classification, but this requires a large amount of training data. Currently, there is an increasing trend towards employing unsupervised learning for deep learning, which reduces the dependency on the availability of large training data. One application of unsupervised deep learning is feature learning, where the network is designed to “learn” features automatically from the data, producing good representations that can then be used for classification. Autoencoders and generative adversarial networks (GANs) are examples of unsupervised deep learning methods. For GANs, however, the trajectory of feature learning may go in unpredictable directions due to random initialization, making them unsuitable for feature learning. To overcome this, we propose a hybrid of encoder and deep convolutional generative adversarial network (DCGAN) architectures, DCGAN being a variant of GAN. The encoder is placed on top of the generator network of the GAN to avoid random initialization. We call our method EGAN. The output of EGAN is used as features for two deep convolutional neural networks (DCNNs): AlexNet and DenseNet. We evaluate the proposed method on three types of datasets, and the results indicate that our proposed method achieves better performance than using an autoencoder or a GAN.
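As a rough, hypothetical sketch of the idea described in the abstract (not the authors' code), the following PyTorch snippet shows an encoder placed on top of a DCGAN-style generator, so that the latent code fed to the generator is learned from the input image rather than sampled randomly; the encoder output then serves as the feature vector for a downstream classifier. All layer sizes, the 64x64 input resolution, and the latent dimension are illustrative assumptions.

```python
# Illustrative sketch of the encoder-plus-DCGAN-generator idea.
# Layer widths and the latent dimension are assumptions, not the paper's config.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB image to a latent code (the learned features)."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),                      # -> 32x32
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),  # -> 16x16
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True), # -> 8x8
            nn.Conv2d(256, latent_dim, 8), nn.Flatten(),                                      # -> (B, latent_dim)
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """DCGAN-style generator: reconstructs a 64x64 image from the latent code."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 8), nn.BatchNorm2d(256), nn.ReLU(True),  # -> 8x8
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),   # -> 16x16
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),     # -> 32x32
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),                               # -> 64x64
        )

    def forward(self, z):
        # Reshape the flat latent code into a 1x1 spatial map for the deconvolutions.
        return self.net(z.view(z.size(0), -1, 1, 1))

encoder, generator = Encoder(), Generator()
images = torch.randn(8, 3, 64, 64)   # dummy batch standing in for real images
features = encoder(images)           # latent codes: the features for AlexNet/DenseNet
reconstructed = generator(features)  # fed to the GAN discriminator during adversarial training
```

In this sketch the encoder output replaces the random noise a plain DCGAN would draw, which is the fixed starting point the abstract credits with avoiding unpredictable feature-learning trajectories.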
- Published
- 2021