Layer-by-Layer Knowledge Distillation for Training Simplified Bipolar Morphological Neural Networks.
- Source :
- Programming & Computer Software. 2023, Suppl 2, Vol. 49, pp. S108-S114. 7 p.
- Publication Year :
- 2023
Abstract
- Various neuron approximations can be used to reduce the computational complexity of neural networks. One such approximation, based on summation and maximum operations, is the bipolar morphological neuron. This paper presents an improved structure of the bipolar morphological neuron that enhances its computational efficiency, together with a new training approach based on continuous approximations of the maximum and on knowledge distillation. Experiments were carried out on the MNIST dataset using a LeNet-like neural network architecture and on the CIFAR10 dataset using a ResNet-22 model architecture. The proposed training method achieves 99.45% classification accuracy with the LeNet-like model (the same accuracy as the classical network) and 86.69% accuracy with the ResNet-22 model, compared with 86.43% for the classical model. The results show that the proposed method, with a log-sum-exp (LSE) approximation of the maximum and layer-by-layer knowledge distillation, yields a simplified bipolar morphological network that is not inferior to the classical one. [ABSTRACT FROM AUTHOR]
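
The record gives no implementation details, but the two ingredients named in the abstract (a bipolar morphological neuron built from additions and maxima, and a smooth log-sum-exp surrogate for the maximum) can be sketched as follows. This is a minimal NumPy illustration of the general BM-neuron idea from the prior literature, not the authors' code; the names lse_max and bm_neuron, the smoothing parameter alpha, and the epsilon guard are illustrative assumptions.

```python
import numpy as np

def lse_max(z, alpha=10.0, axis=-1):
    # Smooth log-sum-exp surrogate for max: (1/alpha) * log(sum(exp(alpha * z))).
    # Larger alpha -> closer to the exact maximum (alpha is an assumed parameter).
    z = alpha * np.asarray(z, dtype=float)
    m = np.max(z, axis=axis, keepdims=True)
    return (np.squeeze(m, axis=axis) + np.log(np.sum(np.exp(z - m), axis=axis))) / alpha

def bm_neuron(x, w, bias=0.0, alpha=10.0, eps=1e-12):
    # Sketch of a bipolar morphological neuron: the multiply-accumulate of a
    # classical neuron, sum_i(w_i * x_i) + b, is replaced by additions and a
    # (smoothed) maximum in the log domain, with separate branches for the
    # positive and negative parts of inputs and weights.
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    lx_pos = np.log(np.maximum(x, 0.0) + eps)   # log of positive input part
    lx_neg = np.log(np.maximum(-x, 0.0) + eps)  # log of negative input part
    lw_pos = np.log(np.maximum(w, 0.0) + eps)   # log of positive weight part
    lw_neg = np.log(np.maximum(-w, 0.0) + eps)  # log of negative weight part
    y = (np.exp(lse_max(lx_pos + lw_pos, alpha))
         - np.exp(lse_max(lx_pos + lw_neg, alpha))
         - np.exp(lse_max(lx_neg + lw_pos, alpha))
         + np.exp(lse_max(lx_neg + lw_neg, alpha)))
    return y + bias
```

This only illustrates the operations named in the abstract; the paper's actual training procedure (the LSE approximation schedule and the layer-by-layer knowledge distillation) is described in the full text at the DOI below.
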
- Subjects :
- Computational complexity
- Artificial neural networks
Details
- Language :
- English
- ISSN :
- 0361-7688
- Volume :
- 49
- Database :
- Academic Search Index
- Journal :
- Programming & Computer Software
- Publication Type :
- Academic Journal
- Accession number :
- 176005911
- Full Text :
- https://doi.org/10.1134/S0361768823100080