51. Local minima found in the subparameter space can be effective for ensembles of deep convolutional neural networks
- Author
Zhongxi Zheng, Ning Chen, Yang Wu, Haijun Lv, Yongquan Yang, and Jiayi Zheng
- Subjects
Generalization, Computer science, Deep learning, Ensemble learning, Convolutional neural network, Maxima and minima, Artificial intelligence, Signal Processing, Computer Vision and Pattern Recognition, Curse of dimensionality
- Abstract
Ensembles of deep convolutional neural networks (CNNs), which integrate multiple deep CNN models to achieve better generalization for an artificial intelligence application, now play an important role in ensemble learning due to the dominant position of deep learning. However, ensembles of deep CNNs are still underused, because the increasing complexity of deep CNN architectures and the emergence of high-dimensional data make both the training and testing stages of such ensembles expensive. To alleviate this, we propose a new approach that finds multiple models converging to local minima in a subparameter space for ensembles of deep CNNs. The subparameter space here refers to the space constructed from a partial selection of the parameters of a deep CNN architecture, rather than the entire parameter set. We show that local minima found in the subparameter space of a deep CNN architecture can in fact be effective for ensembles of deep CNNs to achieve better generalization. Moreover, finding local minima in the subparameter space of a deep CNN architecture is more affordable at the training stage, and the multiple models at the found local minima can be selectively fused to achieve better ensemble generalization while limiting the testing-stage cost to that of a single deep CNN model. Demonstrations with MobileNetV2, ResNet50 and InceptionV4 (deep CNN architectures from lightweight to complex) on ImageNet, CIFAR-10 and CIFAR-100, respectively, lead us to believe that finding local minima in the subparameter space of a deep CNN architecture could broaden the usage of ensembles of deep CNNs.
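The abstract gives no implementation details, so the following is only a minimal illustrative sketch of the general idea, not the paper's method: train a model once over all parameters, then re-initialize and re-train only a chosen parameter subset (the "subparameter space") several times, and fuse the resulting models by averaging their predictions. The toy model, data, and all names here are hypothetical; note also that restricting this toy problem to its head parameters makes the restricted optimization convex, so the restarts converge to similar solutions, unlike the genuinely distinct local minima of a deep CNN. The sketch only shows the mechanics of subset-restricted training and prediction-level fusion.

```python
import math
import random

# Hypothetical toy data: 1-D inputs with binary labels, separable by sign.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(params, x):
    w1, w2, b = params
    h = math.tanh(w1 * x)       # stand-in for the "feature extractor" parameters
    return sigmoid(w2 * h + b)  # stand-in for the "head" (the subparameter subset)

def train(params, trainable, steps=500, lr=0.5):
    """Gradient descent on log loss that updates only the indices listed in
    `trainable`, i.e. optimization restricted to a subparameter space."""
    p = list(params)
    for _ in range(steps):
        grads = [0.0, 0.0, 0.0]
        for x, y in data:
            w1, w2, b = p
            h = math.tanh(w1 * x)
            d = sigmoid(w2 * h + b) - y          # dL/dz for the log loss
            grads[0] += d * w2 * (1 - h * h) * x  # dL/dw1
            grads[1] += d * h                     # dL/dw2
            grads[2] += d                         # dL/db
        for i in trainable:                       # all other parameters stay frozen
            p[i] -= lr * grads[i] / len(data)
    return p

random.seed(0)
# One full-parameter-space training run to obtain a base model.
base = train([0.1, 0.1, 0.0], trainable=[0, 1, 2])

# Re-initialize only the subparameters (w2, b) and re-train just those,
# collecting one ensemble member per restart.
members = []
for _ in range(3):
    init = [base[0], random.uniform(-1, 1), random.uniform(-1, 1)]
    members.append(train(init, trainable=[1, 2]))

def ensemble(x):
    # Fuse the members by averaging their predicted probabilities.
    return sum(predict(p, x) for p in members) / len(members)
```

Because only the subset is re-trained, each restart costs far less than full training, which mirrors the abstract's point about affordability at the training stage; fusing at the prediction (or parameter) level is what lets testing-stage cost stay near that of a single model.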
- Published
- 2021