1. A New Dataset and Deep Residual Spectral Spatial Network for Hyperspectral Image Classification
- Authors
- Fansheng Chen, Dan Zeng, Yiming Xue, Yueming Wang, and Zhijiang Zhang
- Subjects
- Hyperspectral image (HSI) classification, deep residual spectral spatial network (DRSSN), sample balanced loss, Shandong Feicheng HSI dataset, convolutional neural networks, feature learning, overfitting, pattern recognition
Because existing public hyperspectral image (HSI) datasets are limited in variety and size, convolutional neural networks (CNNs) already achieve classification accuracies above 99% on them. In this paper, we present a new HSI dataset named Shandong Feicheng, which is much larger in both size and pixel quantity and exhibits larger intra-class variance and smaller inter-class variance. State-of-the-art methods were compared on it to verify its diversity. Moreover, to reduce the overfitting caused by the imbalance between the high dimensionality and the small quantity of labeled HSI data, existing CNNs for HSI classification are kept relatively shallow and therefore suffer from a low capacity for feature learning. To solve this problem, we propose an HSI classification framework named deep residual spectral spatial network (DRSSN). By using shortcut connections, DRSSN can be made deeper to extract features with better discrimination. In addition, to alleviate the insufficient training caused by the imbalance between easily classified and hard-to-classify samples, we propose a novel training loss function named sample balanced loss, which weights the loss of each sample according to its prediction confidence. Experimental results on two popular datasets and our proposed dataset show that the proposed network provides competitive results compared with state-of-the-art methods.
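The abstract describes the sample balanced loss only at a high level: per-sample loss weights that shrink as prediction confidence grows, so easily classified samples contribute less. A minimal sketch of one plausible formulation is below; the focal-style weight `(1 - p)^gamma` and the `gamma` hyperparameter are assumptions for illustration, not the paper's exact definition.

```python
import numpy as np

def sample_balanced_loss(probs, labels, gamma=2.0):
    """Cross-entropy with per-sample weights that decrease as the
    predicted confidence on the true class increases.

    probs:  (N, C) softmax outputs.
    labels: (N,) integer class ids.
    gamma:  focusing strength (assumed hyperparameter).
    """
    # Confidence assigned to each sample's true class.
    p_true = probs[np.arange(len(labels)), labels]
    # Easily classified samples (high confidence) get small weights.
    weights = (1.0 - p_true) ** gamma
    # Standard per-sample cross-entropy, clipped for numerical safety.
    losses = -np.log(np.clip(p_true, 1e-12, None))
    return float(np.mean(weights * losses))
```

Under this weighting, a confidently correct sample (e.g. `p = 0.9`) contributes far less to the mean loss than an uncertain one (`p = 0.5`), which pushes training toward the hard samples.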
- Published
- 2020