
HyperKAN: Kolmogorov-Arnold Networks make Hyperspectral Image Classificators Smarter

Authors:
Lobanov, Valeriy
Firsov, Nikita
Myasnikov, Evgeny
Khabibullin, Roman
Nikonorov, Artem
Publication Year:
2024

Abstract

In traditional neural network architectures, a multilayer perceptron (MLP) is typically employed as the classification block following the feature extraction stage. The Kolmogorov-Arnold Network (KAN), however, presents a promising alternative to the MLP, offering the potential to improve prediction accuracy. In this paper, we propose replacing the linear and convolutional layers of traditional networks with KAN-based counterparts. These modifications allowed us to significantly increase the per-pixel classification accuracy for hyperspectral remote-sensing images. We modified seven neural network architectures for hyperspectral image classification and observed a substantial improvement in classification accuracy across all of them. The architectures considered include a baseline MLP, state-of-the-art 1D convolutional (1DCNN), 3D convolutional (two different 3DCNN variants, NM3DCNN), and transformer (SSFTT) architectures, as well as the newly proposed M1DCNN. The greatest effect was achieved for convolutional networks operating exclusively on spectral data, and the best classification quality was achieved with a KAN-based transformer architecture. All experiments were conducted on seven openly available hyperspectral datasets. Our code is available at https://github.com/f-neumann77/HyperKAN.
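
The core modification described in the abstract is to swap standard linear (and convolutional) layers for KAN-based counterparts, in which each input-output edge carries a learnable univariate function rather than a single scalar weight. The sketch below illustrates that idea for an MLP classification head only; it is not the authors' implementation, and for brevity it parameterises the learnable edge functions with Gaussian radial basis functions instead of B-splines. The layer sizes (200 spectral bands, 16 classes) are placeholders.

```python
# Minimal sketch (not the authors' code, see the HyperKAN repository for that):
# replace an MLP head with KAN-style layers whose edges carry learnable
# univariate functions, here built from Gaussian radial basis functions.
import torch
import torch.nn as nn


class KANLinear(nn.Module):
    """Linear-like layer whose edge weights are learnable univariate functions."""

    def __init__(self, in_features, out_features, num_basis=8, grid_range=(-2.0, 2.0)):
        super().__init__()
        centers = torch.linspace(grid_range[0], grid_range[1], num_basis)
        self.register_buffer("centers", centers)
        self.inv_width = (num_basis - 1) / (grid_range[1] - grid_range[0])
        # One coefficient per (output, input, basis function) triple.
        self.coeffs = nn.Parameter(torch.randn(out_features, in_features, num_basis) * 0.1)
        # Residual linear term, common in practical KAN implementations.
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        # x: (batch, in_features)
        # Evaluate every basis function at every input: (batch, in_features, num_basis).
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) * self.inv_width) ** 2)
        # Sum over inputs and basis functions to get (batch, out_features).
        spline_out = torch.einsum("bik,oik->bo", phi, self.coeffs)
        return self.linear(x) + spline_out


# Hypothetical usage: a per-pixel spectral classifier with a KAN head
# (band and class counts are placeholders, not tied to a specific dataset).
num_bands, num_classes = 200, 16
model = nn.Sequential(
    nn.Flatten(),
    KANLinear(num_bands, 64),
    KANLinear(64, num_classes),
)
logits = model(torch.randn(4, num_bands))
print(logits.shape)  # torch.Size([4, 16])
```

Stacking such layers already yields a nonlinear model, since the basis expansion makes each layer nonlinear on its own; no separate activation function is required between them.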

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2407.05278
Document Type:
Working Paper