
Improving STDP-based Visual Feature Learning with Whitening

Authors :
Pierre Falez
Pierre Tirilly
Ioan Marius Bilasco
Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL), UMR 9189, Université de Lille, Centrale Lille, CNRS
FOX MIIRE team, Laboratoire d'Informatique Fondamentale de Lille (LIFL), Université de Lille, Inria, CNRS
IMT Lille Douai, Institut Mines-Télécom (IMT)
Source :
IJCNN 2020 - International Joint Conference on Neural Networks, Jul 2020, Glasgow, United Kingdom
Publication Year :
2020
Publisher :
IEEE, 2020.

Abstract

In recent years, spiking neural networks (SNNs) have emerged as an alternative to deep neural networks (DNNs). SNNs offer higher computational efficiency, thanks to low-power neuromorphic hardware, and require less labeled data for training, thanks to local and unsupervised learning rules such as spike timing-dependent plasticity (STDP). SNNs have proven effective for image classification on simple datasets such as MNIST. However, processing natural images requires a pre-processing step. Difference-of-Gaussians (DoG) filtering is typically used together with on-center / off-center coding, but it discards information, which reduces classification performance. In this paper, we propose to use whitening as a pre-processing step before learning features with STDP. Experiments on CIFAR-10 show that whitening allows STDP to learn visual features closer to those learned by standard neural networks and yields significantly better classification performance than DoG filtering. We also propose an approximation of whitening as convolution kernels, which is computationally cheaper to learn and better suited to implementation on neuromorphic hardware. Experiments on CIFAR-10 show that it performs similarly to regular whitening. Cross-dataset experiments on CIFAR-10 and STL-10 further show that the approximation is stable across datasets, making it possible to learn a single whitening transformation for multiple datasets.
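
The abstract gives no implementation details, but as a rough sketch of the kind of pre-processing it describes, the NumPy code below computes a patch-based ZCA whitening transform and then extracts a convolution-kernel approximation from it. The patch size (5x5), the epsilon regularizer, the random stand-in data, and the way the kernels are read off the transform are all assumptions made for illustration, not the authors' exact method.

import numpy as np

def zca_whitening_matrix(patches, eps=1e-2):
    # patches: array of shape (n_patches, n_features), e.g. flattened image patches.
    # eps: regularization added to the eigenvalues (assumed value).
    patches = patches - patches.mean(axis=0)        # center each feature
    cov = np.cov(patches, rowvar=False)             # (n_features, n_features) covariance
    U, S, _ = np.linalg.svd(cov)                    # eigen-decomposition of the covariance
    return U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T  # symmetric ZCA transform

# Stand-in data: hypothetical 5x5 RGB patches (random values, for illustration only).
patch_h, patch_w, channels = 5, 5, 3
patches = np.random.rand(10000, patch_h * patch_w * channels)

W = zca_whitening_matrix(patches)
whitened = (patches - patches.mean(axis=0)) @ W     # whitened patches

# Possible convolutional approximation (an assumption, not the paper's exact
# construction): keep only the rows of W associated with the center pixel of the
# patch and reshape them into a small bank of convolution kernels, one per color channel.
centre = (patch_h * patch_w) // 2
kernels = W[centre * channels:(centre + 1) * channels, :]
kernels = kernels.reshape(channels, patch_h, patch_w, channels)  # (out_ch, H, W, in_ch)

Because natural-image statistics are roughly translation-invariant, the rows of a patch-based whitening matrix tend to look like shifted copies of the same filter, which is what makes a fixed convolution kernel a plausible, cheaper approximation of the full transform.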

Details

Database :
OpenAIRE
Journal :
2020 International Joint Conference on Neural Networks (IJCNN)
Accession number :
edsair.doi.dedup.....d513c1a4483375fe17c01c4fd0cdd07c