
Investigating strategies towards adversarially robust time series classification.

Authors :
Abdu-Aguye, Mubarak G.
Gomaa, Walid
Makihara, Yasushi
Yagi, Yasushi
Source :
Pattern Recognition Letters. Apr 2022, Vol. 156, p104-111. 8p.
Publication Year :
2022

Abstract

• Classifying time series with Euclidean distance is robust against adversarial attacks.
• Time series classifiers using fixed kernels are robust against adversarial attacks.
• This is empirically shown on 85 datasets for 2 state-of-the-art adversarial attacks.

Deep neural networks have been shown to be vulnerable to specifically-crafted perturbations designed to degrade their predictive performance. Such perturbations, formally termed 'adversarial attacks', have been designed for various domains in the literature, most prominently in computer vision and, more recently, in time series classification. There is therefore a need for robust strategies to defend deep networks against such attacks. In this work we propose axioms of robustness against adversarial attacks in time series classification. We subsequently design a suitable experimental methodology and empirically validate the hypotheses put forth. The results of our investigations confirm the proposed hypotheses and provide a strong empirical baseline for mitigating the effects of adversarial attacks in deep time series classification. [ABSTRACT FROM AUTHOR]
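The first highlight refers to nearest-neighbour classification of time series under Euclidean distance. As a minimal sketch of that baseline, assuming a 1-NN formulation (the paper's experimental code is not reproduced here; the function name, synthetic data, and perturbation below are all illustrative assumptions):

```python
import numpy as np

def one_nn_euclidean(train_X, train_y, query):
    """Classify `query` as the label of its Euclidean nearest neighbour.

    train_X : (n_series, series_len) array of training time series
    train_y : (n_series,) array of class labels
    query   : (series_len,) array, the series to classify
    """
    # Euclidean distance from the query to every training series
    dists = np.linalg.norm(train_X - query, axis=1)
    return train_y[np.argmin(dists)]

# Toy demonstration on synthetic series (illustrative, not from the paper)
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 128)
train_X = np.stack(
    [np.sin(t) + 0.05 * rng.standard_normal(128) for _ in range(10)]
    + [np.cos(t) + 0.05 * rng.standard_normal(128) for _ in range(10)]
)
train_y = np.array([0] * 10 + [1] * 10)

clean = np.sin(t)
# A small bounded perturbation, standing in for an adversarial one
perturbed = clean + 0.1 * np.sign(rng.standard_normal(128))

print(one_nn_euclidean(train_X, train_y, clean))      # 0
print(one_nn_euclidean(train_X, train_y, perturbed))  # typically still 0
```

The intuition behind the highlight is that a small, norm-bounded perturbation moves the query only slightly in Euclidean space, so its nearest neighbour, and hence the predicted label, tends not to change; gradient-based attacks that exploit a differentiable model have no direct analogue here.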

Details

Language :
English
ISSN :
0167-8655
Volume :
156
Database :
Academic Search Index
Journal :
Pattern Recognition Letters
Publication Type :
Academic Journal
Accession number :
156319844
Full Text :
https://doi.org/10.1016/j.patrec.2022.01.023