
Brain programming is immune to adversarial attacks: Towards accurate and robust image classification using symbolic learning.

Authors :
Ibarra-Vazquez, Gerardo
Olague, Gustavo
Chan-Ley, Mariana
Puente, Cesar
Soubervielle-Montalvo, Carlos
Source :
Swarm & Evolutionary Computation; Jun 2022, Vol. 71
Publication Year :
2022

Abstract

• This article presents brain programming as a methodology for image classification that is robust to adversarial attacks.
• The paper compares three state-of-the-art image classification methodologies in terms of accuracy and robustness.
• It studies the effect of adversarial attacks on art media categorization beyond machine learning.
• It proposes brain programming as an accurate and robust technique for recognizing artwork media.
• It analyzes robustness through pair-wise statistical tests based on prediction confidence and multiple comparisons.

In recent years, security concerns have arisen over the vulnerability of deep convolutional neural networks to adversarial attacks: slight modifications to the input image, almost invisible to human vision, that make their predictions untrustworthy. Therefore, a newly developed classifier should provide both an accurate score and robustness to adversarial examples. In this work, we perform a comparative study of the effects of these attacks on the complex problem of art media categorization, which requires a sophisticated analysis of features to classify a fine collection of artworks. We tested a prevailing bag-of-visual-words approach from computer vision, four deep convolutional neural networks (AlexNet, VGG, ResNet, ResNet101), and brain programming. The results showed that the change in brain programming's prediction accuracy was below 2% under adversarial examples from the fast gradient sign method. Under a multiple-pixel attack, brain programming left four of the seven classes unchanged and the remaining three with a maximum error of 4%. Finally, under adversarial patches, brain programming left four categories unchanged and the remaining three with an accuracy variation of 1%. The statistical analysis confirmed that brain programming's prediction confidence was not significantly different between each pair of clean and adversarial examples in every experiment.
These results demonstrate brain programming's robustness to adversarial examples, compared with deep convolutional neural networks and the computer vision method, on the art media categorization problem. [ABSTRACT FROM AUTHOR]
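The fast gradient sign method mentioned in the abstract perturbs an input by a small step eps in the sign direction of the loss gradient with respect to the input. The abstract gives no implementation details, so the following is only a minimal illustrative sketch using a toy logistic classifier in NumPy (the names `fgsm_perturb`, `input_gradient`, and all parameter values are hypothetical, not from the paper):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.05):
    """Fast gradient sign method: shift each input feature by eps
    in the direction that increases the loss, then clip to [0, 1]."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

def input_gradient(x, w, y):
    """Gradient of the logistic loss -log(sigmoid(y * w.x)) w.r.t. x."""
    s = 1.0 / (1.0 + np.exp(-y * (w @ x)))
    return -(1.0 - s) * y * w

# Toy "image" with three features and a fixed linear classifier.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.4, 0.9])
g = input_gradient(x, w, y=1.0)
x_adv = fgsm_perturb(x, g, eps=0.05)  # each feature moves by at most eps
```

The key property, which the paper's robustness experiments rely on, is that the perturbation is bounded (here by eps per feature) yet chosen to maximally increase the loss to first order.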

Details

Language :
English
ISSN :
22106502
Volume :
71
Database :
Supplemental Index
Journal :
Swarm & Evolutionary Computation
Publication Type :
Academic Journal
Accession number :
156844120
Full Text :
https://doi.org/10.1016/j.swevo.2022.101059