
Learning Activation Functions for Adversarial Attack Resilience in CNNs

Authors:
Salimi, Maghsood
Loni, Mohammad
Sirjani, Marjan
Publication Year:
2023

Abstract

Adversarial attacks on convolutional neural networks (CNNs) have become a serious concern in recent years, as they can cause CNNs to produce inaccurate predictions. Through our analysis of training CNNs with adversarial examples, we discovered that this vulnerability is primarily caused by naïvely selecting ReLU as the default activation function. In contrast to recent work that focuses on proposing new adversarial training methods, we study the feasibility of an innovative alternative: learning novel activation functions that make CNNs more resilient to adversarial attacks. In this paper, we propose a search framework that combines simulated annealing and late acceptance hill-climbing to find activation functions that are more robust against adversarial attacks in CNN architectures. The proposed method converges better than commonly used search baselines and improves resilience to adversarial attacks, achieving up to 17.1%, 22.8%, and 16.6% higher accuracy under BIM, FGSM, and PGD attacks, respectively, compared to ResNet-18 trained on the CIFAR-10 dataset.
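The abstract names the search strategy (simulated annealing combined with late acceptance hill-climbing) without detailing its mechanics. The following minimal Python sketch illustrates how such a hybrid acceptance loop over a toy activation-function space could work; the search space, mutation operator, fitness proxy, and hyperparameters here are assumptions for illustration only and are not taken from the paper, where candidates would be scored by the adversarial accuracy of a trained CNN.

```python
# Illustrative sketch only: a hybrid simulated-annealing (SA) /
# late-acceptance hill-climbing (LAHC) search over a toy space of
# parameterized activation functions. Not the paper's implementation.
import math
import random

# Toy search space (an assumption): a base unary function plus a scale.
UNARY_OPS = {
    "relu":     lambda x, b: max(x, 0.0) * b,
    "swish":    lambda x, b: x / (1.0 + math.exp(-b * x)),
    "tanh":     lambda x, b: b * math.tanh(x),
    "softplus": lambda x, b: b * math.log1p(math.exp(x)),
}

def random_candidate():
    return (random.choice(list(UNARY_OPS)), random.uniform(0.5, 2.0))

def mutate(cand):
    op, scale = cand
    if random.random() < 0.5:
        op = random.choice(list(UNARY_OPS))  # swap the base function
    scale = min(2.0, max(0.5, scale + random.gauss(0.0, 0.1)))  # jitter scale
    return (op, scale)

def fitness(cand):
    """Placeholder objective. In the paper, a candidate would be scored by
    the adversarial accuracy of a CNN trained with it; here a cheap toy
    proxy (penalize large mean activation) keeps the sketch standalone."""
    op, scale = cand
    f = UNARY_OPS[op]
    xs = [x / 10.0 for x in range(-30, 31)]
    return -sum(abs(f(x, scale)) for x in xs) / len(xs)

def search(iters=500, history_len=20, t0=1.0, cooling=0.995):
    cur = random_candidate()
    cur_fit = fitness(cur)
    best, best_fit = cur, cur_fit
    history = [cur_fit] * history_len  # LAHC acceptance history
    temp = t0
    for i in range(iters):
        cand = mutate(cur)
        cand_fit = fitness(cand)
        delta = cand_fit - cur_fit
        # LAHC rule: accept if the candidate beats the fitness recorded
        # `history_len` steps ago.
        lahc_ok = cand_fit >= history[i % history_len]
        # SA rule: always accept improvements; accept worse candidates
        # with Metropolis probability exp(delta / temp).
        sa_ok = delta >= 0 or random.random() < math.exp(delta / temp)
        if lahc_ok or sa_ok:
            cur, cur_fit = cand, cand_fit
        history[i % history_len] = cur_fit  # record post-decision fitness
        if cur_fit > best_fit:
            best, best_fit = cur, cur_fit
        temp *= cooling  # geometric cooling schedule
    return best, best_fit

if __name__ == "__main__":
    best, score = search()
    print(f"best: {best[0]} (scale={best[1]:.2f}), proxy score={score:.3f}")
```

The hybrid acceptance test is the point of the sketch: LAHC tolerates temporary fitness drops by comparing against an older solution in its history list, while the SA term adds temperature-controlled random acceptance early in the search; either criterion alone can stall on the rugged fitness landscapes that arise when each evaluation involves training a network.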

Details

Database:
OAIster
Notes:
English
Publication Type:
Electronic Resource
Accession Number:
edsoai.on1416050865
Document Type:
Electronic Resource
Full Text:
https://doi.org/10.1007/978-3-031-42505-9_18