220 results for "black-box attack"
Search Results
2. Improving the transferability of adversarial examples with path tuning.
- Author
-
Li, Tianyu, Li, Xiaoyu, Ke, Wuping, Tian, Xuwei, Zheng, Desheng, and Lu, Chao
- Subjects
ARTIFICIAL neural networks, IRREGULAR sampling (Signal processing), COMPUTER vision, CYBERTERRORISM, ARTIFICIAL intelligence
- Abstract
Adversarial attacks pose a significant threat to real-world applications based on deep neural networks (DNNs), especially in security-critical applications. Research has shown that adversarial examples (AEs) generated on a surrogate model can also succeed on a target model, which is known as transferability. Feature-level transfer-based attacks improve the transferability of AEs by disrupting intermediate features. They target the intermediate layer of the model and use feature importance metrics to find these features. However, current methods overfit feature importance metrics to surrogate models, which results in poor sharing of the importance metrics across models and insufficient destruction of deep features. This work demonstrates the trade-off between feature importance metrics and feature corruption generalization, and categorizes feature destructive causes of misclassification. This work proposes a generative framework named PTNAA to guide the destruction of deep features across models, thus improving the transferability of AEs. Specifically, the method introduces path methods into integrated gradients. It selects path functions using only a priori knowledge and approximates neuron attribution using nonuniform sampling. In addition, it measures neurons based on the attribution results and performs feature-level attacks to remove inherent features of the image. Extensive experiments demonstrate the effectiveness of the proposed method. The code is available at https://github.com/lounwb/PTNAA. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
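The record above (entry 2) builds its neuron attribution on path methods and integrated gradients with nonuniform sampling. As a rough, generic illustration of that ingredient only, not the authors' PTNAA code, the sketch below approximates input-level integrated gradients along a straight path with a quadratic sampling schedule; the model interface, tensor shapes, and schedule are assumptions.

```python
import torch

def path_integrated_gradients(model, x, baseline, target, n_steps=32):
    # x, baseline: (1, C, H, W) tensors; target: class index to attribute.
    # Nonuniform sampling: quadratic spacing of interpolation points is one
    # simple, illustrative choice of path schedule.
    alphas = torch.linspace(0.0, 1.0, n_steps) ** 2
    total_grad = torch.zeros_like(x)
    for a in alphas:
        xi = (baseline + a * (x - baseline)).detach().requires_grad_(True)
        score = model(xi)[0, target]            # logit of the target class
        grad, = torch.autograd.grad(score, xi)
        total_grad += grad
    # Riemann-sum approximation of the path integral of gradients.
    return (x - baseline) * total_grad / n_steps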
3. Harnessing Unsupervised Insights: Enhancing Black-Box Graph Injection Attacks with Graph Contrastive Learning.
- Author
-
Liu, Xiao, Huang, Junjie, Chen, Zihan, Pan, Yi, Xiong, Maoyi, and Zhao, Wentao
- Subjects
GRAPH neural networks, KNOWLEDGE graphs
- Abstract
Adversarial attacks on Graph Neural Networks (GNNs) have emerged as a significant threat to the security of graph learning. Compared with Graph Modification Attacks (GMAs), Graph Injection Attacks (GIAs) are considered more realistic attacks, in which attackers perturb GNN models by injecting a small number of fake nodes. However, most existing black-box GIA methods either require comprehensive knowledge of the dataset and the ground-truth labels or a large number of queries to execute the attack, which is often unfeasible in many scenarios. In this paper, we propose an unsupervised method for leveraging the rich knowledge contained in the graph data themselves to enhance the success rate of graph injection attacks on the initial query. Specifically, we introduce the Graph Contrastive Learning-based Graph Injection Attack (GCIA), which consists of a node encoder, a reward predictor, and a fake node generator. The Graph Contrastive Learning (GCL)-based node encoder transforms nodes into low-dimensional continuous embeddings, the reward predictor acts as a simplified surrogate for the target model, and the fake node generator produces fake nodes and edges based on several carefully designed loss functions, utilizing the node encoder and reward predictor. Extensive results demonstrate that the proposed GCIA method achieves a first query success rate of 91.2% on the Reddit dataset and improves the success rate to over 99.7% after 10 queries. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Dual stage black-box adversarial attack against vision transformer.
- Author
-
Wang, Fan, Shao, Mingwen, Meng, Lingzhuang, and Liu, Fukang
- Abstract
Relying on wide receptive fields, Vision Transformers (ViTs) are more robust than Convolutional Neural Networks (CNNs). Consequently, some transfer-based attack methods that perform well on CNNs perform poorly when attacking ViTs. To address the aforementioned issues, we propose a dual-stage attack framework named DSA. More specifically, we introduce a dual spatial optimization strategy involving both decision space and feature space optimization to improve the transferability of adversarial examples across different ViTs. Adversarial perturbations are generated by our proposed semi self-integrated module in the first stage and optimized by the feature extractor in the second stage. During this process, our proposed integrated model makes full use of the discriminative information in the deep transformer blocks and achieves significant improvements in transferability. To further enhance the transferability, we design the random perturbation masking module to alleviate the over-fitting of adversarial examples to the surrogate model. We evaluate the transferability of attacks on state-of-the-art ViTs, CNNs, and robustly trained CNNs. Extensive experiments demonstrate that the proposed dual-stage attack can greatly boost transferability between ViTs and from ViTs to CNNs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Invisible Black-Box Backdoor Attack against Deep Cross-Modal Hashing Retrieval.
- Author
-
Wang, Tianshi, Li, Fengling, Zhu, Lei, Li, Jingjing, Zhang, Zheng, and Shen, Heng Tao
- Abstract
The article presents an invisible black-box backdoor attack against deep cross-modal hashing retrieval, addressing its vulnerability to backdoor attacks and the challenges posed by multi-modal data and hash quantization. Topics include developing a trigger generator to craft imperceptible triggers, designing an injection network for embedding triggers into benign samples, and proposing strategies to mitigate the impact of hash quantization on attack performance.
- Published
- 2024
- Full Text
- View/download PDF
6. Improving the Adversarial Transferability of Radio Signal with Denoising, Data Diversity, and Gradient Average
- Author
-
Wu, Lijin, Huang, Jianye, He, Jindong, Lin, Nan, Liao, Feilong, Hou, Jiaye, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Meng, Xiaofeng, editor, Cao, Zhidong, editor, Wu, Suran, editor, Chen, Yang, editor, and Zhan, Xiu-Xiu, editor
- Published
- 2024
- Full Text
- View/download PDF
7. Efficient Local Imperceptible Random Search for Black-Box Adversarial Attacks
- Author
-
Li, Yining, You, Shu, Chen, Yihan, Li, Zhenhua, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Huang, De-Shuang, editor, Pan, Yijie, editor, and Zhang, Qinhu, editor
- Published
- 2024
- Full Text
- View/download PDF
8. Towards Score-Based Black-Box Adversarial Examples Attack in Real World
- Author
-
Jia, Wei, Liu, Zhenglin, Zhang, Haichun, Yu, Runze, Li, Liaoyuan, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Tan, Kay Chen, Series Editor, Pei, Yan, editor, Ma, Hao Shang, editor, Chan, Yu-Wei, editor, and Jeong, Hwa-Young, editor
- Published
- 2024
- Full Text
- View/download PDF
9. Cipher-Prompt: Towards a Safe Diffusion Model via Learning Cryptographic Prompts
- Author
-
Jiang, Sidong, Wang, Siyuan, Zhang, Rui, Yang, Xi, Huang, Kaizhu, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Ren, Jinchang, editor, Hussain, Amir, editor, Liao, Iman Yi, editor, Chen, Rongjun, editor, Huang, Kaizhu, editor, Zhao, Huimin, editor, Liu, Xiaoyong, editor, Ma, Ping, editor, and Maul, Thomas, editor
- Published
- 2024
- Full Text
- View/download PDF
10. A Local Interpretability Model-Based Approach for Black-Box Adversarial Attack
- Author
-
Duan, Yuanjie, Zuo, Xingquan, Huang, Hai, Wu, Binglin, Zhao, Xinchao, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Tan, Ying, editor, and Shi, Yuhui, editor
- Published
- 2024
- Full Text
- View/download PDF
11. Object-Aware Transfer-Based Black-Box Adversarial Attack on Object Detector
- Author
-
Leng, Zhuo, Cheng, Zesen, Wei, Pengxu, Chen, Jie, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Liu, Qingshan, editor, Wang, Hanzi, editor, Ma, Zhanyu, editor, Zheng, Weishi, editor, Zha, Hongbin, editor, Chen, Xilin, editor, Wang, Liang, editor, and Ji, Rongrong, editor
- Published
- 2024
- Full Text
- View/download PDF
12. Improving Transferability of Adversarial Attacks with Gaussian Gradient Enhance Momentum
- Author
-
Wang, Jinwei, Wang, Maoyuan, Wu, Hao, Ma, Bin, Luo, Xiangyang, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Liu, Qingshan, editor, Wang, Hanzi, editor, Ma, Zhanyu, editor, Zheng, Weishi, editor, Zha, Hongbin, editor, Chen, Xilin, editor, Wang, Liang, editor, and Ji, Rongrong, editor
- Published
- 2024
- Full Text
- View/download PDF
13. Minimum Assumption Reconstruction Attacks: Rise of Security and Privacy Threats Against Face Recognition
- Author
-
Li, Dezhi, Park, Hojin, Dong, Xingbo, Lai, YenLung, Zhang, Hui, Teoh, Andrew Beng Jin, Jin, Zhe, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Liu, Qingshan, editor, Wang, Hanzi, editor, Ma, Zhanyu, editor, Zheng, Weishi, editor, Zha, Hongbin, editor, Chen, Xilin, editor, Wang, Liang, editor, and Ji, Rongrong, editor
- Published
- 2024
- Full Text
- View/download PDF
14. MTMG: A Framework for Generating Adversarial Examples Targeting Multiple Learning-Based Malware Detection Systems
- Author
-
Jia, Lichen, Yang, Yang, Li, Jiansong, Ding, Hao, Li, Jiajun, Yuan, Ting, Liu, Lei, Jiang, Zihan, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Liu, Fenrong, editor, Sadanandan, Arun Anand, editor, Pham, Duc Nghia, editor, Mursanto, Petrus, editor, and Lukose, Dickson, editor
- Published
- 2024
- Full Text
- View/download PDF
15. GANs-Based Model Extraction for Black-Box Backdoor Attack
- Author
-
Fu, Xiurui, Tao, Fazhan, Si, Pengju, Fu, Zhumu, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Sun, Fuchun, editor, Meng, Qinghu, editor, Fu, Zhumu, editor, and Fang, Bin, editor
- Published
- 2024
- Full Text
- View/download PDF
16. Black-box Attack Algorithm for SAR-ATR Deep Neural Networks Based on MI-FGSM
- Author
-
Xuanshen WAN, Wei LIU, Chaoyang NIU, and Wanjie LU
- Subjects
synthetic aperture radar (sar), target recognition, black-box attack, quasi-hyperbolic momentum (qhm) operator, speckle noise transformation, Electricity and magnetism, QC501-766
- Abstract
The field of Synthetic Aperture Radar Automatic Target Recognition (SAR-ATR) lacks effective black-box attack algorithms. Therefore, this research proposes a transfer-based black-box attack algorithm by combining the idea of the Momentum Iterative Fast Gradient Sign Method (MI-FGSM). First, random speckle noise transformation is performed according to the characteristics of SAR images to alleviate model overfitting to the speckle noise and improve the generalization performance of the algorithm. Second, an AdaBelief-Nesterov optimizer is designed to rapidly find the optimal gradient descent direction, and the attack effectiveness of the algorithm is improved through a rapid convergence of the model gradient. Finally, a quasi-hyperbolic momentum operator is introduced to obtain a stable model gradient descent direction so that the gradient can avoid falling into a local optimum during the rapid convergence and to further enhance the success rate of black-box attacks on adversarial examples. Simulation experiments show that compared with existing adversarial attack algorithms, the proposed algorithm improves the ensemble model black-box attack success rate of mainstream SAR-ATR deep neural networks by 3%~55% and 6.0%~57.5% on the MSTAR and FUSAR-Ship datasets, respectively; the generated adversarial examples are highly concealable.
- Published
- 2024
- Full Text
- View/download PDF
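The record above (entry 16) extends MI-FGSM with speckle-noise transformation, an AdaBelief-Nesterov optimizer, and a quasi-hyperbolic momentum operator. The sketch below shows only the standard momentum-iterative update with a simple quasi-hyperbolic mixing term standing in for the paper's QHM operator; the hyper-parameters, loss interface, and mixing coefficient are assumptions, not the authors' implementation.

```python
import torch

def mi_fgsm_qhm(model, loss_fn, x, y, eps=8 / 255, steps=10, mu=1.0, nu=0.7):
    # x: clean image batch in [0, 1]; loss_fn: e.g. cross-entropy against label y.
    x = x.detach()
    alpha = eps / steps
    x_adv, g = x.clone(), torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        grad = grad / grad.abs().mean().clamp_min(1e-12)  # L1-style normalization
        g = mu * g + grad                                 # momentum accumulation
        step_dir = (1 - nu) * grad + nu * g               # quasi-hyperbolic mixing
        x_adv = x_adv.detach() + alpha * step_dir.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # stay in the eps-ball
    return x_adv.detach()
```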
17. Black-box attacks on face recognition via affine-invariant training.
- Author
-
Sun, Bowen, Su, Hang, and Zheng, Shibao
- Subjects
FACE perception, HUMAN facial recognition software, ARTIFICIAL neural networks
- Abstract
Deep neural network (DNN)-based face recognition has shown impressive performance in verification; however, recent studies reveal a vulnerability in deep face recognition algorithms, making them susceptible to adversarial attacks. Specifically, these attacks can be executed in a black-box manner with limited knowledge about the target network. While this characteristic is practically significant due to hidden model details in reality, it presents challenges such as high query budgets and low success rates. To improve the performance of attacks, we establish the whole framework through affine-invariant training, serving as a substitute for inefficient sampling. We also propose AI-block—a novel module that enhances transferability by introducing generalized priors. Generalization is achieved by creating priors with stable features when sampled over affine transformations. These priors guide attacks, improving efficiency and performance in black-box scenarios. The conversion via AI-block enables the transfer gradients of a surrogate model to be used as effective priors for estimating the gradients of a black-box model. Our method leverages this enhanced transferability to boost both transfer-based and query-based attacks. Extensive experiments conducted on 5 commonly utilized databases and 7 widely employed face recognition models demonstrate a significant improvement of up to 11.9 percentage points in success rates while maintaining comparable or even reduced query times. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
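The record above (entry 17) obtains transferable gradient priors by sampling over affine transformations of the input on a surrogate model. The sketch below shows a minimal version of that averaging step under stated assumptions: it uses torchvision's functional affine op and illustrative transform ranges, and it is not the authors' AI-block.

```python
import random
import torch
import torchvision.transforms.functional as TF

def affine_averaged_gradient(surrogate, loss_fn, x, y, n_samples=8):
    # Average surrogate gradients over random affine copies of x (a gradient prior).
    prior = torch.zeros_like(x)
    for _ in range(n_samples):
        xt = TF.affine(x, angle=random.uniform(-15, 15),
                       translate=[random.randint(-4, 4), random.randint(-4, 4)],
                       scale=random.uniform(0.9, 1.1), shear=[0.0, 0.0])
        xt = xt.detach().requires_grad_(True)
        loss = loss_fn(surrogate(xt), y)
        grad, = torch.autograd.grad(loss, xt)
        prior += grad                 # gradient w.r.t. the transformed copy
    return prior / n_samples          # used as a prior to guide the black-box attack
```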
18. The Noise Blowing-Up Strategy Creates High Quality High Resolution Adversarial Images against Convolutional Neural Networks.
- Author
-
Topal, Ali Osman, Mancellari, Enea, Leprévost, Franck, Avdusinovic, Elmir, and Gillet, Thomas
- Subjects
CONVOLUTIONAL neural networks, HIGH resolution imaging, COMPUTER vision, NOISE
- Abstract
Convolutional neural networks (CNNs) serve as powerful tools in computer vision tasks with extensive applications in daily life. However, they are susceptible to adversarial attacks. Still, attacks can be positive for at least two reasons. Firstly, revealing CNNs vulnerabilities prompts efforts to enhance their robustness. Secondly, adversarial images can also be employed to preserve privacy-sensitive information from CNN-based threat models aiming to extract such data from images. For such applications, the construction of high-resolution adversarial images is mandatory in practice. This paper firstly quantifies the speed, adversity, and visual quality challenges involved in the effective construction of high-resolution adversarial images, secondly provides the operational design of a new strategy, called here the noise blowing-up strategy, working for any attack, any scenario, any CNN, any clean image, thirdly validates the strategy via an extensive series of experiments. We performed experiments with 100 high-resolution clean images, exposing them to seven different attacks against 10 CNNs. Our method achieved an overall average success rate of 75% in the targeted scenario and 64% in the untargeted scenario. We revisited the failed cases: a slight modification of our method led to success rates larger than 98.9%. As of today, the noise blowing-up strategy is the first generic approach that successfully solves all three speed, adversity, and visual quality challenges, and therefore effectively constructs high-resolution adversarial images with high-quality requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
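The record above (entries 18 and 40 describe the same work) centers on the noise blowing-up strategy: craft adversarial noise at the CNN's input resolution, then scale that noise up and add it to the original high-resolution image. A minimal sketch of that lifting step follows; the low-resolution attack is left as a placeholder callable, and the interpolation mode and value range are assumptions.

```python
import torch
import torch.nn.functional as F

def blow_up_noise(x_hires, low_res_attack, cnn_size=(224, 224)):
    # x_hires: (1, 3, H, W) high-resolution clean image in [0, 1].
    x_small = F.interpolate(x_hires, size=cnn_size, mode='bilinear',
                            align_corners=False)
    x_small_adv = low_res_attack(x_small)          # any existing low-res attack
    noise_small = x_small_adv - x_small
    noise_big = F.interpolate(noise_small, size=x_hires.shape[-2:],
                              mode='bilinear', align_corners=False)
    return (x_hires + noise_big).clamp(0.0, 1.0)   # high-resolution adversarial image
```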
19. DIMBA: discretely masked black-box attack in single object tracking.
- Author
-
Yin, Xiangyu, Ruan, Wenjie, and Fieldsend, Jonathan
- Subjects
DEEP learning, REINFORCEMENT learning, VIDEO excerpts, ALGORITHMS
- Abstract
The adversarial attack can force a CNN-based model to produce an incorrect output by craftily manipulating human-imperceptible input. Exploring such perturbations can help us gain a deeper understanding of the vulnerability of neural networks, and provide robustness to deep learning against miscellaneous adversaries. Despite extensive studies focusing on the robustness of image, audio, and NLP, works on adversarial examples of visual object tracking—especially in a black-box manner—are quite lacking. In this paper, we propose a novel adversarial attack method to generate noises for single object tracking under black-box settings, where perturbations are merely added on initialized frames of tracking sequences, which is difficult to notice from the perspective of a whole video clip. Specifically, we divide our algorithm into three components and exploit reinforcement learning for localizing important frame patches precisely while reducing unnecessary computational query overhead. Compared to existing techniques, our method requires less time to perturb videos, but achieves competitive or even better adversarial performance. We test our algorithm in both long-term and short-term datasets, including OTB100, VOT2018, UAV123, and LaSOT. Extensive experiments demonstrate the effectiveness of our method on three mainstream types of trackers: discrimination, Siamese-based, and reinforcement learning-based trackers. We release our attack tool, DIMBA, via GitHub https://github.com/TrustAI/DIMBA for use by the community. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Black-box Bayesian adversarial attack with transferable priors.
- Author
-
Zhang, Shudong, Gao, Haichang, Shu, Chao, Cao, Xiwen, Zhou, Yunyi, and He, Jianping
- Subjects
ARTIFICIAL neural networks
- Abstract
Deep neural networks are vulnerable to adversarial attacks, even in the black-box setting, where the attacker only has query access to the model. The most popular black-box adversarial attacks usually rely on substitute models or gradient estimation to generate imperceptible adversarial examples, which either suffer from low attack success rates or low query efficiency. In real-world scenarios, it is extremely improbable for an attacker to have unlimited bandwidth to query a target classifier. In this paper, we propose a query-efficient, gradient-free, score-based attack, named BO-ATP, which combines a Bayesian optimization strategy with transfer-based attacks and searches for perturbations in a low-dimensional latent space. Different from the gradient-based method, in the search process, our attack makes full use of the prior information obtained from the previous query to sample the next optimal point instead of local gradient approximation. Results on MNIST, CIFAR10, and ImageNet show that even at a low 1000-query budget, we still achieve high attack success rates in both targeted and untargeted attacks, and the query efficiency is dozens of times higher than the previous state-of-the-art attack methods. Furthermore, we show that BO-ATP can successfully attack some state-of-the-art defenses, such as adversarial training. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
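The record above (entry 20) searches for perturbations in a low-dimensional latent space with Bayesian optimization instead of gradient estimation. The sketch below shows the general shape of such a score-based search using scikit-optimize's gp_minimize; the latent size, sign-based upsampling, and objective are illustrative assumptions, not the authors' BO-ATP.

```python
import torch
import torch.nn.functional as F
from skopt import gp_minimize
from skopt.space import Real

def bo_attack(query_scores, x, y, eps=8 / 255, latent=6, n_calls=100):
    # query_scores(x_adv) -> class-probability vector from the black-box model.
    # x: (1, 3, H, W) clean image in [0, 1]; y: index of the true class.
    def objective(z):
        z = torch.tensor(z, dtype=torch.float32).view(1, 1, latent, latent)
        delta = F.interpolate(z, size=x.shape[-2:], mode='bilinear',
                              align_corners=False)
        x_adv = (x + eps * delta.sign()).clamp(0, 1)
        return float(query_scores(x_adv)[y])        # push the true-class score down
    space = [Real(-1.0, 1.0) for _ in range(latent * latent)]
    result = gp_minimize(objective, space, n_calls=n_calls, random_state=0)
    return result.x, result.fun                     # best latent code and score
```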
21. Harnessing Unsupervised Insights: Enhancing Black-Box Graph Injection Attacks with Graph Contrastive Learning
- Author
-
Xiao Liu, Junjie Huang, Zihan Chen, Yi Pan, Maoyi Xiong, and Wentao Zhao
- Subjects
graph injection attack, query-based attack, graph contrastive learning, black-box attack, Technology, Engineering (General). Civil engineering (General), TA1-2040, Biology (General), QH301-705.5, Physics, QC1-999, Chemistry, QD1-999
- Abstract
Adversarial attacks on Graph Neural Networks (GNNs) have emerged as a significant threat to the security of graph learning. Compared with Graph Modification Attacks (GMAs), Graph Injection Attacks (GIAs) are considered more realistic attacks, in which attackers perturb GNN models by injecting a small number of fake nodes. However, most existing black-box GIA methods either require comprehensive knowledge of the dataset and the ground-truth labels or a large number of queries to execute the attack, which is often unfeasible in many scenarios. In this paper, we propose an unsupervised method for leveraging the rich knowledge contained in the graph data themselves to enhance the success rate of graph injection attacks on the initial query. Specifically, we introduce the Graph Contrastive Learning-based Graph Injection Attack (GCIA), which consists of a node encoder, a reward predictor, and a fake node generator. The Graph Contrastive Learning (GCL)-based node encoder transforms nodes into low-dimensional continuous embeddings, the reward predictor acts as a simplified surrogate for the target model, and the fake node generator produces fake nodes and edges based on several carefully designed loss functions, utilizing the node encoder and reward predictor. Extensive results demonstrate that the proposed GCIA method achieves a first query success rate of 91.2% on the Reddit dataset and improves the success rate to over 99.7% after 10 queries.
- Published
- 2024
- Full Text
- View/download PDF
22. A black-box attack on fixed-unitary quantum encryption schemes
- Author
-
Pilaszewicz, Cezary, Muth, Lea R., and Margraf, Marian
- Published
- 2024
- Full Text
- View/download PDF
23. Improving the transferability of adversarial examples with separable positive and negative disturbances.
- Author
-
Yan, Yuanjie, Bu, Yuxuan, Shen, Furao, and Zhao, Jian
- Subjects
SUCCESS
- Abstract
Adversarial examples demonstrate the vulnerability of white-box models but exhibit weak transferability to black-box models. In image processing, each adversarial example usually consists of an original image and a disturbance. The disturbances are essential for the adversarial examples, determining the attack success rate on black-box models. To improve the transferability, we propose a new white-box attack method called separable positive and negative disturbance (SPND). SPND optimizes the positive and negative perturbations instead of the adversarial examples. SPND also smooths the search space by replacing constrained disturbances with unconstrained variables, which improves the success rate of attacking the black-box model. Our method outperforms the other attack methods in the MNIST and CIFAR10 datasets. In the ImageNet dataset, the black-box attack success rate of SPND exceeds that of the optimal CW method by nearly ten percentage points under the perturbation constraint L∞ = 0.3. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
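The record above (entry 23) optimizes separate positive and negative disturbances and smooths the search space by replacing constrained disturbances with unconstrained variables. A hedged sketch of one such reparameterization follows: a sigmoid bound keeps the combined disturbance inside the L∞ ball while the variables themselves stay unconstrained. The optimizer choice and step counts are assumptions, not the authors' SPND.

```python
import torch

def spnd_like_attack(model, loss_fn, x, y, eps=0.3, steps=100, lr=0.05):
    p = torch.zeros_like(x, requires_grad=True)   # positive disturbance (unconstrained)
    n = torch.zeros_like(x, requires_grad=True)   # negative disturbance (unconstrained)
    opt = torch.optim.Adam([p, n], lr=lr)
    for _ in range(steps):
        delta = eps * (torch.sigmoid(p) - torch.sigmoid(n))  # always within [-eps, eps]
        x_adv = (x + delta).clamp(0, 1)
        loss = -loss_fn(model(x_adv), y)          # maximize the classification loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    delta = eps * (torch.sigmoid(p) - torch.sigmoid(n))
    return (x + delta).clamp(0, 1).detach()
```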
24. Semantic Adversarial Attacks on Face Recognition Through Significant Attributes
- Author
-
Yasmeen M. Khedr, Yifeng Xiong, and Kun He
- Subjects
Adversarial examples, Image-to-image translation, Face verification, Feature fusion, Black-box attack, Attack transferability, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Abstract Face recognition systems are susceptible to adversarial attacks, where adversarial facial images are generated without awareness of the intrinsic attributes of the images in existing works. They change only a single attribute indiscriminately. To this end, we propose a new Semantic Adversarial Attack using StarGAN (SAA-StarGAN), which manipulates the facial attributes that are significant for each image. Specifically, we apply the cosine similarity or probability score to predict the most significant attributes. In the probability score method, we train the face verification model to perform an attribute prediction task to get a class probability score for each attribute. Then, we calculate the degree of change in the probability value in an image before and after altering the attribute. Therefore, we perform the prediction process and then alter either one or more of the most significant facial attributes under white-box or black-box settings. Experimental results illustrate that SAA-StarGAN outperforms transformation-based, gradient-based, stealthy-based, and patch-based attacks under impersonation and dodging attacks. Besides, our method achieves high attack success rates on various models in the black-box setting. In the end, the experiments confirm that the prediction of the most important attributes significantly impacts the success of adversarial attacks in both white-box and black-box settings and could improve the transferability of the generated adversarial examples.
- Published
- 2023
- Full Text
- View/download PDF
25. Boosting Adversarial Attacks with Improved Sign Method
- Author
-
Guo, Bowen, Yang, Yuxin, Li, Qianmu, Hou, Jun, Rao, Ya, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Yang, Xiaochun, editor, Suhartanto, Heru, editor, Wang, Guoren, editor, Wang, Bin, editor, Jiang, Jing, editor, Li, Bing, editor, Zhu, Huaijie, editor, and Cui, Ningning, editor
- Published
- 2023
- Full Text
- View/download PDF
26. Enhancing Adversarial Transferability from the Perspective of Input Loss Landscape
- Author
-
Xu, Yinhu, Chu, Qi, Yuan, Haojie, Luo, Zixiang, Liu, Bin, Yu, Nenghai, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Lu, Huchuan, editor, Ouyang, Wanli, editor, Huang, Hui, editor, Lu, Jiwen, editor, Liu, Risheng, editor, Dong, Jing, editor, and Xu, Min, editor
- Published
- 2023
- Full Text
- View/download PDF
27. Improving Transferability Reversible Adversarial Examples Based on Flipping Transformation
- Author
-
Fang, Youqing, Jia, Jingwen, Yang, Yuhai, Lyu, Wanli, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Yu, Zhiwen, editor, Han, Qilong, editor, Wang, Hongzhi, editor, Guo, Bin, editor, Zhou, Xiaokang, editor, Song, Xianhua, editor, and Lu, Zeguang, editor
- Published
- 2023
- Full Text
- View/download PDF
28. Creating High-Resolution Adversarial Images Against Convolutional Neural Networks with the Noise Blowing-Up Method
- Author
-
Leprévost, Franck, Topal, Ali Osman, Mancellari, Enea, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Nguyen, Ngoc Thanh, editor, Boonsang, Siridech, editor, Fujita, Hamido, editor, Hnatkowska, Bogumiła, editor, Hong, Tzung-Pei, editor, Pasupa, Kitsuchart, editor, and Selamat, Ali, editor
- Published
- 2023
- Full Text
- View/download PDF
29. Towards a General Black-Box Attack on Tabular Datasets
- Author
-
Pooja, S., Gressel, Gilad, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Chinara, Suchismita, editor, Tripathy, Asis Kumar, editor, Li, Kuan-Ching, editor, Sahoo, Jyoti Prakash, editor, and Mishra, Alekha Kumar, editor
- Published
- 2023
- Full Text
- View/download PDF
30. Enhancing Transferability of Adversarial Audio in Speaker Recognition Systems
- Author
-
Patel, Umang, Bhilare, Shruti, Hati, Avik, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Pertusa, Antonio, editor, Gallego, Antonio Javier, editor, Sánchez, Joan Andreu, editor, and Domingues, Inês, editor
- Published
- 2023
- Full Text
- View/download PDF
31. Deceiving Traffic Sign Recognition with Physical One-Pixel Attacks
- Author
-
Huang, Juncheng, Jiang, Xinnan, Xia, Yi Fei, Guo, Huaqun, editor, McLoughlin, Ian, editor, Chekole, Eyasu Getahun, editor, Lakshmanan, Umayal, editor, Meng, Weizhi, editor, Wang, Peng Cheng, editor, and Lu, Jiqiang, editor
- Published
- 2023
- Full Text
- View/download PDF
32. A Black-Box Attack on Optical Character Recognition Systems
- Author
-
Bayram, Samet, Barner, Kenneth, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Tistarelli, Massimo, editor, Dubey, Shiv Ram, editor, Singh, Satish Kumar, editor, and Jiang, Xiaoyi, editor
- Published
- 2023
- Full Text
- View/download PDF
33. On Effectiveness of the Adversarial Attacks on the Computer Systems of Biomedical Images Classification
- Author
-
Shchetinin, Eugene Yu., Glushkova, Anastasia G., Blinkov, Yury A., Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Vishnevskiy, Vladimir M., editor, Samouylov, Konstantin E., editor, and Kozyrev, Dmitry V., editor
- Published
- 2023
- Full Text
- View/download PDF
34. Towards High Transferability on Neural Network for Black-Box Adversarial Attacks
- Author
-
Zhai, Haochen, Zou, Futai, Tang, Junhua, Wu, Yue, Akan, Ozgur, Editorial Board Member, Bellavista, Paolo, Editorial Board Member, Cao, Jiannong, Editorial Board Member, Coulson, Geoffrey, Editorial Board Member, Dressler, Falko, Editorial Board Member, Ferrari, Domenico, Editorial Board Member, Gerla, Mario, Editorial Board Member, Kobayashi, Hisashi, Editorial Board Member, Palazzo, Sergio, Editorial Board Member, Sahni, Sartaj, Editorial Board Member, Shen, Xuemin, Editorial Board Member, Stan, Mircea, Editorial Board Member, Jia, Xiaohua, Editorial Board Member, Zomaya, Albert Y., Editorial Board Member, Li, Fengjun, editor, Liang, Kaitai, editor, Lin, Zhiqiang, editor, and Katsikas, Sokratis K., editor
- Published
- 2023
- Full Text
- View/download PDF
35. Improving the transferability of adversarial samples with channel switching.
- Author
-
Ling, Jie, Chen, Xiaohuan, and Luo, Yu
- Subjects
ARTIFICIAL neural networks
- Abstract
Deep neural network models are vulnerable to interference from adversarial samples. An alarming issue is that adversarial samples are often transferable, implying that an adversarial sample generated by one model can attack other models. In a black-box setting, the usual approach for attacking is to use the white-box model as a proxy model to generate adversarial samples and use the generated samples to deceive the black-box model. However, these methods have a higher success rate for white-box models and exhibit weak transferability for black-box models. Various methods have been proposed to improve the transferability among which the input transformation-based methods are considered the most effective. However, the potential of a single input image has not been fully exploited in these techniques. In this study, we propose a simple channel switching method called CS-MI-FGSM to obtain the variants of the input image and mix them during momentum updating. Experiments on the ImageNet and the NeurIPS 2017 adversarial competition datasets demonstrated that the proposed method can effectively improve the transferability of adversarial samples. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
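The record above (entry 35) mixes channel-switched variants of the input while updating the momentum. The sketch below only averages surrogate gradients over all RGB channel permutations; plugging this averaged gradient into an MI-FGSM-style loop is the assumed usage, not the authors' exact CS-MI-FGSM.

```python
import itertools
import torch

def channel_switch_grad(model, loss_fn, x, y):
    # x: (N, 3, H, W) image batch; returns a gradient averaged over channel orderings.
    total = torch.zeros_like(x)
    perms = list(itertools.permutations(range(3)))
    for perm in perms:
        xp = x[:, list(perm)].detach().requires_grad_(True)   # channel-switched copy
        loss = loss_fn(model(xp), y)
        g, = torch.autograd.grad(loss, xp)
        inv = [perm.index(c) for c in range(3)]   # map the gradient back to RGB order
        total += g[:, inv]
    return total / len(perms)

# This averaged gradient would replace the plain gradient inside a
# momentum-iterative (MI-FGSM style) update loop.
```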
36. STI-FGSM: A Voiceprint Adversarial Attack Algorithm Based on Spatio-Temporal Gradient Iteration.
- Author
-
李烁, 顾益军, and 谭昊
- Subjects
PROBLEM solving, SPACETIME, ALGORITHMS, SUCCESS
- Abstract
Copyright of Journal of Computer Engineering & Applications is the property of Beijing Journal of Computer Engineering & Applications Journal Co Ltd. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
37. A²SC: Adversarial Attacks on Subspace Clustering.
- Author
-
YIKUN XU, XINGXING WEI, PENGWEN DAI, and XIAOCHUN CAO
- Abstract
Many studies demonstrate that supervised learning techniques are vulnerable to adversarial examples. However, adversarial threats in unsupervised learning have not drawn sufficient scholarly attention. In this article, we formally address the unexplored adversarial attacks in the equally important unsupervised clustering field and propose the concept of the adversarial set and adversarial set attack for clustering. To illustrate the basic idea, we design a novel adversarial space-mapping attack algorithm to confuse subspace clustering, one of the mainstream branches of unsupervised clustering. It maps a sample into one wrong class by moving it towards the closest point on the linear subspace of the target class, that is, along the normal of the closest point. This simple single-step algorithm has the power to craft the adversarial set where the image samples can be wrongly clustered, even into the targeted labels. Empirical results on different image datasets verify the effectiveness and superiority of our algorithm. We further show that deep supervised learning algorithms (such as VGG and ResNet) are also vulnerable to our crafted adversarial set, which illustrates the good cross-task transferability of the adversarial set. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
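The record above (entry 37) moves a sample toward the closest point on the target class's linear subspace, i.e. along the normal at that point. A small NumPy sketch of that geometric step follows; the SVD-estimated basis, the subspace dimension k, and the step size are illustrative assumptions.

```python
import numpy as np

def subspace_step(x, target_class_samples, k=10, step=0.5):
    # target_class_samples: array of shape (n, ...) of samples from the target class.
    # Estimate a k-dimensional linear subspace basis for the target class via SVD.
    A = target_class_samples.reshape(len(target_class_samples), -1).T   # (d, n)
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    B = U[:, :k]                                   # (d, k) orthonormal basis, k <= n
    x_flat = x.reshape(-1)
    proj = B @ (B.T @ x_flat)                      # closest point on the subspace
    normal = proj - x_flat                         # direction of the normal
    return (x_flat + step * normal).reshape(x.shape)
```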
38. F-MIM: Feature-based Masking Iterative Method to Generate the Adversarial Images against the Face Recognition Systems.
- Author
-
Agrawal, Khushabu and Bhatnagar, Charul
- Subjects
HUMAN facial recognition software, BIG data, AIRPORTS, SHOPPING malls, DEEP learning
- Abstract
Numerous face recognition systems employ deep learning techniques to identify individuals in public areas such as shopping malls, airports, and other high-security zones. However, deep learning-based systems are susceptible to adversarial attacks. These attacks are intentionally generated by an attacker to mislead the systems and are imperceptible to the human eye. In this paper, we propose a feature-based masking iterative method (F-MIM) to generate adversarial images. In this method, we utilize the features of the face to cause the models to misclassify. The proposed approach is a black-box attack technique in which the attacker has no information about the target models. In this black-box attack strategy, the face landmark points are modified using a binary masking technique, and the momentum iterative method is used to increase the transferability of existing attacks. The adversarial examples are generated using the ArcFace face recognition model trained on the Labeled Faces in the Wild (LFW) dataset, and performance is evaluated on different face recognition models, namely ArcFace, MobileFace, MobileNet, CosFace and SphereFace, under dodging and impersonation attacks. F-MIM outperforms existing attacks on the Attack Success Rate metric and further improves transferability. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
39. Adversarial Examples Generation Method Based on Image Color Random Transformation
- Author
-
BAI Zhixu, WANG Hengjun, GUO Kexiang
- Subjects
deep neural network, adversarial example, white-box attack, black-box attack, migration, Computer software, QA76.75-76.765, Technology (General), T1-995
- Abstract
Although deep neural networks (DNNs) perform well in most classification tasks, they are vulnerable to adversarial examples, which makes the security of DNNs questionable. Research on generating strongly aggressive adversarial examples can help improve the security and robustness of DNNs. Among the methods for generating adversarial examples, black-box attacks are more practical than white-box attacks, which must rely on the model's structural parameters. Black-box attacks generally use iterative methods to generate adversarial examples, which transfer poorly, leading to a generally low black-box attack success rate. To address this problem, data enhancement techniques are introduced into the adversarial example generation process: randomly changing the color of the original image within a limited range can effectively improve the transferability of adversarial examples, thus increasing the success rate of black-box attacks. The method is validated through adversarial attack experiments on the ImageNet dataset with a normally trained network and an adversarially trained network, and the experimental results indicate that the method can effectively improve the transferability of the generated adversarial examples.
- Published
- 2023
- Full Text
- View/download PDF
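The record above (entry 39) randomly changes the color of the original image within a limited range while generating adversarial examples. A minimal sketch of such a color-randomized gradient follows, assuming torchvision's ColorJitter and illustrative jitter ranges; it is not the authors' implementation.

```python
import torch
from torchvision import transforms

# Limited-range color randomization applied at each attack iteration.
color_jitter = transforms.ColorJitter(brightness=0.2, contrast=0.2,
                                      saturation=0.2, hue=0.05)

def color_augmented_grad(model, loss_fn, x, y, n_copies=5):
    # x: (N, 3, H, W) float image batch in [0, 1].
    total = torch.zeros_like(x)
    for _ in range(n_copies):
        xc = color_jitter(x).detach().requires_grad_(True)
        loss = loss_fn(model(xc), y)
        g, = torch.autograd.grad(loss, xc)
        total += g                      # average the gradient over color variants
    return total / n_copies
```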
40. The Noise Blowing-Up Strategy Creates High Quality High Resolution Adversarial Images against Convolutional Neural Networks
- Author
-
Ali Osman Topal, Enea Mancellari, Franck Leprévost, Elmir Avdusinovic, and Thomas Gillet
- Subjects
black-box attack, convolutional neural network, evolutionary algorithm, high-resolution adversarial image, noise blowing-up, Technology, Engineering (General). Civil engineering (General), TA1-2040, Biology (General), QH301-705.5, Physics, QC1-999, Chemistry, QD1-999
- Abstract
Convolutional neural networks (CNNs) serve as powerful tools in computer vision tasks with extensive applications in daily life. However, they are susceptible to adversarial attacks. Still, attacks can be positive for at least two reasons. Firstly, revealing CNNs vulnerabilities prompts efforts to enhance their robustness. Secondly, adversarial images can also be employed to preserve privacy-sensitive information from CNN-based threat models aiming to extract such data from images. For such applications, the construction of high-resolution adversarial images is mandatory in practice. This paper firstly quantifies the speed, adversity, and visual quality challenges involved in the effective construction of high-resolution adversarial images, secondly provides the operational design of a new strategy, called here the noise blowing-up strategy, working for any attack, any scenario, any CNN, any clean image, thirdly validates the strategy via an extensive series of experiments. We performed experiments with 100 high-resolution clean images, exposing them to seven different attacks against 10 CNNs. Our method achieved an overall average success rate of 75% in the targeted scenario and 64% in the untargeted scenario. We revisited the failed cases: a slight modification of our method led to success rates larger than 98.9%. As of today, the noise blowing-up strategy is the first generic approach that successfully solves all three speed, adversity, and visual quality challenges, and therefore effectively constructs high-resolution adversarial images with high-quality requirements.
- Published
- 2024
- Full Text
- View/download PDF
41. A Black-Box Adversarial Attack Algorithm with Optimized Gradient Enhancement.
- Author
-
刘梦庭 and 凌 捷
- Subjects
ARTIFICIAL neural networks, OPTIMIZATION algorithms, BOOSTING algorithms, CONFIDENCE, ALGORITHMS, SUCCESS
- Abstract
Copyright of Journal of Computer Engineering & Applications is the property of Beijing Journal of Computer Engineering & Applications Journal Co Ltd. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
42. Black-Box Attack-Based Security Evaluation Framework for Credit Card Fraud Detection Models.
- Author
-
Xiao, Jin, Tian, Yuhang, Jia, Yanlin, Jiang, Xiaoyi, Yu, Lean, and Wang, Shouyang
- Subjects
CREDIT card fraud, SUPERVISED learning, FRAUD investigation, MACHINE learning
- Abstract
The security of credit card fraud detection (CCFD) models based on machine learning is important but rarely considered in the existing research. To this end, we propose a black-box attack-based security evaluation framework for CCFD models. Under this framework, the semisupervised learning technique and transfer-based black-box attack are combined to construct two versions of a semisupervised transfer black-box attack algorithm. Moreover, we introduce a new nonlinear optimization model to generate the adversarial examples against CCFD models and a security evaluation index to quantitatively evaluate the security of them. Computing experiments on two real data sets demonstrate that, facing the adversarial examples generated by the proposed attack algorithms, all six supervised models considered largely lose their ability to identify the fraudulent transactions, whereas the two unsupervised models are less affected. This indicates that the CCFD models based on supervised machine learning may possess substantial security risks. In addition, the evaluation results for the security of the models generate important managerial implications that help banks reasonably evaluate and enhance the model security. History: Accepted by Ram Ramesh, Area Editor for Data Science & Machine Learning. Funding: This work was supported in part by the National Natural Science Foundation of China [Grants 72171160 and 71988101], Key Program of National Natural Science Foundation of China and Quebec Research Foundation (NSFC-FRQ) Joint Project [Grant 7191101304], Key Program of NSFC-FRQSC Joint Project [Grant 72061127002], Excellent Youth Foundation of Sichuan Province [Grant 2020JDJQ0021], and National Leading Talent Cultivation Project of Sichuan University [Grant SKSYL2021-03]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2023.1297) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2021.0076) at (http://dx.doi.org/10.5281/zenodo.7631457). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. Towards Query-Efficient Black-Box Attacks: A Universal Dual Transferability-Based Framework.
- Author
-
TAO XIANG, HANGCHENG LIU, SHANGWEI GUO, YAN GAN, WENJIAN HE, and XIAOFENG LIAO
- Subjects
ARTIFICIAL neural networks, LOCAL foods
- Abstract
Adversarial attacks have threatened the application of deep neural networks in security-sensitive scenarios. Most existing black-box attacks fool the target model by interacting with it many times and producing global perturbations. However, all pixels are not equally crucial to the target model; thus, indiscriminately treating all pixels will increase query overhead inevitably. In addition, existing black-box attacks take clean samples as start points, which also limits query efficiency. In this article, we propose a novel black-box attack framework, constructed on a strategy of dual transferability (DT), to perturb the discriminative areas of clean examples within limited queries. The first kind of transferability is the transferability of model interpretations. Based on this property, we identify the discriminative areas of clean samples for generating local perturbations. The second is the transferability of adversarial examples, which helps us to produce local pre-perturbations for further improving query efficiency. We achieve the two kinds of transferability through an independent auxiliary model and do not incur extra query overhead. After identifying discriminative areas and generating pre-perturbations, we use the pre-perturbed samples as better start points and further perturb them locally in a black-box manner to search the corresponding adversarial examples. The DT strategy is general; thus, the proposed framework can be applied to different types of black-box attacks. We conduct extensive experiments to show that, under various system settings, our framework can significantly improve the query efficiency of existing black-box attacks and attack success rates. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
44. Efficient Query-based Black-box Attack against Cross-modal Hashing Retrieval.
- Author
-
LEI ZHU, TIANSHI WANG, JINGJING LI, ZHENG ZHANG, JIALIE SHEN, and XINHUA WANG
- Abstract
Deep cross-modal hashing retrieval models inherit the vulnerability of deep neural networks. They are vulnerable to adversarial attacks, especially in the form of subtle perturbations to the inputs. Although many adversarial attack methods have been proposed to handle the robustness of hashing retrieval models, they still suffer from two problems: (1) Most of them are based on white-box settings, which is usually unrealistic in practical application. (2) Iterative optimization for the generation of adversarial examples in them results in heavy computation. To address these problems, we propose an Efficient Query-based Black-Box Attack (EQB²A) against deep cross-modal hashing retrieval, which can efficiently generate adversarial examples for the black-box attack. Specifically, by sending a few query requests to the attacked retrieval system, the cross-modal retrieval model stealing is performed based on the neighbor relationship between the retrieved results and the query, thus obtaining the knockoffs to substitute the attacked system. A multi-modal knockoffs-driven adversarial generation is proposed to achieve efficient adversarial example generation. Once the network training converges, EQB²A can efficiently generate adversarial examples by forward propagation given only benign images. Experiments show that EQB²A achieves superior attacking performance under the black-box setting. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
45. Transfer-based attack based on image frequency and adversarial subspace.
- Author
-
LI Chaoqun, ZHANG Qilong, YIN Jin, CAO Mingsheng, and SONG Jingkuan
- Subjects
SUCCESS
- Abstract
To address issues such as the overfitting of adversarial examples to white-box models and the constraints on attackers when searching for adversarial subspaces, a method to improve the transferability of adversarial examples from the perspectives of the frequency domain and searchable adversarial subspaces is proposed. Firstly, in the process of generating adversarial examples, the overfitting of adversarial examples to the white-box model is mitigated by reducing the high-frequency components of the image. Secondly, by expanding the search range of the adversarial subspace to capture more information, the transferability of adversarial examples is improved. It is worth noting that the proposed method can be combined with existing attacks. A large number of experiments on the ImageNet dataset have verified the effectiveness of the proposed method. On average, the black-box attack success rate of the proposed method is 8.6% higher (for normally trained models) and 18.2% higher (for defense models) than that of attack methods based on the fast gradient sign method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
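The record above (entry 45) mitigates overfitting to the white-box model by reducing the high-frequency components of the image. A hedged sketch of one way to do that, an FFT low-pass mask applied per channel, follows; the circular cutoff radius is an illustrative assumption, not the paper's exact filter.

```python
import numpy as np

def low_pass(img, keep_ratio=0.25):
    # img: (H, W) or (H, W, C) array; filter each channel independently.
    def _lp(channel):
        f = np.fft.fftshift(np.fft.fft2(channel))
        h, w = channel.shape
        yy, xx = np.ogrid[:h, :w]
        r = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
        mask = r <= keep_ratio * min(h, w) / 2      # circular low-pass mask
        return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
    if img.ndim == 2:
        return _lp(img)
    return np.stack([_lp(img[..., c]) for c in range(img.shape[-1])], axis=-1)
```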
46. Current Research on Adversarial Algorithms for Black-Box Attacks on Intelligent Recognition.
- Author
-
魏健, 宋小庆, and 王钦钊
- Subjects
HUMAN facial recognition software, DEEP learning, GENERATIVE adversarial networks, ALGORITHMS, DATA modeling, WORKFLOW, FACE perception
- Abstract
Copyright of Journal of Computer Engineering & Applications is the property of Beijing Journal of Computer Engineering & Applications Journal Co Ltd. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
47. Query-Efficient Decision-Based Black-Box Patch Attack.
- Author
-
Chen, Zhaoyu, Li, Bo, Wu, Shuang, Ding, Shouhong, and Zhang, Wenqiang
- Abstract
Deep neural networks (DNNs) have been shown to be highly vulnerable to imperceptible adversarial perturbations. As a complementary type of adversary, patch attacks that introduce perceptible perturbations to the images have attracted the interest of researchers. Existing patch attacks rely on the architecture of the model or the probabilities of predictions and perform poorly in the decision-based setting, where a perturbation must be constructed from the minimal information exposed – the top-1 predicted label. In this work, we first explore the decision-based patch attack. To enhance the attack efficiency, we model the patches using paired key-points and use targeted images as the initialization of patches, and parameter optimizations are all performed on the integer domain. Then, we propose a differential evolutionary algorithm named DevoPatch for query-efficient decision-based patch attacks. Experiments demonstrate that DevoPatch outperforms the state-of-the-art black-box patch attacks in terms of patch area and attack success rate within a given query budget on image classification and face verification. Additionally, we conduct the vulnerability evaluation of ViT and MLP on image classification in the decision-based patch attack setting for the first time. Using DevoPatch, we can evaluate the robustness of models to black-box patch attacks. We believe this method could inspire the design and deployment of robust vision models based on various DNN architectures in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
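The record above (entry 47) formulates a decision-based patch attack driven by differential evolution, using only the top-1 predicted label. The sketch below uses SciPy's off-the-shelf differential_evolution over rectangle coordinates as a stand-in for the authors' DevoPatch; the objective, patch parameterization, and bounds are assumptions.

```python
from scipy.optimize import differential_evolution

def patch_attack(top1_label, x, x_target, true_label, maxiter=50):
    # top1_label(image) -> predicted class index (the only model feedback used).
    # x, x_target: NumPy arrays of shape (H, W, 3); the patch is copied from x_target.
    h, w = x.shape[:2]
    def objective(p):
        x1, y1, x2, y2 = int(p[0] * w), int(p[1] * h), int(p[2] * w), int(p[3] * h)
        x1, x2 = sorted((x1, x2)); y1, y2 = sorted((y1, y2))
        x_adv = x.copy()
        x_adv[y1:y2, x1:x2] = x_target[y1:y2, x1:x2]    # paste patch from target image
        area = (x2 - x1) * (y2 - y1) / (h * w)
        fooled = top1_label(x_adv) != true_label         # decision-based feedback only
        return area if fooled else 1.0 + area            # prefer small fooling patches
    bounds = [(0.0, 1.0)] * 4
    res = differential_evolution(objective, bounds, maxiter=maxiter, seed=0)
    return res.x, res.fun
```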
48. A strategy creating high-resolution adversarial images against convolutional neural networks and a feasibility study on 10 CNNs
- Author
-
Franck Leprévost, Ali Osman Topal, Elmir Avdusinovic, and Raluca Chitic
- Subjects
Black-box attack, convolutional neural network, evolutionary algorithm, high-resolution adversarial image, Telecommunication, TK5101-6720, Information technology, T58.5-58.64
- Abstract
To perform image recognition, Convolutional Neural Networks (CNNs) assess any image by first resizing it to its input size. In particular, high-resolution images are scaled down, say to [Formula: see text] for CNNs trained on ImageNet. So far, existing attacks, aiming at creating an adversarial image that a CNN would misclassify while a human would not notice any difference between the modified and unmodified images, proceed by creating adversarial noise in the [Formula: see text] resized domain and not in the high-resolution domain. The complexity of directly attacking high-resolution images leads to challenges in terms of speed, adversity and visual quality, making these attacks infeasible in practice. We design an indirect attack strategy that lifts to the high-resolution domain any existing attack that works efficiently in the CNN's input size domain. Adversarial noise created via this method is of the same size as the original image. We apply this approach to 10 state-of-the-art CNNs trained on ImageNet, with an evolutionary algorithm-based attack. Our method succeeded in 900 out of 1000 trials to create such adversarial images, that CNNs classify with probability [Formula: see text] in the adversarial category. Our indirect attack is the first effective method at creating adversarial images in the high-resolution domain.
- Published
- 2023
- Full Text
- View/download PDF
49. Generating Adversarial Patterns in Facial Recognition with Visual Camouflage
- Author
-
Bao, Qirui, Mei, Haiyang, Wei, Huilin, Lü, Zheng, Wang, Yuxin, and Yang, Xin
- Published
- 2024
- Full Text
- View/download PDF
50. Evading Encrypted Traffic Classifiers by Transferable Adversarial Traffic
- Author
-
Sun, Hanwu, Peng, Chengwei, Sang, Yafei, Li, Shuhao, Zhang, Yongzheng, Zhu, Yujia, Akan, Ozgur, Editorial Board Member, Bellavista, Paolo, Editorial Board Member, Cao, Jiannong, Editorial Board Member, Coulson, Geoffrey, Editorial Board Member, Dressler, Falko, Editorial Board Member, Ferrari, Domenico, Editorial Board Member, Gerla, Mario, Editorial Board Member, Kobayashi, Hisashi, Editorial Board Member, Palazzo, Sergio, Editorial Board Member, Sahni, Sartaj, Editorial Board Member, Shen, Xuemin, Editorial Board Member, Stan, Mircea, Editorial Board Member, Jia, Xiaohua, Editorial Board Member, Zomaya, Albert Y., Editorial Board Member, Gao, Honghao, editor, Wang, Xinheng, editor, Wei, Wei, editor, and Dagiuklas, Tasos, editor
- Published
- 2022
- Full Text
- View/download PDF