
Constraint-based Adversarial Example Synthesis

Authors:
Yu, Fang
Chi, Ya-Yu
Chen, Yu-Fang
Publication Year:
2024

Abstract

In the era of rapid advances in artificial intelligence (AI), neural network models have achieved notable breakthroughs, yet concerns remain about their vulnerability to adversarial attacks. This study focuses on enhancing concolic testing for Python programs that implement neural networks. The extended tool, PyCT, now accommodates a broader range of neural network operations, including floating-point and activation-function computations. By systematically generating prediction path constraints, the approach identifies potential adversarial examples. Experiments across various neural network architectures demonstrate its effectiveness and highlight the vulnerability of Python-based neural network models to adversarial attacks. The work underscores the need for rigorous testing methodologies to detect and mitigate adversarial threats, thereby helping to secure AI-powered applications built in Python.
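The abstract outlines the core mechanism: encode the network's computation as constraints, conjoin a condition that flips the prediction, and hand the result to a solver. The sketch below illustrates that general idea on a toy two-layer ReLU network using the z3 SMT solver; the weights, the eps bound, and every identifier are illustrative assumptions and do not reflect PyCT's actual interface or constraint encoding.

from z3 import Real, Solver, If, sat

# Toy 2-input, 2-class network: logits = W2 @ relu(W1 @ x).
# All weights below are made up for illustration.
W1 = [[1.0, -1.0], [0.5, 2.0]]
W2 = [[1.0, 0.0], [0.0, 1.0]]

def relu(e):
    # Symbolic ReLU: the branch becomes a solver-level if-then-else
    # rather than concrete control flow.
    return If(e > 0, e, 0)

def logits(x):
    h = [relu(sum(W1[i][j] * x[j] for j in range(2))) for i in range(2)]
    return [sum(W2[k][i] * h[i] for i in range(2)) for k in range(2)]

# Concrete seed input; the toy network predicts class 0 for it.
x0 = [1.0, 0.0]
eps = 0.3  # assumed L-infinity perturbation budget

x = [Real(f"x{j}") for j in range(2)]
s = Solver()
for j in range(2):
    s.add(x[j] >= x0[j] - eps, x[j] <= x0[j] + eps)

# Constraint that the other class wins: any satisfying
# assignment is an adversarial example near x0.
out = logits(x)
s.add(out[1] > out[0])

if s.check() == sat:
    m = s.model()
    adv = [float(m[v].as_fraction()) for v in x]
    print("adversarial input within eps:", adv)
else:
    print("no adversarial input exists within the bound")

In the paper's concolic setting, the prediction path constraints are instead collected along concrete executions of the Python program, but the solving step that yields a misclassifying input is analogous.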

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.01219
Document Type:
Working Paper