Search

Showing 7 results

Search Constraints

You searched for: Topic: adversarial attacks; Publication Year Range: Last 10 years; Publisher: hal ccsd

Search Results

1. Defending Adversarial Examples via DNN Bottleneck Reinforcement

2. SIT: Stochastic Input Transformation to Defend Against Adversarial Attacks on Deep Neural Networks

3. Lower Voltage for Higher Security: Using Voltage Overscaling to Secure Deep Neural Networks

4. Adversarial Attacks in a Multi-view Setting: An Empirical Study of the Adversarial Patches Inter-view Transferability

5. Laplacian networks: bounding indicator function smoothness for neural networks robustness

6. Neuroattack: undermining spiking neural networks security through externally triggered bit-flips

7. A machine learning based approach to detect malicious android apps using discriminant system calls