Search

Showing 45 results

Search Constraints

Topic: adversarial attacks
Publication Year Range: Last 50 years
Publisher: Springer International Publishing

Search Results

1. Adversarial Attacks and Mitigations on Scene Segmentation of Autonomous Vehicles

3. Two to Trust: AutoML for Safe Modelling and Interpretable Deep Learning for Robustness

4. Pixel Based Adversarial Attacks on Convolutional Neural Network Models

5. Performance Evaluation of Adversarial Attacks on Whole-Graph Embedding Models

6. Influence of Control Parameters and the Size of Biomedical Image Datasets on the Success of Adversarial Attacks

7. Defending Against Adversarial Attacks Using Statistical Hypothesis Testing

11. Attack and Fault Injection in Self-driving Agents on the Carla Simulator – Experience Report

12. A Security-Oriented Architecture for Federated Learning in Cloud Environments

13. Risk Susceptibility of Brain Tumor Classification to Adversarial Attacks

16. Backdoor Mitigation in Deep Neural Networks via Strategic Retraining

18. Are Graph Neural Network Explainers Robust to Graph Noises?

19. Adversarial Robustness of MR Image Reconstruction Under Realistic Perturbations

20. Consistency Regularization Helps Mitigate Robust Overfitting in Adversarial Training

21. Addressing Adversarial Machine Learning Attacks in Smart Healthcare Perspectives

22. AGS: Attribution Guided Sharpening as a Defense Against Adversarial Attacks

24. Robustness Against Adversarial Attacks Using Dimensionality

25. BreakingBED: Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks

27. Targeting the Most Important Words Across the Entire Corpus in NLP Adversarial Attacks

28. A Lipschitz-Shapley Explainable Defense Methodology Against Adversarial Attacks

29. Analysis of Adversarial Attacks on Face Verification Systems

30. Enhancing Robustness of Graph Convolutional Networks via Dropping Graph Connections

31. Attacking Machine Learning Models for Social Good

32. Benchmarking Adversarial Attacks and Defenses for Time-Series Data

33. Overcoming Stealthy Adversarial Attacks on Power Grid Load Predictions Through Dynamic Data Repair

34. Explainable AI for Inspecting Adversarial Attacks on Deep Neural Networks

36. Robustification of Segmentation Models Against Adversarial Perturbations in Medical Imaging

37. Indirect Local Attacks for Context-Aware Semantic Segmentation Networks

38. Backdoor Attacks in Neural Networks – A Systematic Evaluation on Multiple Traffic Sign Datasets

40. Fighting Adversarial Attacks on Online Abusive Language Moderation

41. Is Robustness the Cost of Accuracy? – A Comprehensive Study on the Robustness of 18 Deep Image Classification Models

42. Ask, Acquire, and Attack: Data-Free UAP Generation Using Class Impressions

43. Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers

45. Two to Trust: AutoML for Safe Modelling and Interpretable Deep Learning for Robustness