Search

Your search for "ai security" returned 50 results.

Search Constraints
Descriptor: "ai security"
Search Limiter: Peer Reviewed

Search Results

1. The accelerated integration of artificial intelligence systems and its potential to expand the vulnerability of the critical infrastructure.

2. Improving the transferability of adversarial examples with path tuning.

4. Sparse Backdoor Attack Against Neural Networks.

6. A Primer on Generative Artificial Intelligence.

7. Locality-Based Action-Poisoning Attack against the Continuous Control of an Autonomous Driving Model.

8. A Global Object Disappearance Attack Scenario on Object Detection.

9. REN-A.I.: A Video Game for AI Security Education Leveraging Episodic Memory.

10. Channel-augmented joint transformation for transferable adversarial attacks.

11. FMSA: a meta-learning framework-based fast model stealing attack technique against intelligent network intrusion detection systems.

12. An interpretability security framework for intelligent decision support systems based on saliency map.

13. Privacy preserving for AI-based 3D human pose recovery and retargeting.

14. Toward a Comprehensive Framework for Ensuring Security and Privacy in Artificial Intelligence.

15. Data Poisoning Defense Based on Sample Distribution Features.

16. Adaptive Backdoor Attack against Deep Neural Networks.

17. CANARY: An Adversarial Robustness Evaluation Platform for Deep Learning Models on Image Classification.

19. Threats to Training: A Survey of Poisoning Attacks and Defenses on Machine Learning Systems.

20. Backdoor Attack against Face Sketch Synthesis.

21. Formulating Cybersecurity Requirements for Autonomous Ships Using the SQUARE Methodology.

22. An Understanding of the Vulnerability of Datasets to Disparate Membership Inference Attacks.

24. Defending Against Data Poisoning Attacks: From Distributed Learning to Federated Learning.

25. Adversarial Example Generation Method Based on Sensitive Features.

27. An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences.

28. Towards Robustifying Image Classifiers against the Perils of Adversarial Attacks on Artificial Intelligence Systems.

29. A Cascade Defense Method for Multidomain Adversarial Attacks under Remote Sensing Detection.

30. Using side-channel and quantization vulnerability to recover DNN weights.

33. LinkBreaker: Breaking the Backdoor-Trigger Link in DNNs via Neurons Consistency Check.

34. VulnerGAN: a backdoor attack through vulnerability amplification against machine learning-based network intrusion detection systems.

35. Trends in Communication Security for 5G Private Networks under the O-RAN Architecture.

36. Improving the transferability of adversarial examples through neighborhood attribution.

37. FDNet: Imperceptible backdoor attacks via frequency domain steganography and negative sampling.

38. Hierarchical hardware trojan for LUT‐based AI devices and its evaluation.

39. Robust Adversarial Attack Against Explainable Deep Classification Models Based on Adversarial Images With Different Patch Sizes and Perturbation Ratios.

40. Wireless Communications for Data Security: Efficiency Assessment of Cybersecurity Industry—A Promising Application for UAVs.

41. APE-GAN++: An Improved APE-GAN to Eliminate Adversarial Perturbations.

43. Journal of Cybersecurity and Privacy

44. Traffic sign attack via pinpoint region probability estimation network.

46. Stealthy dynamic backdoor attack against neural networks for image classification.

47. Not All Samples Are Born Equal: Towards Effective Clean-Label Backdoor Attacks.

48. Unauthorized AI cannot recognize me: Reversible adversarial example.

49. Semi-supervised robust training with generalized perturbed neighborhood.

50. Physical security of deep learning on edge devices: Comprehensive evaluation of fault injection attack vectors.