Search

Your search for '"ai security"' returned 103 results

Search Constraints

Descriptor: "ai security"

Search Results

1. Cybersecurity Challenges and Risks in AGI Development and Deployment

2. The accelerated integration of artificial intelligence systems and its potential to expand the vulnerability of the critical infrastructure

3. Improving the transferability of adversarial examples with path tuning.

4. The accelerated integration of artificial intelligence systems and its potential to expand the vulnerability of the critical infrastructure.

5. Sparse Backdoor Attack Against Neural Networks.

6. VFLIP: A Backdoor Defense for Vertical Federated Learning via Identification and Purification

7. DFaP: Data Filtering and Purification Against Backdoor Attacks

9. A Primer on Generative Artificial Intelligence.

10. Locality-Based Action-Poisoning Attack against the Continuous Control of an Autonomous Driving Model.

11. A Global Object Disappearance Attack Scenario on Object Detection

12. REN-A.I.: A Video Game for AI Security Education Leveraging Episodic Memory

13. Channel-augmented joint transformation for transferable adversarial attacks.

14. FMSA: a meta-learning framework-based fast model stealing attack technique against intelligent network intrusion detection systems

15. AFLOW: Developing Adversarial Examples Under Extremely Noise-Limited Settings

16. Defending Against Backdoor Attacks by Layer-wise Feature Analysis

17. Detecting and Mitigating Backdoor Attacks with Dynamic and Invisible Triggers

18. An interpretability security framework for intelligent decision support systems based on saliency map.

19. Privacy preserving for AI-based 3D human pose recovery and retargeting.

20. Toward a Comprehensive Framework for Ensuring Security and Privacy in Artificial Intelligence.

21. Data Poisoning Defense Based on Sample Distribution Characteristics.

22. Adaptive Backdoor Attack against Deep Neural Networks.

23. CANARY: An Adversarial Robustness Evaluation Platform for Deep Learning Models on Image Classification.

24. FMSA: a meta-learning framework-based fast model stealing attack technique against intelligent network intrusion detection systems.

25. Threats to Training: A Survey of Poisoning Attacks and Defenses on Machine Learning Systems.

26. Backdoor Attack against Face Sketch Synthesis.

27. Formulating Cybersecurity Requirements for Autonomous Ships Using the SQUARE Methodology.

28. An Understanding of the Vulnerability of Datasets to Disparate Membership Inference Attacks

30. Query-Efficient Black-Box Adversarial Attack with Random Pattern Noises

31. Defending Against Data Poisoning Attacks: From Distributed Learning to Federated Learning.

32. Adversarial Example Generation Method Based on Sensitive Features.

33. An Understanding of the Vulnerability of Datasets to Disparate Membership Inference Attacks.

34. TranFuzz: An Ensemble Black-Box Attack Framework Based on Domain Adaptation and Fuzzing

35. Privacy Protection Framework for Credit Data in AI

36. An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences

37. Towards Robustifying Image Classifiers against the Perils of Adversarial Attacks on Artificial Intelligence Systems.

38. A Cascade Defense Method for Multidomain Adversarial Attacks under Remote Sensing Detection.

39. Backdoor attacks in federated learning with regression

40. Using side-channel and quantization vulnerability to recover DNN weights

41. Backdoor Attack against Face Sketch Synthesis

42. Formulating Cybersecurity Requirements for Autonomous Ships Using the SQUARE Methodology

43. LinkBreaker: Breaking the Backdoor-Trigger Link in DNNs via Neurons Consistency Check.

44. VulnerGAN: a backdoor attack through vulnerability amplification against machine learning-based network intrusion detection systems.

45. Communication Security Development Trends for 5G Private Networks under the O-RAN Architecture.

46. Improving the transferability of adversarial examples through neighborhood attribution.

47. FDNet: Imperceptible backdoor attacks via frequency domain steganography and negative sampling.

48. Hierarchical hardware trojan for LUT‐based AI devices and its evaluation.

49. Robustness Analysis on Natural Language Processing Based AI Q&A Robots

50. Robust Adversarial Attack Against Explainable Deep Classification Models Based on Adversarial Images With Different Patch Sizes and Perturbation Ratios
