Search

Your search for "Zhang, Leo Yu" returned 202 results.

Search Constraints

Author: "Zhang, Leo Yu"
Search Limiter: Available in Library Collection

Search Results

1. TrojanRobot: Backdoor Attacks Against Robotic Manipulation in the Physical World

2. Deferred Poisoning: Making the Model More Vulnerable via Hessian Singularization

3. DarkSAM: Fooling Segment Anything Model to Segment Nothing

4. BadRobot: Manipulating Embodied LLMs in the Physical World

5. ECLIPSE: Expunging Clean-label Indiscriminate Poisons via Sparse Diffusion Purification

6. Memorization in deep learning: A survey

7. Large Language Model Watermark Stealing With Mixed Integer Programming

8. IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency

9. DarkFed: A Data-Free Backdoor Attack in Federated Learning

10. Algorithmic Fairness: A Tolerance Perspective

11. Detector Collapse: Physical-World Backdooring Object Detection to Catastrophic Overload or Blindness in Autonomous Driving

12. Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples

13. Towards Model Extraction Attacks in GAN-Based Image Translation via Domain Shift Mitigation

14. Fluent: Round-efficient Secure Aggregation for Private Federated Learning

15. Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks

16. MISA: Unveiling the Vulnerabilities in Split Federated Learning

17. Robust Backdoor Detection for Deep Learning via Topological Evolution Dynamics

18. Corrupting Convolution-based Unlearnable Datasets with Pixel-based Image Transformations

19. AGRAMPLIFIER: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification

20. Towards Self-Interpretable Graph-Level Anomaly Detection

21. Turn Passive to Active: A Survey on Active Intellectual Property Protection of Deep Learning Models

22. Client-side Gradient Inversion Against Federated Learning from Poisoning

23. A Four-Pronged Defense Against Byzantine Attacks in Federated Learning

24. Downstream-agnostic Adversarial Examples

25. Why Does Little Robustness Help? Understanding and Improving Adversarial Transferability from Surrogate Training

26. Catch Me If You Can: A New Low-Rate DDoS Attack Strategy Disguised by Feint

27. Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning

28. Masked Language Model Based Textual Adversarial Example Detection

29. PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples

30. M-to-N Backdoor Paradigm: A Multi-Trigger and Multi-Target Attack to Deep Learning Models

31. Shielding Federated Learning: Mitigating Byzantine Attacks with Less Constraints

32. BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label

33. Evaluating Membership Inference Through Adversarial Robustness

34. Shielding Federated Learning: Robust Aggregation with Adaptive Client Selection

35. Towards Privacy-Preserving Neural Architecture Search

36. Attention Distraction: Watermark Removal Through Continual Learning with Selective Forgetting

37. Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer

38. A Survey of PPG's Application in Authentication

39. Defining Security Requirements with the Common Criteria: Applications, Adoptions, and Challenges

40. Challenges and Approaches for Mitigating Byzantine Attacks in Federated Learning

42. From Chaos to Pseudo-Randomness: A Case Study on the 2D Coupled Map Lattice

43. FairCMS: Cloud Media Sharing with Fair Copyright Protection

44. Self-Supervised Adversarial Example Detection by Disentangled Representation

49. On the security of a class of diffusion mechanisms for image encryption
