Search

Your search for author "Zhang, Leo Yu" returned 354 results.


Search Results

1. Deferred Poisoning: Making the Model More Vulnerable via Hessian Singularization

2. DarkSAM: Fooling Segment Anything Model to Segment Nothing

3. BadRobot: Manipulating Embodied LLMs in the Physical World

4. ECLIPSE: Expunging Clean-label Indiscriminate Poisons via Sparse Diffusion Purification

5. Memorization in deep learning: A survey

6. Large Language Model Watermark Stealing With Mixed Integer Programming

7. IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency

8. DarkFed: A Data-Free Backdoor Attack in Federated Learning

9. Algorithmic Fairness: A Tolerance Perspective

10. Detector Collapse: Physical-World Backdooring Object Detection to Catastrophic Overload or Blindness in Autonomous Driving

11. Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples

12. Towards Model Extraction Attacks in GAN-Based Image Translation via Domain Shift Mitigation

13. Fluent: Round-efficient Secure Aggregation for Private Federated Learning

14. Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks

15. MISA: Unveiling the Vulnerabilities in Split Federated Learning

16. Robust Backdoor Detection for Deep Learning via Topological Evolution Dynamics

18. Corrupting Convolution-based Unlearnable Datasets with Pixel-based Image Transformations

19. AGRAMPLIFIER: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification

20. Towards Self-Interpretable Graph-Level Anomaly Detection

21. Turn Passive to Active: A Survey on Active Intellectual Property Protection of Deep Learning Models

22. Client-side Gradient Inversion Against Federated Learning from Poisoning

23. A Four-Pronged Defense Against Byzantine Attacks in Federated Learning

24. Downstream-agnostic Adversarial Examples

25. Why Does Little Robustness Help? Understanding and Improving Adversarial Transferability from Surrogate Training

26. Catch Me If You Can: A New Low-Rate DDoS Attack Strategy Disguised by Feint

27. Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning

28. Masked Language Model Based Textual Adversarial Example Detection

29. PointAPA: Towards Availability Poisoning Attacks in 3D Point Clouds

30. PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples

31. M-to-N Backdoor Paradigm: A Multi-Trigger and Multi-Target Attack to Deep Learning Models

32. Shielding Federated Learning: Mitigating Byzantine Attacks with Less Constraints

33. BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label

34. Evaluating Membership Inference Through Adversarial Robustness

35. Shielding Federated Learning: Robust Aggregation with Adaptive Client Selection

36. Towards Privacy-Preserving Neural Architecture Search

37. Attention Distraction: Watermark Removal Through Continual Learning with Selective Forgetting

38. Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer

40. A Survey of PPG's Application in Authentication

41. Defining Security Requirements with the Common Criteria: Applications, Adoptions, and Challenges

42. Challenges and Approaches for Mitigating Byzantine Attacks in Federated Learning

45. From Chaos to Pseudo-Randomness: A Case Study on the 2D Coupled Map Lattice

46. FairCMS: Cloud Media Sharing with Fair Copyright Protection

47. Self-Supervised Adversarial Example Detection by Disentangled Representation

48. WiP: Towards Zero Trust Authentication in Critical Industrial Infrastructures with PRISM

49. Predicate Private Set Intersection with Linear Complexity

50. A Generic Enhancer for Backdoor Attacks on Deep Neural Networks
