Search

Your search for "Gong, Neil Zhenqiang" returned 89 results (first 50 shown below).

Search Constraints

You searched for: Author: "Gong, Neil Zhenqiang"; Database: OAIster

Search Results

1. Optimization-based Prompt Injection Attack to LLM-as-a-Judge

2. Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks

3. Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models

4. Visual Hallucinations of Multi-modal Large Language Models

5. Poisoning Federated Recommender Systems with Fake Users

6. TrustLLM: Trustworthiness in Large Language Models

7. SoK: Gradient Leakage in Federated Learning

8. Watermark-based Detection and Attribution of AI-Generated Content

9. Link Stealing Attacks Against Inductive Graph Neural Networks

10. Model Poisoning Attacks to Federated Learning via Multi-Round Consistency

11. PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees

12. REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service

13. PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts

14. Evading Watermark based Detection of AI-Generated Content

15. PORE: Provably Robust Recommender Systems against Data Poisoning Attacks

16. Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents

17. Competitive Advantage Attacks to Decentralized Federated Learning

18. Formalizing and Benchmarking Prompt Injection Attacks and Defenses

19. MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use

20. DyVal: Dynamic Evaluation of Large Language Models for Reasoning Tasks

21. Securing Visually-Aware Recommender Systems: An Adversarial Image Reconstruction and Detection Framework

22. AFLGuard: Byzantine-robust Asynchronous Federated Learning

23. Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning

24. CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning

25. Addressing Heterogeneity in Federated Learning via Distributional Transformation

26. FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information

27. MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples

28. FLCert: Provably Secure Federated Learning against Poisoning Attacks

29. Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning

30. FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients

31. Fine-grained Poisoning Attack to Local Differential Privacy Protocols for Mean and Variance Estimation

32. PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning

33. MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients

34. GraphTrack: A Graph-based Cross-Device Tracking Framework

35. StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning

36. HERO: Hessian-Enhanced Robust Optimization for Unifying and Improving Generalization and Quantization Performance

37. Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data

38. 10 Security and Privacy Problems in Large Foundation Models

39. FaceGuard: Proactive Deepfake Detection

40. EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning

41. BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning

42. Understanding the Security of Deepfake Detection

43. Linear-Time Self Attention with Codeword Histogram for Efficient Recommendation

44. Rethinking Lifelong Sequential Recommendation with Incremental Multi-Interest Attention

45. PointGuard: Provably Robust 3D Point Cloud Classification

46. Data Poisoning Attacks and Defenses to Crowdsourcing Systems

47. Provably Secure Federated Learning against Malicious Clients

48. Practical Blind Membership Inference Attack via Differential Comparisons

49. Data Poisoning Attacks to Deep Learning Based Recommender Systems

50. Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks
