Search

Your search for Author "Gong, Neil Zhenqiang" returned 337 results.

Search Results

1. Making LLMs Vulnerable to Prompt Injection via Poisoning Alignment

2. Automatically Generating Visual Hallucination Test Cases for Multimodal Large Language Models

3. StringLLM: Understanding the String Processing Capability of Large Language Models

4. Evaluating Large Language Model based Personal Information Extraction and Countermeasures

5. A General Framework for Data-Use Auditing of ML Models

6. Refusing Safe Prompts for Multi-modal Large Language Models

7. Tracing Back the Malicious Clients in Poisoning Attacks to Federated Learning

8. Certifiably Robust Image Watermark

9. Self-Cognition in Large Language Models: An Exploratory Study

10. ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods

11. AudioMarkBench: Benchmarking Robustness of Audio Watermarking

12. Link Stealing Attacks Against Inductive Graph Neural Networks

13. Model Poisoning Attacks to Federated Learning via Multi-Round Consistency

14. SoK: Gradient Leakage in Federated Learning

15. Watermark-based Attribution of AI-Generated Content

16. Optimization-based Prompt Injection Attack to LLM-as-a-Judge

17. Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks

18. Visual Hallucinations of Multi-modal Large Language Models

19. Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models

20. Poisoning Federated Recommender Systems with Fake Users

21. TrustLLM: Trustworthiness in Large Language Models

22. Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents

23. Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents

24. Competitive Advantage Attacks to Decentralized Federated Learning

25. Formalizing and Benchmarking Prompt Injection Attacks and Defenses

26. MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use

27. DyVal: Dynamic Evaluation of Large Language Models for Reasoning Tasks

28. Securing Visually-Aware Recommender Systems: An Adversarial Image Reconstruction and Detection Framework

29. PromptRobust: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts

30. Evading Watermark based Detection of AI-Generated Content

31. PORE: Provably Robust Recommender Systems against Data Poisoning Attacks

32. PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees

33. REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service

34. AFLGuard: Byzantine-robust Asynchronous Federated Learning

35. Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning

36. CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning

37. Addressing Heterogeneity in Federated Learning via Distributional Transformation

38. FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information

39. MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples

40. FLCert: Provably Secure Federated Learning against Poisoning Attacks

41. Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning

42. FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients

43. Fine-grained Poisoning Attack to Local Differential Privacy Protocols for Mean and Variance Estimation

44. PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning

45. MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients

46. GraphTrack: A Graph-based Cross-Device Tracking Framework

47. StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning

48. HERO: Hessian-Enhanced Robust Optimization for Unifying and Improving Generalization and Quantization Performance

49. Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data

50. 10 Security and Privacy Problems in Large Foundation Models
