A Brief Survey on Safety of Large Language Models
- Authors
- Zhengjie Gao, Xuanzi Liu, Yuanshuai Lan, and Zheng Yang
- Subjects
- large language models, safety, hallucination, prompt injection, Electronic computers. Computer science (QA75.5-76.95)
- Abstract
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) and have been widely adopted in applications such as machine translation, chatbots, and text summarization. However, the use of LLMs has raised concerns about potential safety and security risks. In this survey, we explore the safety implications of LLMs, including ethical considerations, hallucination, and prompt injection. We also discuss current research efforts to mitigate these risks and identify areas for future research. Our survey provides a comprehensive overview of the safety concerns related to LLMs, which can help researchers and practitioners in the NLP community develop safer and more ethical applications of LLMs.
- Published
- 2024