
A Brief Survey on Safety of Large Language Models.

Authors :
Zhengjie Gao
Xuanzi Liu
Yuanshuai Lan
Zheng Yang
Source :
Journal of Computing & Information Technology; Mar2024, Vol. 32 Issue 1, p47-64, 18p
Publication Year :
2024

Abstract

Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) and have been widely adopted in applications such as machine translation, chatbots, and text summarization. However, the use of LLMs has raised concerns about their potential safety and security risks. In this survey, we explore the safety implications of LLMs, including ethical considerations, hallucination, and prompt injection. We also discuss current research efforts to mitigate these risks and identify areas for future research. Our survey provides a comprehensive overview of the safety concerns related to LLMs, which can help researchers and practitioners in the NLP community develop safer and more ethical applications of LLMs. [ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
13301136
Volume :
32
Issue :
1
Database :
Supplemental Index
Journal :
Journal of Computing & Information Technology
Publication Type :
Academic Journal
Accession number :
178611738
Full Text :
https://doi.org/10.20532/cit.2024.1005778